January 12, 2010


A rather different use of the terms 'internalism' and 'externalism' has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual's mind or brain, and not at all on his physical and social environment; according to an externalist view, content is significantly affected by such external factors. Views that appeal to both internal and external elements are standardly classified as externalist.


As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc. that motivate the views that have come to be known as 'direct reference' theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment, e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc. - not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts 'from the inside', simply by reflection. If content depends on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors - which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content, and the status of that content as justifying further beliefs, will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can be justified or justify anything else; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

Except for alleged cases of self-evident truths, it has often been thought that anything that is known must satisfy certain criteria or standards as well as being true. These criteria are general principles that will make a proposition evident or make accepting it warranted to some degree. Common suggestions for this role include: if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, if 'p' coheres with the bulk of one's beliefs, then 'p' is warranted. These might be criteria whereby putatively self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the evident status they already have without criteria to other propositions like 'p'; or they might be criteria whereby purely non-epistemic considerations, e.g., facts about logical connections or about conception that need not themselves be already evident or warranted, originally 'create' p's epistemic status. If that status can in turn be 'transmitted' to other propositions, e.g., by deduction or induction, there will be criteria specifying when it is.

Nonetheless, traditional suggestions for such criteria include: (1) if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or simply, (2) if we cannot conceive 'p' to be false, then 'p' is evident; or, (3) whatever we are immediately conscious of in thought or experience, e.g., that we seem to see red, is evident. These might be criteria whereby putatively self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the evident status they already have for one without criteria to other propositions like 'p'. Alternatively, they might be criteria whereby epistemic status, e.g., p's being evident, is originally created by purely non-epistemic considerations, e.g., facts about how 'p' is conceived, which are neither self-evident nor already criterially evident.

The resulting difficulty is that traditional criteria do not seem to make evident propositions about anything beyond our own thoughts, experiences, and necessary truths, to which deductive or inductive criteria may then be applied. Moreover, arguably, inductive criteria, including criteria warranting the best explanation of data, never make things evident or warrant their acceptance enough to count as knowledge.

Contemporary epistemologists suggest that traditional criteria may need alteration in three ways. Additional evidence may subject even our most basic judgements to rational correction, though they count as evident on the basis of our criteria. Warrant may be transmitted other than through deductive and inductive relations between propositions. Transmission criteria might not simply ‘pass’ evidence on linearly from a foundation of highly evident ‘premisses’ to ‘conclusions’ that are never more evident.

A group of statements, some of which purportedly provide support for another. The statements which purportedly provide the support are the premisses, while the statement purportedly supported is the conclusion. Arguments are typically divided into two categories depending on the degree of support they purportedly provide. Deductive arguments purportedly provide conclusive support for their conclusions, while inductive arguments purportedly provide only probable support. Some, but not all, arguments succeed in providing support for their conclusions. Successful deductive arguments are valid, while successful inductive arguments are strong. An argument is valid just in case it is impossible for all its premisses to be true and its conclusion false; an argument is strong just in case if all its premisses are true its conclusion is probably true. Deductive logic provides methods for ascertaining whether or not an argument is valid, whereas inductive logic provides methods for ascertaining the degree of support the premisses of an argument confer on its conclusion.
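The contrast between validity and strength can be illustrated with a stock example of each kind (my illustration, not part of the original entry):

```latex
% A valid deductive argument (modus ponens): if the premisses are
% true, the conclusion cannot be false.
\begin{align*}
&\text{(P1) If it is raining, the streets are wet.}\\
&\text{(P2) It is raining.}\\
&\therefore\ \text{The streets are wet.}
\end{align*}
% A strong inductive argument: true premisses make the conclusion
% probable, but do not guarantee it.
\begin{align*}
&\text{(P1) Every raven observed so far has been black.}\\
&\therefore\ \text{All ravens are black.}
\end{align*}
```

In the deductive case no possible situation makes both premisses true and the conclusion false; in the inductive case such a situation remains possible, however unlikely.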

Finally, a proof is a collection of considerations and reasonings that instill and sustain conviction that some proposed theorem - the theorem proved - is not only true, but could not possibly be false. A perceptual observation may instill the conviction that water is cold. But a proof that 2 + 3 = 5 must not only instill the conviction that it is true that 2 + 3 = 5, but also that 2 + 3 could not be anything but 5.
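The sense in which 2 + 3 'could not be anything but' 5 can be made concrete by deriving the identity from the recursive definition of addition on the natural numbers (a standard textbook derivation, not anything in the entry above):

```latex
% With s the successor function, 2 = s(s(0)), 3 = s(s(s(0))), and
% addition defined by  m + 0 = m  and  m + s(n) = s(m + n):
\begin{align*}
2 + 3 &= 2 + s(s(s(0)))\\
      &= s(2 + s(s(0)))\\
      &= s(s(2 + s(0)))\\
      &= s(s(s(2 + 0)))\\
      &= s(s(s(2))) = 5.
\end{align*}
```

Each step is forced by the definitions, which is why the conclusion, unlike a perceptual observation, leaves no room for things having been otherwise.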

No one has succeeded in replacing this largely psychological characterization of proofs by a more objective characterization. The representations or reconstructions of proofs as mechanical derivations in formal-logical systems all but completely fail to capture 'proofs' as mathematicians are quite content to give them. For example, formal-logical derivations depend solely on the logical form of the propositions considered, whereas proofs usually depend in large measure on the content of propositions other than their logical form.






Richard J. Kosciejew

2006

In epistemology, the subjective-objective contrast arises above all for the concept of justification and its relatives. Externalism, in the philosophy of mind and language, is the view that what is thought, or said, or experienced is essentially dependent on aspects of the world external to the mind or subject. The view goes beyond holding that such mental states are typically caused by external factors, to insist that they could not have existed as they now do without the subject being embedded in an external world of a certain kind: these external relations make up the 'essence' or 'identity' of the mental state. Externalism is thus opposed to the Cartesian separation of the mental from the physical, since that holds that the mental could in principle exist on its own. Various external factors have been advanced as ones on which mental content depends, including the usage of experts, the linguistic norms of the community, and the general causal relationships of the subject.

Externalists particularly advocate reliabilism, which construes justification objectively, since for reliabilism truth-conduciveness and non-subjectivity are conceived as central for justified belief. Reliabilism is the view in epistemology that a subject may know a proposition 'p' if (1) 'p' is true, (2) the subject believes 'p', and (3) the belief that 'p' is the result of some reliable process of belief formation. The third clause is an alternative to the traditional requirement that the subject be justified in believing that 'p', since a subject may in fact be following a reliable method without being justified in supposing that she is, and vice versa. For this reason, reliabilism is sometimes called an externalist approach to knowledge: the relations that matter to knowing something may be outside the subject's own awareness.
It is open to counterexamples: a belief may be the result of some generally reliable process which has in fact malfunctioned on this occasion, and we would be reluctant to attribute knowledge to the subject if this were so, although the definition would be satisfied (as, likewise, with the definition of knowledge as justified true belief). Reliabilism pursues appropriate modifications to avoid the problem without giving up the general approach. Among reliabilist theories of justification (as opposed to knowledge) there are two main varieties: reliable indicator theories and reliable process theories. In their simplest forms, the reliable indicator theory says that a belief is justified in case it is based on reasons that are reliable indicators of the truth, and the reliable process theory says that a belief is justified in case it is produced by cognitive processes that are generally reliable.

What makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals.

Some causal theories of knowledge have it that a true belief that 'p' is knowledge just in case it has the right sort of causal connection to the fact that 'p'. Such a criterion can be applied only to cases where the fact that 'p' is of a sort that can enter into causal relations. This seems to exclude mathematical and other necessary facts and, perhaps, any fact expressed by a universal generalization; and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject's environment.

For example, one such proposal holds that a belief of the form 'This (perceived) object is F' is (non-inferential) knowledge if and only if the belief is a completely reliable sign that the perceived object is 'F'; that is, the fact that the object is 'F' contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject 'x' and perceived object 'y', if 'x' has those properties and believes that 'y' is 'F', then 'y' is 'F'.

A conceptual scheme is the general system of concepts which shape or organize our thoughts and perceptions. The outstanding elements of our everyday conceptual scheme include spatial and temporal relations between events and enduring objects, causal relations, other persons, and so on. A controversial argument of Davidson's urges that we would be unable to interpret speech from a different conceptual scheme as even meaningful; we can therefore be certain that there is no difference of conceptual scheme between any thinker and ourselves, and that, since 'translation' proceeds according to a principle of charity and an omniscient translator could make sense of us, most of the beliefs formed within the common-sense conceptual framework are true.

Nevertheless, an importantly different sort of causal criterion holds that a true belief is knowledge if it is produced by a type of process that is 'globally' and 'locally' reliable. It is globally reliable if its propensity to cause true beliefs is sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so could in principle apply to knowledge of any kind of truth: a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false.

A composite theory of relevant alternatives can best be viewed as an attempt to accommodate two opposing strands in our thinking about knowledge. The first is that knowledge is an absolute concept. On one interpretation, this means that the justification or evidence one must have in order to know a proposition 'p' must be sufficient to eliminate all the alternatives to 'p' (where an alternative to a proposition 'p' is a proposition incompatible with 'p'). That is, one's justification or evidence for 'p' must be sufficient for one to know that every alternative to 'p' is false. This element of our thinking about knowledge is exploited by sceptical arguments. These arguments call our attention to alternatives that our evidence cannot eliminate. For example, when we are at the zoo, we might claim to know that we see a zebra on the basis of our visual evidence - a zebra-like appearance. The sceptic inquires how we know that we are not seeing a cleverly disguised mule. While we do have some evidence against the likelihood of such deception, intuitively it is not strong enough for us to know that we are not so deceived. By pointing out alternatives of this nature that we cannot eliminate, as well as others with more general application (dreams, hallucinations, etc.), the sceptic appears to show that the requirement that our evidence eliminate every alternative is seldom, if ever, met.

This conflicts with another strand in our thinking about knowledge: that we know many things. Thus there is a tension in our ordinary thinking about knowledge - we believe that knowledge is, in the sense indicated, an absolute concept, and yet we also believe that there are many instances of that concept. The theory of relevant alternatives can be viewed as an attempt to provide a satisfactory response to this tension. It attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.

According to the theory, we need to qualify rather than deny the absolute character of knowledge. We should view knowledge as absolute relative to certain standards; that is to say, in order to know a proposition our evidence need not eliminate all the alternatives to that proposition. Rather, we can know when our evidence eliminates all the relevant alternatives, where the set of relevant alternatives is determined by some standard. Moreover, according to the relevant alternatives view, the standards determine that the alternatives raised by the sceptic are not relevant. If this is correct, then the fact that our evidence cannot eliminate the sceptic's alternatives does not lead to a sceptical result, for knowledge requires only the elimination of the relevant alternatives. So the relevant alternatives view preserves both strands of our thinking about knowledge: knowledge is an absolute concept, but because the absoluteness is relative to a standard, we can know many things.

All the same, some philosophers have argued that the relevant alternatives theory of knowledge entails the falsity of the principle that the set of propositions known by 'S' is closed under known (by 'S') entailment, though others have disputed this. This principle, 'the closure principle', states: if 'S' knows 'p' and 'S' knows that 'p' entails 'q', then 'S' knows 'q'.
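In the standard notation of epistemic logic (writing $K_S\,p$ for "S knows that p"; the symbols are a conventional gloss, not the entry's own), the closure principle reads:

```latex
% Closure of knowledge under known entailment
\[
  \bigl(K_S\,p \wedge K_S(p \rightarrow q)\bigr) \rightarrow K_S\,q
\]
```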

According to the theory of relevant alternatives, we can know a proposition 'p' without knowing that some (non-relevant) alternative to 'p' is false. But since an alternative 'h' to 'p' is incompatible with 'p', 'p' will trivially entail 'not-h'. So it will be possible to know some proposition without knowing another proposition trivially entailed by it. For example, we can know that we see a zebra without knowing that it is not the case that we see a cleverly disguised mule (on the assumption that 'we see a cleverly disguised mule' is not a relevant alternative). This will involve a violation of the closure principle, a consequence held against the theory, because the closure principle seems to many to be quite intuitive. In fact, we can view sceptical arguments as employing the closure principle as a premise, along with the premise that we do not know that the sceptical alternatives are false. Since the propositions we believe entail the falsity of the sceptical alternatives, it follows that we do not know the propositions we believe. For example, it follows from the closure principle and the fact that we do not know that we do not see a cleverly disguised mule, that we do not know that we see a zebra. We can view the relevant alternatives theory as replying to this sceptical argument.
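Laid out schematically (with $z$ for "we see a zebra", $m$ for "we see a cleverly disguised mule", and $K$ for "we know that"; a reconstruction, not the entry's own notation), the sceptical argument runs:

```latex
\begin{align*}
&\text{(1)}\quad \bigl(K z \wedge K(z \rightarrow \neg m)\bigr) \rightarrow K\neg m
   && \text{closure principle}\\
&\text{(2)}\quad K(z \rightarrow \neg m)
   && \text{the entailment is obvious to us}\\
&\text{(3)}\quad \neg K\neg m
   && \text{we cannot rule out the disguised mule}\\
&\therefore\quad \neg K z
   && \text{from (1)--(3), by modus tollens}
\end{align*}
```

The relevant alternativist blocks the argument by rejecting premise (1) for non-relevant alternatives such as $m$.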

How significant a problem is this for the theory of relevant alternatives? This depends on how we construe the theory. If the theory is supposed to provide us with an analysis of knowledge, then the lack of precise criteria of relevance surely constitutes a serious problem. However, if the theory is viewed instead as providing a response to sceptical arguments, then the difficulty has little significance for the overall success of the theory.

Internalism, however, may or may not construe justification subjectivistically, depending on whether the proposed epistemic standards are interpersonally grounded. There are also various kinds of subjectivity: justification may, e.g., be grounded in one's considered standards or simply in what one believes to be sound. On the former view, my beliefs are justified if they accord with my considered standards; on the latter, my thinking them justified makes it so.

Any conception of objectivity may treat one domain as fundamental and the others as derivative. Thus, objectivity for methods (including sensory observation) might be thought basic. Let an objective method be one that (1) is interpersonally usable and tends to yield justification regarding the questions to which it applies (an epistemic conception), or (2) tends to yield truth when properly applied (an ontological conception), or (3) both. An objective statement is one appraisable by an objective method, an objective discipline is one whose methods are objective, and so on. Those who conceive objectivity epistemologically tend to take methods as fundamental; those who conceive it ontologically tend to take basic statements as fundamental. Subjectivity has been attributed variously to certain concepts, to certain properties of objects, and to certain modes of understanding. The overarching idea of these attributions is that the nature of the concepts, properties, or modes of understanding in question is dependent upon the properties and relations of the subjects who employ those concepts, possess the properties, or exercise those modes of understanding. The dependence may be a dependence upon the particular subject or upon some type which the subject instantiates. What is not so dependent is objective. In fact, there is virtually nothing which has not been declared subjective by some thinker or other, including such unlikely candidates as space, time, and the natural numbers.
In scholastic terminology, an effect is contained formally in a cause when the same nature in the effect is present in the cause, as fire causes heat and heat is present in the fire. An effect is contained virtually in a cause when this is not so, as when a pot or statue is caused by an artist. An effect is contained eminently in a cause when the cause is more perfect than the effect: God eminently contains the perfections of his creation. The distinctions are vestiges of the view that causation is essentially a matter of transferring something, like passing on the baton in a relay race.

There are several sorts of subjectivity to be distinguished. Suppose subjectivity is attributed to a concept, considered as a way of thinking of some object or property. It would be much too undiscriminating to say that a concept is subjective if particular mental states are mentioned in the account of mastery of the concept; all concepts would then be counted as subjective. We can distinguish several more discriminating criteria. First, a concept can be called subjective if an account of its mastery requires the thinker to be capable of having certain kinds of experience, or at least to know what it is like to have such experiences. Variants on this criterion can be obtained by substituting other specific psychological states in place of experience. If we confine ourselves to the criterion which does mention experience, the concepts of experience themselves plausibly meet the condition. What have traditionally been classified as concepts of secondary qualities - such as red, tastes bitter, warmth - have also been argued to meet these criteria. The criterion, though, also includes some relatively observational shape concepts. The relatively observational concepts 'square' and 'regular diamond' pick out exactly the same shape properties, but differ in which perceptual experiences are mentioned in accounts of their mastery: different symmetries are perceived when something is seen as a diamond from when it is seen as a square. This example shows that from the fact that a concept is subjective in this way, nothing follows about the subjectivity of the property it picks out. Few philosophers would now count shape properties, as opposed to concepts thereof, as subjective.

Concepts with a second type of subjectivity could more specifically be called 'first-personal'. A concept is first-personal if, in an account of its mastery, the application of the concept to objects other than the thinker is related to the conditions under which the thinker is willing to apply the concept to himself. Though there is considerable disagreement on how the account should be formulated, many theories treat the concept of belief as first-personal in this sense. For example, this is true of any account which says that a thinker understands the third-personal attribution 'He believes that so-and-so' as holding, very roughly, if the third person in question is in circumstances in which the thinker would himself (first-personally) judge that so-and-so. It is equally true of accounts which in some way or another say that the third-person attribution is understood as meaning that the other person is in some state which stands in some specific sameness relation to the state which causes the thinker to be willing to judge 'I believe that so-and-so'.

The subjectivity of indexical concepts - expressions whose reference is dependent upon the context of use, such as 'I', 'here', 'now', and 'that (perceptually presented) man' - has long been widely noted. Few of these are subjective in the sense of the first criterion, but seemingly they are all subjective in that the possibility of a subject's using any one of them to think about a given object at a given time depends upon his relations to that particular object then. Indexicals are thus particularly well suited to expressing a particular point of view on the world of objects, a point of view available only to those who stand in the right relations to the objects in question.

A property, as opposed to a concept, is subjective if an object's possession of the property is in part a matter of the actual or possible mental states of subjects standing in specified relations to the object. Colour properties, secondary qualities in general, moral properties, the property of propositions of being necessary or contingent, and the property of actions and mental states of being intelligible have all been discussed as serious contenders for subjectivity in this sense. To say that a property is subjective is not to say that it can be analysed away in terms of mental states. The mental states in terms of which subjectivists have aimed to elucidate, say, redness and necessity include the mental states of experiencing something as red and judging something to be necessary, respectively. These attributions embed reference to the original properties themselves - or at least to concepts thereof - in a way which prevents the elucidations from amounting to eliminative analyses. The same point applies to a subjectivist treatment of intelligibility: the mental states would have to be those of finding something intelligible. Even without any commitment to eliminative analysis, though, the subjectivist's claim needs case-by-case consideration for each of these areas. In the case of colour, part of the task of the subjectivist who makes his claim at the level of properties rather than concepts is to argue against those who would identify the properties with physical properties, or with some more complex vector of physical properties.

Suppose that for an object to have a certain property is for subjects standing in certain relations to it to be in a certain mental state. If a subject standing in those relations, and in that mental state, judges the object to have the property, his judgement will be true. Some subjectivists have been tempted to work this point into a criterion of a property's being subjective. There is, though, a difficulty: it seems that we can make sense of the possibility that, though in certain circumstances a subject's judgement about whether an object has a property is guaranteed to be correct, it is not his judgement (in those circumstances) or anything else about his or others' mental states which makes the judgement correct. To many philosophers, this will seem to be the actual situation for easily decided arithmetical propositions such as 3 + 3 = 6. If this is correct, the subjectivist will have to make essential use of some such asymmetrical notion as 'what makes a proposition true'. Conditionals or equivalences alone will not capture the subjectivist character of the position.

Finally, subjectivity has been attributed to modes of understanding. On such views, the grasp of mental concepts is conditioned by the understander's own mental life: mastery of mental concepts involves a capacity to apply them in one's own case. For instance, those who believe that some form of imagination is involved in understanding third-person ascriptions of experience will want to write imagination into the account of mastery of those ascriptions. Some who attribute subjectivity to modes of understanding intend thereby a claim about the mental properties themselves, rather than a claim about concepts thereof; but it is not charitable to interpret the position as the assertion that mental properties involve mental properties. A more defensible formulation is the conjunction of two claims: that concepts of mental states are subjective in the sense given, and that mental states can be thought about only by means of concepts which are thus subjective. Such a position need not be opposed to philosophical materialism, since it is compatible with some versions of materialism about mental states. It would, though, rule out certain identities between mental and physical events.

The view that the claims of ethics are objectively true is the view that they are not 'relative' to a subject or to a culture, nor purely subjective in nature; it stands opposed to 'error theories' and to 'scepticism'. The central problem is to locate the source of the required objectivity. On the absolute conception of reality, facts exist independently of human cognition; but in order for human beings to know such facts, the facts must be conceptualized, since the world does not conceptualize itself. And we develop concepts that pick out those features of the world in which we have an interest, and not others. We use concepts that are related to our sensory capacities: for example, we do not have readily available concepts to discriminate colours that lie beyond the visible spectrum. No such concepts were available at all before the modern understanding of light, and such concepts as there now are remain little deployed, since most people have no occasion to use them.

We can still accept that the world makes facts true or false; nevertheless, what counts as a fact is partially dependent on human input. One part is the availability of concepts to describe such facts. Another part is the establishing of whether something actually is a fact: when we decide that something is a fact, it is because it fits into our body of knowledge of the world, and whether something can play that role is governed by a number of considerations, all of which are value-laden. We accept as facts those things that make our theories simple, that allow for greater generalization, that cohere with other facts, and so on. This line of thought therefore rejects the view that facts exist wholly independently of human concepts and human epistemology: what counts as a fact is dependent on certain kinds of values - the values that govern enquiry in all its forms - scientific, historical, literary, legal and so on.

Philosophers have distinguished several notions of objectivity. On the one hand, there is a straightforwardly ontological conception: something is objective if it exists, and is the way it is, independently of any knowledge, perception, conception or consciousness there may be of it. Obvious candidates include plants, rocks, atoms, galaxies, and other material denizens of the external world. Less obvious candidates include such things as numbers, sets, propositions, primary qualities, facts, time and space. Subjective entities, conversely, are those which could not exist, or be the way they are, unless they were known, perceived or at least conscious states of one or more conscious beings. Such things as sensations, dreams, memories, secondary qualities, aesthetic properties and moral values have been construed as subjective in this sense.

There is, on the other hand, a notion of objectivity that belongs primarily within epistemology. According to this conception, the objective-subjective distinction is not intended to mark a split in reality between autonomous realms, but to distinguish between two grades of cognitive achievement. In this sense only such things as judgements, beliefs, theories, concepts and perceptions can significantly be said to be objective or subjective. Objectivity is here construed as a property of the content of mental acts or states: for example, the belief that the speed of light is about 186,000 miles per second, or that London is to the east of Toronto, has an objective content; the judgement that rice pudding is disgusting, on the other hand, or that Beethoven was a greater artist than Mozart, will be merely subjective. If objectivity in this epistemic sense is to be a property of the contents of mental acts and states, then we clearly need to specify what property it is to be. What we require, to begin with, is a minimal concept of objectivity: one that is neutral with respect to the competing and sometimes contentious philosophical theories which attempt to specify what objectivity is. In principle this neutral concept will then be capable of comprising the pre-theoretical datum to which the various competing theories of objectivity are themselves addressed, and of which they attempt to supply an analysis and explanation. Perhaps the best such notion is one that exploits Kant's insight that objective representation involves what he calls 'presumptive universality': for a judgement to be objective it must at least have a content that 'may be presupposed to be valid for all men'.

Entities that are subjective in the ontological sense can nevertheless be the subject of objective judgements and beliefs. For example, on most accounts colours are ontologically subjective: in the analysis of the property of being red, say, there will occur essential reference to the perceptions and judgements of normal observers under normal conditions. And yet the judgement that a given object is red is an epistemically objective one. Rather more strikingly, Kant argued that space is nothing but a form of sensibility, and hence ontologically subjective; and yet the propositions of geometry, the science of space, are for Kant the very paradigms of objective truth: necessary, universal and objectively valid. Indeed, one of the liveliest debates in recent years (in logic, set theory, the foundations of mathematics, semantics and the philosophy of language) pertains to just this issue: does the objectivity of a given class of assertions require the independent existence of the entities those assertions apparently involve or range over? By and large, theories that answer this question in the affirmative can be called 'realist', and those that defend a negative answer can be called 'anti-realist'.

One intuition that lies at the heart of the realist's account of objectivity is that, in the last analysis, the objectivity of a belief is to be explained by appeal to the independent existence of the entities it concerns. Objectivity, that is, is to be analysed in terms of a belief's standing in some specified relation to an independently existing object. Frege, for example, believed that arithmetic could comprise objective knowledge only if the numbers it refers to, the propositions it consists of, the functions it employs and the truth-values it aims at are all mind-independent entities. Conversely, within a realist framework, to show that the members of a given class of judgements are merely subjective, it is sufficient to show that there exists no independent reality that those judgements characterize or refer to. Thus J. L. Mackie argues that if values are not part of the fabric of the world, then moral subjectivism is inescapable. For the realist, then, objectivity is to be elucidated by appeal to the existence of determinate facts, objects, properties, events and the like, which exist or obtain independently of any cognitive access we may have to them. And one of the strongest impulses toward Platonic realism about theoretical objects like sets, numbers and propositions stems from the belief that only if such things exist in their own right can we show that logic, arithmetic and science are objective.

This picture is rejected by anti-realists. The objectivity of our beliefs is not, according to them, capable of being rendered intelligible by invoking the nature and existence of reality as it is in and of itself. If our conception of objectivity is minimal, requiring only 'presumptive universality', then alternative, non-realist analyses become possible - and even attractive: analyses that construe the objectivity of an arbitrary judgement as a function of its coherence with other judgements, of its possession of grounds that warrant its acceptance within a given community, of its conformity to the rules that constitute understanding, of its verifiability (or falsifiability), or of its permanent presence in the mind of God. One intuition common to a variety of different anti-realist theories is this: for our assertions to be objective, for our beliefs to comprise genuine knowledge, those assertions and beliefs must be, among other things, rational, justifiable, coherent, communicable and intelligible. But it is hard, the anti-realist claims, to see how such properties as these can be explained by appeal to entities 'as they are in and of themselves', for it is not on the basis of its relation to such entities that an assertion becomes intelligible or justifiable.

On the contrary, according to most forms of anti-realism, it is only by appeal to such notions as 'the way reality seems to us', 'the evidence that is available to us', 'the criteria we apply', 'the experience we undergo' or 'the concepts we have acquired' that the objectivity of our beliefs can conceivably be explained.

In addition to marking the ontological and the epistemic contrasts, the objective-subjective distinction has been put to a third use, namely to differentiate points of view. An objective point of view aims to characterize the world from no particular standpoint, and finds its clearest expression in sentences devoid of indexical, tensed or other token-reflexive elements. Such sentences attempt, in other words, to characterize the world from no particular time, place, circumstance or personal perspective. Nagel calls this 'the view from nowhere'. A subjective point of view, by contrast, is one that possesses characteristics determined by the identity or circumstances of the person whose point of view it is. The philosophical problems here turn on the question whether there is anything that an exclusively objective description would necessarily leave out. Is there, for instance, a language with the same expressive power as our own, but which lacks all token-reflexive elements? Or, more metaphysically, are there genuinely and irreducibly subjective aspects of my existence - aspects which belong only to my unique perspective on the world and which must therefore resist capture by any purely objective conception of the world?

Idealism is, broadly, any doctrine holding that reality is fundamentally mental in nature. The boundaries of such a doctrine are not firmly drawn: for example, the traditional Christian view that God is a sustaining cause possessing greater reality than his creation might arguably be classified as a form of idealism. Leibniz's doctrine that the simple substances out of which all else is made are themselves perceiving and appetitive creatures (monads), and that space and time are relations among these things, is another early version. Major forms of idealism include subjective idealism - the position better called 'immaterialism' and associated with the Irish idealist George Berkeley (1685-1753), according to which to exist is to be perceived - along with 'transcendental idealism' and 'absolute idealism'. Idealism is opposed to the naturalistic belief that mind is itself to be exhaustively understood as a product of natural processes. The most common modern manifestation of idealism is the view called 'linguistic idealism': that we 'create' the world we inhabit by employing mind-dependent linguistic and social categories. The difficulty is to give this view a literal form that respects the obvious fact that we do not create worlds, but find ourselves in one.

As a philosophical doctrine, idealism holds that reality is somehow mind-correlative or mind-coordinated - that the real objects comprising the 'external world' are not independent of cognizing minds, but exist only as in some way correlative to mental operations. Reality as we understand it reflects the workings of mind; and the doctrine construes this as meaning that the inquiring mind itself makes a formative contribution not merely to our understanding of the nature of the real, but even to the resulting character that we attribute to it.

For a long time there was debate within the idealist camp over whether 'the mind' at issue in such idealistic formulations was a mind emplaced outside of or behind nature (absolute idealism), a nature-pervasive power of rationality of some sort (cosmic idealism), the collective impersonal social mind of people-in-general (social idealism), or simply the distributive collection of individual minds (personal idealism). Over the years, the less grandiose versions of the theory came increasingly to the fore, and in recent times virtually all idealists have construed 'the mind' at issue in their theory as a matter of separate individual minds equipped with socially engendered resources.

It is quite unjust to charge idealism with an antipathy to reality, for it is not the existence but the nature of reality that the idealist puts in question. It is not reality but materialism that classical idealism rejects. Agreed, everything is what it is and not another thing; the difficulty is to know when we have one thing and not two. A rule for telling this is a principle of 'individuation', or a criterion of identity for things of the kind in question. In logic, identity may be introduced as a primitive relational expression, or defined via the identity of indiscernibles. Berkeley's 'immaterialism' does not so much reject the existence of material objects as their unperceivedness.
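The definition via the identity of indiscernibles can be given the standard second-order rendering (one common formulation, not spelled out in the text itself):

```latex
x = y \;\leftrightarrow\; \forall F\,(Fx \leftrightarrow Fy)
```

Read left to right, this is the indiscernibility of identicals (Leibniz's law proper), which is uncontroversial; read right to left, it is the identity of indiscernibles, the direction that does the defining work and that remains philosophically contested.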

There are certainly versions of idealism short of the spiritualistic position of an ontological idealism which holds that 'there is nothing but thinking beings'. Idealism need not affirm that mind makes or constitutes matter: it is quite enough to maintain (for example) that all of the characterizing properties of physical existents resemble phenomenal sensory properties in representing dispositions to affect minds in a certain sort of way, so that these properties have no standing at all without reference to minds.

Weaker still is an explanatory idealism, which merely holds that all adequate explanations of the real invariably require some recourse to the operations of mind. Historically, positions of these general idealistic types have been espoused by several thinkers. Berkeley, for example, maintained that 'to be [real] is to be perceived'. This does not seem particularly plausible because of its inherent commitment to omniscience; it seems more sensible to claim that to be is to be perceivable. For Berkeley, of course, this was a distinction without a difference: if something is perceivable at all, then God perceives it. But if we forgo philosophical alliance to God, the issue looks different, and now comes to pivot on the question of what is perceivable by perceivers who are physically realizable in 'the real world' - so that physical existence could be seen, not implausibly, as tantamount to observability-in-principle.

The three positions to the effect that real things just exactly are things as philosophy, or as science, or as 'common sense' takes them to be - positions generally designated as scholastic, scientific and naïve realism, respectively - are in fact versions of epistemic idealism, precisely because they see reals as inherently knowable and do not contemplate mind-transcendence for the real. Thus, for example, it is a dictum of naïve ('common-sense') realism that external things exist exactly as we know them. This sounds realistic, but its tenor is idealistic, since it identifies the real with what we can know of it.

There is also another sort of idealism at work in philosophical discussion: an axiological idealism, which maintains both that value plays an objectively causal and constitutive role in nature and that value is not wholly reducible to something that lies in the minds of its beholders. Its exponents join the Socrates of Plato's 'Phaedo' in seeing value as objective and as productively operative in the world.

Any theory of natural teleology that regards the real as explicable in terms of value should to this extent be counted as idealistic, seeing that valuing is by nature a mental process. To be sure, the goods of a creature or species of creatures - their well-being or survival, for example - need not actually be mind-represented. But goods nonetheless count as such precisely because, if the creatures at issue could think about it, they would adopt them as purposes. It is this circumstance that renders any sort of teleological explanation at least conceptually idealistic in nature. Doctrines of this sort were the stock-in-trade of Leibniz, with his insistence that the real world must be the best of possible worlds, and this line of thought has recently surfaced once more in the controversial 'anthropic principle' espoused by some theoretical physicists.

Then too, it is possible to contemplate a position along the lines envisaged by Fichte's 'Wissenschaftslehre', which sees the ideal as providing the determining factor for the real. On such a view, the real is characterized not by the sciences as we actually have them, but by the ideal science that is the 'telos' of our scientific efforts. On this approach, which Wilhelm Wundt characterized as 'ideal-realism', the knowledge that achieves adequation to the real by correctly characterizing the true facts in scientific matters is not the knowledge afforded by present-day science as we have it, but only that of an ideal or perfected science. On such an approach - which has seen a lively revival in recent philosophy - a tenable version of 'scientific realism' requires the step to idealization, and realism becomes predicated on assuming a fundamentally idealistic point of view.

Immanuel Kant's 'Refutation of Idealism' argues that our conception of ourselves as mind-endowed beings presupposes material objects, because we view our mind-endowed selves as existing in an objective temporal order, and such an order requires the existence of periodic physical processes (clocks, pendulums, planetary regularities) for its establishment. At most, however, this argumentation succeeds in showing that such physical processes have to be assumed by minds; the issue of their actual mind-independent existence remains unaddressed. (Kant's own position combined transcendental idealism with just such an empirical realism.)

It is sometimes said that idealism is predicated on a confusion of objects with our knowledge of them, and conflates the real with our thought about it. But this charge misses the point. The only reality with which we inquirers can have any cognitive commerce is reality as we conceive it to be; our only cognitive access to reality is through the mediation of mind-devised models of it.

Perhaps the most common objection to idealism turns on the supposed mind-independence of the real: 'Things in nature', so runs the objection, 'would remain substantially unchanged if there were no minds.' This is perfectly plausible in one sense, namely the causal one - which is why causal idealism has its problems. But it is certainly not true conceptually. The objection's exponent has to face the question of specifying just exactly what it is that would remain the same. 'Surely roses would smell just as sweet in a mind-denuded world!' Well . . . yes and no. Agreed: the absence of minds would not change roses. But the rose's fragrance and sweetness - and even the size of roses - are features whose determination hinges on such mental operations as smelling, scanning, measuring, and the like. Mind-requiring processes are needed for something in the world to be discriminated as a rose and determined to be the bearer of certain features.

Identification, classification and property attribution are all by their very nature mental operations. To be sure, the role of mind here is hypothetical ('If certain interactions with duly constituted observers took place, then certain outcomes would be noted'), but the fact remains that nothing could be discriminated or characterized as a rose in a context where the prospect of performing suitable mental operations (measuring, smelling, etc.) is not presupposed.

The preceding versions of idealism at once suggest a variety of corresponding rivals or contrasts to idealism. On the ontological side there is materialism, which takes two major forms: (1) a causal materialism, which asserts that mind arises from the causal operations of matter, and (2) a supervenience materialism, which sees mind as an epiphenomenon of the machinations of matter (albeit not a causal product thereof - presumably because it is somewhere between difficult and impossible to explain how physical processes could engender psychical results).

On the epistemic side, the idealism-opposed positions include: (1) a factual realism, which maintains that there are linguistically inaccessible facts - that the complexity and diversity of fact outruns the limits of the mind's actual and possible linguistic (or, generally, symbolic) resources; (2) a cognitive realism, which maintains that there are unknowable truths - that the domain of truth runs beyond the limits of the mind's cognitive access; (3) a substantival realism, which maintains that there exist entities in the world which cannot possibly be known or identified - incognizables lying in principle beyond our cognitive reach; and (4) a conceptual realism, which holds that the real can be characterized and explained by us without the use of any specifically mind-invoking conceptions, such as dispositions to affect minds in particular ways. This variety of different versions of idealism and realism means that some versions of the one will be unproblematically combinable with some versions of the other. In particular, a conceptual idealism which maintains that we standardly understand something's being real in somehow mind-invoking terms is combinable with a materialism which holds that the human mind and its operations are rooted (be it causally or superveniently) in the machinations of physical processes.

Perhaps the strongest argument favouring idealism is that any characterization of the real is a mind-construction: our only access to information about what the real is comes by way of the mediation of mind. What seems right about idealism is inherent in the fact that in investigating the real we are clearly constrained to use our own concepts to address our own issues; we can learn about the real only in our own terms of reference. What seems right about realism, on the other hand, is that the answers to the questions we put to reality are provided by reality itself: whatever the answers may be, they are substantially what they are because reality is the way it is. If reality is to be learnt about by minds, it has to be approachable by minds. Accordingly, while idealism has a long and varied past and a lively present, it may well have a promising future as well.

Consider next our acquaintance with 'experience'. Experience is easily thought of as a stream of private events, known only to their possessor, and bearing at best problematic relationships to any other events, such as happenings in an external world or the similar streams of other possessors. The stream makes up the conscious life of the possessor. With this picture there is a complete separation of mind and the world, and in spite of great philosophical efforts the gap, once opened, proves impossible to bridge: both 'idealism' and 'scepticism' are common outcomes. The aim of much recent philosophy, therefore, has been to articulate a less problematic conception of experience, making it objectively accessible, so that the facts about how a subject's experience relates to the world are, in principle, as knowable as the facts about how the same subject digests food. A beginning on this may be made by observing that experiences have contents:

it is the world itself that is represented to us as being one way or another, and how we take the world to be is publicly manifested by our words and behaviour. My own relationship with my experience itself involves memory, recognition and description, all of which arise from skills that are equally exercised in interpersonal transactions. Recently emphasis has also been placed on the way in which experience should be regarded as a 'construct', or the upshot of the workings of many cognitive sub-systems (although this idea was familiar to Kant, who thought of experience as itself synthesized by various active operations of the mind). The extent to which these moves undermine the distinction between 'what it is like from the inside' and how things are objectively is fiercely debated. It is also widely recognized that such developments tend to blur the line between experience and theory, making it harder to formulate traditional doctrines such as 'empiricism'.

The considerations now on the table bring us to Cartesianism, the name accorded to the philosophical movement inaugurated by René Descartes (after 'Cartesius', the Latin version of his name). The main features of Cartesianism are: (1) the use of methodical doubt as a tool for testing beliefs and reaching certainty; (2) a metaphysical system which starts from the subject's indubitable awareness of his own existence; (3) a theory of 'clear and distinct ideas' based on the innate concepts and propositions implanted in the soul by God (these include the ideas of mathematics, which Descartes takes to be the fundamental building blocks of science); and (4) the theory now known as 'dualism' - that there are two fundamentally incompatible kinds of substance in the universe, mind (or thinking substance) and matter (or extended substance). A corollary of this last theory is that human beings are radically heterogeneous beings, composed of an unextended, immaterial consciousness united to a piece of purely physical machinery - the body. Another key element in Cartesian dualism is the claim that the mind has perfect and transparent awareness of its own nature or essence.

A distinctive feature of twentieth-century philosophy has been a series of sustained challenges to 'dualism', which had been taken for granted in earlier periods. The split between 'mind' and 'body' that dominated the modern period came under attack from a variety of twentieth-century thinkers: Heidegger, Merleau-Ponty, Wittgenstein and Ryle all rejected the Cartesian model, though they did so in quite different ways. Other cherished dualisms have also come under attack - for example, the analytic-synthetic distinction, the dichotomy between theory and practice, and the fact-value distinction. However, unlike the rejection of Cartesianism, the status of these dualisms remains under debate, with substantial support on either side.

Cartesian dualism is the view that mind and body are two separate and distinct substances: the self happens to be associated with a particular body, but is in itself capable of independent existence.

Descartes claimed that we could lay out the contours of physical reality in three-dimensional co-ordinates and derive a scientific understanding of them with the aid of precise deduction. Following the publication of Isaac Newton's 'Principia Mathematica' in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. The dream that we could know and master the entire physical world through the extension and refinement of mathematical theory became the central feature of scientific knowledge.

The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, without any concern about its spiritual dimensions or ontological foundations. Meanwhile, attempts to rationalize, reconcile, or eliminate Descartes' stark division between mind and matter became a central feature of Western intellectual life.

Philosophers like John Locke, Thomas Hobbes, and David Hume tried to articulate some basis for linking the mathematically describable motions of matter with linguistic representations of external reality in the subjective space of mind. Jean-Jacques Rousseau reified nature as the ground of human consciousness in a state of innocence and proclaimed 'Liberty, Equality, Fraternity' as the guiding principles of this consciousness. Rousseau also formulated the idea of the 'general will' of the people to achieve these goals, and declared that those who do not conform to this will are social deviants.

The Enlightenment idea of 'deism', which imaged the universe as a clockwork and God as the clockmaker, provided grounds for believing in a divine agency at the moment of creation. It also implied, however, that all the creative forces of the universe were exhausted at origins, that the physical substrates of mind were subject to the same natural laws as matter, and that the only means of mediating the gap between mind and matter was pure reason. Traditional Judeo-Christian theism, which had formerly been structured on the twin foundations of reason and revelation, responded to the challenge of deism by debasing rationality as a test of faith and embracing the idea that the truths of spiritual reality can be known only through divine revelation.
This engendered a conflict between reason and revelation that persists to this day, and it laid the foundation for the fierce competition between the mega-narratives of science and religion as frame tales for mediating the relation between mind and matter and the manner in which they should ultimately define the special character of each.

The nineteenth-century Romantics in Germany, England and the United States revived Rousseau's attempt to ground human consciousness by reifying nature in a different form. Goethe and Friedrich Schelling proposed a natural philosophy premised on ontological monism (the idea that all phenomena, including mind, are grounded in an inseparable spiritual Oneness) and argued for the reconciliation of God, man, and nature with an appeal to sentiment, mystical awareness, and quasi-scientific observation. In Goethe's attempt to wed mind and matter, nature became a mindful agency that 'loves illusion', shrouds man in mist, presses him to her heart, and punishes those who fail to see the light. Schelling, in his version of cosmic unity, argued that scientific facts were at best partial truths, and that the mindful creative spirit that unites mind and matter is progressively moving toward self-realization and 'undivided wholeness'.

The British version of Romanticism, articulated by figures like William Wordsworth and Samuel Taylor Coleridge, placed more emphasis on the primacy of the imagination and the importance of rebellion and heroic vision as the grounds for freedom. As Wordsworth put it, communion with the 'incommunicable powers' of the 'immortal sea' empowers the mind to release itself from all the material constraints of the laws of nature. The founders of American transcendentalism, Ralph Waldo Emerson and Henry David Thoreau, articulated a version of Romanticism commensurate with the ideals of American democracy.

The Americans envisioned a unified spiritual reality that manifested itself as a personal ethos sanctioning radical individualism, and bred an aversion to the emergent materialism of the Jacksonian era. They were also more inclined than their European counterparts, as the examples of Thoreau and Whitman attest, to embrace scientific descriptions of nature. However, the Americans also dissolved the distinction between mind and matter with an appeal to ontological monism, and alleged that mind could free itself from the constraints of matter through some form of mystical awareness.

Since scientists during the nineteenth century were engrossed with uncovering the workings of external reality, and knew virtually nothing about the physical substrates of human consciousness, the business of examining the dynamic functions and structural foundations of mind became the province of social scientists and humanists. Adolphe Quételet proposed a 'social physics' that could serve as the basis for a new discipline called 'sociology', and his contemporary Auguste Comte concluded that a true scientific understanding of social reality was inevitable. Mind, in the view of these figures, was a separate and distinct mechanism subject to the lawful workings of a mechanical social reality.

More formal European philosophers, such as Immanuel Kant, sought to reconcile representations of external reality in mind with the motions of matter based on the dictates of pure reason. This impulse was also apparent in the utilitarian ethics of Jeremy Bentham and John Stuart Mill, in the historical materialism of Karl Marx and Friedrich Engels, and in the pragmatism of Charles Sanders Peirce, William James and John Dewey. These thinkers were painfully aware, however, of the inability of reason to posit a self-consistent basis for bridging the gap between mind and matter, and each was obliged to conclude that the realm of the mental exists only in the subjective reality of the individual.

What follows frames this proposed new understanding of the relationship between mind and world within the larger context of the history of mathematical physics, the origins and extensions of the classical view of the foundations of scientific knowledge, and the various ways in which physicists have attempted to meet previous challenges to the efficacy of classical epistemology.

The fatal flaw of pure reason is, of course, the absence of emotion, and purely rational explanations of the division between subjective reality and external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the 'death of God' philosopher Friedrich Nietzsche (1844-1900). After declaring that God and 'divine will' did not exist, Nietzsche reified the 'existence' of consciousness in the domain of subjectivity as the ground for individual 'will', and summarily dismissed all previous philosophical attempts to articulate the 'will to truth'. That 'will to truth', he claimed, even as validated by the doing of science, disguises the fact that all alleged truths are arbitrarily created in the subjective reality of the individual and are expressions of the individual 'will'.

In Nietzsche's view, the separation between mind and matter is more absolute and total than had previously been imagined. Convinced that there is no real necessity for any correspondence between linguistic constructions of reality in human subjectivity and external reality, he deduced that we are all locked in 'a prison house of language'. The prison, as he conceived it, was also a 'space' where the philosopher can examine the 'innermost desires of his nature' and articulate a new message of individual existence founded on 'will'.

Those who fail to enact their existence in this space, Nietzsche says, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialist ideals, and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said, confines itself to natural phenomena and favours reductionistic examination of those phenomena at the expense of mind. It also seeks to reduce the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.

Nietzsche's emotionally charged defence of intellectual freedom, and his radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe, proved terribly influential on twentieth-century thought. Furthermore, Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, the attempt by Edmund Husserl (1859-1938), a German mathematician and a principal founder of phenomenology, to resolve this crisis resulted in a view of the character of consciousness that closely resembled that of Nietzsche.

The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. The work of Husserl, Heidegger, and Sartre became foundational to that of the principal architects of philosophical postmodernism and deconstruction: Jacques Lacan, Roland Barthes, Michel Foucault and Jacques Derrida. Tracing the direct linkage between the nineteenth-century crisis over the epistemological foundations of mathematical physics and the origins of philosophical postmodernism shows how the Cartesian two-world dilemma was perpetuated in an even more oppressive form. It also allows us a better understanding of the origins of our current cultural ambience and of the ways in which that conflict might be resolved.

The mechanistic paradigm of the late nineteenth century was the one Einstein came to know when he studied physics. Most physicists believed that it represented an eternal truth, but Einstein was open to fresh ideas. Inspired by Mach's critical mind, he demolished the Newtonian ideas of space and time and replaced them with new, 'relativistic' notions.

Albert Einstein unveiled two theories: the special theory of relativity (1905) and the general theory of relativity (1915). The special theory gives a unified account of the laws of mechanics and of electromagnetism, including optics. The purely relative nature of uniform motion had in part been recognized in mechanics, although Newton had considered time to be absolute and had postulated absolute space.

If the universe is a seamlessly interactive system that evolves to higher levels of complexity, and if the lawful regularities of this universe are emergent properties of this system, we can assume that the cosmos is a single significant whole that evinces a 'principle of progressive order', bringing an orderly disposition to its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in self-reflective fashion and is the ground for all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain, and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is reasonable to conclude, in philosophical terms at least, that the universe is conscious.

But since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representation or description. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatsoever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute it. On the other hand, it is no longer possible to argue that the profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined or invalidated by appeals to scientific knowledge.

In spite of the notorious difficulty of reading Kantian ethics, the distinction at its centre is clear enough. A hypothetical imperative embeds a command that is in place only given some antecedent desire or project: 'If you want to look wise, stay quiet'. The injunction to stay quiet applies only to those with the antecedent desire to look wise; one who has no such desire escapes it. A categorical imperative cannot be so avoided: it is a requirement that binds anybody, regardless of their inclination. It could be represented as, for example, 'tell the truth (regardless of whether you want to or not)'. The distinction is not always signalled by the presence or absence of the conditional or hypothetical form: 'If you crave drink, don't become a bartender' may be regarded as an absolute injunction applying to anyone, although only aroused in the case of those with the stated desire.

In Grundlegung zur Metaphysik der Sitten (1785), Kant discussed five forms of the categorical imperative: (1) the formula of universal law: 'act only on that maxim through which you can at the same time will that it should become a universal law'; (2) the formula of the law of nature: 'act as if the maxim of your action were to become through your will a universal law of nature'; (3) the formula of the end-in-itself: 'act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end'; (4) the formula of autonomy, or considering 'the will of every rational being as a will which makes universal law'; (5) the formula of the Kingdom of Ends, which provides a model for the systematic union of different rational beings under common laws.

Even so, a proposition that is not conditional, 'p', may fittingly be called categorical. Modern opinion is wary of this distinction, since what appears categorical may vary with notation. Apparently categorical propositions may also turn out to be disguised conditionals: 'X is intelligent' (categorical?) = 'if X is given a range of tasks, she performs them better than many people' (conditional?). The problem, nonetheless, is not merely one of classification, since deep metaphysical questions arise when facts that seem to be categorical, and therefore solid, come to seem by contrast conditional, or purely hypothetical or potential.
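The 'disguised conditional' reading can be put schematically. As a rough rendering (the predicate letters here are illustrative, not drawn from any standard source), the apparently categorical attribution unpacks into a quantified conditional over tasks:

```latex
% Apparently categorical: "X is intelligent"
\mathrm{Intelligent}(X)
% Disguised conditional reading: for any task t that X is given,
% X performs t better than many people do
\forall t \,\bigl(\mathrm{GivenTask}(X, t) \rightarrow \mathrm{PerformsWell}(X, t)\bigr)
```

On this reading, the 'solid' categorical fact dissolves into a pattern of what would happen under various conditions, which is precisely the metaphysical worry the passage raises.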

The notion of a field is central to physical theory. A field is defined by the distribution of a physical quantity, such as temperature, mass density, or potential energy, at different points in space. In the particularly important example of force fields, such as gravitational, electrical, and magnetic fields, the field value at a point is the force which a test particle would experience if it were located at that point. The philosophical problem is whether a force field is to be thought of as purely potential, so that the presence of a field merely describes the propensity of masses to move relative to each other, or whether it should be thought of in terms of physically real modifications of a medium, whose properties result in such powers. That is, are force fields purely potential, fully characterized by dispositional statements or conditionals, or are they categorical or actual? The former option seems to require ungrounded dispositions, or regions of space that differ only in what happens if an object is placed there; the law-like shape of these dispositions, apparent for example in the curved lines of force of the magnetic field, may then seem quite inexplicable. To atomists, such as Newton, it would represent a return to Aristotelian entelechies, or quasi-psychological affinities between things, which are responsible for their motions. The latter option requires an understanding of how forces of attraction and repulsion can be 'grounded' in the properties of the medium.
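The contrast drawn above can be sketched in conventional notation (a minimal illustration using standard physics symbols, not taken from the passage itself): a field assigns a quantity to each point of space, and for a force field the assigned value is read dispositionally, as the force a test particle would experience there:

```latex
% A scalar field: e.g., temperature assigned to every point of space
T : \mathbb{R}^3 \to \mathbb{R}
% A force field: the electric field \mathbf{E} assigns a vector to each
% point, and the force on a test charge q located at \mathbf{x} is
\mathbf{F}(\mathbf{x}) = q\,\mathbf{E}(\mathbf{x}),
  \qquad \mathbf{E} : \mathbb{R}^3 \to \mathbb{R}^3
```

The philosophical question is then whether the value of the field at a point records a categorical property of that point (or of a medium there), or merely abbreviates the conditional 'if a test charge were placed there, it would experience such-and-such a force'.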

The basic idea of a field is arguably present in Leibniz, who was certainly hostile to Newtonian atomism, although his equal hostility to 'action at a distance' muddies the water. The notion is usually credited to the Jesuit mathematician and scientist Joseph Boscovich (1711-87) and to Immanuel Kant (1724-1804), both of whom influenced the scientist Michael Faraday, with whose work the physical notion became established. In his paper 'On the Physical Character of the Lines of Magnetic Force' (1852), Faraday suggested several criteria for assessing the physical reality of lines of force, such as whether they are affected by an intervening material medium, and whether the motion depends on the nature of what is placed at the receiving end. As far as electromagnetic fields go, Faraday himself inclined to the view that the mathematical similarity between heat flow, currents, and electromagnetic lines of force was evidence for the physical reality of the intervening medium.

Consider next the pragmatic theory of truth, especially associated with the American psychologist and philosopher William James (1842-1910): the view that the truth of a statement can be defined in terms of the 'utility' of accepting it. Put so baldly, the view is open to objection, since there are things that are false which it may be useful to accept, and things that are true which it may be damaging to accept. Nevertheless, there are deep connections between the idea that a representation system is accurate and the likely success of the projects of its possessor. The evolution of a system of representation, either perceptual or linguistic, seems bound to connect success with truth, or at least with utility in a modest sense. The Wittgensteinian doctrine that meaning is use bears on the nature of belief and its relations with human attitude and emotion, and on the idea that belief in a truth on the one hand guides action on the other. One way of cementing the connection is found in the idea that natural selection must have adapted us as cognitive creatures because beliefs have effects: they work. Elements of pragmatism can be found in Kant, and pragmatism has continued to play an influential role in the theory of meaning and of truth.

James (1842-1910), who with characteristic generosity exaggerated his debt to Charles S. Peirce (1839-1914), charged that the method of doubt encouraged people to pretend to doubt what they did not doubt in their hearts, and criticized its individualist insistence that the ultimate test of certainty is to be found in the individual's personal consciousness.

From his earliest writings, James understood cognitive processes in teleological terms: thought, he held, assists us in the satisfaction of our interests. His 'will to believe' doctrine, the view that we are sometimes justified in believing beyond the evidence, relies upon the notion that a belief's benefits are relevant to its justification. His pragmatic method of analysing philosophical problems, which requires that we find the meaning of terms by examining their application to objects in experimental situations, similarly reflects the teleological approach in its attention to consequences.

Such an approach, however, sets James' theory of meaning apart from verificationism, which is dismissive of metaphysics. Unlike the verificationists, who take cognitive meaning to be a matter only of consequences in sensory experience, James took pragmatic meaning to include emotional and practical responses as well. Moreover, his pragmatic standard of value is a way of assessing metaphysical claims, not a way of dismissing them as meaningless. It should also be noted that James did not hold that even his broad set of consequences was exhaustive of a term's meaning. 'Theism', for example, he took to have antecedent definitional meaning, in addition to its important pragmatic meaning.

James' theory of truth reflects his teleological conception of cognition: a true belief is one which is compatible with our existing system of beliefs and which leads us to satisfactory interaction with the world.

Peirce's famous pragmatist principle, by contrast, is a rule of logic employed in clarifying our concepts and ideas. Consider the claim that the liquid in a flask is an acid. If we believe this, we expect that if we act in certain ways - dipping litmus paper into the liquid, say - the paper would turn red: we expect an action of ours to have certain experimental results. The pragmatic principle holds that listing the conditional expectations of this kind which we associate with applications of a conceptual representation provides a complete and orderly clarification of the concept. This is relevant to the logic of abduction: clarification by means of the pragmatic principle provides all the information about the content of a hypothesis that is relevant to deciding whether it is worth testing.

Most important is the application of the pragmatic principle to Peirce's account of reality. When we take something to be real, we think it is 'fated to be agreed upon by all who investigate' the matter to which it relates; in other words, if I believe that it is really the case that 'p', then I expect that anyone who inquired deeply enough into the question of whether 'p' would arrive at the belief that 'p'. It is not part of the theory that the experimental consequences of our actions should be specified by a warranted empiricist vocabulary - Peirce insisted that perceptual judgments are already laden with theory. Nor is it his view that the conditionals which clarify a concept are all analytic. In addition, in later writings he argued that the pragmatic principle could only be made plausible to someone who accepted its metaphysical realism: it requires that 'would-bes' are objective and, of course, real.

If realism itself can be given a fairly quick characterization, it is more difficult to chart the various forms of opposition to it. Opponents deny that the entities posited by the relevant discourse exist, or at least that they exist independently of the mind. The standard example is 'idealism': the view that reality is somehow mind-correlative or mind-co-ordinated; that the real objects comprising the 'external world' are not independent of cognizing minds, but exist only as in some way correlative to mental operations. The doctrine of 'idealism' centres on the conceptual point that reality as we understand it is meaningful and reflects the workings of mindful purposes, and it construes this as meaning that the inquiring mind itself makes a formative contribution not merely to our understanding of the nature of the 'real' but even to the resulting character we attribute to it.

The term 'real' is most straightforwardly used when qualifying another linguistic form: a real 'x' may be contrasted with a fake 'x', a failed 'x', a near 'x', and so on. To treat something as real, without qualification, is to suppose it to be part of the actual world. To reify something is to suppose that we are committed by some doctrine or theory to treating it as a thing. The central error in thinking of reality as the totality of existence is to think of the 'unreal' as a separate domain of things, perhaps unfairly denied the benefits of existence.

The supposed non-existence of all things is the product of a logical confusion: treating the term 'nothing' as itself a referring expression instead of a 'quantifier'. Stated informally, a quantifier is an expression that reports the quantity of times that a predicate is satisfied in some class of things, i.e., in a domain. The confusion leads the unsuspecting to think that a sentence such as 'Nothing is all around us' talks of a special kind of thing that is all around us, when in fact it merely denies that the predicate 'is all around us' has application. The feelings that led some philosophers and theologians, notably Heidegger, to talk of the experience of Nothingness are not properly the experience of anything, but rather the failure of a hope or expectation that there would be something of some kind at some point. This may arise in quite everyday cases, as when one finds that the article of furniture one expected to see as usual in the corner has disappeared. The difference between 'existentialist' and 'analytic philosophy' on this point is sometimes put by saying that whereas the former is afraid of nothing, the latter thinks that there is nothing to be afraid of.
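The quantifier point can be made explicit. As a rough formalization (the predicate letter is illustrative), 'Nothing is all around us' denies that the predicate applies to anything; it does not name a mysterious object:

```latex
% Correct quantifier reading: it is not the case that something
% is all around us
\neg \exists x \, \mathrm{AllAround}(x)
% Confused reading: treating 'nothing' as a referring term n
\mathrm{AllAround}(n) \quad \text{(mistaken)}
```

The second form invites the question 'what is this thing n?', which is exactly the question the quantifier reading shows to be misconceived.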

A rather different set of concerns arises when actions are specified in terms of doing nothing: saying nothing may be an admission of guilt, and doing nothing in some circumstances may be tantamount to murder. Still other problems arise over conceiving of empty space and time.

The standard opposition is between those who affirm and those who deny the real existence of some kind of thing, or some kind of fact or state of affairs. Almost any area of discourse may be the focus of this dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. One influential suggestion, associated with the British philosopher of logic and language Michael Dummett (1925), is borrowed from the 'intuitionistic' critique of classical mathematics: it proposes that the unrestricted use of the 'principle of bivalence' is the trademark of 'realism'. However, this has to overcome counterexamples both ways: although Aquinas was a moral 'realist', he held that moral reality was not sufficiently structured to make every moral claim true or false, while Kant believed that he could use the law of bivalence happily in mathematics, precisely because there we deal only with our own constructions.
Realism can itself be subdivided: Kant, for example, combines empirical realism (within the phenomenal world the realist says the right things: surrounding objects really exist independently of us and our mental states) with transcendental idealism (the phenomenal world as a whole reflects the structures imposed on it by the activity of our minds as they render it intelligible to us). In modern philosophy the orthodox opposition to realism has come from philosophers such as Goodman, who are impressed by the extent to which we perceive the world through conceptual and linguistic lenses of our own making.

The modern treatment of existence in the theory of 'quantification' is sometimes put by saying that existence is not a predicate. The idea is that the existential quantifier is an operator on a predicate, indicating that the property it expresses has instances. Existence is therefore treated as a second-order property, or a property of properties. It is fitting to say that in this it is like number, for when we say that there are three things of a kind, we do not describe the things (as we would if we said there are red things of the kind), but instead attribute a property to the kind itself. The parallel with number is exploited by the German mathematician and philosopher of mathematics Gottlob Frege in the dictum that affirmation of existence is merely denial of the number nought. A problem, nevertheless, is created by sentences like 'This exists', where some particular thing is indicated: such a sentence seems to express a contingent truth (for this item might not have existed), yet no other predicate is involved. 'This exists' is therefore unlike 'Tame tigers exist', where a property is said to have an instance, for the word 'this' does not locate a property, but only an individual.
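Frege's treatment can be sketched in modern notation. The predicate letter $T$ ('is a tame tiger') and the cardinality notation are illustrative choices, not Frege's own symbolism:

```latex
% 'Tame tigers exist' attributes instantiation to the property T,
% not a further property to any individual tiger:
\exists x\, T(x)
% Frege's dictum: affirmation of existence is denial of the number
% nought, i.e., the number of T's is not zero:
\#\{x : T(x)\} \neq 0
```

The problem sentence 'This exists' resists this analysis: 'this' contributes an individual, not a property $T$ whose instantiation could be affirmed.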

Possible worlds seem able to differ from each other purely in the presence or absence of individuals, and not merely in the distribution of exemplification of properties.

The philosophical topic of Being is an elusive one. It is not apparent that there can be such a subject as Being by itself, and there is little about it that can be said within the philosopher's study. Nevertheless, the concept had a central place in philosophy from Parmenides to Heidegger. The essential question, 'Why is there something and not nothing?', prompts logical reflection on what it is for a universal to have an instance, and there is a long history of attempts to explain contingent existence by reference to a necessary ground.

In the tradition since Plato, this ground becomes a self-sufficient, perfect, unchanging, and eternal something, identified with the Good or God, but whose relation with the everyday world remains shrouded from view. The celebrated ontological argument for the existence of God was first propounded by Anselm in his Proslogion.
The argument, a vigorous exercise of dialectical skill, defines God as 'something than which nothing greater can be conceived'. God then exists in the understanding, since we understand this concept. However, if He existed only in the understanding, something greater could be conceived, for a being that exists in reality is greater than one that exists only in the understanding. But then we could conceive of something greater than that than which nothing greater can be conceived, which is contradictory. Therefore, God cannot exist only in the understanding, but exists in reality.

The cosmological argument is an influential argument (or family of arguments) for the existence of God. Its premises are that all natural things are dependent for their existence on something else, and that the totality of dependent things cannot itself depend upon anything dependent for its existence. There must therefore be something whose existence is necessary rather than contingent, something that does not merely depend on anything else, and this independent, necessarily existent being is God. Like the argument to design, the cosmological argument was attacked by the Scottish philosopher and historian David Hume (1711-76) and by Immanuel Kant.

Its main problem, nonetheless, is that it requires us to make sense of the notion of necessary existence. For if the answer to the question of why anything exists is that some other thing of a similar kind exists, the question merely arises again. Consequently, the 'God' or 'gods' that end the regress must exist necessarily: they must not be entities about which the same kinds of question can be raised. The other problem with the argument is that of attributing concern and care to the deity, that is, of connecting the necessarily existent being it derives with human values and aspirations.

The ontological argument has been treated by modern theologians such as Barth, following Hegel, not so much as a proof with which to confront the unbeliever, but as an explanation of the deep meaning of religious belief. Collingwood regards the argument as proving not that because our idea of God is that of id quo maius cogitari nequit, therefore God exists, but that because this is our idea of God, we stand committed to belief in its existence: its existence is a metaphysical point, or absolute presupposition, of certain forms of thought.

In the 20th century, modal versions of the ontological argument have been propounded by the American philosophers Charles Hartshorne, Norman Malcolm, and Alvin Plantinga. One version defines something as maximally great if it exists and is perfect in every 'possible world'. To allow that it is possible that such a being exists is to allow that there is a possible world in which it exists. However, if it exists in one world, it exists in all (for the fact that such a being exists in one world entails that in every world it exists and is perfect), so it exists necessarily. The correct response to this argument is to disallow the apparently reasonable concession that it is possible that such a being exists. This concession is much more dangerous than it looks, since in the modal logic involved, from 'possibly necessarily p' we can derive 'necessarily p'. A symmetrical proof starting from the premise that it is possible that such a being does not exist would derive that it is impossible that it exists.
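The modal step can be sketched in the modal system S5, with $p$ standing for 'a maximally great being exists'. This is a schematic reconstruction of the family of arguments, not any one author's exact formulation:

```latex
% 1. The concession: possibly, necessarily p
\Diamond \Box p
% 2. A theorem of S5: whatever is possibly necessary is necessary
\Diamond \Box p \rightarrow \Box p
% 3. Hence, by modus ponens: necessarily p
\Box p
```

The symmetry is equally schematic: granting instead $\Diamond \neg p$ (possibly no such being exists), together with the definitional principle $p \rightarrow \Box p$, yields $\neg p$. Everything therefore turns on which possibility premise one concedes.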

The doctrine of acts and omissions holds that it makes an ethical difference whether an agent actively intervenes to bring about a result, or omits to act in circumstances in which it is foreseen that the omission will have the same result. Thus, suppose that I wish you dead. If I act to bring about your death, I am a murderer; however, if I happily discover you in danger of death and fail to act to save you, I am not acting, and therefore, according to the doctrine of acts and omissions, not a murderer. Critics reply that omissions can be as deliberate and immoral as acts: if I am responsible for your food and fail to feed you, my omission is surely a killing. 'Doing nothing' can be a way of doing something; in other words, absence of bodily movement can also constitute acting negligently or deliberately, and depending on the context may be a way of deceiving, betraying, or killing. Nonetheless, criminal law finds it convenient to distinguish discontinuing an intervention, which may be permissible, from bringing about a result, which may not be, if, for instance, the result is the death of a patient. The question is whether the difference, if there is one, between acting and omitting to act can be described or defined in a way that bears such general moral weight.

The principle of double effect attempts to define when an action that has both a good and a bad result is morally permissible. In one formulation such an action is permissible if (1) the action is not wrong in itself, (2) the bad consequence is not that which is intended, (3) the good is not itself a result of the bad consequence, and (4) the two consequences are commensurate. Thus, for instance, I might justifiably bomb an enemy factory, foreseeing but not intending the death of nearby civilians, whereas bombing the nearby civilians intentionally would be disallowed. The principle has its roots in Thomist moral philosophy. St. Thomas Aquinas (1225-74) held that it is meaningless to ask whether a human being is two things (soul and body) or one, just as it is meaningless to ask whether the wax and the shape given to it by the stamp are one thing or two: on this analogy the soul is the form of the body. Life after death is possible only because a form itself does not perish (perishing is a loss of form).

The form is therefore in some sense available to reanimate a new body. It is thus not I who survive bodily death; rather, I may be resurrected if the same body becomes reanimated by the same form. On Aquinas's account a person has no privileged self-understanding: we understand ourselves, as we do everything else, by way of sense experience and abstraction, and knowing the principle of our own lives is an achievement, not a given.

Difficulty at this point led the logical positivists to abandon the notion of an epistemological foundation altogether, and to flirt with the coherence theory of truth, and it is now widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable 'myth of the given'. The special way that we each have of knowing our own thoughts, intentions, and sensations has been a principal target of the many philosophical behaviourist and functionalist tendencies, which have found it important to deny that there is any such special way, arguing that I know of my own mind in much the same way that I know of yours, e.g., by seeing what I say when asked. Others, however, point out that the behaviour of reporting the results of introspection is a particular and legitimate kind of behaviour that deserves notice in any account of human psychology.

The philosophy of history is philosophical reflection upon the course of history, or upon historical thinking. The term was used in the 18th century, e.g., by Voltaire, to mean critical historical thinking as opposed to the mere collection and repetition of stories about the past. In Hegelian usage, however, it came to mean universal or world history.
The Enlightenment confidence in science, reason, and understanding gave history a progressive moral thread, and under the influence of the German philosopher and spreader of Romanticism Johann Gottfried Herder (1744-1803), and of Immanuel Kant, this idea was taken further, so that the philosophy of history came to be the detecting of a grand design: the unfolding of the evolution of human nature as witnessed in successive stages (the progress of rationality or of Spirit). This essentially speculative philosophy of history is given an extra Kantian twist in the German idealist Johann Fichte, in whom the association of temporal succession with logical implication introduces the idea that concepts themselves are the dynamic engines of historical change. The idea is readily intelligible once the world of nature and the world of thought become identified. The work of Herder, Kant, Fichte, and Schelling is synthesized by Hegel: history has a plot, namely the moral development of man, culminating in freedom within the state; this in turn is the development of thought, or a logical development in which the various necessary moments in the life of the concept are successively achieved and improved upon. Hegel's method is at its most successful when the object is the history of ideas, and the evolution of thinking may be made to march in step with logical oppositions and their resolution as encountered by various systems of thought.

Within revolutionary communism, in Karl Marx (1818-83) and the German social philosopher Friedrich Engels (1820-95), there emerges a rather different kind of story, based upon Hegel's progressive structure but placing the achievement of the goal of history in a future in which the political conditions for freedom come to exist, so that economic and political forces rather than 'reason' are in the engine room. Although such speculations upon history continued to be written, by the late 19th century large-scale speculation of this kind had given way to concern with the nature of historical understanding, and in particular with a comparison between the methods of natural science and those of the historian. For writers such as the German neo-Kantian Wilhelm Windelband and the German philosopher, literary critic, and historian Wilhelm Dilthey, it is important to show that the human sciences, such as history, are objective and legitimate, but nonetheless in some way different from the enquiry of the natural scientist. Since the subject-matter is the past thought and actions of human beings, what is needed is an ability to relive that past thought, knowing the deliberations of past agents as if they were the historian's own. The most influential British writer on this theme was the philosopher and historian R. G. Collingwood (1889-1943), whose The Idea of History (1946) contains an extensive defence of the Verstehen approach: understanding others is not gained by the tacit use of a 'theory' enabling us to infer what thoughts or intentions explain their actions, but by re-living their situation and thereby understanding what they experienced and thought.
The immediate question is then the form of historical explanation, and the claim that general laws have either no place or only a minor place in the human sciences: on this view we regain the actions of past agents not by subsuming them under laws, but by realising the situation in which they stood and thereby understanding what they experienced and thought.

According to the 'theory-theory', our everyday attribution of intention, belief, and meaning to other persons proceeds via the tacit use of a theory that enables us to construct interpretations and explanations of their doings. The view is commonly held along with functionalism, according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications depending on which feature of theories is being stressed. Theories may be thought of as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on. The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the non-existence of a medium in which this theory can be couched, since the child learns simultaneously the minds of others and the meanings of terms in its native language.

On the opposed view, our understanding of others is not gained by the tacit use of a 'theory' enabling us to infer what thoughts or intentions explain their actions, but by realising the situation 'in their moccasins', or from their point of view, and thereby understanding what they experienced and thought, and therefore expressed. Understanding others is achieved when we can ourselves deliberate as they did, and hear their words as if they were our own.

Aquinas's account of self-understanding has a counterpart in his theory of knowledge generally, where he holds the Aristotelian doctrine that knowing entails some similarity between the knower and what is known: a human being's corporeal nature therefore requires that knowledge start with sense perception. The same limitation does not apply to beings further up the hierarchy of creation, such as the angels.

In the domain of theology Aquinas deploys the distinction emphasized by Eriugena between what can be known of God by reason and what only by faith, and offers five arguments for the existence of God: (1) motion is only explicable if there exists an unmoved first mover; (2) the chain of efficient causes demands a first cause; (3) the contingent character of existing things in the world demands a different order of existence, or in other words something that has necessary existence; (4) the gradation of value in things in the world requires the existence of something that is most valuable, or perfect; and (5) the orderly character of events points to a final cause, or end towards which all things are directed, and the existence of this end demands a being that ordained it. All the arguments are physico-theological arguments: within the division between reason and faith, Aquinas lays them out as proofs of the existence of God accessible to reason.

He readily recognizes that there are doctrines, such as the Incarnation and the nature of the Trinity, known only through revelation, and whose acceptance is more a matter of moral will. God's essence is identified with his existence, as pure actuality. God is simple, containing no potentiality. We cannot, however, obtain knowledge of what God is (his quiddity), but must remain content with descriptions that apply to him partly by way of analogy: God reveals himself, but is not himself revealed.

An immediate problem in ethics was posed by the English philosopher Philippa Foot in her 'The Problem of Abortion and the Doctrine of the Double Effect' (1967). The driver of a runaway train or trolley comes to a fork in the track. One person is working on one branch and five on the other, and the trolley will kill anyone working on the branch it enters. Clearly, to most minds, the driver should steer for the less populated branch. But now suppose that, left to itself, the trolley will enter the branch with the five workers, and you as a bystander can intervene, altering the points so that it veers onto the other branch. Is it right, or obligatory, or even permissible for you to do this, thereby apparently involving yourself in responsibility for the death of the one person? After all, whom have you wronged if you leave it to go its own way? The situation is a standard example of those in which utilitarian reasoning seems to lead to one course of action, while a person's integrity or principles may oppose it.

Events that merely happen do not of themselves permit us to talk of rationality and intention; these are categories we may apply only if we conceive of the events as actions. We think of ourselves not only as passive, but as creatures that make things happen. Understanding this distinction gives rise to major problems concerning the nature of agency, the causation of bodily events by mental events, and the understanding of the 'will' and 'free will'. Other problems in the theory of action include drawing the distinction between an action and its consequences, and describing the structure involved when we do one thing by doing another. Even the placing and dating of actions can be a problem: someone shoots someone on one day and in one place, and the victim then dies on another day and in another place. Where and when did the murderous act take place?

In causation, moreover, it is not clear that only events are related. Kant cites the example of a cannonball at rest upon a cushion, causing the cushion to be the shape that it is; this suggests that states of affairs, or objects, or facts may also be causally related. The central problem is to understand the element of necessitation or determination of the future. In Hume's thought, cause and effect appear entirely 'loose and separate', each distinct and disjoined from the other. How then are we to conceive of their connection? The relationship seems not to be perceptible, for all that perception gives us (Hume argues) is knowledge of the patterns that events actually fall into, rather than any acquaintance with the connections determining the patterns. It is, however, clear that our conceptions of everyday objects are largely determined by their causal powers, and all our action is based on the belief that these causal powers are stable and reliable. Although scientific investigation can give us wider and deeper dependable patterns, it seems incapable of bringing us any nearer to the 'must' of causal necessitation. Particular examples of puzzling causation arise quite apart from the general problem of forming any conception of what it is: How are we to understand the causal interaction between mind and body? How can the present, which exists, owe its existence to a past that no longer exists? How is the stability of the causal order to be understood? Is backward causation possible? Is causation a concept needed in science, or dispensable?

The distinction between the 'in itself' and the 'for itself' descends from the Kantian epistemological distinction between the thing as it is in itself and the thing as it is for us, as an appearance. For Kant, the thing in itself is the thing as it is intrinsically, that is, the character of the thing apart from any relations in which it happens to stand. The thing for us, or as an appearance, is the thing insofar as it stands in relation to our cognitive faculties and to other objects. 'Now a thing in itself cannot be known through mere relations. We may therefore conclude that since outer sense gives us nothing but mere relations, this sense can contain in its representation only the relation of an object to the subject, and not the inner properties of the object in itself.' Kant applies this same distinction to the subject's cognition of itself. Since the subject can know itself only insofar as it can intuit itself, and it can intuit itself only in terms of temporal relations, and thus only as it is related to itself, it represents itself 'as it appears to itself, not as it is'.
Thus, the distinction between what the subject is in itself and what it is for itself arises in Kant insofar as the distinction between what an object is in itself and what it is for a knower is applied to the subject's own knowledge of itself.

The German philosopher Friedrich Hegel (1770-1831) begins the transformation of the epistemological distinction between what the subject is in itself and what it is for itself into an ontological distinction. Since, for Hegel, what is in fact or in itself necessarily involves relation, the Kantian distinction must be transformed. Taking his cue from the fact that, even for Kant, what the subject is in itself involves a relation to itself, or self-consciousness, Hegel suggests that the cognition of an entity in terms of such relations or self-relations does not preclude knowledge of the thing itself. Rather, what an entity is intrinsically, or in itself, is best understood in terms of its potential to enter into explicit relations with itself. And just as for consciousness to be explicitly itself is for it to be for itself, that is, in relation to itself, i.e., explicitly self-conscious, so the being for itself of any entity is that entity insofar as it is actually related to itself. The distinction between the entity in itself and the entity for itself is thus taken to apply to every entity, and not only to the subject. For example, the seed is the plant in itself, or potentially, while the mature plant, which involves actual relations among its various organs, is the plant 'for itself'. In Hegel, then, the in itself/for itself distinction becomes universalized, in that it is applied to all entities, and not merely to conscious entities. In addition, the distinction takes on an ontological dimension. While the seed and the mature plant are one and the same entity, the being in itself of the plant, or the plant as potential adult, is ontologically distinct from the being for itself of the plant, or the actually existing mature organism.
At the same time, the distinction retains an epistemological dimension in Hegel, although its import is quite different from that of the Kantian distinction. To know a thing it is necessary to know both the actual, explicit self-relations which mark the thing, the being for itself of the thing, and the inherent simple principle of these relations, the being in itself of the thing. Real knowledge, for Hegel, thus consists in knowledge of the thing as it is in and for itself.

Sartre's distinction between being in itself and being for itself, which is an entirely ontological distinction with minimal epistemological import, is descended from the Hegelian distinction. Sartre distinguishes between what it is for consciousness to be, i.e., being for itself, and the being of the transcendent being which is intended by consciousness, i.e., being in itself. Being in itself is marked by the total absence of relations, whether within itself or with anything else. What it is for consciousness to be, being for itself, is by contrast marked by self-relation. Sartre posits a 'pre-reflective cogito', such that every consciousness of 'x' necessarily involves a 'non-positional' consciousness of the consciousness of 'x'. While in Kant every subject is both in itself, i.e., as it is apart from its relations, and for itself, insofar as it is related to itself by appearing to itself, and in Hegel every entity can be considered as both in itself and for itself, in Sartre to be for itself, or self-related, is the distinctive ontological mark of consciousness, while to lack relations, or to be in itself, is the distinctive ontological mark of non-conscious entities.

The problem of free will is to reconcile our everyday consciousness of ourselves as agents with the best view of what science tells us that we are. Determinism is one part of the problem. It may be defined as the doctrine that every event has a cause. More precisely, for any event 'C', there will be some antecedent state of nature 'N' and a law of nature 'L', such that given 'L', 'N' will be followed by 'C'. But if this is true of every event, it is true of events such as my doing something or choosing to do something. So my choosing or doing something is fixed by some antecedent state 'N' and the laws. Since determinism is universal, these in turn are fixed, and so on backwards to events for which I am clearly not responsible (events before my birth, for example). So no events can be voluntary or free, where that means that they come about purely because of my willing them when I could have done otherwise. If determinism is true, then there will be antecedent states and laws already determining such events: How then can I truly be said to be their author, or be responsible for them?
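The definition just given can be put schematically; the notation below is our gloss on the text's 'N', 'L', and 'C', not part of the original:

```latex
% Determinism: for every event C there exist an antecedent state of
% nature N and a law of nature L such that N together with L
% guarantees that C follows.
\forall C \;\exists N \,\exists L \;:\; (N \wedge L) \rightarrow C
```

Applied to my choices, the schema says that some 'N' obtaining before my birth, together with 'L', already suffices for the choice 'C'.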

Reactions to this problem are commonly classified as: (1) Hard determinism, which accepts the conflict and denies that you have real freedom or responsibility. (2) Soft determinism or compatibilism, whereby reactions in this family assert that everything you should want from a notion of freedom is quite compatible with determinism. In particular, if your actions are caused, it can often be true of you that you could have done otherwise if you had chosen, and this may be enough to render you liable to be held responsible (the fact that previous events will have caused you to choose as you did is deemed irrelevant on this option). (3) Libertarianism, the view that while compatibilism is only an evasion, there is a more substantive, real notion of freedom that can yet be preserved in the face of determinism (or of indeterminism). In Kant, while the empirical or phenomenal self is determined and not free, the noumenal or rational self is capable of rational, free action. However, since the noumenal self exists outside the categories of space and time, this freedom seems to be of doubtful value. Other libertarian avenues include suggesting that the problem is badly framed, for instance because the definition of determinism breaks down, or postulating two independent but consistent ways of looking at an agent, the scientific and the humanistic, so that it is only through confusing them that the problem seems urgent. None of these avenues, however, has gained general acceptance; it is, in any case, an error to confuse determinism and fatalism.

The dilemma for determinism is often put as follows: if an action is the end of a causal chain stretching back in time to events for which the agent has no conceivable responsibility, then the agent is not responsible for the action.

The dilemma then adds that if, on the other hand, an action is not the end of such a causal chain, then either it or one of its causes occurs at random, in that no antecedent events brought it about, and in that case nobody is responsible for it. So, whether or not determinism is true, responsibility is shown to be illusory.

Still, to have a will is to be able to desire an outcome and to purpose to bring it about. Strength of will, or firmness of purpose, is supposed to be good, and weakness of will, or failure to carry through one's purposes, bad.

A volition is a mental act of willing or trying, whose presence is sometimes supposed to make the difference between intentional or voluntary action and mere behaviour. The theory that there are such acts is problematic, and the idea that they make the required difference is a case of explaining a phenomenon by citing another that raises exactly the same problem, since the intentional or voluntary nature of the act of volition now itself stands in need of explanation. In Kantian terms, to act in accordance with the law of autonomy or freedom is to act in accordance with universal moral law, regardless of selfish advantage.

A central contrast in Kantian ethics is between a categorical imperative and a hypothetical imperative, which embeds a command that is in place only given some antecedent desire or project: 'If you want to look wise, stay quiet'. The injunction to stay quiet applies only to those with the antecedent desire or inclination; if one has no desire to look wise, it may be ignored. A categorical imperative cannot be so avoided; it is a requirement that binds anybody, regardless of their inclination. It could be expressed as, for example, 'Tell the truth (regardless of whether you want to or not)'. The distinction is not always marked by the presence or absence of the conditional or hypothetical form: 'If you crave drink, don't become a bartender' may be regarded as an absolute injunction applying to anyone, although only activated in the case of those with the stated desire.

In Grundlegung zur Metaphysik der Sitten (1785), Kant discussed some of the given forms of the categorical imperative: (1) the formula of universal law: 'Act only on that maxim through which you can at the same time will that it should become a universal law'; (2) the formula of the law of nature: 'Act as if the maxim of your action were to become through your will a universal law of nature'; (3) the formula of the end-in-itself: 'Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end'; (4) the formula of autonomy, or considering 'the will of every rational being as a will which makes universal law'; and (5) the formula of the Kingdom of Ends, which provides a model for the systematic union of different rational beings under common laws.

A central object in the study of Kant's ethics is to understand the expressions of the inescapable, binding requirements of the categorical imperative, and to understand whether they are equivalent at some deep level. Kant's own applications of the notions are not always convincing. One cause of confusion in relating Kant's ethics to theories such as expressivism is that it is easy to suppose that an imperative cannot be the expression of a sentiment, but must instead derive from something 'unconditional' or 'necessary', such as the voice of reason. The imperative is the standard mood of sentences used to issue requests and commands, yet imperatives seem as basic as the need to communicate information, and in animal signalling systems a message may often be interpreted either way. A further issue is understanding the relationship between commands and other action-guiding uses of language, such as ethical discourse; the ethical theory of 'prescriptivism' in effect equates the two functions. A further question is whether there is an imperative logic. 'Hump that bale' seems to follow from 'Tote that barge and hump that bale', just as 'It's windy' follows from 'It's windy and it's raining'. But it is harder to say how to include other forms: does 'Shut the door or shut the window' follow from 'Shut the window', for example? The usual way to develop an imperative logic is in terms of satisfaction: one command entails another if the first cannot be satisfied without satisfying the second, thereby turning imperative logic into a variation of ordinary deductive logic.
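The two test cases just mentioned can be put schematically; the '!' operator marking a command, and the label 'Ross's paradox' for the doubtful case, are our gloss, not the text's:

```latex
% Conjunction elimination seems to carry over to commands:
% from "Tote that barge and hump that bale", infer "Hump that bale".
!(A \wedge B) \;\vdash\; !B
% Disjunction introduction is the doubtful case (Ross's paradox):
% does "Shut the door or shut the window" follow from "Shut the window"?
!B \;\overset{?}{\vdash}\; !(A \vee B)
```

On the satisfaction reading, the second inference is valid, since any way of satisfying 'Shut the window' satisfies the disjunctive command; yet issuing the disjunctive command seems to permit shutting the door instead, which is why the inference is disputed.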

Although the morality of people and their ethics amount to the same thing, there is a usage that restricts morality to systems such as that of Kant, based on notions such as duty, obligation, and principles of conduct, reserving ethics for the more Aristotelian approach to practical reasoning, based on the notion of a virtue, and generally avoiding the separation of 'moral' considerations from other practical considerations. The scholarly issues are complicated and complex, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests.

The Cartesian doubt is the method of investigating how much knowledge has its basis in reason or experience, as used by Descartes in the first two Meditations. It attempted to put knowledge upon a secure foundation by first inviting us to suspend judgement on any proposition whose truth can be doubted, even as a bare possibility. The standards of acceptance are gradually raised as we are asked to doubt the deliverances of memory, the senses, and even reason, all of which are in principle capable of letting us down. The point of certainty is eventually found in the celebrated 'Cogito ergo sum', which in English translation means 'I think, therefore I am'. By locating the point of certainty in my awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries in spite of various counterattacks on behalf of social and public starting-points. The metaphysics associated with this priority is Cartesian dualism, or the separation of mind and matter into two different but interacting substances. Descartes rightly saw that it takes divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses he invokes a 'clear and distinct perception' of highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: as Hume dryly puts it, 'to have recourse to the veracity of the Supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit'.

By contrast, Descartes' notorious denial that non-human animals are conscious is a stark illustration of the priority he gives to mind. In his conception of matter Descartes also gives preference to rational cogitation over anything from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature.

Although the structure of Descartes' epistemology, theory of mind, and theory of matter has been rejected many times, its relentless exposure of the hardest issues, its exemplary clarity, and even its initial plausibility all contrive to make him the central point of reference for modern philosophy.

The term instinct (Lat., instinctus, impulse or urge) implies innately determined behaviour, inflexible in the face of changing circumstance and outside the control of deliberation and reason. The view that animals accomplish even complex tasks not by reason but by instinct was common to Aristotle and the Stoics, and the inflexibility of such behaviour was used in defence of this position as early as Avicenna. Continuity between animal and human reason was proposed by Hume, and followed by sensationalists such as the naturalist Erasmus Darwin (1731-1802). The theory of evolution prompted various views of the emergence of stereotypical behaviour, and the idea that innate determinants of behaviour are fostered by specific environments is a guiding principle of ethology. In this sense it may be instinctive in human beings to be social, and, given what we now know about the evolution of human language abilities, it seems clear that our real or actualized self is not imprisoned in our minds.

The self is implicitly a part of the larger whole of biological life. The human observer constructs its existence from embedded relations to this whole, and constructs its reality on the basis of evolved mechanisms that exist in all human brains. This suggests that any sense of the 'otherness' of self and world is an illusion, one that disguises the fact that the self realizes itself only in its relations to the parts of the whole of which it is itself a part. The self, related in its temporality to this whole, is a biological reality. A proper definition of this whole must, of course, include the evolution of the larger indivisible whole itself: the unbroken evolutionary chain of all life, beginning with the first self-replicating molecule that was the ancestor of DNA. It should also include the complex interactions among all the parts of biological reality from which self-regulating behaviour emerges, behaviour that is in turn responsible for sustaining the existence of the parts.

Conditioned by ordinary language and by complex coordinate systems of thought, developments in the description of physical reality have carried metaphysical concerns with them. In the history of mathematics, the exchanges between the mega-narratives and frame tales of religion and science were critical factors in the minds of those who contributed to the first scientific revolution of the seventeenth century. That revolution resulted in the stark Cartesian division between mind and world, which came to be one of the most characteristic features of Western thought. The concern here, however, is not another strident and ill-mannered diatribe against our misunderstandings, but the relation between self-realization and undivided wholeness in physical reality and in the epistemological foundations of physical theory.

The subjectivity of our mind affects our perceptions of the world that is held to be objective by natural science. Both aspects, mind and matter, may be seen as individualized forms that belong to the same underlying reality.

Our everyday experience confirms the apparent fact that there is a dual-valued world of subjects and objects. We, as conscious, experiencing beings with personality, are the subjects, whereas everything for which we can come up with a name or designation seems to be an object, that which is opposed to us as subjects. Physical objects are only part of the object-world. There are also mental objects, objects of our emotions, abstract objects, religious objects, etc. Language objectifies our experience. Experience per se is purely sensational and does not make a distinction between object and subject. Only verbalized thought reifies the sensations by conceptualizing them and pigeonholing them into the given entities of language.

Some thinkers maintain that subject and object are only different aspects of experience: I can experience myself as subject in the act of self-reflection. The fallacy of this argument is obvious: being a subject implies having an object. We cannot experience something consciously without the mediation of understanding and mind. Our experience is already conceptualized at the time it comes into our consciousness. In this sense our experience is negative, insofar as it destroys the original pure experience. In a dialectical process of synthesis, the original pure experience becomes an object for us. The common state of our mind is only capable of apperceiving objects. Objects are reified negative experience. The same is true for the objective aspect of this theory: in objectifying myself, I do not dispense with the subject; rather, the subject is causally and apodictically linked to the object. As soon as I make an object of anything, I have to realize that it is the subject which objectifies something. It is only the subject who can do that. Without the subject there are no objects, and without objects there is no subject. This interdependence, however, is not to be understood in terms of dualism, as if object and subject were really independent substances. Since the object is only created by the activity of the subject, and the subject is not a physical entity but a mental one, we have to conclude that the subject-object dualism is purely mentalistic.

Cartesian dualism posits the subject and the object as separate, independent, and real substances, both of which have their ground and origin in the highest substance, God. Cartesian dualism, however, contradicts itself: the very fact that Descartes posits the 'I', that is, the subject, as the only certainty defies materialism, and thus the concept of a 'res extensa'. The physical thing is only probable in its existence, whereas the mental thing is absolutely and necessarily certain. The subject is superior to the object: the object is only derived, while the subject is original. This makes the object not only inferior in its substantive quality and in its essence, but relegates it to a level of dependence on the subject. The subject recognizes that the object is a 'res extensa', and this means that the object cannot have essence or existence without acknowledgment by the subject. The subject posits the world in the first place, and the subject is posited by God. Quite apart from the problem of interaction between these two different substances, Cartesian dualism is not eligible for explaining and understanding the subject-object relation.

By denying Cartesian dualism and resorting to monistic theories such as extreme idealism, materialism, or positivism, the problem is not resolved either. What the positivists did was merely to verbalize the subject-object relation in linguistic forms: it was no longer a metaphysical problem, but only a linguistic one, since our language has formed this object-subject dualism. These are superficial and shallow thinkers, because they do not see that in the very act of their analysis they inevitably think in the mind-set of subject and object. By relativizing object and subject in terms of language and analytical philosophy, they avoid the elusive and problematical aporia of subject and object, which has been a fundamental question of philosophy ever since. Eluding these metaphysical questions is no solution. Excluding something by reducing it to the material, actual, and verifiable level is not only pseudo-philosophy but a depreciation and decadence of the great philosophical ideas of humanity.

Therefore, we have to come to grips with the idea of subject and object in a new manner. We experience this dualism as a fact in our everyday lives. Every experience is subject to this dualistic pattern. The question, however, is whether this underlying pattern of subject-object dualism is real or only mental. Science assumes it to be real. This assumption does not prove the reality of our experience, but only that with this method science is most successful in explaining our empirical facts. Mysticism, on the other hand, believes that there is an original unity of subject and object. To attain this unity is the goal of religion and mysticism. Man has fallen from this unity by disgrace and by sinful behaviour, and the task of man is now to get back on track and strive toward this highest fulfilment. Yet are we not, on the conclusion made above, forced to admit that the mystic way of thinking is also only a pattern of the mind, and that mystics, like the scientists, have their own frame of reference and methodology for explaining the supra-sensible facts most successfully?

If we assume mind to be the originator of the subject-object dualism, then we can neither confer more reality on the physical than on the mental aspect, nor deny the one in terms of the other.

The language of the first users of token symbolization must have been largely gestural, accompanied by non-symbolic vocalizations; their spoken language probably only gradually became an independent, closed cooperative system. Only after hominids began to use symbolic vocal communication did vocal symbolic forms progressively take over functions served by non-vocal symbolic forms. This is reflected in modern languages. The structure of syntax in these languages often reveals its origins in pointing gestures, in the manipulation and exchange of objects, and in more primitive constructions of spatial and temporal relationships. We still use nonverbal vocalizations and gestures to complement meaning in spoken language.

The general idea is very powerful; however, the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. The idea of a perceivable, objective spatial world brings with it the idea of the subject as being in the world, with the course of his perceptions due to his changing position within the world and to the more or less stable way the world is. The idea that there is an objective world thus goes together with the idea that the subject is somewhere within it, and where he is, is given by what he can perceive.

Research in neuroscience reveals that the human brain is a massively parallel system in which language processing is widely distributed. Computer-generated images of human brains engaged in language processing reveal a hierarchical organization consisting of complicated clusters of brain areas that process different component functions in controlled time sequences. It is now clear that language processing is not accomplished by stand-alone or unitary modules that were eventually wired together on some neural circuit board.

While the brain that evolved this capacity was obviously a product of Darwinian evolution, the most critical precondition for the evolution of this brain cannot be simply explained in these terms. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. And Darwinian evolution can also explain why selective pressures in this new ecological niche favoured pre-adaptive changes required for symbolic communication. All the same, this communication resulted in an increasingly complex and condensed social behaviour. Social evolution began to take precedence over physical evolution in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.

Because this communication was based on symbolic vocalization, it required the evolution of neural mechanisms and processes that did not evolve in any other species. This marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.

If the emergent reality in this mental realm cannot be reduced to, or entirely explained as, the sum of its parts, it seems reasonable to conclude that this reality is greater than the sum of its parts. For example, a complete account of the manner in which light of particular wavelengths is processed by the human brain to generate a particular colour says nothing about the experience of colour. In other words, a complete scientific description of all the mechanisms involved in processing the colour blue does not correspond with the colour blue as perceived in human consciousness. And no scientific description of the physical substrate of a thought or feeling, no matter how complete, can account for the actualized experience of that thought or feeling as an emergent aspect of global brain function.

If we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. And while neither mode of understanding the situation can displace the other, both are required to achieve a complete understanding of the situation.

Even if we include both aspects of biological reality, the movement to a more complex order in biological reality is associated with the emergence of new wholes that are greater than the sum of their parts, and the entire biosphere is a whole that displays self-regulating behaviour greater than the sum of its parts. The emergence of a symbolic universe based on a complex language system could be viewed as another stage in the evolution of more complicated and complex systems, marked by a new and more profound complementary relationship between parts and wholes. This does not allow us to assume that human consciousness was in any sense preordained or predestined by natural process. But it does make it possible, in philosophical terms at least, to argue that this consciousness is an emergent aspect of the self-organizing properties of biological life.

If we also concede that an indivisible whole contains, by definition, no separate parts, and that a phenomenon can be assumed to be 'real' only when it is an 'observed' phenomenon, we are led to more interesting conclusions. The indivisible whole whose existence is inferred from the results of these experiments cannot in principle be itself the subject of scientific investigation. There is a simple reason why this is the case. Science can claim knowledge of physical reality only when the predictions of a physical theory are validated by experiment. Since the indivisible whole cannot be measured or observed, we confront here an 'event horizon' of knowledge, where science can say nothing about the actual character of this reality. If this wholeness is a property of the entire universe, then we must finally conclude that undivided wholeness exists on the most primary and basic level in all aspects of physical reality. What we are dealing with in science per se, however, are manifestations of this reality, which are invoked or 'actualized' in acts of observation or measurement. Since the reality that exists between the space-like separated regions is a whole whose existence can only be inferred in experience, as opposed to proven by experiment, the correlations between the particles, and the sum of these parts, do not constitute the 'indivisible' whole. Physical theory allows us to understand why the correlations occur. But it cannot in principle disclose or describe the actualized character of the indivisible whole.

The scientific implications of this extraordinary relationship between parts (quanta) and the indivisible whole (the universe) are quite staggering. Our primary concern, however, is a new view of the relationship between mind and world that carries even larger implications in human terms. When this is factored into our understanding of the relationship between parts and wholes in physics and biology, then mind, or human consciousness, must be viewed as an emergent phenomenon in a seamlessly interconnected whole called the cosmos.

All that is required to embrace the alternative view of the relationship between mind and world that is consistent with our most advanced scientific knowledge is a commitment to metaphysical and epistemological realism, and a willingness to follow arguments to their logical conclusions. Metaphysical realism assumes that physical reality has an actual existence independent of human observers or any act of observation; epistemological realism assumes that progress in science requires strict adherence to scientific methodology, or to the rules and procedures for doing science. If one can accept these assumptions, most of the conclusions drawn should appear fairly self-evident in logical and philosophical terms. And it is also not necessary to attribute any extra-scientific properties to the whole to understand and embrace the new relationship between part and whole and the alternative view of human consciousness that is consistent with this relationship. All that is required is that we distinguish between what can be 'proven' in scientific terms and what can reasonably be 'inferred' in philosophical terms based on the scientific evidence.

Moreover, advances in scientific knowledge rapidly became the basis for the creation of a host of new technologies. Yet the evaluation of the benefits and risks associated with the use of these technologies, much less their potential impact on human needs and values, has typically occurred only after the technologies were already in place, and the response has often been reactionary. What is needed instead is a realistic, forward-looking assessment, marked by careful attention to relevant circumstantial detail, conducted before a new technology is adopted rather than after the fact.

Perhaps what is more important, many of the potential threats to the human future - such as environmental pollution, arms development, overpopulation, the spread of infectious diseases, poverty, and starvation - can be effectively solved only by integrating scientific knowledge with knowledge from the social sciences and humanities. We have not done so for a simple reason: the implications of the amazing new fact of nature called non-locality cannot be properly understood without some familiarity with the actual history of scientific thought. The intent here is to suggest that what is most important about this background can nonetheless be grasped in outline; those who do not wish to struggle with the scientific details should feel free to pass over them. The hope is that this material will prove no more challenging than necessary, that readers will find a common ground for understanding, and that we will meet again on that common ground in an effort to close the circle, resolve the equations of eternity, and complete the picture of a unified whole.

Moral motivation has been a major topic of philosophical inquiry, especially in Aristotle, and subsequently since the 17th and 18th centuries, when the 'science of man' began to probe into human motivation and emotion. For writers such as the French moralistes, or Hutcheson, Hume, Smith and Kant, a prime task was to delineate the variety of human reactions and motivations. Such an inquiry would locate our propensity for moral thinking among other faculties, such as perception and reason, and other tendencies such as empathy, sympathy or self-interest. The task continues especially in the light of a post-Darwinian understanding of ourselves.

In some moral systems, notably that of Immanuel Kant, 'real' moral worth comes only with acting rightly because it is right. If you do what is right, but from some other motive, such as fear or prudence, no moral merit accrues to you. Yet that in turn seems to discount other admirable motivations, such as acting from sheer benevolence or 'sympathy'. The question is how to balance these opposing ideas, and how to understand acting from a sense of obligation without duty or rightness beginning to seem a kind of fetish. A contrasting view stands opposed to any ethics relying on highly general and abstract principles, particularly those associated with the Kantian categorical imperatives. That view may go so far as to say that, taken on its own, no consideration of any particular kind carries moral weight; moral understanding can only proceed by identifying the salient features of a situation that carry weight on one side or another.

Moral dilemmas are a matter of intense concern, both in everyday life and in philosophy. Situations in which each possible course of action breaches some otherwise binding moral principle are serious dilemmas, making the stuff of many tragedies. The conflict can be described in different ways. One suggestion is that whichever action the subject undertakes, he or she does something wrong. Another is that this is not so, for the dilemma means that in the circumstances what he or she did was right, as would have been any alternative. It is important to the phenomenology of these cases that the action leaves a residue of guilt and remorse, even though it was not the subject's fault that he or she faced the dilemma, so the rationality of these emotions can be contested. Any morality with more than one fundamental principle seems capable of generating dilemmas; however, dilemmas also exist in which no principles are pitted against each other, such as where a mother must decide which of two children to sacrifice. If we accept that dilemmas are real and important, this fact can be approached in different ways: a theory such as 'utilitarianism', which recognizes only one sovereign principle, may deny that genuine dilemmas arise; alternatively, a theorist regretting the existence of dilemmas and the unordered jumble of competing principles that generates them may use their occurrence to argue for the desirability of locating and promoting a single sovereign principle.

The status of moral laws may be that they are the edicts of a divine lawmaker, or that they are truths of reason. Situational ethics and virtue ethics, by contrast, regard them as at best rules-of-thumb, frequently disguising the great complexity of practical reasoning that Kantian notions of the moral law conceal.

In contrast, the natural law view of the relation between law and morality is especially associated with St. Thomas Aquinas (1225-74), whose synthesis of Aristotelian philosophy and Christian doctrine was eventually to provide the main philosophical underpinning of the Catholic church. Nevertheless, to a greater or lesser extent, any attempt to cement the moral and legal order together with the nature of the cosmos or the nature of human beings - in which sense it is found in some Protestant writings - arguably derives from a Platonic view of ethics, and is implicit in the advance of Stoicism. Natural law stands above and apart from the activities of human lawmakers: it constitutes an objective set of principles that can be seen in and for themselves, by means of the 'natural light' of reason itself, and that (in religious versions of the theory) express God's will for creation. Non-religious versions of the theory substitute objective conditions for human flourishing as the source of constraints upon permissible actions and social arrangements. Within the natural law tradition, different views have been held about the relationship between the rule of law and God's will. Grotius, for instance, held the view that the content of natural law is independent of any will, including that of God.

The German natural law theorist and historian Samuel von Pufendorf (1632-94) takes the opposite view. His great work was the De Jure Naturae et Gentium, 1672, translated into English as Of the Law of Nature and Nations, 1710. Pufendorf was influenced by Descartes, Hobbes and the scientific revolution of the 17th century; his ambition was to introduce a newly scientific, 'mathematical' treatment of ethics and law, free from the tainted Aristotelian underpinning of 'scholasticism'. Like that of his contemporary Locke, however, his conception of natural law includes rational and religious principles, making it only a partial forerunner of the more resolutely empiricist and political treatments of the Enlightenment.

The classic dilemma here is explored in Plato's dialogue Euthyphro: are pious things pious because the gods love them, or do the gods love them because they are pious? The dilemma poses the question of whether value can be conceived as the upshot of the choice of any mind, even a divine one. On the first option the choice of the gods creates goodness and value. Even if this is intelligible, it seems to make it impossible to praise the gods, for it is then vacuously true that they choose the good. On the second option we have to understand a source of value lying behind or beyond the will even of the gods, one by which they can be evaluated. The elegant solution of Aquinas is that the standard is formed by God's nature, and is therefore distinct from his will, but not distinct from him.

The dilemma arises whatever the source of authority is supposed to be. Do we care about the good because it is good, or do we just call 'good' those things that we care about? It also generalizes to affect our understanding of the authority of other things. In mathematics, for example: are necessary truths necessary because we deem them to be so, or do we deem them to be so because they are necessary?

The natural law tradition may assume either a stronger form, in which it is claimed that various facts entail values, or a weaker form, in which it is claimed that reason by itself is capable of discerning moral requirements. As in the ethics of Kant, these requirements are supposed binding on all human beings, regardless of their desires.

The supposed natural or innate ability of the mind to know the first principles of ethics and moral reasoning is termed 'synderesis' (or synteresis). Although traced to Aristotle, the phrase came to the modern era through St. Jerome, whose scintilla conscientiae (gleam of conscience) was a popular concept in early scholasticism. It is mainly associated with Aquinas, for whom it is an infallible, natural, simple and immediate grasp of first moral principles. Conscience, by contrast, is more concerned with particular instances of right and wrong, and can be in error.

There is, nevertheless, a conservative tradition which holds that enthusiasm for reform for its own sake, or for 'rational' schemes thought up by managers and theorists, is entirely misplaced. Major exponents of this theme include the British absolute idealist Francis Herbert Bradley (1846-1924) and the Austrian economist and philosopher Friedrich Hayek. In the idealism of Bradley there is the doctrine that change is inevitably contradictory and consequently unreal: the Absolute is changeless. A way of sympathizing a little with this idea is to reflect that any scientific explanation of change will proceed by finding an unchanging law operating, or an unchanging quantity conserved in the change, so that explanation of change always proceeds by finding that which is unchanged. The metaphysical problem of change is to shake off the idea that each moment is created afresh, and to obtain a conception of events or processes as having a genuinely historical reality, really extended and unfolding in time, as opposed to being composites of discrete temporal atoms. A step toward this end may be to see time itself not as an infinite container within which discrete events are located, but as a kind of logical construction from the flux of events. This relational view of time was advocated by Leibniz, and was the subject of the debate between him and Newton's absolutist pupil, Clarke.

Generally, nature is an indefinitely mutable term, changing as our scientific conception of the world changes, and often best seen as signifying a contrast with something considered not part of nature. The term applies both to individual species (it is the nature of gold to be dense or of dogs to be friendly), and also to the natural world as a whole. In either case, the idea of nature links up with ethical and aesthetic ideals: a thing ought to realize its nature; what is natural is what it is good for a thing to become; it is natural for humans to be healthy or two-legged, and departure from this is a misfortune or deformity. The association of what is natural with what it is good to become is visible in Plato, and is the central idea of Aristotle's philosophy of nature. Unfortunately, the pinnacle of nature in this sense is the mature adult male citizen, with the rest of what we would call the natural world, including women, slaves, children and other species, not quite making it.

Nature in general can, however, function as a foil to ideals as much as a source of them: in this sense fallen nature is contrasted with a supposed celestial realization of the 'forms'. The theory of 'forms' is probably the most characteristic, and most contested, of the doctrines of Plato. In the background lie the Pythagorean conception of form as the key to physical nature, but also the sceptical doctrine associated with the Greek philosopher Cratylus, who is sometimes thought to have been a teacher of Plato before Socrates. Cratylus is famous for capping the doctrine of Heraclitus of Ephesus. The guiding idea of Heraclitus' philosophy was that of the logos, which is capable of being heard or hearkened to by people, unifies opposites, and is somehow associated with fire, pre-eminent among the four elements that Heraclitus distinguishes: fire, air (breath, the stuff of which souls are composed), earth, and water. He is principally remembered for the doctrine of the 'flux' of all things, and the famous statement that you cannot step into the same river twice, for new waters are ever flowing in upon you. The more extreme implications of the doctrine of flux, e.g. the impossibility of categorizing things truly, do not seem consistent with his general epistemology and views of meaning, and were left to his follower Cratylus, who concluded that the flux cannot be captured in words. According to Aristotle, Cratylus eventually held that since everything everywhere in every respect is changing, nothing can truly be said, and the right course is just to stay silent and wag one's finger. Plato's theory of forms can be seen in part as a reaction against the impasse to which Cratylus was driven.

The Galilean world view might have been expected to drain nature of its ethical content, yet the term seldom loses its normative force, and the belief in universal natural laws provided its own set of ideals. In the 18th century, for example, a painter or writer could be praised as natural, where the qualities expected would include normal (universal) topics treated with simplicity, economy, regularity and harmony. Later on, nature becomes an equally potent emblem of irregularity, wildness, and fertile diversity, but is also associated with the progress of human history; its definition has been taken to fit many things, including transformation and ordinary human self-consciousness. The contrast with nature may include (1) that which is deformed or grotesque, or fails to achieve its proper form or function, or is just statistically uncommon or unfamiliar, (2) the supernatural, or the world of gods and invisible agencies, (3) the world of rationality and intelligence, conceived of as distinct from the biological and physical order, (4) that which is manufactured and artificial, or the product of human intervention, and (5) related to that, the world of convention and artifice.

This normative role of nature continues to play out ethically: for example, the conception of 'nature red in tooth and claw' often provides a justification for aggressive personal and political relations, and the idea that it is women's nature to be one thing or another is taken as a justification for differential social expectations. The term then functions as a fig-leaf for a particular set of stereotypes, and is a proper target of much feminist writing. Feminist epistemology has asked whether different ways of knowing, for instance with different criteria of justification and different emphases on logic and imagination, characterize male and female attempts to understand the world. Such concerns include awareness of the 'masculine' self-image, itself a social variable and potentially a distorting influence on the picture of what thought and action should be. Again, there is a spectrum of concerns from the highly theoretical to the relatively practical. In the latter area particular attention is given to the institutional biases that stand in the way of equal opportunities in science and other academic pursuits, and to the ideologies that stand in the way of women seeing themselves as leading contributors to various disciplines. To more radical feminists, however, such concerns merely exhibit women wanting for themselves the same power and rights over others that men have claimed, and failing to confront the real problem, which is how to live without such symmetrical powers and rights.

Biological determinism holds that our biology not only influences but constrains and makes inevitable our development as persons with a variety of traits. At its silliest, the view postulates such entities as a gene predisposing people to poverty, and it is the particular enemy of thinkers stressing the parental, social, and political determinants of the way we are.

The philosophy of social science is more heavily intertwined with actual social science than is the case for other subjects such as physics or mathematics, since its question is centrally whether there can be such a thing as sociology. The idea of a 'science of man', devoted to uncovering scientific laws determining the basic dynamics of human interactions, was a cherished ideal of the Enlightenment and reached its heyday with the positivism of writers such as the French philosopher and social theorist Auguste Comte (1798-1857), and the historical materialism of Marx and his followers. Sceptics point out that what happens in society is determined by people's own ideas of what should happen, and like fashions those ideas change in unpredictable ways, as self-consciousness is susceptible to change by any number of external events: unlike the solar system of celestial mechanics, a society is not at all a closed system evolving in accordance with a purely internal dynamic, but is constantly responsive to shocks from outside.

The sociobiological approach to human behaviour is based on the premise that all social behaviour has a biological basis, and seeks to understand that basis in terms of genetic encoding for features that are then selected for through evolutionary history. The philosophical problem is essentially one of methodology: of finding criteria for identifying features that can usefully be explained in this way, and for assessing the various genetic stories that might provide such explanations.

Among the features proposed for this kind of explanation are such things as male dominance, male promiscuity versus female fidelity, propensities to sympathy and other emotions, and the limited altruism characteristic of human beings. The strategy has proved controversial, with proponents accused of ignoring the influence of environmental and social factors in moulding people's characteristics - e.g., at the limit of silliness, by postulating a 'gene for poverty'. However, there is no need for the approach to commit such errors, since the feature explained sociobiologically may be indexed to the environment: for instance, it may be a propensity to develop some feature in some environments (or even a propensity to develop propensities . . .). The main problem is to separate genuine explanations from speculative 'just so' stories, which may or may not identify real selective mechanisms.

Subsequently, in the 19th century, attempts were made to base ethical reasoning on presumed facts about evolution. The movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). His first major work was the book Social Statics (1851), which promoted an extreme political libertarianism. The Principles of Psychology was published in 1855, and his very influential Education, advocating natural development of intelligence, the creation of pleasurable interest, and the importance of science in the curriculum, appeared in 1861. His First Principles (1862) was followed over the succeeding years by volumes on the principles of biology, psychology, sociology and ethics. Although he attracted a large public following and attained the stature of a sage, his speculative work has not lasted well, and in his own time there were dissident voices. T.H. Huxley said that Spencer's definition of a tragedy was a deduction killed by a fact. The writer and social prophet Thomas Carlyle (1795-1881) called him a perfect vacuum, and the American psychologist and philosopher William James (1842-1910) wondered why half of England wanted to bury him in Westminster Abbey, and talked of the 'hurdy-gurdy' monotony of him and of his whole system as wooden, as if knocked together out of cracked hemlock boards.

The premise is that later elements in an evolutionary path are better than earlier ones; the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more 'primitive' social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called 'social Darwinism' emphasizes the struggle for survival by natural selection, and draws the conclusion that we should glorify such struggle, usually by enhancing competitive and aggressive relations between people in society or between societies themselves. More recently the relation between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.

Evolutionary psychology is the study of the way in which a variety of higher mental functions may be adaptations, formed in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoires, our moral reactions, including the disposition to detect and punish those who cheat on an agreement or who free-ride on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify.

For all that, an essential part of the ethics of the British absolute idealist F.H. Bradley (1846-1924) rested on the ground that the self is realized only through community, and through contribution to social and other ideals. However, truth as formulated in language is always partial, and dependent upon categories that are themselves inadequate to the harmonious whole. Nevertheless, these self-contradictory elements somehow contribute to the harmonious whole, or Absolute, lying beyond categorization. Although absolute idealism maintains few adherents today, Bradley's general dissent from empiricism, his holism, and the brilliance and style of his writing continue to make him the most interesting of the late 19th-century writers influenced by the German philosopher G.W.F. Hegel (1770-1831).

Bradley's case reflects a preference, voiced much earlier by the German philosopher, mathematician and polymath Gottfried Leibniz (1646-1716), for categorical, monadic properties over relations. He was particularly troubled by the relation between that which is known and the mind that knows it. In philosophy, the Romantics took from the German philosopher and founder of critical philosophy Immanuel Kant (1724-1804) both the emphasis on free will and the doctrine that reality is ultimately spiritual, with nature itself a mirror of the human soul. Friedrich Schelling (1775-1854), in particular, sought a basic underlying entity or form uniting mind and nature. Romanticism drew on the same intellectual and emotional resources as German idealism, which was increasingly culminating in the philosophy of Hegel (1770-1831) and of absolute idealism.

Naturalism is, most generally, a sympathy with the view that ultimately nothing resists explanation by the methods characteristic of the natural sciences. A naturalist will be opposed, for example, to mind-body dualism, since it leaves the mental side of things outside the explanatory grasp of biology or physics; opposed to acceptance of numbers or concepts as real but non-physical denizens of the world; and opposed to accepting 'real' moral duties and rights as absolute and self-standing facets of the natural order. Human nature has been a major topic of philosophical inquiry, especially in Aristotle, and subsequently since the 17th and 18th centuries, when the 'science of man' began to probe into human motivation and emotion. For writers such as the French moralistes, or for Francis Hutcheson (1694-1746), David Hume (1711-76), Adam Smith (1723-90) and Immanuel Kant (1724-1804), a prime task was to delineate the variety of human reactions and motivations. Such an inquiry would locate our propensity for moral thinking among other faculties, such as perception and reason, and other tendencies, such as empathy, sympathy or self-interest. The task continues, especially in the light of a post-Darwinian understanding of us. Moral realism, by contrast, is realism applied to the judgements of ethics, and to the values, obligations, rights, etc., that are referred to in ethical theory. The leading idea is to see moral truth as grounded in the nature of things rather than in subjective and variable human reactions to things. Like realism in other areas, this is capable of many different formulations.
Generally speaking, moral realism aspires to protect the objectivity of ethical judgement (opposing relativism and subjectivism); it may assimilate moral truths to those of mathematics, hope that they have some divine sanction, or see them as guaranteed by human nature.

Nature is an indefinitely mutable term, changing as our scientific concept of the world changes, and often best seen as signifying a contrast with something considered not part of nature. The term applies both to individual species and also to the natural world as a whole. The association of what is natural with what it is good to become is visible in Plato, and is the central idea of Aristotle's philosophy of nature. Nature in general can, however, function as a foil to any ideal as much as a source of ideals; in this sense fallen nature is contrasted with a supposed celestial realization of the 'forms'. Nature becomes an equally potent emblem of irregularity, wildness and fertile diversity, but is also associated with progress and transformation. Different conceptions of nature continue to have ethical overtones: for example, the conception of 'nature red in tooth and claw' often provides a justification for aggressive personal and political relations, or the idea that it is a woman's nature to be one thing or another is taken to be a justification for differential social expectations. Here the term functions as a fig-leaf for a particular set of stereotypes, and is a proper target of much feminist writing.

The central problem for naturalism is to define what counts as a satisfactory accommodation between the preferred sciences and the elements that on the face of it have no place in them. Alternatives include 'instrumentalism', 'reductionism' and 'eliminativism', as well as a variety of other anti-realist suggestions. The standard opposition is between those who affirm and those who deny the real existence of some kind of thing, or some kind of fact or state of affairs; any area of discourse may be the focus of this dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. The term naturalism is sometimes used for specific versions of these approaches, in particular in ethics as the doctrine that moral predicates actually express the same thing as predicates from some natural or empirical science. This suggestion is probably untenable, but as other accommodations between ethics and the view of human beings as just parts of nature recommend themselves, these then gain the title of naturalistic approaches to ethics.

Nature may be contrasted with: (1) that which is deformed or grotesque, or fails to achieve its proper form or function, or is just statistically uncommon or unfamiliar; (2) the supernatural, or the world of gods and invisible agencies; (3) the world of rationality and intelligence, conceived as distinct from the biological and physical order; (4) that which is manufactured and artifactual, or the product of human invention; and (5), related to it, the world of convention and artifice.


Most of ethics is concerned with problems of human desire and need: the achievement of happiness, or the distribution of goods. The central problem specific to thinking about the environment is the independent value to place on such things as the preservation of species, or the protection of the wilderness. Such protection can be supported as a means to ordinary human ends, for instance when animals are regarded as future sources of medicines or other benefits. Nonetheless, many would want to claim a non-utilitarian, absolute value for the existence of wild things and wild places: it is in their existence, not their usefulness, that their value consists. They help to put us in our proper place, and failure to appreciate this value is not only an aesthetic failure but a failure of due humility and reverence, a moral disability. The problem is one of expressing this value, and mobilizing it against utilitarian arguments for developing natural areas and exterminating species, more or less at will.

Many concerns and disputes cluster around the idea associated with the term 'substance'. The substance of a thing may be considered as: (1) its essence, or that which makes it what it is. This will ensure that the substance of a thing is that which remains through change in its properties. In Aristotle, this essence becomes more than just the matter, but a unity of matter and form. (2) That which can exist by itself, or does not need a subject for existence, in the way that properties need objects; hence (3) that which bears properties, as a substance is then the subject of predication, that about which things are said as opposed to the things said about it. Substance in the last two senses stands opposed to modifications such as quantity, quality, relations, etc. It is hard to keep this set of ideas distinct from the doubtful notion of a substratum, something distinct from any of its properties, and hence incapable of characterization. The notion of substance tended to disappear in empiricist thought, the idea of a substratum in which the sensible qualities of things inhere giving way to an empirical notion of their regular concurrence. However, this in turn is problematic, since it only makes sense to talk of the occurrence of instances of qualities, not of qualities themselves, and the problem remains of what it is for a quality to be instanced.

Metaphysics inspired by modern science tends to reject the concept of substance in favour of concepts such as that of a field or a process, each of which may seem to provide a better example of a fundamental physical category.

The sublime is a concept deeply embedded in 18th-century aesthetics, though its first known treatment is the 1st-century rhetorical treatise On the Sublime, attributed to Longinus. The sublime is great, fearful, noble, calculated to arouse sentiments of pride and majesty, as well as awe and sometimes terror. According to Alexander Gerard, writing in 1759, 'When a large object is presented, the mind expands itself to the extent of that object, and is filled with one grand sensation, which totally possessing it, composes it into a solemn sedateness and strikes it with deep silent wonder, and admiration: it finds such a difficulty in spreading itself to the dimensions of its object, as enlivens and invigorates it; which this occasions, it sometimes imagines itself present in every part of the scene which it contemplates; and from the sense of this immensity, feels a noble pride, and entertains a lofty conception of its own capacity.'

In Kant's aesthetic theory the sublime 'raises the soul above the height of vulgar complacency'. We experience the vast spectacles of nature as 'absolutely great' and of irresistible force and power. This perception is fearful, but by conquering this fear, and by regarding as small 'those things of which we are wont to be solicitous', we quicken our sense of moral freedom. So we turn the experience of frailty and impotence into one of our true, inward moral freedom as the mind triumphs over nature, and it is this triumph of reason that is truly sublime. Kant thus paradoxically places our sense of the sublime in an awareness of ourselves as transcending nature, rather than in an awareness of ourselves as a frail and insignificant part of it.

Nevertheless, the doctrine that all relations are internal was a cardinal thesis of absolute idealism, and a central point of attack by the British philosophers George Edward Moore (1873-1958) and Bertrand Russell (1872-1970). It is a kind of 'essentialism', stating that if two things stand in some relationship, then they could not be what they are did they not do so. If, for instance, I am wearing a hat now, then when we imagine a possible situation that we would be apt to describe as my not wearing the hat now, we would strictly not be imagining me and the hat, but only some different individuals.

This doctrine bears some resemblance to the metaphysically based view of the German philosopher and mathematician Gottfried Leibniz (1646-1716) that if a person had any other attributes than the ones he has, he would not have been the same person. Leibniz thought that, when asked what would have happened if Peter had not denied Christ, we are really asking what would have happened if Peter had not been Peter, since denying Christ is contained in the complete notion of Peter. But he allowed that by the name 'Peter' might be understood 'what is involved in those attributes [of Peter] from which the denial does not follow', in order to allow for external relations: relations which individuals could have or lack depending upon contingent circumstances. The distinction of 'relations of ideas' is used by the Scottish philosopher David Hume (1711-76) in the first Enquiry: all the objects of human reason or enquiry may naturally be divided into two kinds, 'relations of ideas' and 'matters of fact' (Enquiry Concerning Human Understanding). The terms reflect the belief that anything that can be known independently of experience must be internal to the mind, and hence transparent to us.

In Hume, objects of knowledge are divided into matters of fact (roughly, empirical things known by means of impressions) and relations of ideas. The contrast, also called 'Hume's Fork', is a version of the distinction between a posteriori and a priori knowledge, but it reflects the conceptions of the 17th and early 18th centuries, on which demonstrative knowledge is founded on chains of intuitively certain comparisons of ideas. It is extremely important that in the period between Descartes and J.S. Mill a demonstration is not a formal derivation, but a chain of 'intuitive' comparisons of ideas, whereby a principle or maxim can be established by reason alone. It is in this sense that the English philosopher John Locke (1632-1704) believed that theological and moral principles are capable of demonstration, while Hume denies that they are, and also denies that scientific enquiry proceeds by demonstrating its results.

A mathematical proof is a formal argument used to show the truth of a mathematical assertion. In modern mathematics, a proof begins with one or more statements called premises and demonstrates, using the rules of logic, that if the premises are true then a particular conclusion must also be true.

The accepted methods and strategies used to construct a convincing mathematical argument have evolved since ancient times and continue to change. Consider the Pythagorean theorem, named after the Greek mathematician and philosopher Pythagoras, which states that in a right-angled triangle the square of the hypotenuse is equal to the sum of the squares of the other two sides. Many early civilizations considered this theorem true because it agreed with their observations in practical situations. But the early Greeks, among others, realized that observation and commonly held opinion do not guarantee mathematical truth. For example, before the 5th century BC it was widely believed that all lengths could be expressed as the ratio of two whole numbers, but an unknown Greek mathematician proved that this was not true by showing that the length of the diagonal of a square with an area of one is the irrational number √2.
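The contrast between checking instances and possessing a proof can be sketched in a few lines of code. The snippet below is an illustration, not a proof: the search bound of 200 is an arbitrary assumption, and exhausting it only shows that no small fraction squares to 2, whereas the Greek argument rules out every fraction at once.

```python
from fractions import Fraction

# Pythagoras: a right triangle with legs 3 and 4 has hypotenuse 5.
assert 3**2 + 4**2 == 5**2

# Diagonal of a unit square: d^2 = 1^2 + 1^2 = 2, so d = sqrt(2).
# Check that no fraction p/q with denominator up to `limit` squares
# to 2 -- evidence for, but not a proof of, the irrationality of sqrt(2).
def no_small_rational_sqrt2(limit):
    for q in range(1, limit + 1):
        for p in range(1, 2 * q + 1):   # sqrt(2) < 2, so p/q <= 2 suffices
            if Fraction(p, q) ** 2 == 2:
                return False
    return True

print(no_small_rational_sqrt2(200))  # True
```

Exact rational arithmetic (`Fraction`) matters here: floating-point comparison could report a spurious equality or miss one.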

The Greek mathematician Euclid laid down some of the conventions central to modern mathematical proofs. His book The Elements, written about 300 BC, contains many proofs in the fields of geometry and algebra. This book illustrates the Greek practice of writing mathematical proofs by first clearly identifying the initial assumptions and then reasoning from them in a logical way in order to obtain a desired conclusion. As part of such an argument, Euclid used results that had already been shown to be true, called theorems, or statements that were explicitly acknowledged to be self-evident, called axioms; this practice continues today.

In the 20th century, proofs have been written that are so complex that no one person can understand every argument used in them. In 1976, a computer was used to complete the proof of the four-colour theorem. This theorem states that four colours are sufficient to colour any map in such a way that regions with a common boundary line have different colours. The use of a computer in this proof inspired considerable debate in the mathematical community. At issue was whether a theorem can be considered proved if human beings have not actually checked every detail of the proof.
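The flavour of checking a colouring claim by machine can be conveyed with a brute-force search. The small 'map' below, a wheel-shaped graph invented for this illustration (it plays no role in the 1976 proof), cannot be properly coloured with three colours but can with four:

```python
from itertools import product

# A small planar 'map' as a graph: vertices are regions, edges join
# regions that share a boundary. A hub region (0) surrounded by a ring
# of five regions (1..5) needs all four colours.
n = 6
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5),
         (1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]

def colourable(n, edges, k):
    # Brute force: try every assignment of k colours to n vertices and
    # accept one in which no edge joins two same-coloured vertices.
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n))

print(colourable(n, edges, 3))  # False: three colours are not enough
print(colourable(n, edges, 4))  # True: four suffice
```

The 1976 proof worked quite differently, by checking a large catalogue of configurations, but the philosophical issue is the same: the verification is exhaustive machine search rather than surveyable argument.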

Proof theory is the study of the relations of deducibility among sentences in a logical calculus, where deducibility is defined purely syntactically, that is, without reference to the intended interpretation of the calculus. The subject was founded by the mathematician David Hilbert (1862-1943) in the hope that strictly finitary methods would provide a way of proving the consistency of classical mathematics, but the ambition was torpedoed by Gödel's second incompleteness theorem.

Proof theory concerns deducibility between formulae of a system, but once the notion of an interpretation is in place we can ask whether a formal system meets certain conditions. In particular, can it lead us only from sentences that are true under an interpretation to other sentences that are true under it? And if a sentence is true under all interpretations, is it also a theorem of the system? We can define a notion of validity (a formula is valid if it is true in all interpretations) and of semantic consequence (a formula B is a semantic consequence of a set of formulae, written {A1 . . . An} ⊨ B, if it is true in all interpretations in which they are true). Then the central questions for a calculus will be whether all and only its theorems are valid, and whether {A1 . . . An} ⊢ B if and only if {A1 . . . An} ⊨ B. These are the questions of the soundness and completeness of a formal system. For the propositional calculus this turns into the question of whether the proof theory delivers as theorems all and only 'tautologies'. There are many axiomatizations of the propositional calculus that are consistent and complete. The mathematical logician Kurt Gödel (1906-78) proved in 1929 that the first-order predicate calculus is complete: every formula that is true under every interpretation is a theorem of the calculus.
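For the propositional calculus, validity and semantic consequence can be checked mechanically by enumerating interpretations, i.e. by truth tables. A minimal sketch follows; representing formulas as Python functions over a valuation is a convenience of this example, not a standard notation:

```python
from itertools import product

def entails(premises, conclusion, atoms):
    # {A1 .. An} |= B iff B is true in every valuation of the atoms
    # in which all the premises are true.
    for vals in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

def tautology(formula, atoms):
    # A tautology is a semantic consequence of the empty premise set.
    return entails([], formula, atoms)

# Modus ponens: {A, A -> B} |= B.
print(entails([lambda v: v['A'], lambda v: (not v['A']) or v['B']],
              lambda v: v['B'], ['A', 'B']))          # True

# 'A or not A' is a tautology; 'A and not A' is not.
print(tautology(lambda v: v['A'] or not v['A'], ['A']))  # True
print(tautology(lambda v: v['A'] and not v['A'], ['A'])) # False
```

This decision procedure exists only because the propositional calculus has finitely many relevant interpretations per formula; for the first-order predicate calculus no such exhaustive enumeration is available.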

Euclidean geometry is the greatest example of the pure 'axiomatic method', and as such had incalculable philosophical influence as a paradigm of rational certainty. It had no competition until the 19th century, when it was realized that the fifth axiom of the system (the parallel postulate, to the effect that two parallel lines never meet) could be denied without inconsistency, leading to Riemannian spherical geometry. The significance of Riemannian geometry lies in its use and extension of both Euclidean geometry and the geometry of surfaces, leading to a number of generalized differential geometries. Its most important effect was that it made geometrical application possible for some major abstractions of tensor analysis, supplying the patterns and concepts later used by Albert Einstein in developing his theory of relativity. Riemannian geometry is also necessary for treating electricity and magnetism in the framework of general relativity. The fifth chapter of Euclid's Elements is attributed to the mathematician Eudoxus, and contains a precise development of the real numbers, work which remained unappreciated until rediscovered in the 19th century.

An axiom, in logic and mathematics, is a basic principle that is assumed to be true without proof. The use of axioms in mathematics stems from the ancient Greeks, most probably during the 5th century BC, and represents the beginnings of pure mathematics as it is known today. Examples of axioms are the following: 'No sentence can be true and false at the same time' (the principle of contradiction); 'If equals are added to equals, the sums are equal'; 'The whole is greater than any of its parts'. Logic and pure mathematics begin with such unproved assumptions, from which other propositions (theorems) are derived. This procedure is necessary to avoid circularity, or an infinite regress in reasoning. The axioms of any system must be consistent with one another, that is, they should not lead to contradictions. They should be independent, in the sense that they cannot be derived from one another. They should also be few in number. Axioms have sometimes been interpreted as self-evident truths. The present tendency is to avoid this claim and simply to assert that an axiom is assumed to be true without proof in the system of which it is a part.

The terms 'axiom' and 'postulate' are often used synonymously. Sometimes the word axiom is used to refer to basic principles that are assumed by every deductive system, and the term postulate is used to refer to first principles peculiar to a particular system, such as Euclidean geometry. Infrequently, the word axiom is used to refer to first principles in logic, and the term postulate is used to refer to first principles in mathematics.

The applications of game theory are wide-ranging and account for steadily growing interest in the subject. Von Neumann and Morgenstern indicated the immediate utility of their work on mathematical game theory by linking it with economic behaviour. Models can be developed, in fact, for markets of various commodities with differing numbers of buyers and sellers, fluctuating values of supply and demand, and seasonal and cyclical variations, as well as significant structural differences in the economies concerned. Here game theory is especially relevant to the analysis of conflicts of interest in maximizing profits and promoting the widest distribution of goods and services. Equitable division of property and of inheritance is another area of legal and economic concern that can be studied with the techniques of game theory.

In the social sciences, n-person game theory has interesting uses in studying, for example, the distribution of power in legislative procedures. This problem can be interpreted as a three-person game at the congressional level involving vetoes of the president and votes of representatives and senators, analysed in terms of successful or failed coalitions to pass a given bill. Problems of majority rule and individual decision making are also amenable to such study.

Sociologists have developed an entire branch of game theory devoted to the study of issues involving group decision making. Epidemiologists also make use of game theory, especially with respect to immunization procedures and methods of testing a vaccine or other medication. Military strategists turn to game theory to study conflicts of interest resolved through 'battles', where the outcome or payoff of a given war game is either victory or defeat. Usually, such games are not examples of zero-sum games, for what one player loses in terms of lives and injuries is not won by the victor. Some uses of game theory in analyses of political and military events have been criticized as a dehumanizing and potentially dangerous oversimplification of necessarily complicated factors. Analysis of economic situations is also usually more complicated than zero-sum games because of the production of goods and services within the play of a given 'game'.
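The zero-sum idea can be made concrete with a toy two-player game; the payoff numbers below are invented for the illustration. The row player's guaranteed floor (the maximin) is compared with the column player's ceiling (the minimax); when the two coincide, the game has a saddle point in pure strategies:

```python
# A zero-sum game as a payoff matrix for the row player: payoff[i][j]
# is what the column player pays the row player when row plays i and
# column plays j. Values are illustrative only.
payoff = [[3, 1],
          [0, 2]]

# Row player can guarantee at least the best of the row minima;
# column player can hold row to at most the best of the column maxima.
maximin = max(min(row) for row in payoff)
minimax = min(max(payoff[i][j] for i in range(2)) for j in range(2))

print(maximin, minimax)  # 1 2 -> no saddle point in pure strategies
```

When maximin and minimax differ, as here, von Neumann's minimax theorem guarantees that equality is restored once the players may randomize over their pure strategies.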

In the classical theory of the syllogism, a term in a categorical proposition is distributed if the proposition entails any proposition obtained from it by substituting a term denoting a subset of the things denoted by the original. For example, in 'All dogs bark' the term 'dogs' is distributed, since the proposition entails 'All terriers bark', which is obtained from it by such a substitution. In 'Not all dogs bark', the same term is not distributed, since that proposition may be true while 'Not all terriers bark' is false.
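Distribution can be illustrated with sets, reading 'All G bark' as G ⊆ barks. The populations below are invented for the example; they show the entailment holding under 'All dogs bark' and failing under 'Not all dogs bark':

```python
def all_bark(group, barks):
    # 'All G bark' holds iff G is a subset of the barkers.
    return group <= barks

dogs = {'rex', 'fido', 'spot'}
terriers = {'rex', 'fido'}      # terriers are a subset of dogs

# World 1: all dogs bark, so the entailment to terriers holds.
barks1 = {'rex', 'fido', 'spot'}
print(all_bark(dogs, barks1), all_bark(terriers, barks1))         # True True

# World 2: not all dogs bark (spot is silent), yet all terriers do,
# so 'Not all terriers bark' is false while 'Not all dogs bark' is true.
barks2 = {'rex', 'fido'}
print(not all_bark(dogs, barks2), not all_bark(terriers, barks2))  # True False
```

World 2 is the counter-model showing why 'dogs' is undistributed in 'Not all dogs bark': truth of the proposition is not preserved under restriction to a subset.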

A model is a representation of one system by another, usually more familiar, system, whose workings are supposed analogous to those of the first. Thus one might model the behaviour of a sound wave upon that of waves in water, or the behaviour of a gas upon that of a volume containing moving billiard balls. While nobody doubts that models have a useful 'heuristic' role in science, there has been intense debate over whether a good model suffices for scientific explanation, or whether an organized structure of laws from which the phenomena can be deduced is needed. This debate was inaugurated by the French physicist Pierre Duhem (1861-1916) in The Aim and Structure of Physical Theory (1954). Duhem's conception of science is that it is simply a device for calculating: science provides a deductive system that is systematic, economical, and predictive, but does not represent the deep underlying nature of reality. He also held the thesis that no hypothesis can be tested in isolation, since other auxiliary hypotheses will always be needed to draw empirical consequences from it. The Duhem thesis implies that refutation is a more complex matter than it might appear. It is sometimes framed as the view that a single hypothesis may be retained in the face of any adverse empirical evidence, if we are prepared to make modifications elsewhere in our system; although strictly speaking this is a stronger thesis, since it may be psychologically impossible to make consistent revisions in a belief system to accommodate, say, the hypothesis that there is a hippopotamus in the room when visibly there is not.

Primary and secondary qualities mark a division associated with the 17th-century rise of modern science, with its recognition that the fundamental explanatory properties of things are not the qualities that perception most immediately concerns. The latter are the secondary qualities, or immediate sensory qualities, including colour, taste, smell, felt warmth or texture, and sound. The primary properties are less tied to the deliverance of one particular sense, and include the size, shape, and motion of objects. In Robert Boyle (1627-92) and John Locke (1632-1704) the primary qualities are the scientifically tractable, objective qualities essential to anything material: a minimal listing of size, shape, and mobility, i.e., the state of being at rest or moving. Locke sometimes adds number, solidity, and texture (where this is thought of as the structure of a substance, or the way in which it is made out of atoms). The secondary qualities are the powers to excite particular sensory modifications in observers. Locke himself thought of these powers as identified with the texture of objects which, according to the corpuscularian science of the time, was the basis of an object's causal capacities. The ideas of secondary qualities are sharply different from these powers, and afford us no accurate impression of them. For René Descartes (1596-1650), this is the basis for rejecting any attempt to think of knowledge of external objects as provided by the senses. But in Locke our ideas of primary qualities do afford us an accurate notion of what shape, size, and mobility are. In English-speaking philosophy the first major discontent with the division was voiced by the Irish idealist George Berkeley (1685-1753), who probably took the basis of his attack from Pierre Bayle (1647-1706), who in turn cites the French critic Simon Foucher (1644-96).
Modern thought continues to wrestle with the difficulties of thinking of colour, taste, smell, warmth, and sound as real or objective properties of things independent of us.

The 'modality' of a proposition is the way in which it is true or false. The most important division is between propositions true of necessity and those that merely happen to be true: necessary as opposed to contingent propositions. Other qualifiers sometimes called 'modal' include the tense indicators, 'it will be the case that p' or 'it was the case that p', and there are affinities between the 'deontic' indicators, 'it ought to be the case that p' or 'it is permissible that p', and necessity and possibility.

The aim of logic is to make explicit the rules by which inferences may be drawn, rather than to study the actual reasoning processes that people use, which may or may not conform to those rules. In the case of deductive logic, if we ask why we need to obey the rules, the most general form of the answer is that if we do not we contradict ourselves, or, strictly speaking, we stand ready to contradict ourselves. Someone failing to draw a conclusion that follows from a set of premises need not be contradicting him or herself, but only failing to notice something. However, he or she is not defended against adding the contradictory conclusion to his or her set of beliefs. There is no equally simple answer in the case of inductive logic, which is in general a less robust subject, but the aim will be to find reasoning such that anyone failing to conform to it will have improbable beliefs. Traditional logic dominated the subject until the 19th century. Contemporary philosophy of mind, following cognitive science, uses the term 'representation' to mean just about anything that can be semantically evaluated. Thus, representations may be said to be true, to be about something, to be accurate, etc. Representations come in many varieties. The most familiar are pictures, three-dimensional models (e.g., statues, scale models), linguistic text, including mathematical formulas, and various hybrids of these such as diagrams, maps, graphs and tables.
It is an open question in cognitive science whether mental representation falls within any of these familiar sorts.

It is uncontroversial in contemporary cognitive science that cognitive processes are processes that manipulate representations; this is the representational theory of cognition. The idea seems nearly inevitable. What makes the difference between processes that are cognitive - solving a problem, say - and those that are not - a patellar reflex, for example - is just that cognitive processes are epistemically assessable: a solution procedure can be justified or correct; a reflex cannot. Since only things with content can be epistemically assessed, processes appear to count as cognitive only in so far as they implicate representations.

It is tempting to think that thoughts are the mind's representations: Aren't thoughts just those mental states that have semantic content? This is, no doubt, harmless enough provided we keep in mind that cognitive science - the scientific study of processes of awareness, thought and mental organization, often by means of computer modelling or artificial intelligence research - may attribute content where common sense would not. The cognitive aspect of the meaning of a sentence may be thought of as its content, or what is strictly said, abstracted away from the tone or emotive meaning, or other implicatures generated, for example, by the choice of words. The cognitive aspect is what has to be understood to know what would make the sentence true or false: It is frequently identified with the 'truth condition' of the sentence. The truth condition of a statement is the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth condition can only be defined by repeating the very same statement: The truth condition of 'snow is white' is that snow is white; the truth condition of 'Britain would have capitulated had Hitler invaded' is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.

Inferential role semantics is the view that the role of sentences in inference gives a more important key to their meaning than their 'external' relations to things in the world: The meaning of a sentence becomes its place in a network of inferences that it legitimates. It is also known as functional role semantics, procedural semantics, or conceptual role semantics. The view bears some relation to the coherence theory of truth, and suffers from the same suspicion that it divorces meaning from any clear association with things in the world.

Moreover, internalist theories take the content of a representation to be a matter determined by factors internal to the system that uses it. Thus, what Block (1986) calls 'short-armed' functional role theories are internalist. Externalist theories take the content of a representation to be determined, in part at least, by factors external to the system that uses it. Covariance theories, as well as teleological theories that invoke a historical theory of functions, take content to be determined by 'external' factors, crossing the atomist-holistic distinction with the internalist-externalist distinction.

Externalist theories, sometimes called non-individualistic theories, have the consequence that molecule-for-molecule identical cognitive systems might yet harbour representations with different contents. This has given rise to a controversy concerning 'narrow' content. If we assume some form of externalist theory is correct, then content is, in the first instance, 'wide' content, i.e., determined in part by factors external to the representing system. On the other hand, it seems clear that, on plausible assumptions about how to individuate psychological capacities, internally equivalent systems must have the same psychological capacities. Hence, it would appear that wide content cannot be relevant to characterizing psychological equivalence. Since cognitive science generally assumes that content is relevant to characterizing psychological equivalence, philosophers attracted to externalist theories of content have sometimes attempted to introduce 'narrow' content, i.e., an aspect or kind of content that is equivalent in internally equivalent systems. The simplest such theory is Fodor's idea (1987) that narrow content is a function from context (i.e., from whatever the external factors are) to wide content.
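Fodor's proposal can be given a toy illustration in code. This is a sketch, not Fodor's own formalism: the dictionary contexts, the `narrow_content_water` function, and the Twin Earth details are illustrative assumptions. The point is only that two internally identical thinkers share the function (narrow content), while what the function returns (wide content) differs with the environment.

```python
# Toy sketch (not Fodor's formalism): narrow content modeled as a
# function from an external context to a wide content. Two molecule-
# for-molecule twins share the same narrow content; their wide
# contents differ because their environments differ.

def narrow_content_water(context):
    """Map an external context to the wide content of a 'water' thought."""
    # The watery stuff of the thinker's environment fixes the wide content.
    return context["watery_stuff"]

earth = {"watery_stuff": "H2O"}
twin_earth = {"watery_stuff": "XYZ"}

# Internally identical thinkers, one shared narrow content ...
assert narrow_content_water(earth) == "H2O"       # Oscar's wide content
assert narrow_content_water(twin_earth) == "XYZ"  # Twin Oscar's wide content
```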

Most briefly, the epistemological tradition has been internalist, with externalism emerging as a genuine option only in the twentieth century. The best way to clarify this distinction is by considering another: that between knowledge and justification. Knowledge has been traditionally defined as justified true belief. However, due to certain counter-examples, the definition had to be refined: there are possible situations in which a belief is both true and justified, but in which, intuitively, we would still not call it knowledge. The extra element of undefeatedness attempts to rule out the counter-examples. The relevant issue, at this point, is that on all accounts knowledge entails truth: one cannot know something false. Justification, on the other hand, is the account of the reasons one has for a belief. One may be justified in holding a false belief; justification is understood from the subject's point of view, and it does not entail truth.

Internalism is the position that the reason one has for a belief - its justification - must be in some sense available to the knowing subject. If one has a belief, and the reason why it is acceptable to hold that belief is not knowable to the person in question, then there is no justification. Externalism holds that it is possible for a person to have a justified belief without having access to the reason for it. The internalist requirement seems too stringent to the externalist, who can explain such cases by, for example, appeal to the use of a process that reliably produces truths: one can use perception to acquire beliefs, and the very use of such a reliable method ensures that the belief is justified. Nonetheless, some externalists have produced accounts of knowledge with relativistic aspects to them. Alvin Goldman offers such an account in his Epistemology and Cognition (1986). Such accounts use the notion of a system of rules for the justification of belief - these rules provide a framework within which it can be established whether a belief is justified or not. The rules are not to be understood as consciously guiding the believer's thought processes, but rather can be applied from without to give an objective judgement as to whether the beliefs are justified or not. The framework establishes what counts as justification, and a criterion establishes the framework. Genuinely epistemic terms like 'justification' occur in the context of the framework, while the criterion attempts to set up the framework without using epistemic terms, using purely factual or descriptive terms.

In any event, a standard psycholinguistic theory, for instance, hypothesizes the construction of representations of the syntactic structures of the utterances one hears and understands, yet we are not aware of, and non-specialists do not even understand, the structures represented. Thus, first, cognitive science may attribute thoughts where common sense would not; second, cognitive science may find it useful to individuate thoughts in ways foreign to common sense.

The representational theory of cognition gives rise to a natural theory of intentional states, such as believing, desiring and intending. According to this theory, an intentional state has two aspects: a 'functional' aspect that distinguishes believing from desiring and so on, and a 'content' aspect that distinguishes beliefs from each other, desires from each other, and so on. A belief that 'p' might be realized as a representation with the content that 'p' and the function of serving as a premise in inference, while a desire that 'p' might be realized as a representation with the content that 'p' and the function of initiating processing designed to bring it about that 'p', and terminating such processing when a belief that 'p' is formed.

A great deal of philosophical effort has been lavished on the attempt to naturalize content, i.e., to explain in non-semantic, non-intentional terms what it is for something to be a representation (to have content), and what it is for something to have one particular content rather than another. There appear to be only four types of theory that have been proposed: theories that ground representation in (1) similarity, (2) covariance, (3) functional role, and (4) teleology.

Similarity theories hold that 'r' represents 'x' in virtue of being similar to 'x'. This has seemed hopeless to most as a theory of mental representation because it appears to require that things in the brain share properties with the things they represent: to represent a cat as furry appears to require something furry in the brain. Perhaps a notion of similarity that is naturalistic and does not involve property sharing can be worked out, but it is not obvious how.

Covariance theories hold that r's representing 'x' is grounded in the fact that r's occurrence covaries with that of 'x'. This is most compelling when one thinks about detection systems: a firing neural structure in the visual system is said to represent vertical orientations if its firing covaries with the occurrence of vertical lines in the visual field. Dretske (1981) and Fodor (1987) have, in different ways, attempted to promote this idea into a general theory of content.
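The bare covariance condition can be illustrated with made-up data (this is a toy illustration, not a model of real neurons or of Dretske's or Fodor's full theories): a state counts as representing a feature when its tokenings track the feature's occurrences across situations.

```python
# Toy illustration of a covariance condition: a detector state r is
# said to represent a feature x when r's tokenings covary with x's
# occurrences across a run of situations. All data here are invented.

def covaries(firings, feature_present):
    """True when the detector fires exactly when the feature is present."""
    return all(f == p for f, p in zip(firings, feature_present))

vertical_line = [True, False, True, True, False]   # feature occurrences
edge_detector = [True, False, True, True, False]   # perfect covariation
noise_unit    = [True, True, False, True, False]   # no reliable covariation

assert covaries(edge_detector, vertical_line)      # candidate representer
assert not covaries(noise_unit, vertical_line)     # fails the condition
```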

'Content' has become a technical term in philosophy for whatever it is a representation has that makes it semantically evaluable. Thus, a statement is sometimes said to have a proposition or truth condition as its content; a term is sometimes said to have a concept as its content. Much less is known about how to characterize the contents of non-linguistic representations than is known about characterizing linguistic representations. 'Content' is a useful term precisely because it allows one to abstract away from questions about what semantic properties representations have: a representation's content is just whatever it is that underwrites its semantic evaluation.

Likewise, functional role theories hold that r's representing 'x' is grounded in the functional role 'r' has in the representing system, i.e., on the relations imposed by specified cognitive processes between 'r' and other representations in the system's repertoire. Functional role theories take their cue from such common sense ideas as that people cannot believe that cats are furry if they do not know that cats are animals or that fur is like hair.

What is more, theories of representational content may be classified according to whether they are atomistic or holistic, and according to whether they are externalist or internalist. The most generally accepted account of the latter distinction, as applied to justification, is that a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective; and externalist if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer's cognitive perspective, beyond his ken. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication.

Atomistic theories take a representation's content to be something that can be specified independently of that representation's relations to other representations. What Fodor (1987) calls the crude causal theory, for example, takes a representation to be a |cow| - a mental representation with the same content as the word 'cow' - if its tokens are caused by instantiations of the property of being-a-cow, and this is a condition that places no explicit constraint on how |cow|s must or might relate to other representations.

The syllogism, or categorical syllogism, is the inference of one proposition from two premises. An example is: 'all horses have tails; all things with tails are four-legged; so all horses are four-legged'. Each premise has one term in common with the conclusion and one term in common with the other premise. The term that does not occur in the conclusion is called the middle term. The major premise of the syllogism is the premise containing the predicate of the conclusion (the major term), and the minor premise contains its subject (the minor term). So in the example, the first premise is the minor premise, the second the major premise, and 'having a tail' is the middle term. Syllogisms are classified in two ways: by mood, according to the form of the premises and the conclusion, and by figure, according to the way in which the middle term is placed in the premises.
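The example syllogism can be checked on a toy model by reading 'all A are B' as the subset relation A ⊆ B (the individual animals below are invented for the sake of the model):

```python
# The horses example as set inclusion: 'all A are B' becomes A <= B.
# The named individuals are made up to populate the model.

horses      = {"dobbin", "beauty"}
tailed      = {"dobbin", "beauty", "rover"}         # all horses have tails
four_legged = {"dobbin", "beauty", "rover", "tom"}  # all tailed things are four-legged

def all_are(a, b):
    """'All a are b' read as set inclusion."""
    return a <= b

# Both premises hold in the model ...
assert all_are(horses, tailed)
assert all_are(tailed, four_legged)
# ... so the conclusion 'all horses are four-legged' holds too,
# since set inclusion is transitive.
assert all_are(horses, four_legged)
```

The validity of the form (Barbara, in the traditional classification) corresponds to the transitivity of the subset relation.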

Although the theory of the syllogism dominated logic until the 19th century, it remained a piecemeal affair, able to deal with only a relatively small number of valid forms of argument. There have subsequently been rearguard actions on its behalf, but in general it has been eclipsed by the modern theory of quantification. The predicate calculus is the heart of modern logic, having proved capable of formalizing the reasoning processes of modern mathematics and science. In a first-order predicate calculus the variables range over objects; in a higher-order calculus they may range over predicates and functions themselves. The first-order predicate calculus with identity includes '=' as a primitive (undefined) expression; in a higher-order calculus it may be defined by the law that x = y if (∀F)(Fx ↔ Fy), which gives greater expressive power for less complexity.
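The higher-order definition of identity just mentioned can be set out explicitly (a standard statement of Leibniz's law, not tied to any one formal system):

```latex
% Identity defined in a second-order calculus: x and y are identical
% just in case every property holding of one holds of the other.
x = y \;\equiv_{\mathrm{df}}\; \forall F\,(Fx \leftrightarrow Fy)
```

In a first-order calculus the quantifier ∀F is unavailable, which is why '=' must there be taken as primitive.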

Modal logic was of great importance historically, particularly in the light of various doctrines concerning the necessary properties of the deity, but was not a central topic of modern logic in its golden period at the beginning of the 20th century. It was, however, revived by the American logician and philosopher C. I. Lewis (1883-1964). Although he wrote extensively on most central philosophical topics, he is remembered principally as a critic of the extensional nature of modern logic, and as the founding father of modal logic. His independent proofs showing that from a contradiction anything follows are paralleled in his logic, which uses a notion of entailment (strict implication) stronger than material implication.

Various doctrines concerning necessity and possibility are represented formally by adding to a propositional or predicate calculus two operators, □ and ◇ (sometimes written 'N' and 'M'), meaning 'necessarily' and 'possibly', respectively. Uncontroversial principles include □p → p and □p → ◇p, while more controversial ones include □p → □□p (if a proposition is necessary, it is necessarily necessary: characteristic of the system known as S4) and ◇p → □◇p (if a proposition is possible, it is necessarily possible: characteristic of the system known as S5). Modal realism, the doctrine advocated by David Lewis (1941-2001), holds that different possible worlds are to be thought of as existing exactly as this one does: thinking in terms of possibilities is thinking of real worlds where things are different. The view has been charged with making it impossible to see why it is good to save the child from drowning, since there is still a possible world in which she (or her counterpart) drowned, and from the standpoint of the universe it should make no difference which world is actual. Critics also charge that the notion fails to fit either with a coherent theory of how we know about possible worlds, or with a coherent theory of why we are interested in them, but Lewis denied that any other way of interpreting modal statements is tenable.
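The S4 principle can be checked concretely on a possible-worlds (Kripke) model. The sketch below is illustrative, with an invented three-world frame: 'necessarily p' holds at a world when p holds at every world accessible from it, and the S4 axiom □p → □□p holds whenever the accessibility relation is transitive (reflexivity is included here as S4 also requires □p → p).

```python
# A minimal Kripke-model sketch (toy frame, not a full modal prover):
# 'necessarily p' is true at a world when p holds at every world
# accessible from it.

worlds = {"w1", "w2", "w3"}
# A reflexive and transitive accessibility relation, as S4 requires.
access = {("w1", "w1"), ("w2", "w2"), ("w3", "w3"),
          ("w1", "w2"), ("w2", "w3"), ("w1", "w3")}
p_true_at = {"w2", "w3"}  # toy valuation: the worlds where p holds

def box(prop, w):
    """'Necessarily prop' at w: prop holds at every world accessible from w."""
    return all(v in prop for (u, v) in access if u == w)

# The worlds where 'necessarily p' holds.
box_p = {w for w in worlds if box(p_true_at, w)}

# S4 axiom: wherever necessarily p holds, necessarily necessarily p holds.
assert all((w not in box_p) or box(box_p, w) for w in worlds)
```

On a frame that is not transitive the final assertion can fail, which is exactly the sense in which □p → □□p characterizes S4.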

Saul Kripke (1940- ), the American logician and philosopher, contributed to the classical modern treatment of the topic of reference by clarifying the distinction between names and definite descriptions, and by opening the door to many subsequent attempts to understand the notion of reference in terms of a causal link between the use of a term and an original episode of attaching a name to its subject.

Semantics is one of the three branches into which 'semiotic' is usually divided: the study of the meaning of words, and of the relation of signs to the objects to which the signs are applicable. In formal studies, a semantics is provided for a formal language when an interpretation or 'model' is specified. However, a natural language comes ready interpreted, and the semantic problem is not that of specification but of understanding the relationship between terms of various categories (names, descriptions, predicates, adverbs . . . ) and their meanings. An influential proposal is to begin by attempting to provide a truth definition for the language, which will involve giving a full account of the effect that terms of different kinds have on the truth conditions of sentences containing them.

The basic case of reference is the relation between a name and the person or object which it names. The philosophical problems include trying to elucidate that relation, and to understand whether other semantic relations, such as that between a predicate and the property it expresses, or that between a description and what it describes, or that between me and the word 'I', are examples of the same relation or of very different ones. A great deal of modern work on this was stimulated by the American logician Saul Kripke's Naming and Necessity (1970). It would also be desirable to know whether we can refer to such things as abstract objects, and how to conduct the debate about each such issue. A popular approach, following Gottlob Frege, is to argue that the fundamental unit of analysis should be the whole sentence. The reference of a term becomes a derivative notion: it is whatever it is that defines the term's contribution to the truth condition of the whole sentence. There need be nothing further to say about it, given that we have a way of understanding the attribution of meaning or truth conditions to sentences. Other approaches look for a more substantive relation between words and things, whether causal, psychological, or social.

However, following Ramsey and the Italian mathematician G. Peano (1858-1932), it has been customary to distinguish logical paradoxes that depend upon a notion of reference or truth (semantic notions), such as those of the 'Liar family', from the purely logical paradoxes in which no such notions are involved, such as Russell's paradox or those of Cantor and Burali-Forti. Paradoxes of the first type seem to depend upon an element of self-reference, in which a sentence is about itself, or in which a phrase refers to something defined by a set of phrases of which it is itself one. It is tempting to feel that this element is responsible for the contradictions, although self-reference itself is often benign (for instance, the sentence 'All English sentences should have a verb' includes itself happily in the domain of sentences it is talking about), so the difficulty lies in forming a criterion that includes only the pathological cases of self-reference. Paradoxes of the second kind then need a different treatment. While the distinction is convenient in allowing set theory to proceed by circumventing the latter paradoxes by technical means, even when there is no solution to the semantic paradoxes, it may be a way of ignoring the similarities between the two families. There is still the possibility that, since there is no agreed solution to the semantic paradoxes, our understanding of Russell's paradox may be imperfect as well.
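The structure of Russell's paradox can be mimicked in code (a toy illustration only: predicates stand in for sets, and Python's recursion limit stands in for the contradiction). Let R be the 'predicate' true of exactly those predicates not true of themselves; asking whether R is true of itself then has no stable answer.

```python
# Russell's paradox mimicked with predicates: russell(pred) is True
# just when pred is not true of itself. Applying russell to itself
# loops forever, and the RecursionError plays the contradiction's role.

def russell(pred):
    """True of pred just when pred is not true of itself."""
    return not pred(pred)

try:
    russell(russell)          # is R true of itself?
except RecursionError:
    outcome = "no consistent answer"

print(outcome)  # -> no consistent answer
```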

Truth and falsity are the two classical truth-values that a statement, proposition or sentence can take. It is supposed in classical (two-valued) logic that each statement has one of these values, and none has both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true: if this condition obtains, the statement is true, and otherwise false. Statements may indeed be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.), but truth is the central normative notion governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme. A presupposition is a suppressed premise or background framework of thought necessary to make an argument valid or a position tenable; or a proposition whose truth is necessary for either the truth or the falsity of another statement. Thus if 'p' presupposes 'q', 'q' must be true for 'p' to be either true or false. In the theory of knowledge, the English philosopher and historian R. G. Collingwood (1889-1943) announced that any proposition capable of truth or falsity stands on a bed of 'absolute presuppositions' which are not themselves capable of truth or falsity, since a system of thought will contain no way of approaching such a question (a similar idea was later voiced by Wittgenstein in his work On Certainty). The introduction of presupposition therefore means that either a third truth-value is found, 'intermediate' between truth and falsity, or classical logic is preserved, but it is impossible to tell whether a particular sentence expresses a proposition that is a candidate for truth or falsity without knowing more than the formation rules of the language.
Neither suggestion is entirely satisfactory, and there is some consensus that, at least where definite descriptions are involved, such examples are equally well handled by regarding the overall sentence as false when the existence claim fails, and by explaining the data that the English philosopher P. F. Strawson (1919-2006) relied upon as the effects of 'implicature'.

Views about the meaning of terms will often depend on classifying the implicatures of sayings involving those terms as implicatures or as genuine logical implications of what is said. Implicatures may be divided into two kinds: conversational implicatures and the more subtle category of conventional implicatures. A term may, as a matter of convention, carry an implicature. Thus, one of the relations between 'he is poor and honest' and 'he is poor but honest' is that they have the same content (are true in just the same conditions), but the second has implicatures (that the combination is surprising or significant) that the first lacks.

In classical logic, then, a proposition may be true or false. If the former, it is said to take the truth-value true, and if the latter, the truth-value false. The idea behind the terminology is the analogy between assigning a propositional variable one or other of these values, as is done in providing an interpretation for a formula of the propositional calculus, and assigning an object as the value of some other variable. Logics with intermediate values are called 'many-valued logics'.
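Assigning truth-values in this way is mechanical enough to show in a few lines (an illustrative sketch; the formula chosen, the modus ponens schema, is an assumption made for the example). A formula of the propositional calculus is interpreted by giving each variable one of the two values and evaluating; a tautology is true under every assignment.

```python
# Evaluating a propositional formula under every two-valued assignment,
# here ((p and (p implies q)) implies q), the modus ponens schema.

from itertools import product

def formula(p, q):
    """((p & (p -> q)) -> q), with 'a -> b' read as (not a) or b."""
    implies = lambda a, b: (not a) or b
    return implies(p and implies(p, q), q)

# The formula is a tautology: true under all four assignments.
assert all(formula(p, q) for p, q in product([True, False], repeat=2))
```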

Nevertheless, Alfred Tarski, born Alfred Teitelbaum (1901-83), provided a definition of the predicate '. . . is true' for a language that satisfies convention 'T', the material adequacy condition he laid down. His methods of 'recursive' definition enable us to say for each sentence what it is that its truth consists in, while giving no verbal definition of truth itself. The recursive definition of the truth predicate of a language is always provided in a 'metalanguage'; Tarski is thus committed to a hierarchy of languages, each with its associated, but different, truth predicate. While this enables an approach that avoids the contradictions of the semantic paradoxes, it conflicts with the idea that a language should be able to say everything that there is to say, and other approaches have become increasingly important.
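The shape of a recursive truth definition can be sketched for a toy language (everything here is invented for illustration: the two-sentence atomic base, the naive string parsing, and the connectives chosen). Truth for atomic sentences is listed outright; truth for compound sentences is defined from the truth of their parts, which is the recursive step Tarski's method turns on.

```python
# A Tarski-style recursive truth definition for a toy language L.
# The atomic base and the crude 'not '/' and ' parsing are assumptions
# made for the sketch; only the recursive structure is the point.

BASE = {"snow is white": True, "grass is red": False}

def true_in_L(sentence):
    """Recursive truth predicate for the toy language L."""
    if sentence in BASE:                 # atomic clause: listed outright
        return BASE[sentence]
    if sentence.startswith("not "):      # negation clause
        return not true_in_L(sentence[4:])
    # conjunction clause (naive split on the first ' and ')
    left, _, right = sentence.partition(" and ")
    return true_in_L(left) and true_in_L(right)

assert true_in_L("snow is white")
assert true_in_L("not grass is red")
assert true_in_L("snow is white and not grass is red")
```

Note that the truth predicate `true_in_L` lives outside the toy language itself, in the 'metalanguage' (here, Python), just as the hierarchy requires.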


Moreover, the semantic theory of truth is the view that if a language is provided with a truth definition, this is a sufficient characterization of its concept of truth; there is no further philosophical chapter to write about truth itself, or about truth as shared across different languages. The view is similar to the disquotational theory.

The redundancy theory, also known as the 'deflationary' view of truth, was fathered by Gottlob Frege and the Cambridge mathematician and philosopher Frank Ramsey (1903-30), who showed how the distinction between the semantic paradoxes, such as that of the Liar, and Russell's paradox made unnecessary the ramified type theory of Principia Mathematica, and the resulting axiom of reducibility. Ramsey's name is also attached to a treatment of theoretical terms: by taking all the sentences affirmed in a scientific theory that use some term, e.g., 'quark', and replacing the term by a variable, instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for all of a group of theoretical terms, the sentence gives the 'topic-neutral' structure of the theory, but removes any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. However, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of a theory, then by the Löwenheim-Skolem theorem the result will be trivially interpretable, and the content of the theory may reasonably be felt to have been lost.
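Schematically, with T a theory and 'quark' its one theoretical term, Ramsification replaces the term by a bound variable (a minimal one-term illustration; real cases quantify over several terms at once):

```latex
% From the theory as stated ...
T(\mathrm{quark})
% ... to its Ramsey sentence, which claims only that something
% occupies the quark role:
\exists x\, T(x)
```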

Both Frege and Ramsey agree that the essential claim is that the predicate '. . . is true' does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that 'it is true that p' says no more nor less than 'p' (hence, 'redundancy'); and (2) that in less direct contexts, such as 'everything he said was true' or 'all logical consequences of true propositions are true', the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from a true proposition. For example, the second may translate as '(∀p, q)((p & (p → q)) → q)', where there is no use of a notion of truth.

There are technical problems in interpreting all uses of the notion of truth in such ways; nevertheless, they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as 'science aims at the truth', or 'truth is a norm governing discourse'. Post-modern writing frequently advocates that we must abandon such norms, along with a discredited 'objective' conception of truth, perhaps, we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: Science wants it to be so that whatever science holds that 'p', then 'p'. Discourse is to be regulated by the principle that it is wrong to assert 'p', when 'not-p'.

The simplest formulation of the disquotational theory is the claim that expressions of the form ''S' is true' mean the same as expressions of the form 'S'. Some philosophers dislike the idea of sameness of meaning, and if this is disallowed, then the claim is that the two forms are equivalent in any sense of equivalence that matters. That is, it makes no difference whether people say ''Dogs bark' is true', or whether they say 'dogs bark'. In the former representation of what they say the sentence 'Dogs bark' is mentioned, but in the latter it appears to be used, so the claim that the two are equivalent needs careful formulation and defence. On the face of it someone might know that 'Dogs bark' is true without knowing what it means (for instance, if he finds it in a list of acknowledged truths, although he does not understand English), and this is different from knowing that dogs bark. Disquotational theories are usually presented as versions of the 'redundancy theory of truth'.

Entailment is the relationship between a set of premises and a conclusion when the conclusion follows from the premises. Several philosophers identify this with its being logically impossible that the premises should all be true yet the conclusion false. Others are sufficiently impressed by the paradoxes of strict implication to look for a stronger relation, one that would distinguish between valid and invalid arguments within the sphere of necessary propositions. The search for such a stronger notion is the field of relevance logic.

From a systematic theoretical point of view, we may imagine the process of evolution of an empirical science to be a continuous process of induction. Theories are evolved and are expressed in short compass as statements of a large number of individual observations in the form of empirical laws, from which the general laws can be ascertained by comparison. Regarded in this way, the development of a science bears some resemblance to the compilation of a classified catalogue. It is a purely empirical enterprise.

But this point of view by no means embraces the whole of the actual process, for it overlooks the important part played by intuition and deductive thought in the development of an exact science. As soon as a science has emerged from its initial stages, theoretical advances are no longer achieved merely by a process of arrangement. Guided by empirical data, the investigator develops a system of thought which, in general, is built up logically from a small number of fundamental assumptions, the so-called axioms. We call such a system of thought a 'theory'. The theory finds the justification for its existence in the fact that it correlates a large number of single observations, and it is just here that the 'truth' of the theory lies.

Corresponding to the same complex of empirical data, there may be several theories, which differ from one another to a considerable extent. But as regards the deductions from the theories which are capable of being tested, the agreement between the theories may be so complete that it becomes difficult to find any deductions in which they differ from each other. As an example, a case of general interest is available in the province of biology, in the Darwinian theory of the development of species by selection in the struggle for existence, and in the theory of development based on the hypothesis of the hereditary transmission of acquired characters. The Origin of Species was principally successful in marshalling the evidence for evolution rather than in providing a convincing mechanism for genetic change; Darwin himself remained open to the search for additional mechanisms, while also remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the gene as the unit of inheritance that the synthesis known as 'neo-Darwinism' became the orthodox theory of evolution in the life sciences.

In the 19th century there arose the attempt to base ethical reasoning on the presumed facts about evolution. The movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). Its premise is that later elements in an evolutionary path are better than earlier ones; the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval, as more evolved than more 'primitive' social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called 'social Darwinism' emphasises the struggle for natural selection, and draws the conclusion that we should glorify and assist such struggle, usually by enhancing competitive and aggressive relations between people in society or between societies themselves. More recently, the relation between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.

Once again, psychological attempts are made to establish a point by appropriate objective means, their evidence being well substantiated within the realm of evolutionary principles, on which a variety of higher mental functions may be adaptations, forged in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoire, our moral reactions, including the disposition to detect and punish those who cheat on agreements or who 'free-ride' on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify. The approach was foreshadowed by Darwin himself, and by William James, as well as by the sociobiology of E. O. Wilson. Such terms are applied, more or less aggressively, especially to explanations offered in sociobiology and evolutionary psychology.

Another assumption that is frequently used to legitimate the real existence of forces associated with the invisible hand in neoclassical economics derives from Darwin's view of natural selection as competition between atomized organisms in the struggle for survival. In natural selection as we now understand it, cooperation appears to exist in complementary relation to competition. From complementary relationships of this kind emerge self-regulating properties that are greater than the sum of the parts and that serve to perpetuate the existence of the whole.

According to E. O. Wilson, the 'human mind evolved to believe in the gods' and people 'need a sacred narrative' to have a sense of higher purpose. Yet it is also clear that the 'gods' in his view are merely human constructs and that, therefore, there is no basis for dialogue between the world-views of science and religion. 'Science for its part', said Wilson, 'will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiment. The eventual result of the competition between the two world-views will be the secularization of the human epic and of religion itself.'

Man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the Cosmos, in terms that reflect 'reality'. By using the processes of nature as metaphor to describe the forces by which it operates upon and within Man, we come as close to describing 'reality' as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which naturally differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth to provide 'comprehensible' guides to living. Man's imagination and intellect play vital roles in his survival and evolution.

Since so much of life, both inside and outside the study, is concerned with finding explanations of things, it would be desirable to have a concept of what distinguishes a good explanation from a bad one. Under the influence of 'logical positivist' approaches to the structure of science, it was felt that the criterion ought to be found in a definite logical relationship between the 'explanans' (that which does the explaining) and the 'explanandum' (that which is to be explained). The approach culminated in the covering-law model of explanation, the view that an event is explained when it is subsumed under a law of nature, that is, when its occurrence is deducible from the law plus a set of initial conditions. A law would itself be explained by being deduced from a higher-order or covering law, in the way that Johannes Kepler's (1571-1630) laws of planetary motion were explained by being deduced from Newton's laws of motion. The covering-law model may be adapted to include explanation by showing that something is probable, given a statistical law. Questions for the covering-law model include whether covering laws are necessary to explanation (we explain many everyday events without overtly citing laws); whether they are sufficient (it may not explain an event just to say that it is an example of the kind of thing that always happens); and whether a purely logical relationship is adapted to capturing the requirements we make of explanations. These may include, for instance, that we have a 'feel' for what is happening, or that the explanation proceeds in terms of things that are familiar to us or unsurprising, or that we can give a model of what is going on, and none of these notions is captured in a purely logical approach. Recent work, therefore, has tended to stress the contextual and pragmatic elements in requirements for explanation, so that what counts as a good explanation given one set of concerns may not do so given another.
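The deductive core of the covering-law model can be set out in a few lines. The sketch below is an illustrative toy, not a claim about any actual scientific law: the predicate letters, the law 'for all x, F(x) implies G(x)', and the individual 'a' are all assumptions made up for the example. An event-description counts as explained when it is deducible from the law together with the initial conditions.

```python
# Toy covering-law deduction: law + initial conditions |- explanandum.
# All names here are illustrative assumptions.
law = ("F", "G")                    # stands for: for all x, F(x) -> G(x)
initial_conditions = {("F", "a")}   # the initial condition F(a)

def deduce(law, facts):
    """Apply the law once: add (G, x) for every x with (F, x) in the facts."""
    antecedent, consequent = law
    derived = {(consequent, x) for (pred, x) in facts if pred == antecedent}
    return facts | derived

facts = deduce(law, initial_conditions)
explanandum = ("G", "a")
print(explanandum in facts)   # prints True: the event-description is deducible
```

On the covering-law view, that deducibility is all there is to the explanation; the objections listed above ask whether this logical relation is either necessary or sufficient.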

The argument to the best explanation is the view that once we can select the best of the competing explanations of an event, we are justified in accepting it, or even believing it. The principle needs qualification, since sometimes it is unwise to ignore the antecedent improbability of a hypothesis which would explain the data better than others: e.g., the best explanation of a coin falling heads 530 times in 1,000 tosses might be that it is biased to give a probability of heads of 0.53, but it might be more sensible to suppose that it is fair, or to suspend judgement.
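The coin example can be made quantitative. The sketch below is a rough Bayesian calculation; the prior odds (one coin in a thousand biased in exactly this way) are an illustrative assumption, not part of the original example. It shows how the bias hypothesis can fit the data best while still losing to the fair hypothesis once antecedent improbability is counted in.

```python
from math import comb

n, k = 1000, 530  # tosses and heads, as in the text

def binom_pmf(p):
    """Probability of exactly k heads in n tosses, heads-probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

like_biased = binom_pmf(0.53)   # the maximum-likelihood 'best explanation'
like_fair = binom_pmf(0.50)
likelihood_ratio = like_biased / like_fair   # about 6: bias fits the data better

# Illustrative prior: only 1 coin in 1,000 is biased exactly this way.
prior_odds_fair = 0.999 / 0.001
posterior_odds_fair = prior_odds_fair * like_fair / like_biased
print(likelihood_ratio, posterior_odds_fair)  # fairness still far more credible
```

The 'best' explanation wins on fit alone, yet the posterior odds still favour fairness by a wide margin, which is exactly the qualification the passage urges.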

The philosophy of language is the general attempt to understand the components of a working language, the relationship the understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of semiotics into syntax, semantics, and pragmatics. The philosophy of language thus mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language. It likewise mingles with the metaphysics of truth and the relationship between sign and object. Much philosophy in the 20th century has been informed by the belief that the philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind, and the distinctive way in which we give shape to metaphysical beliefs. Particular topics include the problem of logical form, which is the basis of the division between syntax and semantics, as well as problems of understanding the number and nature of specifically semantic relationships such as meaning, reference, predication, and quantification. Pragmatics includes the theory of speech acts, while problems of rule-following and the indeterminacy of translation infect the philosophies of both pragmatics and semantics.

On this conception, to understand a sentence is to know its truth-conditions, and the conception has remained so central that those who offer opposing theories characteristically define their positions by reference to it. The conception of meaning as truth-conditions need not and ought not to be advanced as being in itself a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech acts. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions.

The meaning of a complex expression is a function of the meanings of its constituents. This is just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms - proper names, indexicals, and certain pronouns - this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates.
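The compositional picture just described can be illustrated with a toy truth-conditional semantics. The lexicon below is an assumption made up for the example: singular terms are assigned referents, predicates are assigned extensions, and a sentence-forming operator contributes a function on truth values.

```python
# Toy truth-conditional semantics; the lexicon is illustrative, not from the text.
reference = {"London": "london", "Paris": "paris"}   # singular terms -> objects
extension = {"is beautiful": {"paris", "london"}}    # predicate -> set of objects

def atomic(name, predicate):
    """An atomic sentence is true iff the term's referent lies in the
    predicate's extension."""
    return reference[name] in extension[predicate]

def neg(sentence_value):
    """A sentence-forming operator contributes a truth function."""
    return not sentence_value

print(atomic("Paris", "is beautiful"))       # prints True
print(neg(atomic("Paris", "is beautiful")))  # prints False
```

The truth value of the complex sentence is fixed entirely by the semantic values of the parts, which is the compositionality the passage describes.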

The theorist of truth-conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. The axiom ''London' refers to the city in which there was a huge fire in 1666' is a true statement about the reference of 'London'. It is a consequence of a theory which substitutes this axiom for the corresponding axiom of our simple truth theory that ''London is beautiful' is true if and only if the city in which there was a huge fire in 1666 is beautiful'. Since a psychological subject can understand the name 'London' without knowing that last-mentioned truth-condition, this replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on the theorist of meaning as truth-conditions to state the constraint in a way which does not presuppose any prior, non-truth-conditional conception of meaning.

Among the many challenges facing the theorist of truth-conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity; second, the theorist must offer an account of what it is for a person's language to be truly describable by a semantic theory containing a given semantic axiom.

Since the content of the claim that the sentence 'Paris is beautiful' is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than the grasp of truth-conditions must provide the substantive account. The charge rests upon what has been called the redundancy theory of truth, the theory which, somewhat more discriminatingly, Horwich calls the minimal theory of truth. Its claim is that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition 'p', it is true that 'p' if and only if 'p'. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth-conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth-conditional account of meaning. If the claim that the sentence 'Paris is beautiful' is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence's meaning in terms of its truth-conditions. The minimal theory of truth has been endorsed by the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30), the English philosopher Alfred Jules Ayer, the later Wittgenstein, Quine, Strawson and Horwich and - confusingly and inconsistently, if this article is correct - Frege himself. But is the minimal theory correct?
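The equivalence principle at issue can be displayed schematically; the quotation-mark notation below, forming a name of the sentence, is assumed for convenience of display:

```latex
% Equivalence schema: one instance for each sentence S of the language.
%   'S' is true  \leftrightarrow  S
% e.g.  'Paris is beautiful' is true  \leftrightarrow  Paris is beautiful.
\text{`}S\text{' is true} \;\leftrightarrow\; S
```

The minimal theory's distinctive claim is that the totality of such instances exhausts the notion of truth; the truth-conditional theorist, by contrast, wants the right-hand sides to do substantive meaning-specifying work.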

The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence; but in fact, it seems that each instance of the equivalence principle can itself be explained. There are facts from which such an instance as ''London is beautiful' is true if and only if London is beautiful' can be derived, namely the fact that 'London' refers to London, together with the contribution of the predicate. This would be a pseudo-explanation if the fact that 'London' refers to London consisted in part in the fact that 'London is beautiful' has the truth-condition it does. But that is very implausible: it is, after all, possible to understand the name 'London' without understanding the predicate 'is beautiful'.

The counterfactual conditional is also known as the subjunctive conditional. A counterfactual conditional is a conditional of the form 'if p were to happen, q would', or 'if p were to have happened, q would have happened', where the supposition of 'p' is contrary to the known fact 'not-p'. Such assertions are nevertheless useful: 'if you had broken the bone, the X-ray would have looked different', or 'if the reactor were to fail, this mechanism would click in' are important truths, even when we know that the bone is not broken or are certain that the reactor will not fail. It is arguably distinctive of laws of nature that they yield counterfactuals ('if the metal were to be heated, it would expand'), whereas accidentally true generalizations may not. It is clear that counterfactuals cannot be represented by the material implication of the propositional calculus, since that conditional comes out true whenever 'p' is false, so there would be no division between true and false counterfactuals.
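The point about material implication can be checked directly. A minimal sketch: reading 'if p then q' truth-functionally as '(not p) or q' and tabulating the cases shows the conditional true in every row where 'p' is false, so every counterfactual, having a false antecedent, would come out true on this reading.

```python
def material_implication(p, q):
    """'if p then q' read truth-functionally: equivalent to (not p) or q."""
    return (not p) or q

# Full truth table for the material conditional.
for p in (True, False):
    for q in (True, False):
        print(p, q, material_implication(p, q))
# Whenever p is False the conditional is True regardless of q, so a
# false-antecedent (counterfactual) conditional is never false on this reading.
```

This is why the passage concludes that material implication cannot mark the division between true and false counterfactuals.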

Although the subjunctive form indicates the counterfactual, in many contexts it does not seem to matter whether we use a subjunctive form or a simple conditional form: 'if you run out of water, you will be in trouble' seems equivalent to 'if you were to run out of water, you would be in trouble'. In other contexts there is a big difference: 'If Oswald did not kill Kennedy, someone else did' is clearly true, whereas 'if Oswald had not killed Kennedy, someone would have' is most probably false.

The best-known modern treatment of counterfactuals is that of David Lewis, which evaluates them as true or false according to whether 'q' is true in the 'most similar' possible worlds to ours in which 'p' is true. The similarity-ranking this approach needs has proved controversial, particularly since it may need to presuppose some notion of sameness of laws of nature, whereas part of the interest in counterfactuals is that they promise to illuminate that notion. There is an expanding awareness that the classification of conditionals is an extremely tricky business, and that categorizing them as counterfactual or not is of limited use.

In any conditional, a proposition of the form 'if p then q', the condition hypothesized, 'p', is called the antecedent of the conditional, and 'q' the consequent. Various kinds of conditional have been distinguished. The weakest is that of material implication, merely telling us that either 'not-p' or 'q'; stronger conditionals include elements of modality, corresponding to the thought that if 'p' is true then 'q' must be true. Ordinary language is very flexible in its use of the conditional form, and there is controversy whether this flexibility should be explained semantically, yielding different kinds of conditionals with different meanings, or pragmatically, in which case there should be one basic meaning, with surface differences arising from other implicatures.

There are many forms of reliabilism, just as there are many forms of Foundationalism and coherentism. How is reliabilism related to these other two theories of justification? We usually regard it as a rival, and this is aptly so insofar as Foundationalism and coherentism traditionally focussed on purely evidential relations rather than psychological processes. But we might also offer reliabilism as a deeper-level theory, subsuming some precepts of either Foundationalism or coherentism. Foundationalism says that there are 'basic' beliefs, which acquire justification without dependence on inference; reliabilism might rationalize this by indicating that reliable non-inferential processes have formed the basic beliefs. Coherentism stresses the primacy of systematicity in all doxastic decision-making; reliabilism might rationalize this by pointing to increases in reliability that accrue from systematicity. Consequently, reliabilism could complement Foundationalism and coherentism rather than compete with them.

These examples make it seem likely that, if there is a criterion for what makes an alternative situation relevant that will save Goldman's claim about local reliability and knowledge, it will not be simple. The interesting thesis that counts as a causal theory of justification, in the sense of 'causal theory' intended here, is that a belief is justified in case it was produced by a type of process that is 'globally' reliable, that is, one whose propensity to produce true beliefs - definable, to an acceptable approximation, as the proportion of the beliefs it produces (or would produce, were it used as much as opportunity allows) that are true - is sufficiently great. Variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in a note by F. P. Ramsey (1903-30). In the theory of probability, Ramsey was the first to show how a 'personalist' theory could be developed, based on a precise behavioural notion of preference and expectation. In the philosophy of mathematics, much of his work was directed at saving classical mathematics from 'intuitionism', or what he called the 'Bolshevik menace' of Brouwer and Weyl. In the philosophy of language, Ramsey combined a deflationary view of truth with radical views of the function of many kinds of proposition: neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different specific function in our intellectual economy.
Ramsey was also one of the earliest commentators on the early work of Wittgenstein, and his continuing friendship with Wittgenstein led to Wittgenstein's return to Cambridge and to philosophy in 1929.

A Ramsey sentence of a theory is the sentence generated by taking all the sentences affirmed in a scientific theory that use some term, e.g., 'quark', replacing the term by a variable, and existentially quantifying into the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If we repeat the process for all of a group of theoretical terms, the sentence gives the 'topic-neutral' structure of the theory, while removing any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided.

Virtually all theories of knowledge, of course, share an externalist component in requiring truth as a condition for knowing. Reliabilism goes further, however, in trying to capture additional conditions for knowledge by way of a nomic, counterfactual or similar 'external' relation between belief and truth, closely allied to the nomic sufficiency account of knowledge. The core of this approach is that X's belief that 'p' qualifies as knowledge just in case X believes 'p' because of reasons that would not obtain unless 'p' were true, or because of a process or method that would not yield belief in 'p' if 'p' were not true. For example, X would not have its current reasons for believing there is a telephone before it, or would not have come to believe this in the way it does, unless there was a telephone before it; thus, there is a counterfactual, reliable guarantor of the belief's being true. A variant of the counterfactual approach says that X knows that 'p' only if there is no 'relevant alternative' situation in which 'p' is false but X would still believe that 'p'. One's evidence must be sufficient to eliminate all the alternatives to 'p', where an alternative to a proposition 'p' is a proposition incompatible with 'p'.
That is, one's justification or evidence for 'p' must be sufficient for one to know that every alternative to 'p' is false. Sceptical arguments have exploited this element of our thinking about knowledge. These arguments call our attention to alternatives that our evidence cannot eliminate. The sceptic inquires how we know that we are not seeing a cleverly disguised mule. While we do have some evidence against the likelihood of such a deception, intuitively it is not strong enough for us to know that we are not so deceived. By pointing out alternatives of this kind that we cannot eliminate, and others with more general application (dreams, hallucinations, etc.), the sceptic appears to show that this requirement is seldom, if ever, satisfied.

A further distinction, between the 'in itself' and the 'for itself', originated in the Kantian logical and epistemological distinction between a thing as it is in itself and that thing as an appearance, or as it is for us. For Kant, the thing in itself is the thing as it is intrinsically, that is, the character of the thing apart from any relations in which it happens to stand. The thing for us, or as an appearance, is the thing in so far as it stands in relation to our cognitive faculties and other objects. 'Now a thing in itself cannot be known through mere relations: and we may therefore conclude that since outer sense gives us nothing but mere relations, this sense can contain in its representation only the relation of an object to the subject, and not the inner properties of the object in itself'. Kant applies this same distinction to the subject's cognition of itself. Since the subject can know itself only in so far as it can intuit itself, and it can intuit itself only in terms of temporal relations, and thus as it is related to its own self, it represents itself 'as it appears to itself, not as it is'. Thus, the distinction between what the subject is in itself and what it is for itself arises in Kant in so far as the distinction between what an object is in itself and what it is for a knower is applied to the subject's own knowledge of itself.

Hegel (1770-1831) begins the transformation of the epistemological distinction between what the subject is in itself and what it is for itself into an ontological distinction. Since, for Hegel, what anything is in itself necessarily involves relation, the Kantian distinction must be transformed. Taking his cue from the fact that, even for Kant, what the subject is in itself involves a relation to itself, or self-consciousness, Hegel suggests that the cognition of an entity in terms of such relations or self-relations does not preclude knowledge of the thing itself. Rather, what an entity is intrinsically, or in itself, is best understood in terms of the potentiality of that thing to enter into specific explicit relations with itself. And, just as for consciousness to be explicitly itself is for it to be for itself by being in relation to itself, i.e., to be explicitly self-conscious, the for-itself of any entity is that entity in so far as it is actually related to itself. The distinction between the entity in itself and the entity for itself is thus taken to apply to every entity, and not only to the subject. For example, the seed of a plant is that plant in itself or implicitly, while the mature plant, which involves actual relations among the plant's various organs, is the plant 'for itself'. In Hegel, then, the in-itself/for-itself distinction becomes universalized, in that it is applied to all entities, and not merely to conscious entities. In addition, the distinction takes on an ontological dimension. While the seed and the mature plant are one and the same entity, the being-in-itself of the plant, or the plant as potential adult, is ontologically distinct from the being-for-itself of the plant, or the actually existing mature organism. At the same time, the distinction retains an epistemological dimension in Hegel, although its import is quite different from that of the Kantian distinction.
To know a thing, it is necessary to know both the actual explicit self-relations which mark the thing (the being for itself of the thing), and the inherent simpler principle of these relations, or the being in itself of the thing. Real knowledge, for Hegel, thus consists in knowledge of the thing as it is in and for itself.

Sartre's distinction between being in itself and being for itself, which is an entirely ontological distinction with minimal epistemological import, is descended from the Hegelian distinction. Sartre distinguishes between what it is for consciousness to be, i.e., being for itself, and the being of the transcendent being which is intended by consciousness, i.e., being in itself. What it is for consciousness to be, being for itself, is marked by self-relation. Sartre posits a 'pre-reflective cogito', such that every consciousness of '?' necessarily involves a 'non-positional' consciousness of the consciousness of '?'. While in Kant every subject is both in itself, i.e., as it is apart from its relations, and for itself in so far as it is related to itself by appearing to itself, and in Hegel every entity can be considered as both 'in itself' and 'for itself', in Sartre, to be self-related or for itself is the distinctive ontological mark of consciousness, while to lack relations or to be in itself is the distinctive ontological mark of non-conscious entities.

This conclusion conflicts with another strand in our thinking about knowledge, namely that we know many things. Thus, there is a tension in our ordinary thinking about knowledge: we believe that knowledge is, in the sense indicated, an absolute concept, and yet we also believe that there are many instances of that concept.

If one finds absoluteness to be too central a component of our concept of knowledge to be relinquished, one could argue from the absolute character of knowledge to a sceptical conclusion (Unger, 1975). Most philosophers, however, have taken the other course, choosing to respond to the conflict by giving up, perhaps reluctantly, the absolute criterion. This latter response holds as sacrosanct our commonsense belief that we know many things (Pollock, 1979 and Chisholm, 1977). Each approach is subject to the criticism that it preserves one aspect of our ordinary thinking about knowledge at the expense of denying another. We can view the theory of relevant alternatives as an attempt to provide a more satisfactory response to this tension in our thinking about knowledge. It attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.

This approach to the theory of knowledge sees an important connection between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin's theory of biological natural selection. There is a widespread misconception that evolution proceeds according to some plan or direction, but it has neither, and the role of chance ensures that its future course will be unpredictable. Random variations in individual organisms create tiny differences in their Darwinian fitness. Some individuals have more offspring than others, and the characteristics that increased their fitness thereby become more prevalent in future generations. Once upon a time, at least, a mutation occurred in a human population in tropical Africa that changed the hemoglobin molecule in a way that provided resistance to malaria. This enormous advantage caused the new gene to spread, with the unfortunate consequence that sickle-cell anaemia came to exist.

When proximate and evolutionary explanations are carefully distinguished, many questions in biology make more sense. A proximate explanation describes a trait - its anatomy, physiology, and biochemistry, as well as its development from the genetic instructions provided by a bit of DNA in the fertilized egg to the adult individual. An evolutionary explanation is about why the DNA specifies that trait in the first place - why it encodes one kind of structure and not some other. Proximate and evolutionary explanations are not alternatives; both are needed to understand every trait. A proximate explanation of the external ear would describe its arteries and nerves and how it develops from the embryo to the adult form. Even if we know all this, however, we still need an evolutionary explanation of how its structure gives creatures with ears an advantage over those that lack it, and of how selection shaped the ear into its current form. To take another example, a proximate explanation of taste buds describes their structure and chemistry, how they detect salt, sweet, sour, and bitter, and how they transform this information into impulses that travel via neurons to the brain. An evolutionary explanation of taste buds shows why they detect saltiness, acidity, sweetness and bitterness instead of other chemical characteristics, and how the capacity to detect these characteristics helps an organism cope with life.

Chance can influence the outcome at each stage: first, in the creation of a genetic mutation; second, in whether the bearer lives long enough to show its effects; third, in chance events that influence the individual's actual reproductive success; fourth, in whether a gene, even if favoured in one generation, is by happenstance eliminated in the next; and finally, in the many unpredictable environmental changes that will undoubtedly occur in the history of any group of organisms. As the Harvard biologist Stephen Jay Gould has so vividly expressed it, if the evolutionary process were run over again, the outcome would surely be different. Not only might there not be humans, there might not even be anything like mammals.

We often emphasize the elegance of traits shaped by natural selection, but the common idea that nature creates perfection needs to be analyzed carefully. Whether evolution achieves anything like perfection depends on exactly what one means. If one means 'Does natural selection always take the best path for the long-term welfare of a species?', the answer is no. That would require adaptation by group selection, and this is unlikely. If one means 'Does natural selection create every adaptation that would be valuable?', the answer, again, is no. For instance, some kinds of South American monkeys can grasp branches with their tails. The trick would surely also be useful to some African species, but, simply because of bad luck, none have it. Some combination of circumstances started some ancestral South American monkeys using their tails in ways that ultimately led to an ability to grab onto branches, while no such development took place in Africa. Mere usefulness of a trait does not mean that it will evolve.

This is an approach to the theory of knowledge that sees an important connection between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin's theory of biological natural selection. The three major components of the model of natural selection are variation, selection, and retention. According to Darwin's theory of natural selection, variations are not pre-designed to perform certain functions. Rather, those variations that happen to perform useful functions are selected, while those that do not are eliminated; nevertheless, such selection is responsible for the appearance that variations occur intentionally, as if designed for a purpose. In the modern theory of evolution, genetic mutations provide the blind variations (blind in the sense that variations are not influenced by the effects they would have - the likelihood of a mutation is not correlated with the benefits or liabilities that mutation would confer on the organism), the environment provides the filter of selection, and reproduction provides the retention. Fit is achieved because those organisms with features that make them less adapted for survival do not survive as well as other organisms in the environment that have features that are better adapted. Evolutionary epistemology applies this blind-variation-and-selective-retention model to the growth of scientific knowledge and to human thought processes in general.
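The blind-variation-and-selective-retention model can be sketched as a toy simulation. This is only an illustrative sketch, not part of the epistemological argument: the fitness function, mutation step, and population size are all invented assumptions chosen to make the mechanism visible.

```python
import random

def evolve(population, fitness, mutate, generations=100, seed=0):
    """Blind variation and selective retention, in miniature.

    Variation is 'blind': mutations are drawn at random, without regard
    to the benefit or harm they confer. Selection then retains the
    fitter variants, and the next generation inherits from them.
    """
    rng = random.Random(seed)
    for _ in range(generations):
        # Blind variation: each individual produces a randomly mutated variant.
        variants = [mutate(ind, rng) for ind in population]
        pool = population + variants
        # Selective retention: keep only the fittest half of the pool.
        pool.sort(key=fitness, reverse=True)
        population = pool[:len(population)]
    return population

# Hypothetical example: numbers 'adapt' toward an arbitrary target value.
target = 42
fitness = lambda x: -abs(x - target)          # closer to target = fitter
mutate = lambda x, rng: x + rng.uniform(-1, 1)  # undirected random change

final = evolve([0.0] * 20, fitness, mutate, generations=500)
print(max(final, key=fitness))  # close to the target
```

Although no variant is ever aimed at the target, repeated selection over blind variations produces the appearance of design, which is the point of the analogy.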

The parallel between biological evolution and conceptual or 'epistemic' evolution can be taken either literally or analogically. The literal version of evolutionary epistemology, called the 'evolution of cognitive mechanisms program' by Bradie (1986) and the 'Darwinian approach to epistemology' by Ruse (1986), holds that the growth of knowledge occurs through blind variation and selective retention because biological natural selection itself is the cause of epistemic variation and selection. The most plausible version of the literal view does not hold that all human beliefs are innate, but rather that the mental mechanisms which guide the acquisition of non-innate beliefs are themselves innate and the result of biological natural selection. Ruse (1986) defends a version of literal evolutionary epistemology that he links to sociobiology.

Innate ideas have been variously defined by philosophers, either as ideas consciously present to the mind prior to sense experience (the non-dispositional sense), or as ideas which we have an innate disposition to form, though we need not be actually aware of them at a particular time, e.g., as babies (the dispositional sense). Understood in either way, they were invoked to account for our recognition of certain truths, such as those of mathematics, or to justify certain moral and religious claims which were held to be capable of being known by introspection of our innate ideas. Examples of such supposed truths might include 'murder is wrong' or 'God exists'.

One difficulty with the doctrine is that it is sometimes formulated as a claim about concepts or ideas which are held to be innate, and at other times as a claim about a source of propositional knowledge. In so far as concepts are taken to be innate, the doctrine relates primarily to claims about meaning: our innate idea of God, for example, is taken as a source for the meaning of the word 'God'. When innate ideas are understood propositionally, their supposed innateness is taken as evidence for their truth. This latter thesis clearly rests on the assumption that innate propositions have an unimpeachable source, usually taken to be God, but then any appeal to innate ideas to justify the existence of God is circular. Despite such difficulties the doctrine of innate ideas had a long and influential history until the eighteenth century, and the concept has in recent decades been revitalized through its employment in Noam Chomsky's influential account of the mind's linguistic capacities.

The attraction of the theory has been felt most strongly by those philosophers who have been unable to give an alternative account of our capacity to recognize that some propositions are certainly true, where that recognition cannot be justified solely on the basis of an appeal to sense experience. Thus Plato argued that, for example, recognition of mathematical truths could only be explained on the assumption of some form of recollection. In Plato, the recollection of knowledge, possibly obtained in a previous state of existence, is most famously broached in the dialogue Meno, and the doctrine is one attempt to account for the 'innate', unlearned character of knowledge of first principles. Since there was no plausible post-natal source, the recollection must refer to a pre-natal acquisition of knowledge. Thus understood, the doctrine of innate ideas supported the view that there are important truths innate in human beings, and that it is sense experience which hinders their proper apprehension.

The implications of the doctrine were important in Christian philosophy throughout the Middle Ages and in scholastic teaching until its displacement by Locke's philosophy in the eighteenth century. It had in the meantime acquired modern expression in the philosophy of Descartes, who argued that we can come to know certain important truths before we have any empirical knowledge at all. Our idea of God, Descartes held, is logically independent of sense experience. In England the Cambridge Platonists such as Henry More and Ralph Cudworth added considerable support.

Locke's rejection of innate ideas and his alternative empiricist account were powerful enough to displace the doctrine from philosophy almost totally. Leibniz, in his critique of Locke, attempted to defend it with a sophisticated dispositional version of the theory, but it attracted few followers.

The empiricist alternative explained the certainty of such propositions by construing necessary truths as analytic. Kant's refinement of the classification of propositions, with the fourfold scheme generated by the analytic/synthetic and a priori/a posteriori distinctions, did nothing to encourage a return to the doctrine of innate ideas, which slipped from view. The doctrine may fruitfully be understood as resting on a confusion between explaining the genesis of ideas or concepts and providing a basis for regarding some propositions as necessarily true.

Chomsky's revival of the term in connection with his account of language acquisition has once more made the issue topical. He claims that the principles of language and 'natural logic' are known unconsciously and are a precondition for language acquisition. But for his purposes innate ideas must be taken in a strong dispositional sense - so strong that it is far from clear that Chomsky's claims are in as direct a conflict with empiricist accounts of learning as some (including Chomsky) have supposed. Willard van Orman Quine (1908-2000), for example, sees no conflict with his own version of empiricist behaviourism.

Locke's account of analytic propositions was everything that a succinct account of analyticity should be (Locke, 1924). He distinguishes two kinds of analytic propositions: identity propositions, in which 'we affirm the said term of itself', e.g., 'Roses are roses', and predicative propositions, in which 'a part of the complex idea is predicated of the name of the whole', e.g., 'Roses are flowers'. Locke calls such sentences 'trifling' because a speaker who uses them is 'trifling with words'. A synthetic sentence, in contrast, such as a mathematical theorem, states a real truth and conveys instructive real knowledge. Correspondingly, Locke distinguishes two kinds of 'necessary consequences': analytic entailments, where validity depends on the literal containment of the conclusion in the premise, and synthetic entailments, where it does not. John Locke (1632-1704) did not originate this concept-containment notion of analyticity. It is discussed by Arnauld and Nicole, and it is safe to say that it has been around for a very long time.

The analogical version of evolutionary epistemology, called the 'evolution of theories program' by Bradie (1986) and the 'Spencerian approach' (after the nineteenth-century philosopher Herbert Spencer) by Ruse (1986), holds that the development of human knowledge is governed by a process analogous to biological natural selection, rather than by an instance of the mechanism itself. This version of evolutionary epistemology, introduced and elaborated by Donald Campbell (1974) and Karl Popper, sees the [partial] fit between theories and the world as explained by a mental process of trial and error known as epistemic natural selection.

Both versions of evolutionary epistemology are usually taken to be types of naturalized epistemology, because both take some empirical facts as a starting point for their epistemological project. The literal version begins by accepting evolutionary theory and a materialist approach to the mind and, from these, constructs an account of knowledge and its development. By contrast, the analogical version does not require the truth of biological evolution: it simply draws on biological evolution as a source for the model of natural selection. For this version of evolutionary epistemology to be true, the model of natural selection need only apply to the growth of knowledge, not to the origin and development of species. Crudely put, evolutionary epistemology of the analogical sort could still be true even if creationism were the correct theory of the origin of species.

Although they do not begin by assuming evolutionary theory, most analogical evolutionary epistemologists are naturalized epistemologists as well; their empirical assumptions simply come from psychology and cognitive science rather than evolutionary theory. Sometimes, however, evolutionary epistemology is characterized in a seemingly non-naturalistic fashion. Campbell (1974) says that 'if one is expanding knowledge beyond what one knows, one has no choice but to explore without the benefit of wisdom', i.e., blindly. This, Campbell admits, makes evolutionary epistemology close to being a tautology (and so not naturalistic). Evolutionary epistemology does assert the analytic claim that when expanding one's knowledge beyond what one knows, one must proceed to something that is not already known; but, more interestingly, it also makes the synthetic claim that when expanding one's knowledge beyond what one knows, one must proceed by blind variation and selective retention. This claim is synthetic because it can be empirically falsified. The central claim of evolutionary epistemology is therefore synthetic, not analytic. If it were not, Campbell would be right that evolutionary epistemology has the analytic feature he mentions, but he would be wrong to think that this is a distinguishing feature, since any plausible epistemology has the same analytic feature.

Two other issues loom large in the literature: realism (what metaphysical commitment does an evolutionary epistemologist have to make?) and progress (according to evolutionary epistemology, does knowledge develop toward a goal?). With respect to realism, many evolutionary epistemologists endorse what is called 'hypothetical realism', a view that combines a version of epistemological scepticism with tentative acceptance of metaphysical realism. With respect to progress, the problem is that biological evolution is not goal-directed, but the growth of human knowledge seems to be. Campbell (1974) worries about the potential dis-analogy here but is willing to bite the bullet and admit that epistemic evolution progresses toward a goal (truth) while biological evolution does not. Some have argued that evolutionary epistemologists must give up the 'truth-tropic' sense of progress because a natural selection model is non-teleological in essence; alternatively, a non-teleological sense of progress, following Kuhn (1970), can be embraced along with evolutionary epistemology.

Among the most frequent and serious criticisms levelled against evolutionary epistemology is that the analogical version of the view is false because epistemic variation is not blind. One reply is that this objection fails because, while epistemic variation is not random, its constraints come from heuristics that are themselves the product of blind variation and selective retention. Further, Stein and Lipton argue that heuristics are analogous to biological pre-adaptations - evolutionary precursors, such as a half-wing, a precursor to a wing, which have some function other than the function of their descendant structures. The constraints on epistemic variation are, on this view, not a source of dis-analogy, but the source of a more articulated account of the analogy.

Many evolutionary epistemologists try to combine the literal and the analogical versions, saying that those beliefs and cognitive mechanisms which are innate result from natural selection of the biological sort, while those that are not innate result from natural selection of the epistemic sort. This is reasonable as long as the two parts of this hybrid view are kept distinct. An analogical version of evolutionary epistemology with biological variation as its only source of blindness would be a null theory: this would be the case if all our beliefs were innate, or if our non-innate beliefs were not the result of blind variation. An appeal to biological blindness is therefore not a legitimate way to produce a hybrid version of evolutionary epistemology, since doing so trivializes the theory. For similar reasons, such an appeal will not save an analogical version of evolutionary epistemology from arguments to the effect that epistemic variation is not blind.

Although it is a relatively new approach to the theory of knowledge, evolutionary epistemology has attracted much attention, primarily because it represents a serious attempt to flesh out a naturalized epistemology by drawing on several disciplines. If science is to be used for understanding the nature and development of knowledge, then evolutionary theory is among the disciplines worth a look. Insofar as evolutionary epistemology looks there, it is an interesting and potentially fruitful epistemological programme.

What makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades many epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that 'p' is knowledge just in case it has the right causal connection to the fact that 'p'. Such a criterion can be applied only to cases where the fact that 'p' is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject's environment.

For example, Armstrong (1973) proposed that a belief of the form 'This [perceived] object is F' is [non-inferential] knowledge if and only if the belief is a completely reliable sign that the perceived object is F; that is, the fact that the object is F contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictated that, for any subject 'x' and any perceived object 'y', if 'x' had those properties and believed that 'y' is F, then 'y' is F. A rather similar account has been offered in terms of the belief's being caused by a signal received by the perceiver that carries the information that the object is F.

This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, for it is compatible with the belief's being unjustified, and an unjustified belief cannot be knowledge. The view that a belief acquires favourable epistemic status by having some kind of reliable linkage to the truth has been advanced, in variant forms, for both knowledge and justified belief. The first formulation of a reliability account of knowing is credited to F. P. Ramsey (1903-30), much of whose work was directed at saving classical mathematics from 'intuitionism', or what he called the 'Bolshevik menace of Brouwer and Weyl'. In the theory of probability, Ramsey was the first to develop an account based on precise behavioural notions of preference and expectation. In the philosophy of language, he was one of the first thinkers to accept a 'redundancy theory of truth', which he combined with radical views of the function of many kinds of propositions: neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different, specific function in our intellectual economy. Ramsey said that a belief was knowledge if it is true, certain, and obtained by a reliable process. P. Unger (1968) suggested that 'S' knows that 'p' just in case it is not at all accidental that 'S' is right about its being the case that 'p'. Armstrong (1973) drew an analogy between a thermometer that reliably indicates the temperature and a belief that reliably indicates the truth: a non-inferential belief qualifies as knowledge if the belief has properties that are nomically sufficient for its truth, i.e., guarantee its truth via laws of nature.

Reliabilism is standardly classified as an 'externalist' theory because it invokes some truth-linked factor, and truth is 'external' to the believer. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc., that motivate the views that have come to be known as 'direct reference' theories. Such phenomena seem, at least, to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, etc. - not just on what is going on internally in his mind or brain (Putnam, 1975 and Burge, 1979). Virtually all theories of knowledge, of course, share an externalist component in requiring truth as a condition for knowing. Reliabilism goes further, however, in trying to capture additional conditions for knowledge by means of a nomic, counterfactual or other such 'external' relation between belief and truth.

The most influential counterexamples to reliabilism are the demon-world and the clairvoyance examples. The demon-world example challenges the necessity of the reliability requirement: in a possible world in which an evil demon creates deceptive visual experiences, the process of vision is not reliable; still, the visually formed beliefs in this world are intuitively justified. The clairvoyance example challenges the sufficiency of reliability: suppose a cognitive agent possesses a reliable clairvoyance power but has no evidence for or against his possessing such a power. Intuitively, his clairvoyantly formed beliefs are unjustified, but reliabilism declares them justified.

Another form of reliabilism, 'normal-worlds' reliabilism, answers the range problem differently and thereby treats the demon-world problem: it defines a 'normal world' as one that is consistent with our general beliefs about the actual world. Normal-worlds reliabilism says that a belief in any possible world is justified just in case its generating processes have high truth ratios in normal worlds. This resolves the demon-world problem because the relevant truth ratio of the visual process is not its truth ratio in the demon world itself, but its ratio in normal worlds. Since this ratio is presumably high, visually formed beliefs in the demon world turn out to be justified.
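The arithmetic behind a 'truth ratio' can be made concrete with a small sketch. Everything here is an illustrative assumption (the `truth_ratio` helper, the toy 'vision' process, and the hand-built worlds are invented for the example, not a standard formalization):

```python
def truth_ratio(process, cases):
    """Fraction of the beliefs a process produces that are true in the given cases.

    `process` maps a case to a belief (a proposition), and each case
    records which propositions are true in it.
    """
    correct = sum(1 for case in cases if process(case) in case["truths"])
    return correct / len(cases)

# Hypothetical 'vision' process: believe whatever the world appears to contain.
vision = lambda case: case["appearance"]

# Toy normal worlds: appearances match the facts.
normal_worlds = [
    {"appearance": "red ball", "truths": {"red ball"}},
    {"appearance": "green cube", "truths": {"green cube"}},
    {"appearance": "red ball", "truths": {"red ball"}},
]
# Toy demon world: appearances are systematically deceptive.
demon_world = [
    {"appearance": "red ball", "truths": {"nothing there"}},
]

print(truth_ratio(vision, normal_worlds))  # 1.0
print(truth_ratio(vision, demon_world))    # 0.0
```

Normal-worlds reliabilism evaluates the visual process by the first ratio, not the second, which is why visually formed beliefs in the demon world still count as justified.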

Yet a different version of reliabilism attempts to meet the demon-world and clairvoyance problems without recourse to the questionable notion of 'normal worlds'. Consider Sosa's (1992) suggestion that justified belief is belief acquired through intellectual 'virtues', and not through intellectual 'vices', where virtues are reliable cognitive faculties or processes. The task is to explain how epistemic evaluators use the notion of intellectual virtues and vices to arrive at their judgements, especially in the problematic cases. Goldman (1992) proposes a two-stage reconstruction of an evaluator's activity. The first stage is a reliability-based acquisition of a 'list' of virtues and vices. The second stage is the application of this list to queried cases: the evaluator determines whether the processes in the queried cases resemble the virtues or the vices. Visual beliefs in the demon world are classified as justified because visual belief formation is one of the virtues. Clairvoyantly formed beliefs are classified as unjustified because clairvoyance resembles scientifically suspect processes that the evaluator represents as vices, e.g., mental telepathy, ESP, and so forth.

Pragmatism is a philosophy of meaning and truth especially associated with the American philosopher of science and of language C. S. Peirce (1839-1914) and the American psychologist and philosopher William James (1842-1910). Pragmatism was given various formulations by both writers, but the core is the belief that the meaning of a doctrine is the same as the practical effects of adopting it. Peirce interpreted a theoretical sentence as only a corresponding practical maxim (telling us what to do in some circumstance). In James the position issues in a theory of truth, notoriously allowing that beliefs, including, for example, belief in God, are true if they work satisfactorily in the widest sense of the word. On James's view almost any belief might be respectable, and even true, though working with true beliefs is not a simple matter for James. The apparent subjectivist consequences of this were widely assailed by Russell (1872-1970), Moore (1873-1958), and others in the early years of the 20th century. This led to a division within pragmatism between those such as the American educator John Dewey (1859-1952), whose humanistic conception of practice remains inspired by science, and the more idealistic route taken especially by the English writer F. C. S. Schiller (1864-1937), embracing the doctrine that our cognitive efforts and human needs actually transform the reality that we seek to describe. James often writes as if he sympathizes with this development. For instance, in The Meaning of Truth (1909), he considers the hypothesis that other people have no minds (dramatized in the sexist idea of an 'automatic sweetheart' or female zombie) and remarks that the hypothesis would not work because it would not satisfy our egoistic craving for the recognition and admiration of others; the implication that this is what makes it true that other persons have minds is the disturbing part.

Modern pragmatists such as the American philosopher and critic Richard Rorty (1931-) and, in some writings, the philosopher Hilary Putnam (1925-) have usually tried to dispense with an account of truth and to concentrate, as perhaps James should have done, upon the nature of belief and its relations with human attitude, emotion, and need. The driving motivation of pragmatism is the idea that belief in the truth, on the one hand, must have a close connection with success in action, on the other. One way of cementing the connection is found in the idea that natural selection must have adapted us to be cognitive creatures because beliefs have effects: they work. Pragmatism can be traced back to Kant's doctrine of the primacy of practical over pure reason, and it continues to play an influential role in the theory of meaning and of truth.

Functionalism, in the philosophy of mind, is the modern successor to behaviourism. Its early advocates were Putnam (1926- ) and Sellars (1912-89), and its guiding principle is that mental states can be defined by a triplet of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion: it would be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or 'realization' of the program the machine is running. The principal advantage of functionalism is its fit with the way we know of mental states, both our own and those of others, which is via their effects on behaviour and other mental states.
As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism, functionalism is too generous and would count too many things as having minds. It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to creatures whose causal architecture may be very different from our own. It may then seem as though beliefs and desires can be variably realized in causal architectures, just as much as they can be in different neurophysiological states.

The philosophical movement of Pragmatism had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notions that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.

The American psychologist and philosopher William James helped to popularize the philosophy of pragmatism with his book Pragmatism: A New Name for Some Old Ways of Thinking (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by American philosopher C.S. Peirce, James held that truth is what works, or has good experimental results. In a related theory, James argued that the existence of God is partly verifiable because many people derive benefits from believing in it.

Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believe that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.

Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behaviour. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.

The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism's refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists' denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.

Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the work of British biologist Charles Darwin, whose theories suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.

The three most important pragmatists are the American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning - in particular, the meaning of concepts used in science. The meaning of the concept 'brittle', for example, is given by the observed consequences or properties that objects called 'brittle' exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. Many of the logical positivists, a group of philosophers who were influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of positivism that personal experience is the basis of true knowledge.

James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce's doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life - morality and religious belief, for example - are leaps of faith. As such, they depend upon what he called 'the will to believe' and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist - someone who believes the world to be far too complex for any one philosophy to explain everything.

Dewey's philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and societies are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.

Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey's writings, although he aspired to synthesize the two realms.

The pragmatist tradition was revitalized in the 1980s by American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest in the classic pragmatists - Peirce, James, and Dewey - has been renewed as an alternative to Rorty's interpretation of the tradition.

One of the earliest versions of a correspondence theory was put forward in the 4th century BC by the Greek philosopher Plato, who sought to understand the meaning of knowledge and how it is acquired. Plato wished to distinguish between true belief and false belief. He proposed a theory based on intuitive recognition that true statements correspond to the facts - that is, agree with reality - while false statements do not. In Plato's example, the sentence "Theaetetus flies" can be true only if the world contains the fact that Theaetetus flies. However, Plato - and much later, 20th-century British philosopher Bertrand Russell - recognized this theory as unsatisfactory because it did not allow for false belief. Both Plato and Russell reasoned that if a belief is false because there is no fact to which it corresponds, it would then be a belief about nothing and so not a belief at all. Each then speculated that the grammar of a sentence could offer a way around this problem. A sentence can be about something (the person Theaetetus), yet false (flying is not true of Theaetetus). But how, they asked, are the parts of a sentence related to reality?

One suggestion, proposed by 20th-century philosopher Ludwig Wittgenstein, is that the parts of a sentence relate to the objects they describe in much the same way that the parts of a picture relate to the objects pictured. Once again, however, false sentences pose a problem: If a false sentence pictures nothing, there can be no meaning in the sentence.

In the late 19th century, American philosopher Charles S. Peirce offered another answer to the question "What is truth?" He asserted that truth is that which experts will agree upon when their investigations are final. Many pragmatists such as Peirce claim that the truth of our ideas must be tested through practice. Some pragmatists have gone so far as to question the usefulness of the idea of truth, arguing that in evaluating our beliefs we should rather pay attention to the consequences that our beliefs may have. However, critics of the pragmatic theory are concerned that we would have no knowledge because we do not know which set of beliefs will ultimately be agreed upon; nor are there sets of beliefs that are useful in every context.

A third theory of truth, the coherence theory, also concerns the meaning of knowledge. Coherence theorists have claimed that a set of beliefs is true if the beliefs are comprehensive - that is, they cover everything - and do not contradict each other.

Other philosophers dismiss the question "What is truth?" with the observation that attaching the claim 'it is true that' to a sentence adds no meaning. However, these theorists, who have proposed what are known as deflationary theories of truth, do not dismiss such talk about truth as useless. They agree that there are contexts in which a sentence such as 'it is true that the book is blue' can have a different impact than the shorter statement 'the book is blue'. What is more important, use of the word true is essential when making a general claim about everything, nothing, or something, as in the statement 'most of what he says is true'.

Many experts believe that philosophy as an intellectual discipline originated with the work of Plato, one of the most celebrated philosophers in history. The Greek thinker had an immeasurable influence on Western thought. However, Plato's expression of ideas in the form of dialogues - the dialectical method, used most famously by his teacher Socrates - has led to difficulties in interpreting some of the finer points of his thought. The issue of what exactly Plato meant to say is addressed in the following excerpt by author R. M. Hare.

Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th-century English-speaking world.

For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as 'time is unreal', analyses that aided in determining the truth of such assertions.

Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical view based on this logical analysis of language and the insistence that meaningful propositions must correspond to facts constitutes what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements 'John is good' and 'John is tall' have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property 'goodness' as if it were a characteristic of John in the same way that the property 'tallness' is a characteristic of John. Such failure results in philosophical confusion.

Austrian-born philosopher Ludwig Wittgenstein was one of the most influential thinkers of the 20th century. With his fundamental work, Tractatus Logico-philosophicus, published in 1921, he became a central figure in the movement known as analytic and linguistic philosophy.

Russell's work on mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-Philosophicus (1921; translated 1922), in which he first presented his theory of language, Wittgenstein argued that 'all philosophy is a "critique of language"' and that 'philosophy aims at the logical clarification of thoughts'. The results of Wittgenstein's analysis resembled Russell's logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts - the propositions of science - are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.

Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism. Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle initiated one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).

The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depends altogether on the meanings of the terms constituting the statement. An example would be the proposition 'two plus two equals four'. The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually meaningless. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer's Language, Truth and Logic in 1936.

The positivists' verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.

This recognition led to Wittgenstein's influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.

Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate 'systematically misleading expressions' in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.

Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered.

Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, are needed in addition to logic in analysing ordinary language.

Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.

The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigour of symbolic logical systems and those who prefer to analyse ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language and to how language is used in everyday discourse can often aid in resolving philosophical problems.

A loose title for various philosophies that emphasize certain common themes - the individual, the experience of choice, and the absence of rational understanding of the universe, with a consequent dread or sense of the absurdity of human life - existentialism is a philosophical movement or tendency, emphasizing individual existence, freedom, and choice, that influenced many diverse writers in the 19th and 20th centuries.

Because of the diversity of positions associated with existentialism, the term is impossible to define precisely. Certain themes common to virtually all existentialist writers can, however, be identified. The term itself suggests one major theme: the stress on concrete individual existence and, consequently, on subjectivity, individual freedom, and choice.

Most philosophers since Plato have held that the highest ethical good is the same for everyone: insofar as one approaches moral perfection, one resembles other morally perfect individuals. The 19th-century Danish philosopher Søren Kierkegaard, who was the first writer to call himself existentialist, reacted against this tradition by insisting that the highest good for the individual is to find his or her own unique vocation. As he wrote in his journal, 'I must find a truth that is true for me . . . the idea for which I can live or die'. Other existentialist writers have echoed Kierkegaard's belief that one must choose one's own way without the aid of universal, objective standards. Against the traditional view that moral choice involves an objective judgment of right and wrong, existentialists have argued that no objective, rational basis can be found for moral decisions. The 19th-century German philosopher Friedrich Nietzsche further contended that the individual must decide which situations are to count as moral situations.



One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885) articulated the German philosopher Friedrich Nietzsche's theory of the Übermensch, a term translated as "Superman" or "Overman." The Superman was an individual who overcame what Nietzsche termed the 'slave morality' of traditional values, and lived according to his own morality. Nietzsche also advanced his idea that 'God is dead', or that traditional morality was no longer relevant in people's lives. In this passage, the sage Zarathustra comes down from the mountain where he has spent the last ten years alone to preach to the people.

Nietzsche, who was not acquainted with the work of Kierkegaard, influenced subsequent existentialist thought through his criticism of traditional metaphysical and moral assumptions and through his espousal of tragic pessimism and the life-affirming individual will that opposes itself to the moral conformity of the majority. In contrast to Kierkegaard, whose attack on conventional morality led him to advocate a radically individualistic Christianity, Nietzsche proclaimed the "death of God" and went on to reject the entire Judeo-Christian moral tradition in favour of a heroic pagan ideal.

The modern philosophy movements of phenomenology and existentialism have been greatly influenced by the thought of German philosopher Martin Heidegger. According to Heidegger, humankind has fallen into a crisis by taking a narrow, technological approach to the world and by ignoring the larger question of existence. People, if they wish to live authentically, must broaden their perspectives. Instead of taking their existence for granted, people should view themselves as part of being (Heidegger's term for that which underlies all existence).

Heidegger, like Pascal and Kierkegaard, reacted against an attempt to put philosophy on a conclusive rationalistic basis - in this case the phenomenology of the 20th-century German philosopher Edmund Husserl. Heidegger argued that humanity finds itself in an incomprehensible, indifferent world. Human beings can never hope to understand why they are here; instead, each individual must choose a goal and follow it with passionate conviction, aware of the certainty of death and the ultimate meaninglessness of one's life. Heidegger contributed to existentialist thought an original emphasis on being and ontology as well as on language.

Twentieth-century French intellectual Jean-Paul Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of Sartre's work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that 'man is condemned to be free', Sartre reminds us of the responsibility that accompanies human decisions.

Sartre first gave the term existentialism general currency by using it for his own philosophy and by becoming the leading figure of a distinct movement in France that became internationally influential after World War II. Sartre's philosophy is explicitly atheistic and pessimistic; he declared that human beings require a rational basis for their lives but are unable to achieve one and thus human life is a 'futile passion'. Sartre nevertheless insisted that his existentialism is a form of humanism, and he strongly emphasized human freedom, choice, and responsibility. He eventually tried to reconcile these existentialist concepts with a Marxist analysis of society and history.

Although existentialist thought encompasses the uncompromising atheism of Nietzsche and Sartre and the agnosticism of Heidegger, its origin in the intensely religious philosophies of Pascal and Kierkegaard foreshadowed its profound influence on 20th-century theology. The 20th-century German philosopher Karl Jaspers, although he rejected explicit religious doctrines, influenced contemporary theology through his preoccupation with transcendence and the limits of human experience. The German Protestant theologians Paul Tillich and Rudolf Bultmann, the French Roman Catholic theologian Gabriel Marcel, the Russian Orthodox philosopher Nikolay Berdyayev, and the German Jewish philosopher Martin Buber inherited many of Kierkegaard's concerns, especially the belief that a personal sense of authenticity and commitment is essential to religious faith.

Renowned as one of the most important writers in world history, 19th-century Russian author Fyodor Dostoyevsky wrote psychologically intense novels which probed the motivations and moral justifications for his characters' actions. Dostoyevsky commonly addressed themes such as the struggle between good and evil within the human soul and the idea of salvation through suffering. The Brothers Karamazov (1879-1880), generally considered Dostoyevsky's best work, interlaces religious exploration with the story of a family's violent quarrels over a woman and a disputed inheritance.

A number of existentialist philosophers used literary forms to convey their thought, and existentialism has been as vital and as extensive a movement in literature as in philosophy. The 19th-century Russian novelist Fyodor Dostoyevsky is probably the greatest existentialist literary figure. In Notes from the Underground (1864), the alienated antihero rages against the optimistic assumptions of rationalist humanism. The view of human nature that emerges in this and other novels of Dostoyevsky is that it is unpredictable and perversely self-destructive; only Christian love can save humanity from itself, but such love cannot be understood philosophically. As the character Alyosha says in The Brothers Karamazov (1879-80), "We must love life more than the meaning of it."

The opening lines of the Russian novelist Fyodor Dostoyevsky's Notes from Underground (1864) - 'I am a sick man . . . I am a spiteful man' - are among the most famous in 19th-century literature. Published five years after his release from prison and involuntary military service in Siberia, Notes from Underground is a sign of Dostoyevsky's rejection of the radical social thinking he had embraced in his youth. The unnamed narrator is antagonistic in tone, questioning the reader's sense of morality as well as the foundations of rational thinking. In this excerpt from the beginning of the novel, the narrator describes himself, derisively referring to himself as an 'overly conscious' intellectual.

In the 20th century, the novels of the Austrian Jewish writer Franz Kafka, such as The Trial (1925; translated 1937) and The Castle (1926; translated 1930), present isolated men confronting vast, elusive, menacing bureaucracies; Kafka's themes of anxiety, guilt, and solitude reflect the influence of Kierkegaard, Dostoyevsky, and Nietzsche. The influence of Nietzsche is also discernible in the novels of the French writer André Malraux and in the plays of Sartre. The work of the French writer Albert Camus is usually associated with existentialism because of the prominence in it of such themes as the apparent absurdity and futility of life, the indifference of the universe, and the necessity of engagement in a just cause. In the United States, the influence of existentialism on literature has been more indirect and diffuse, but traces of Kierkegaard's thought can be found in the novels of Walker Percy and John Updike, and various existentialist themes are apparent in the work of such diverse writers as Norman Mailer and John Barth.

The problem of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts began with Plato's view in the Theaetetus that knowledge is true belief plus some logos. Epistemology, the branch of philosophy bound up with the foundations of knowledge, addresses the philosophical problems surrounding the theory of knowledge. It is concerned with the definition of knowledge and related concepts, the sources and criteria of knowledge, the kinds of knowledge possible and the degree to which each is certain, and the exact relation between the one who knows and the object known.

Thirteenth-century Italian philosopher and theologian Saint Thomas Aquinas attempted to synthesize Christian belief with a broad range of human knowledge, embracing diverse sources such as the Greek philosopher Aristotle and Islamic and Jewish scholars. His thought exerted lasting influence on the development of Christian theology and Western philosophy. In the following excerpt, the author Anthony Kenny examines the complexities of Aquinas's concepts of substance and accident.

In the 5th century BC, the Greek Sophists questioned the possibility of reliable and objective knowledge. Thus, a leading Sophist, Gorgias, argued that nothing really exists, that if anything did exist it could not be known, and that if knowledge were possible, it could not be communicated. Another prominent Sophist, Protagoras, maintained that no person's opinions can be said to be more correct than another's, because each is the sole judge of his or her own experience. Plato, following his illustrious teacher Socrates, tried to answer the Sophists by postulating the existence of a world of unchanging and invisible forms, or ideas, about which it is possible to have exact and certain knowledge. The things one sees and touches, he maintained, are imperfect copies of the pure forms studied in mathematics and philosophy. Accordingly, only the abstract reasoning of these disciplines yields genuine knowledge, whereas reliance on sense perception produces vague and inconsistent opinions. He concluded that philosophical contemplation of the unseen world of forms is the highest goal of human life.

Aristotle followed Plato in regarding abstract knowledge as superior to any other, but disagreed with him as to the proper method of achieving it. Aristotle maintained that almost all knowledge is derived from experience. Knowledge is gained either directly, by abstracting the defining traits of a species, or indirectly, by deducing new facts from those already known, in accordance with the rules of logic. Careful observation and strict adherence to the rules of logic, which were first set down in systematic form by Aristotle, would help guard against the pitfalls the Sophists had exposed. The Stoic and Epicurean schools agreed with Aristotle that knowledge originates in sense perception, but against both Aristotle and Plato they maintained that philosophy is to be valued as a practical guide to life, rather than as an end in itself.

After many centuries of declining interest in rational and scientific knowledge, the Scholastic philosopher Saint Thomas Aquinas and other philosophers of the Middle Ages helped to restore confidence in reason and experience, blending rational methods with faith into a unified system of beliefs. Aquinas followed Aristotle in regarding perception as the starting point and logic as the intellectual procedure for arriving at reliable knowledge of nature, but he considered faith in scriptural authority as the main source of religious belief.

From the 17th to the late 19th century, the main issue in epistemology was reasoning versus sense perception in acquiring knowledge. For the rationalists, of whom the French philosopher René Descartes, the Dutch philosopher Baruch Spinoza, and the German philosopher Gottfried Wilhelm Leibniz were the leaders, the main source and final test of knowledge was deductive reasoning based on self-evident principles, or axioms. For the empiricists, beginning with the English philosophers Francis Bacon and John Locke, the main source and final test of knowledge was sense perception.

Bacon inaugurated the new era of modern science by criticizing the medieval reliance on tradition and authority and also by setting down new rules of scientific method, including the first set of rules of inductive logic ever formulated. Locke attacked the rationalist belief that the principles of knowledge are intuitively self-evident, arguing that all knowledge is derived from experience, either from experience of the external world, which stamps sensations on the mind, or from internal experience, in which the mind reflects on its own activities. Human knowledge of external physical objects, he claimed, is always subject to the errors of the senses, and he concluded that one cannot have absolutely certain knowledge of the physical world.

Irish-born philosopher and clergyman George Berkeley (1685-1753) argued that everything a human being conceives of exists as an idea in a mind, a philosophical position known as idealism. Berkeley reasoned that because one cannot control one's thoughts, they must come directly from a larger mind: that of God. In this excerpt from his Treatise Concerning the Principles of Human Knowledge, written in 1710, Berkeley explained why he believed that it is 'impossible . . . that there should be any such thing as an outward object'.

The Irish philosopher George Berkeley agreed with Locke that knowledge comes through ideas, but he denied Locke's belief that a distinction can be drawn between ideas and objects. The British philosopher David Hume continued the empiricist tradition, but he did not accept Berkeley's conclusion that knowledge consists of ideas only. He divided all knowledge into two kinds: knowledge of relations of ideas - that is, the knowledge found in mathematics and logic, which is exact and certain but provides no information about the world - and knowledge of matters of fact - that is, the knowledge derived from sense perception. Hume argued that most knowledge of matters of fact depends upon cause and effect, and since no logical connection exists between any given cause and its effect, one cannot hope to know any future matter of fact with certainty. Thus, even the most reliable laws of science might not remain true - a conclusion that had a revolutionary impact on philosophy.

The German philosopher Immanuel Kant tried to solve the crisis precipitated by Locke and brought to a climax by Hume; his proposed solution combined elements of rationalism with elements of empiricism. He agreed with the rationalists that one can have exact and certain knowledge, but he followed the empiricists in holding that such knowledge is more informative about the structure of thought than about the world outside of thought. He distinguished three kinds of knowledge: analytic a priori, which is exact and certain but uninformative, because it makes clear only what is contained in definitions; synthetic a posteriori, which conveys information about the world learned from experience, but is subject to the errors of the senses; and synthetic a priori, which is discovered by pure intuition and is both exact and certain, for it expresses the necessary conditions that the mind imposes on all objects of experience. Mathematics and philosophy, according to Kant, provide this last. Since the time of Kant, one of the most frequently argued questions in philosophy has been whether or not such a thing as synthetic a priori knowledge really exists.

During the 19th century, the German philosopher Georg Wilhelm Friedrich Hegel revived the rationalist claim that absolutely certain knowledge of reality can be obtained by equating the processes of thought, of nature, and of history. Hegel inspired an interest in history and a historical approach to knowledge that was further emphasized by Herbert Spencer in Britain and by the German school of historicism. Spencer and the French philosopher Auguste Comte brought attention to the importance of sociology as a branch of knowledge, and both extended the principles of empiricism to the study of society.

The American school of pragmatism, founded by the philosophers Charles Sanders Peirce, William James, and John Dewey at the turn of the 20th century, carried empiricism further by maintaining that knowledge is an instrument of action and that all beliefs should be judged by their usefulness as rules for predicting experiences.

In the early 20th century, epistemological problems were discussed thoroughly, and subtle shades of difference grew into rival schools of thought. Special attention was given to the relation between the act of perceiving something, the object directly perceived, and the thing that can be said to be known as a result of the perception. The phenomenalists contended that the objects of knowledge are the same as the objects perceived. The new realists argued that one has direct perceptions of physical objects or parts of physical objects, rather than of one's own mental states. The critical realists took a middle position, holding that although one perceives only sensory data such as colours and sounds, these stand for physical objects and provide knowledge thereof.

A method for dealing with the problem of clarifying the relation between the act of knowing and the object known was developed by the German philosopher Edmund Husserl. He outlined an elaborate procedure that he called phenomenology, by which one is said to be able to distinguish the way things appear to be from the way one thinks they really are, thus gaining a more precise understanding of the conceptual foundations of knowledge.

During the second quarter of the 20th century, two schools of thought emerged, each indebted to the Austrian philosopher Ludwig Wittgenstein. The first of these schools, logical empiricism, or logical positivism, had its origins in Vienna, Austria, but it soon spread to England and the United States. The logical empiricists insisted that there is only one kind of knowledge: scientific knowledge; that any valid knowledge claim must be verifiable in experience; hence that much that had passed for philosophy was neither true nor false but literally meaningless; and, finally, following Hume and Kant, that a clear distinction must be maintained between analytic and synthetic statements. The so-called verifiability criterion of meaning has undergone changes as a result of discussions among the logical empiricists themselves, as well as their critics, but has not been discarded. More recently, the sharp distinction between the analytic and the synthetic has been attacked by a number of philosophers, chiefly by American philosopher W.V.O. Quine, whose overall approach is in the pragmatic tradition.

The second of these schools of thought, generally referred to as linguistic analysis, or ordinary-language philosophy, seems to break with traditional epistemology. The linguistic analysts undertake to examine the actual way key epistemological terms are used - terms such as knowledge, perception, and probability - and to formulate definitive rules for their use in order to avoid verbal confusion. The British philosopher John Langshaw Austin argued, for example, that to say a statement is true adds nothing to the statement except a promise by the speaker or writer; Austin does not consider truth a quality or property attaching to statements or utterances. The ruling thought, however, is that it is only through a correct appreciation of the role and point of this language that we can come to a better conceptual understanding of what the language is about, and avoid the oversimplifications and distortions we are apt to bring to its subject matter.

Linguistics is the scientific study of language. It encompasses the description of languages, the study of their origin, and the analysis of how children acquire language and how people learn languages other than their own. Linguistics is also concerned with relationships between languages and with the ways languages change over time. Linguists may study language as a thought process and seek a theory that accounts for the universal human capacity to produce and understand language. Some linguists examine language within a cultural context. By observing talk, they try to determine what a person needs to know in order to speak appropriately in different settings, such as the workplace, among friends, or among family. Other linguists focus on what happens when speakers from different language and cultural backgrounds interact. Linguists may also concentrate on how to help people learn another language, using what they know about the learner's first language and about the language being acquired.

Although there are many ways of studying language, most approaches belong to one of the two main branches of linguistics: descriptive linguistics and comparative linguistics.

Descriptive linguistics is the study and analysis of spoken language. The techniques of descriptive linguistics were devised by German American anthropologist Franz Boas and American linguist and anthropologist Edward Sapir in the early 1900s to record and analyse Native American languages. Descriptive linguistics begins with what a linguist hears native speakers say. By listening to native speakers, the linguist gathers a body of data and analyses it in order to identify distinctive sounds, called phonemes. Individual phonemes, such as /p/ and /b/, are established on the grounds that substitution of one for the other changes the meaning of a word. After identifying the entire inventory of sounds in a language, the linguist looks at how these sounds combine to create morphemes, or units of sound that carry meaning, such as the words push and bush. Morphemes may be individual words, such as push; root words, such as berry in blueberry; or prefixes (pre- in preview) and suffixes (-ness in openness).

The linguist's next step is to see how morphemes combine into sentences, obeying both the dictionary meaning of each morpheme and the grammatical rules of the sentence. In the sentence "She pushed the bush," the morpheme she, a pronoun, is the subject; pushed, a transitive verb, is the verb; the, a definite article, is the determiner; and bush, a noun, is the object. Knowing the function of the morphemes in the sentence enables the linguist to describe the grammar of the language. The scientific procedures of phonemics (finding phonemes), morphology (discovering morphemes), and syntax (describing the order of morphemes and their function) provide descriptive linguists with a way to write down grammars of languages never before written down or analysed. In this way they can begin to study and understand these languages.
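The minimal-pair test described above - establishing /p/ and /b/ as distinct phonemes because substituting one for the other changes a word's meaning - can be sketched in a few lines. This is only a toy over ordinary spelling rather than phonemic transcription, and the word list is invented for illustration.

```python
def differs_by_one(a, b):
    """True if two words of equal length differ in exactly one position."""
    if len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) == 1

def minimal_pairs(words):
    """Collect word pairs differing by a single segment - evidence that
    the two differing sounds are distinct phonemes."""
    words = sorted(set(words))
    return [(a, b) for i, a in enumerate(words)
            for b in words[i + 1:] if differs_by_one(a, b)]

print(minimal_pairs(["push", "bush", "bash", "lift"]))
# → [('bash', 'bush'), ('bush', 'push')]
```

The pair push/bush isolates the initial segment, suggesting that /p/ and /b/ contrast in English; a real analysis would of course work over transcribed phonemes gathered from native speakers, not spellings.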

Comparative linguistics is the study and analysis, by means of written records, of the origins and relatedness of different languages. In 1786 Sir William Jones, a British scholar, asserted that Sanskrit, Greek, and Latin were related to each other and had descended from a common source. He based this assertion on observations of similarities in sounds and meanings among the three languages. For example, the Sanskrit word bhratar for "brother" resembles the Latin word frater, the Greek word phrater, and the English word brother.

Other scholars went on to compare Icelandic with Scandinavian languages, and Germanic languages with Sanskrit, Greek, and Latin. The correspondences among languages, known as genetic relationships, came to be represented on what comparative linguists refer to as family trees. Family trees established by comparative linguists include the Indo-European, relating Sanskrit, Greek, Latin, German, English, and other Asian and European languages; the Algonquian, relating Fox, Cree, Menomini, Ojibwa, and other Native North American languages; and the Bantu, relating Swahili, Xhosa, Zulu, Kikuyu, and other African languages.

Comparative linguists also look for similarities in the way words are formed in different languages. Latin and English, for example, change the form of a word to express different meanings, as when the English verb go changes to went and gone to express a past action. Chinese, on the other hand, has no such inflected forms; the verb remains the same while other words indicate the time (as in "go store tomorrow"). In Swahili, prefixes, suffixes, and infixes (additions in the body of the word) combine with a root word to change its meaning. For example, a single word might express when something was done, by whom, to whom, and in what manner.

Some comparative linguists reconstruct hypothetical ancestral languages known as proto-languages, which they use to demonstrate relatedness among contemporary languages. A proto-language is not intended to depict a real language, however, and does not represent the speech of ancestors of people speaking modern languages. Unfortunately, some groups have mistakenly used such reconstructions in efforts to demonstrate the ancestral homeland of people.

Comparative linguists have suggested that certain basic words in a language do not change over time, because people are reluctant to introduce new words for such constants as arm, eye, or mother. These words are termed culture free. By comparing lists of culture-free words in languages within a family, linguists can derive the percentage of related words and use a formula to figure out when the languages separated from one another.
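The formula alluded to here is the classic glottochronological one associated with Morris Swadesh. The sketch below assumes the commonly cited retention rate of 86 per cent of basic vocabulary per millennium; both that figure and the cognate counts are illustrative assumptions, not data from the text.

```python
import math

def separation_time(shared_cognates, list_size, retention=0.86):
    """Estimate the millennia since two related languages separated,
    from the fraction of culture-free words they still share.
    Swadesh's formula: t = ln(c) / (2 ln(r)), where c is the proportion
    of shared cognates and r the assumed retention rate per millennium."""
    c = shared_cognates / list_size
    return math.log(c) / (2 * math.log(retention))

# Two languages sharing 60 of 100 basic-vocabulary items:
print(round(separation_time(60, 100), 2))  # → 1.69 (millennia since the split)
```

The method inherits all the caveats the surrounding text raises: it presumes a constant retention rate and isolation from language contact, assumptions that later comparativists challenged.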

By the 1960s comparativists were no longer satisfied with focussing on origins, migrations, and the family tree method. They challenged as unrealistic the notion that an earlier language could remain sufficiently isolated for other languages to be derived exclusively from it over a period of time. Today comparativists seek to understand the more complicated reality of language history, taking language contact into account. They are concerned with universal characteristics of language and with comparisons of grammars and structures.

The field of linguistics both borrows from and lends its theories and methods to other disciplines, and many subfields of linguistics have expanded our understanding of languages. Linguistic theories and methods are also used in other fields of study. These overlapping interests have led to the creation of several cross-disciplinary fields.

Sociolinguistics is the study of patterns and variations in language within a society or community. It focuses on the way people use language to express social class, group status, gender, or ethnicity, and it looks at how they make choices about the form of language they use. It also examines the way people use language to negotiate their role in society and to achieve positions of power. For example, sociolinguistic studies have found that the way a New Yorker pronounces the phoneme /r/ in an expression such as "fourth floor" can indicate the person's social class. According to one study, people aspiring to move from the lower middle class to the upper middle class attach prestige to pronouncing /r/. Sometimes they even overcorrect their speech, pronouncing /r/ where those whom they wish to copy may not.

Some sociolinguists believe that analysing such variables as the use of a particular phoneme can predict the direction of language change. Change, they say, moves toward the variable associated with power, prestige, or another quality having high social value. Other sociolinguists focus on what happens when speakers of different languages interact. This approach to language change emphasizes the way languages mix rather than the direction of change within a community. In either case, the goal of sociolinguistics is to understand communicative competence - what people need to know to use the appropriate language for a given social setting.

Psycholinguistics merges the fields of psychology and linguistics to study how people process language and how language use is related to underlying mental processes. Studies of children's language acquisition and of second-language acquisition are psycholinguistic in nature. Psycholinguists work to develop models for how language is processed and understood, using evidence from studies of what happens when these processes go awry. They also study language disorders such as aphasia (impairment of the ability to use or comprehend words) and dyslexia (impairment of the ability to read written language).

Computational linguistics involves the use of computers to compile linguistic data, analyse languages, translate from one language to another, and develop and test models of language processing. Linguists use computers and large samples of actual language to analyse the relatedness and the structure of languages and to look for patterns and similarities. Computers also assist in stylistic studies, information retrieval, various forms of textual analysis, and the construction of dictionaries and concordances. Applying computers to language studies has resulted in machine translation systems and machines that recognize and produce speech and text. Such machines facilitate communication with humans, including those who are perceptually or linguistically impaired.

Applied linguistics employs linguistic theory and methods in teaching and in research on learning a second language. Linguists look at the errors people make as they learn another language and at their strategies for communicating in the new language at different degrees of competence. In seeking to understand what happens in the mind of the learner, applied linguists recognize that motivation, attitude, learning style, and personality affect how well a person learns another language.

Anthropological linguistics, also known as linguistic anthropology, uses linguistic approaches to analyse culture. Anthropological linguists examine the relationship between a culture and its language, the ways cultures and languages have changed over time, and how different cultures and languages are related to each other. For example, the present English usage of family and given names arose in the late 13th and early 14th centuries, when the laws concerning registration, tenure, and inheritance of property were changed.

Once linguists began to study language as a set of abstract rules that somehow account for speech, other scholars began to take an interest in the field. They drew analogies between language and other forms of human behaviour, based on the belief that a shared structure underlies many aspects of a culture. Anthropologists, for example, became interested in a structuralist approach to the interpretation of kinship systems and analysis of myth and religion. American linguist Leonard Bloomfield promoted structuralism in the United States.

Saussure's ideas also influenced European linguistics, most notably in France and Czechoslovakia (now the Czech Republic). In 1926 Czech linguist Vilem Mathesius founded the Linguistic Circle of Prague, a group that expanded the focus of the field to include the context of language use. The Prague circle developed the field of phonology, or the study of sounds, and demonstrated that universal features of sounds in the languages of the world interrelate in a systematic way. Linguistic analysis, they said, should focus on the distinctiveness of sounds rather than on the ways they combine. Where descriptivists tried to locate and describe individual phonemes, such as /b/ and /p/, the Prague linguists stressed the features of these phonemes and their interrelationships in different languages. In English, for example, voicing distinguishes between the similar sounds of /b/ and /p/, but these are not distinct phonemes in a number of other languages. An Arabic speaker might pronounce the cities Pompeii and Bombay the same way.

As linguistics developed in the 20th century, the notion became prevalent that language is more than speech - specifically, that it is an abstract system of interrelationships shared by members of a speech community. Structural linguistics led linguists to look at the rules and the patterns of behaviour shared by such communities. Whereas structural linguists saw the basis of language in the social structure, other linguists looked at language as a mental process.

The 1957 publication of "Syntactic Structures" by American linguist Noam Chomsky initiated what many view as a scientific revolution in linguistics. Chomsky sought a theory that would account for both linguistic structure and the creativity of language - the fact that we can create entirely original sentences and understand sentences never before uttered. He proposed that all people have an innate ability to acquire language. The task of the linguist, he claimed, is to describe this universal human ability, known as language competence, with a grammar from which the grammars of all languages could be derived. The linguist would develop this grammar by looking at the rules children use in hearing and speaking their first language. He termed the resulting model, or grammar, a transformational-generative grammar, referring to the transformations (or rules) that create (or account for) language. Certain rules, Chomsky asserted, are shared by all languages and form part of a universal grammar, while others are language specific and associated with particular speech communities. Since the 1960s much of the development in the field of linguistics has been a reaction to or against Chomsky's theories.

At the end of the 20th century, linguists used the term grammar primarily to refer to a subconscious linguistic system that enables people to produce and comprehend an unlimited number of utterances. Grammar thus accounts for our linguistic competence. Observations about the actual language we use, or language performance, are used to theorize about this invisible mechanism known as grammar.
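The idea of a grammar as a finite system that generates an unlimited number of utterances can be illustrated with a toy set of rewrite rules. The rules and lexicon below are invented for this sketch and are not drawn from Chomsky's own formalism; the recursive PP rule is what makes the set of producible sentences unbounded.

```python
import random

# A toy generative grammar: each non-terminal symbol maps to a list of
# possible expansions. Because NP can contain a PP, which in turn
# contains an NP, the grammar can generate sentences of any length.
RULES = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "N", "PP"]],  # recursion enters here
    "VP":  [["V", "NP"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["linguist"], ["sentence"], ["bush"]],
    "V":   [["studies"], ["generates"]],
    "P":   [["near"], ["behind"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol using a randomly chosen rule;
    symbols with no rule are terminal words."""
    if symbol not in RULES:
        return [symbol]
    expansion = random.choice(RULES[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. "the linguist studies a bush near the sentence"
```

Every sentence the grammar emits begins with a determiner and contains a verb, because the rules enforce that structure; in this loose sense the finite rule set "accounts for" an infinite language, which is the point of the generative programme described above.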

The scientific study of language led by Chomsky has had an impact on nongenerative linguists as well. Comparative and historically oriented linguists are looking for the various ways linguistic universals show up in individual languages. Psycholinguists, interested in language acquisition, are investigating the notion that an ideal speaker-hearer is the origin of the acquisition process. Sociolinguists are examining the rules that underlie the choice of language variants, or codes, and allow for switching from one code to another. Some linguists are studying language performance - the way people use language - to see how it reveals a cognitive ability shared by all human beings. Others seek to understand animal communication within such a framework. What mental processes enable chimpanzees to make signs and communicate with one another and how do these processes differ from those of humans?

From these initial concerns came some of the great themes of twentieth-century philosophy. How exactly does language relate to thought? Are there irredeemable problems about putative private thoughts? These issues are captured under the general label 'Linguistic Turn'. The subsequent development of those early twentieth-century positions has led to a bewildering heterogeneity in philosophy in the early twenty-first century. The very nature of philosophy is itself radically disputed: analytic, continental, postmodern, critical theory, feminist, and non-Western are all prefixes that give a different meaning when joined to 'philosophy'. The variety of thriving schools, the number of professional philosophers, the proliferation of publications, and the development of technology to aid research all manifest a situation radically different from that of one hundred years ago.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc. that motivate the views that have come to be known as 'direct reference' theories. Such phenomena seem at least to show that the belief or thought content that can properly be attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group - and not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts 'from the inside', simply by reflection. If content depends on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors - which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification as well: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can be justified or justify anything else. But such a response seems lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

Except for alleged cases of self-evident truths, it is often thought that anything that is known must satisfy certain criteria as well as being true. These criteria are general principles that will make a proposition evident or just make accepting it warranted to some degree. Common suggestions for this role include: if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, if 'p' coheres with the bulk of one's beliefs, then 'p' is warranted. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have without criteria to other propositions like 'p'; or they might be criteria whereby purely non-epistemic considerations, e.g., facts about logical connections or about conception that need not be already evident or warranted, originally 'create' p's epistemic status. If that status in turn can be 'transmitted' to other propositions, e.g., by deduction or induction, there will be criteria specifying when it is.

Traditionally suggested criteria include: (1) if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, simply, (2) if we cannot conceive 'p' to be false, then 'p' is evident; or (3) whatever we are immediately conscious of in thought or experience, e.g., that we seem to see red, is evident. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have for one without criteria to other propositions like 'p'. Alternatively, they might be criteria whereby epistemic status, e.g., p's being evident, is originally created by purely non-epistemic considerations, e.g., facts about how 'p' is conceived, which are neither self-evidently nor criterially evident.

The difficulty, in effect, is that traditional criteria do not seem to make evident propositions about anything beyond our own thoughts, experiences, and necessary truths, to which deductive or inductive criteria may then be applied. Moreover, arguably, inductive criteria, including criteria warranting the best explanation of data, never make things evident or warrant their acceptance enough to count as knowledge.

Contemporary epistemologists suggest that traditional criteria may need alteration in three ways. Additional evidence may subject even our most basic judgements to rational correction, even though they count as evident on the basis of our criteria. Warrant may be transmitted other than through deductive and inductive relations between propositions. And transmission criteria might not simply 'pass' evidence on linearly from a foundation of highly evident 'premisses' to 'conclusions' that are never more evident.

As with justification and knowledge, the traditional view of content has been strongly Internalist in character. The main argument for externalism derives from the philosophy y of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc. that motivate the views that have come to be known as 'direct reference' theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependant on facts about his environment, e.g., whether he is on Earth or Twin Earth, what is fact pointing at, the classificatory criterion employed by expects in his social group, etc. - not just on what is going on internally in his mind or brain.

An objection to externalist account of content is that they seem unable to do justice to our ability to know the content of our beliefs or thought 'from the inside', simply by reflection. If content is depending on external factors pertaining to the environment, then knowledge of content should depend on knowledge of these factors - which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification, apart from all contentful representation is a belief inaccessible to the believer, then both the justifying statuses of other beliefs in relation to that of the same representation are the status of that content, being totally rationalized by further beliefs for which it will be similarly inaccessible. Thus, contravening the Internalist requirement for justification, as an Internalist must insist that there are no justification relations of these sorts, that our internally associable content can also not be warranted or as stated or indicated without the deviated departure from a course or procedure or from a norm or standard in showing no deviation from traditionally held methods of justification exacting by anything else: But such a response appears lame unless it is coupled with an attempt to show that the externalised account of content is mistaken.

Except for alleged cases of self-evident truths, it has often been thought that anything that is known must satisfy certain criteria, or standards, as well as being true. These criteria are general principles that will make a proposition evident or make accepting it warranted to some degree. Common suggestions for this role include: if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, if 'p' coheres with the bulk of one's beliefs, then 'p' is warranted. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have without criteria to other propositions like 'p'; or they might be criteria whereby purely non-epistemic considerations, e.g., facts about logical connections or about conception that need not be already evident or warranted, originally 'create' p's epistemic status. If that in turn can be 'transmitted' to other propositions, e.g., by deduction or induction, there will be criteria specifying when it is.

Traditional suggestions for such criteria include: (1) if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, simply, (2) if we cannot conceive 'p' to be false, then 'p' is evident; or, (3) whatever we are immediately conscious of in thought or experience, e.g., that we seem to see red, is evident. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have for one without criteria to other propositions like 'p'. Alternatively, they might be criteria whereby epistemic status, e.g., p's being evident, is originally created by purely non-epistemic considerations, e.g., facts about how 'p' is conceived, which are neither self-evident nor already criterially evident.

The resulting sceptical worry is that traditional criteria do not seem to make evident propositions about anything beyond our own thoughts, experiences and necessary truths, to which deductive or inductive criteria may be applied. Moreover, arguably, inductive criteria, including criteria warranting the best explanation of data, never make things evident or warrant their acceptance enough to count as knowledge.

Contemporary epistemologists suggest that traditional criteria may need alteration in three ways. Additional evidence may subject even our most basic judgements to rational correction, though they count as evident on the basis of our criteria. Warrant may be transmitted other than through deductive and inductive relations between propositions. Transmission criteria might not simply ‘pass’ evidence on linearly from a foundation of highly evident ‘premisses’ to ‘conclusions’ that are never more evident.

An argument is a group of statements, some of which purportedly provide support for another. The statements which purportedly provide the support are the premisses, while the statement purportedly supported is the conclusion. Arguments are typically divided into two categories depending on the degree of support they purportedly provide. Deductive arguments purportedly provide conclusive support for their conclusions, while inductive arguments purportedly provide only probable support. Some, but not all, arguments succeed in providing support for their conclusions. Successful deductive arguments are valid, while successful inductive arguments are strong. An argument is valid just in case it is impossible for all its premisses to be true while its conclusion is false; an argument is strong just in case if all its premisses are true its conclusion is probably true. Deductive logic provides methods for ascertaining whether or not an argument is valid, whereas inductive logic provides methods for ascertaining the degree of support the premisses of an argument confer on its conclusion.
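For propositional arguments, the test for deductive validity just described (no possible case in which all premisses are true and the conclusion false) can be carried out mechanically by enumerating truth-value assignments. The following sketch illustrates this; the function and variable names are invented for the example.

```python
# Sketch of a propositional validity check: an argument is valid iff no
# assignment of truth values makes every premiss true and the conclusion false.
from itertools import product

def is_valid(premisses, conclusion, atoms):
    """Return True iff no valuation is a counterexample to the argument."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premisses) and not conclusion(v):
            return False          # counterexample: premisses true, conclusion false
    return True

# Modus ponens (P, P -> Q, therefore Q) is valid:
mp = is_valid([lambda v: v['P'], lambda v: (not v['P']) or v['Q']],
              lambda v: v['Q'], ['P', 'Q'])

# Affirming the consequent (Q, P -> Q, therefore P) is not:
ac = is_valid([lambda v: v['Q'], lambda v: (not v['P']) or v['Q']],
              lambda v: v['P'], ['P', 'Q'])
print(mp, ac)   # True False
```

Inductive strength, by contrast, admits of no such mechanical decision procedure, which is one reason inductive logic concerns degrees of support rather than validity.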

Finally, a proof is a collection of considerations and reasons that instill and sustain conviction that some proposed theorem, the theorem proved, is not only true but could not possibly be false. A perceptual observation may instill the conviction that water is cold. But a proof that 2 + 3 = 5 must not only instill the conviction that it is true that 2 + 3 = 5, but also that 2 + 3 could not be anything but 5.
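In a modern formal setting this feature of proof can be made vivid: in a proof assistant such as Lean, the claim that 2 + 3 = 5 is established not by observation but by computation on the definitions of the natural numbers, and its negation is simply unprovable. A minimal sketch:

```lean
-- 2 + 3 = 5 holds by definitional computation on the natural numbers,
-- so the proof term is just reflexivity.
example : 2 + 3 = 5 := rfl

-- A machine-checked proof that 2 + 3 could not be anything else, e.g. not 6:
example : 2 + 3 ≠ 6 := by decide
```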

Contemporary philosophers of mind have typically supposed (or at least hoped) that the mind can be naturalized, i.e., that all mental facts have explanations in terms of natural science. This assumption is shared within cognitive science, which attempts to provide accounts of mental states and processes in terms (ultimately) of features of the brain and central nervous system. In the course of doing so, the various sub-disciplines of cognitive science (including cognitive and computational psychology and cognitive and computational neuroscience) postulate a number of different kinds of structures and processes, many of which are not directly implicated by mental states and processes as commonsensically conceived. There remains, however, a shared commitment to the idea that mental states and processes are to be explained in terms of mental representations.

In philosophy, recent debates about mental representation have centred around the existence of propositional attitudes (beliefs, desires, etc.) and the determination of their contents (how they come to be about what they are about), and the existence of phenomenal properties and their relation to the content of thought and perceptual experience. Within cognitive science itself, the philosophically relevant debates have been focussed on the computational architecture of the brain and central nervous system, and the compatibility of scientific and commonsense accounts of mentality.

Intentional Realists such as Dretske (e.g., 1988) and Fodor (e.g., 1987) note that the generalizations we apply in everyday life in predicting and explaining each other's behaviour (often collectively referred to as 'folk psychology') are both remarkably successful and indispensable. What a person believes, doubts, desires, fears, etc. is a highly reliable indicator of what that person will do. We have no other way of making sense of each other's behaviour than by ascribing such states and applying the relevant generalizations. We are thus committed to the basic truth of commonsense psychology and, hence, to the existence of the states its generalizations refer to. (Some realists, such as Fodor, also hold that commonsense psychology will be vindicated by cognitive science, given that propositional attitudes can be construed as computational relations to mental representations.)

Intentional Eliminativists, such as Churchland, (perhaps) Dennett and (at one time) Stich argue that no such things as propositional attitudes (and their constituent representational states) are implicated by the successful explanation and prediction of our mental lives and behaviour. Churchland denies that the generalizations of commonsense propositional-attitude psychology are true. He (1981) argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it postulates simply don't exist. (It should be noted that Churchland is not an eliminativist about mental representation tout court.)

Dennett (1987) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they appear to refer to. He argues that to give an intentional explanation of a system's behaviour is merely to adopt the 'intentional stance' toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behaviour (on the assumption that it is rational, i.e., that it behaves as it should, given the propositional attitudes it should have in its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this.

Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a 'moderate' realist about propositional attitudes, since he believes that the patterns in the behaviour and behavioural dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, however, Dennett claims there is no fact of the matter about what the system believes (1987, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be; though it is not the view that there is simply nothing in the world that makes intentional explanations true.

(Davidson 1973, 1974 and Lewis 1974 also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not entirely clear whether they intend their views to imply irrealism about propositional attitudes.) Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all, since attribution of psychological states by content is sensitive to factors that render it problematic in the context of a scientific psychology. Cognitive psychology seeks causal explanations of behaviour and cognition, and the causal powers of a mental state are determined by its intrinsic 'structural' or 'syntactic' properties. The semantic properties of a mental state, however, are determined by its extrinsic properties, e.g., its history, environmental or intra-mental relations. Hence, such properties cannot figure in causal-scientific explanations of behaviour. (Fodor 1994 and Dretske 1988 are realist attempts to come to grips with some of these problems.) Stich proposes a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role.

It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (Boghossian 1995). There are those, such as thoughts, which are composed of concepts and have no phenomenal ('what-it's-like') features ('Qualia'), and those, such as sensory experiences, which have phenomenal features but no conceptual constituents. (Non-conceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless enjoy.) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps or photographs. (Perceptual states such as seeing that something is blue are sometimes thought of as hybrid states, consisting of, for example, a Non-conceptual sensory experience and a thought, or some more integrated compound of sensory and conceptual components.)

Some historical discussions of the representational properties of mind (e.g., Aristotle 1984, Locke 1689/1975, Hume 1739/1978) seem to assume that Non-conceptual representations - percepts ('impressions'), images ('ideas') and the like - are the only kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble things in it. On such a view, all representational states have their content in virtue of their phenomenal features. Powerful arguments, however, focussing on the lack of generality (Berkeley 1975), ambiguity (Wittgenstein 1953) and non-compositionality (Fodor 1981) of sensory and imagistic representations, as well as their unsuitability to function as logical (Frege 1918/1997, Geach 1957) or mathematical (Frege 1884/1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of mind can get by with only Non-conceptual representations construed in this way.

Contemporary disagreement over Non-conceptual representation concerns the existence and nature of phenomenal properties and the role they play in determining the content of sensory experience. Dennett (1988), for example, denies that there are such things as Qualia at all; while Brandom (2002), McDowell (1994), Rey (1991) and Sellars (1956) deny that they are needed to explain the content of sensory experience. Among those who accept that experiences have phenomenal content, some (Dretske, Lycan, Tye) argue that it is reducible to a kind of intentional content, while others (Block, Loar, Peacocke) argue that it is irreducible.

The representationalist thesis is often formulated as the claim that phenomenal properties are representational or intentional. However, this formulation is ambiguous between a reductive and a non-reductive claim (though the term 'representationalism' is most often used for the reductive claim). On one hand, it could mean that the phenomenal content of an experience is a kind of intentional content (the properties it represents). On the other, it could mean that the (irreducible) phenomenal properties of an experience determine an intentional content. Representationalists such as Dretske, Lycan and Tye would assent to the former claim, whereas phenomenalists such as Block, Chalmers, Loar and Peacocke would assent to the latter. (Among phenomenalists, there is further disagreement about whether Qualia are intrinsically representational (Loar) or not (Block, Peacocke).)

Most (reductive) representationalists are motivated by the conviction that one or another naturalistic explanation of intentionality is, in broad outline, correct, and by the desire to complete the naturalization of the mental by applying such theories to the problem of phenomenality. (Needless to say, most phenomenalists (Chalmers is the major exception) are just as eager to naturalize the phenomenal - though not in the same way.)

The main argument for representationalism appeals to the transparency of experience. The properties that characterize what it's like to have a perceptual experience are presented in experience as properties of objects perceived: in attending to an experience, one seems to 'see through it' to the objects and properties it is an experience of. They are not presented as properties of the experience itself. If they were nonetheless properties of the experience, perception would be massively deceptive. But perception is not massively deceptive. According to the representationalist, the phenomenal character of an experience is due to its representing objective, non-experiential properties. (In veridical perception, these properties are locally instantiated; in illusion and hallucination, they are not.) On this view, introspection is indirect perception: one comes to know what phenomenal features one's experience has by coming to know what objective features it represents.

In order to account for the intuitive differences between conceptual and sensory representations, representationalists appeal to their structural or functional differences. Dretske (1995), for example, distinguishes experiences and thoughts on the basis of the origin and nature of their functions: an experience of a property 'P' is a state of a system whose evolved function is to indicate the presence of 'P' in the environment; a thought representing the property 'P', on the other hand, is a state of a system whose assigned (learned) function is to calibrate the output of the experiential system. Rey (1991) takes both thoughts and experiences to be relations to sentences in the language of thought, and distinguishes them on the basis of (the functional roles of) such sentences' constituent predicates. Lycan (1987, 1996) distinguishes them in terms of their functional-computational profiles. Tye (2000) distinguishes them in terms of their functional roles and the intrinsic structure of their vehicles: thoughts are representations in a language-like medium, whereas experiences are image-like representations consisting of 'symbol-filled arrays.' (The account of mental images in Tye 1991.)

Phenomenalists tend to make use of the same sorts of features (function, intrinsic structure) in explaining some of the intuitive differences between thoughts and experiences; but they do not suppose that such features exhaust the differences between phenomenal and non-phenomenal representations. For the phenomenalist, it is the phenomenal properties of experiences - Qualia themselves - that constitute the fundamental difference between experience and thought. Peacocke (1992), for example, develops the notion of a perceptual 'scenario' (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is 'correct' (a semantic property) if in the corresponding 'scene' (the portion of the external world represented by the scenario) properties are distributed as their phenomenal analogues are in the scenario.

Another sort of representation championed by phenomenalists (e.g., Block, Chalmers (2003) and Loar (1996)) is the 'phenomenal concept', a conceptual/phenomenal hybrid consisting of a phenomenological 'sample' (an image or an occurrent sensation) integrated with (or functioning as) a conceptual component. Phenomenal concepts are postulated to account for the apparent fact (among others) that, as McGinn (1991) puts it, 'you cannot form [introspective] concepts of conscious properties unless you yourself instantiate those properties.' One cannot have a phenomenal concept of a phenomenal property 'P', and, hence, phenomenal beliefs about 'P', without having experience of 'P', because 'P' itself is (in some way) constitutive of the concept of 'P'. (Jackson 1982, 1986 and Nagel 1974.)

Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects' response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained has kindled a lively debate on the nature of imagery and imagination.

Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties, i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981, 2003), argue that the empirical facts can be explained in terms exclusively of discursive, or propositional, representations and cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)

The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery. The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focussed on visual imagery - hence the designation 'pictorial'; though, of course, there may be imagery in other modalities - auditory, olfactory, etc. - as well.)

The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes. (Block 1983.)) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not) would be digital.

It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal/nonphenomenal distinction, but it is not obvious that this is the case. For one thing, there may be nonphenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is 'quasi-pictorial' when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially - for example, in terms of the number of discrete computational steps required to combine stored information about them. (Rey 1981.)

Tye (1991) proposes a view of images on which they are hybrid representations, consisting both of pictorial and discursive elements. On Tye's account, images are '(labelled) interpreted symbol-filled arrays.' The symbols represent discursively, while their arrangement in arrays has representational significance (the location of each 'cell' in the array represents a specific viewer-centred 2-D location on the surface of the imagined object).

The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can't be naturalized), but, rather, how to provide a naturalistic account of the content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.

Causal-informational theories hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990) cause it to occur. There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. Further, a representation can be caused by something it does not represent, and can represent something that has not caused it.

The main attempts to specify what makes a causal-informational state a mental representation are Asymmetric Dependency Theories and Teleological Theories. The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses.

According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.

Functional theories hold that the content of a mental representation is grounded in its causal, computational, or inferential relations to other mental representations. They differ on whether the relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view that it is determined by relations to only some other mental states is localism (or molecularism). (The view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).

(Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (Non-conceptual) content of experiential states. They thus tend to be externalists about phenomenal as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenology-based approach to conceptual content (Horgan and Tienson, Loar, Pitt, Searle, Siewert) also seem to be committed to internalist individuation of the content (if not the reference) of such states.

Generally, those who, like informational theorists, think relations to one's (natural or social) environment are (at least partially) determinative of the content of mental representations are Externalists (e.g., Burge 1979, 1986, McGinn 1977, Putnam 1975), whereas those who, like some proponents of functional theories, think representational content is determined by an individual's intrinsic properties alone, are internalists (or individualists).

This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviours they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one's getting up and going to the kitchen.) If, however, a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic. Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both 'narrow' content (determined by intrinsic factors) and 'wide' or 'broad' content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology.)

Narrow content has been variously construed. Putnam (1975), Fodor (1982) and Block (1986), for example, seem to understand it as something like de dicto content (i.e., Fregean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow contents are characterized as functions from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor, such as its syntactic structure or its intra-mental computational or inferential role or its phenomenology.

Burge (1986) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that there may be no need for narrow content in naturalistic (causal) explanations of human cognition and action, since the sorts of cases it was introduced to handle, viz., Twin-Earth cases and Frege cases, are nomologically either impossible or dismissible as exceptions to non-strict psychological laws.

The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind, claims that the brain is a kind of computer and that mental processes are computations. According to the computational theory of mind, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states. The computational theory of mind thus shares the representational theory of mind's ambition of explaining all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and animal cognition and developing models of cognitive processes implementable in artificial information processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be the mental relata of commonsense psychological states, some - so-called 'subpersonal' or 'sub-doxastic' representations - are not. Though many philosophers believe that the computational theory of mind can provide the best scientific explanations of cognition and behaviour, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of prescientific representational theory of mind.

According to Stich's (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental. That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content, are central tenets of mainstream cognitive science.

Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the 'mental models' of Johnson-Laird 1983, the 'retinal arrays,' 'primal sketches' and '2½-D sketches' of Marr 1982, the 'frames' of Minsky 1974, the 'sub-symbolic' structures of Smolensky 1989, the 'quasi-pictures' of Kosslyn 1980, and the 'interpreted symbol-filled arrays' of Tye 1991 - in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief.

The classicists hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. The connectionists hold that mental representations are realized by patterns of activation in a network of simple processors ('nodes') and that mental processes consist of the spreading activation of such patterns. The nodes themselves are typically not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (Though there are versions of connectionism - 'localist' versions - on which individual nodes are taken to have semantic properties (e.g., Ballard 1986, Ballard & Hayes 1984).) It is arguable, however, that localist theories are neither definitive nor representative of the connectionist program.

Classicists are motivated (in part) by properties thought seems to share with language. Jerry Alan Fodor's (1935-) Language of Thought Hypothesis (Fodor 1975, 1987), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. According to the language of thought hypothesis, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the properties of productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the language of thought hypothesis explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration.
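The compositional idea can be pictured in a minimal sketch. The following toy model is an illustration only, not Fodor's formalism: the primitive symbols, their assigned contents, and the tuple notation are all assumptions made for the example. A finite stock of primitives, combined by a recursive rule, yields indefinitely many contentful complexes, and the content of each complex is fixed by its constituents and their configuration.

```python
# Toy model of compositional semantics (illustrative assumptions only):
# complex representations are nested tuples built from a finite stock
# of primitives; the content of a complex is computed recursively from
# its constituents and their structural configuration.

PRIMITIVES = {                    # finite stock of primitive symbols
    "JOHN": "John",
    "MARY": "Mary",
    "LOVES": lambda x, y: f"{x} loves {y}",
    "NOT": lambda p: f"it is not the case that {p}",
}

def content(rep):
    """Content is determined compositionally: a primitive's content is
    looked up; a complex's content is built from its parts."""
    if isinstance(rep, str):
        return PRIMITIVES[rep]
    head, *args = rep
    return PRIMITIVES[head](*(content(a) for a in args))

# Productivity: a finite stock generates indefinitely many complexes.
print(content(("LOVES", "JOHN", "MARY")))            # → John loves Mary
print(content(("NOT", ("LOVES", "MARY", "JOHN"))))
```

Systematicity also falls out of the sketch: a system that can token ("LOVES", "JOHN", "MARY") can, by the same rule, token ("LOVES", "MARY", "JOHN").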

Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic; and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)
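The local/distributed contrast can be made concrete with a toy example. Everything here is an illustrative assumption (the item list, the activation values) rather than any particular network model: in the localist scheme each unit is semantically evaluable on its own, while in the distributed scheme only the whole pattern of activation carries a content.

```python
# Toy contrast between local and distributed representation
# (illustrative values only, not a model from the literature).

ITEMS = ["cat", "dog", "cow"]

def localist(item):
    """One-hot pattern: exactly one active node, which by itself
    'means' the item - representation is computationally basic."""
    return [1.0 if item == x else 0.0 for x in ITEMS]

# Hypothetical distributed codes: every unit participates in coding
# every item, so no single unit stands for anything on its own.
DISTRIBUTED = {
    "cat": [0.9, 0.1, 0.4, 0.7],
    "dog": [0.8, 0.2, 0.6, 0.1],
    "cow": [0.1, 0.9, 0.5, 0.3],
}

print(localist("dog"))        # → [0.0, 1.0, 0.0]: the 'dog' unit alone
print(DISTRIBUTED["dog"])     # no single unit stands for 'dog'
```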

Moreover, connectionists argue that information processing as it occurs in connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981), on the connectionist model it is a matter of an evolving distribution of 'weights' (strengths) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The connectionist network is 'trained up' by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than do humans, this seems to model at least one feature of this type of human learning quite well.
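A minimal perceptron makes the contrast vivid. The training data, learning rate, and epoch count below are assumptions chosen for the sketch: learning here is nothing but the gradual adjustment of connection weights under repeated exposure to examples, with no hypothesis being formulated or tested anywhere in the process.

```python
# Minimal perceptron sketch (toy data and parameters are assumptions):
# learning as gradual weight adjustment under repeated exposure.

def train(examples, epochs=20, rate=0.1):
    w = [0.0, 0.0]          # connection weights
    b = 0.0                 # bias
    for _ in range(epochs):                 # repeated exposure
        for x, target in examples:
            out = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = target - out
            # the change is driven by error, not by forming hypotheses
            w = [w[0] + rate*err*x[0], w[1] + rate*err*x[1]]
            b += rate*err
    return w, b

# Train the network up to distinguish inputs satisfying logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
print([1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0 for x, _ in data])
# → [0, 1, 1, 1]
```

Note how many passes over the four examples the network needs before the weights settle, which is the point conceded above about networks requiring more exposures than humans.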

Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that connectionist systems show the kind of flexibility in response to novel situations typical of human cognition - situations in which classical systems are relatively 'brittle' or 'fragile.'

Some philosophers have maintained that Connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others argue that language-of-thought style representation is both necessary in general and realizable within connectionist architectures.

Stich (1983) accepts that mental processes are computational but denies that computations are sequences of mental representations; others accept the notion of mental representation but deny that the computational theory of mind provides the correct account of mental states and processes.

Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the system's components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters.
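The dynamical picture can be sketched with a toy pair of coupled state variables. The equations are invented for illustration (they are not van Gelder's, who used the Watt governor as his example): the 'cognitive state' is just a point in a state space that evolves continuously, each variable's rate of change depending on the others, with no discrete symbol manipulation anywhere.

```python
# Toy dynamical system in van Gelder's spirit (equations are invented
# for illustration): two mutually determining state variables evolving
# continuously, approximated here by small Euler steps.

def step(x, y, dt=0.01):
    """One Euler step: each variable's change depends on the other."""
    dx = -0.5 * x + 1.0 * y      # x's evolution depends on y ...
    dy = -1.0 * x - 0.5 * y      # ... and y's evolution depends on x
    return x + dt * dx, y + dt * dy

x, y = 1.0, 0.0                  # initial total state of the system
for _ in range(1000):            # continuous evolution, approximated
    x, y = step(x, y)
print(round(x, 3), round(y, 3))  # the trajectory spirals in toward (0, 0)
```

There is no stage at which the system tokens a discrete representation; its 'state' at any moment is just the pair of quantities (x, y).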

Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. Computational theory of mind attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So the computational theory of mind involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.

To say that a mental object has semantic properties is, paradigmatically, to say that it may be about, or be true or false of, an object or objects, or that it may be true or false simpliciter. Suppose I think that you take snuff. I am thinking about you, and if what I think of you (that you take snuff) is true of you, then my thought is true. According to the representational theory of mind, such states are to be explained as relations between agents and mental representations. To think that you take snuff is to token in some way a mental representation whose content is that you take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.

Linguistic acts seem to share such properties with mental states. Suppose I say that you take snuff. I am talking about you, and if what I say of you (that you take snuff) is true of you, then my utterance is true. Now, to say that you take snuff is (in part) to utter a sentence that means that you take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express. On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.

It is also widely held that in addition to having such properties as reference, truth-conditions and truth - so-called extensional properties - expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions - i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.

Theories of representational content may be classified according to whether they are atomistic or holistic, and according to whether they are externalist or internalist. Holism emphasizes the priority of a whole over its parts. In the philosophy of language, this becomes the claim that the meaning of an individual word or sentence can only be understood in terms of its relation to an indefinitely larger body of language, such as a whole theory, or even a whole language or form of life. In the philosophy of mind, a mental state similarly may be identified only in terms of its relations with others. Moderate holism may allow that other things besides these relationships also count; extreme holism would hold that a network of relationships is all that we have. A holistic view of science holds that experience only confirms or disconfirms large bodies of doctrine, impinging at the edges, and leaving some leeway over the adjustment that it requires.

Once again, in the philosophy of mind and language, externalism is the view that what is thought, or said, or experienced, is essentially dependent on aspects of the world external to the mind of the subject. The view goes beyond holding that such mental states are typically caused by external factors, to insist that they could not have existed as they now do without the subject being embedded in an external world of a certain kind. It is these external relations that make up the essence or identity of the mental state. Externalism is thus opposed to the Cartesian separation of the mental from the physical, which holds that the mental could in principle exist as it does even if there were no external world at all. Various external factors have been advanced as ones on which mental content depends, including the usage of experts, the linguistic norms of the community, and the general causal relationships of the subject. In the theory of knowledge, externalism is the view that a person might know something by being suitably situated with respect to it, without that relationship being in any sense within his purview. The person might, for example, be very reliable in some respect without believing that he is. The view allows that you can know without being justified in believing that you know.

However, atomistic theories take a representation's content to be something that can be specified independently of that representation's relations to other representations. What the American philosopher of mind Jerry Alan Fodor (1935-) calls the crude causal theory, for example, takes a representation to be a |cow| - a mental representation with the same content as the word 'cow' - if its tokens are caused by instantiations of the property of being-a-cow, and this is a condition that places no explicit constraints on how |cow|s must or might relate to other representations. Holistic theories contrast with atomistic theories in taking the relations a representation bears to others to be essential to its content. According to functional role theories, a representation is a |cow| if it behaves as a |cow| should behave in inference.

Internalist theories take the content of a representation to be a matter determined by factors internal to the system that uses it. Thus, what Block (1986) calls 'short-armed' functional role theories are internalist. Externalist theories take the content of a representation to be determined, in part at least, by factors external to the system that uses it. Covariance theories, as well as teleological theories that invoke a historical theory of functions, take content to be determined by 'external' factors. Crossing the atomistic-holistic distinction with the internalist-externalist distinction thus yields four possible kinds of theory.

Externalist theories (sometimes called non-individualistic theories) have the consequence that molecule for molecule identical cognitive systems might yet harbour representations with different contents. This has given rise to a controversy concerning 'narrow' content. If we assume some form of externalist theory is correct, then content is, in the first instance, 'wide' content, i.e., determined in part by factors external to the representing system. On the other hand, it seems clear that, on plausible assumptions about how to individuate psychological capacities, internally equivalent systems must have the same psychological capacities. Hence, it would appear that wide content cannot be relevant to characterizing psychological equivalence. Since cognitive science generally assumes that content is relevant to characterizing psychological equivalence, philosophers attracted to externalist theories of content have sometimes attempted to introduce 'narrow' content, i.e., an aspect or kind of content that is shared by internally equivalent systems. The simplest such theory is Fodor's idea (1987) that narrow content is a function from contexts (i.e., from whatever the external factors are) to wide contents.

All the same, what a person expresses by a sentence is often a function of the environment in which he or she is placed. For example, the disease I refer to by the term 'arthritis', or the kind of tree I refer to as a 'maple', will be defined by criteria of which I know next to nothing. This raises the possibility of imagining two persons in rather different environments, but to whom everything appears the same. The wide content of their thoughts and sayings will be different if the situations surrounding them are appropriately different: 'situation' may include the actual objects they perceive, or the chemical or physical kinds of object in the world they inhabit, or the history of their words, or the decisions of authorities on what counts as an example of one of the terms they use. The narrow content is that part of their thought which remains identical, through the identity of the way things appear, regardless of these differences of surroundings. Partisans of wide content may doubt whether any content is in this sense narrow; partisans of narrow content believe that it is the fundamental notion, with wide content being explicable in terms of narrow content plus context.

Even so, the distinction between facts and values has outgrown its name: it applies not only to matters of fact vs. matters of value, but also to statements that something is, vs. statements that something ought to be. Roughly, factual statements - 'is statements' in the relevant sense - represent some state of affairs as obtaining, whereas normative statements - evaluative and deontic ones - attribute goodness to something, or ascribe, to an agent, an obligation to act. Neither distinction is merely linguistic. Specifying a book's monetary value is making a factual statement, though it attributes a kind of value. 'That is a good book' expresses a value judgement though the term 'value' is absent (nor would 'valuable' be synonymous with 'good'). Similarly, 'we are morally obligated to fight' superficially expresses a factual statement, and 'By all indications it ought to rain' makes a kind of ought-claim; but the former is an ought-statement, the latter an (epistemic) is-statement.

Theoretical difficulties also beset the distinction. Some have absorbed values into facts, holding that all value is instrumental: roughly, to have value is to contribute - in a factually analysable way - to something further which is (say) deemed desirable. Others have suffused facts with values, arguing that facts (and observations) are 'theory-impregnated' and contending that values are inescapable in theoretical choice. But while some philosophers doubt that fact/value distinctions can be sustained, there persists a sense of a deep difference between evaluating or attributing an obligation, on the one hand, and saying how the world is, on the other.

Fact/value distinctions may be defended by appeal to the notion of intrinsic value, the value a thing has in itself and thus independently of its consequences. Roughly, a value statement (proper) is an ascription of intrinsic value, one to the effect that a thing is to some degree good in itself. This leaves open whether ought-statements are implicitly value statements, but even if they imply that something has intrinsic value - e.g., moral value - they can be independently characterized, say by appeal to rules that provide (justifying) reasons for action. One might also ground the fact/value distinction in the attitudinal (or even motivational) component apparently implied by the making of valuational or deontic judgements: thus, 'it is a good book, but that is no reason for a positive attitude towards it' and 'you ought to do it, but there is no reason to' seem inadmissible, whereas substituting 'an expensive book' and 'you will do it' yields permissible judgements. One might also argue that factual judgements are the kind which are in principle appraisable scientifically, and thereby anchor the distinction on the factual side. This line is plausible, but there is controversy over whether scientific procedures are 'value-free' in the required way.

Philosophers differ regarding the sense, if any, in which epistemology is normative (roughly, valuational). But what precisely is at stake in this controversy is no clearer than the problematic fact/value distinction itself. Must epistemologists as such make judgements of value or epistemic responsibility? If epistemology is naturalizable, then epistemic principles simply articulate under what conditions - say, appropriate perceptual stimulations - a belief is justified, or constitutes knowledge. Its standards of justification, then, would be like standards of, e.g., resilience for bridges. It is not obvious, however, that the appropriate standards can be established without independent judgements that, say, a certain kind of evidence is good enough for justified belief (or knowledge). The most plausible view may be that justification is like intrinsic goodness: though it supervenes on natural properties, it cannot be analysed wholly in factual statements.

Thus far, belief has been depicted as all-or-nothing. The notion of acceptance extends this picture: acceptance of a proposition is something for which we may have grounds for thinking it true, its adoption is governed by epistemic norms, it is at least partially subject to voluntary control, and it has functional affinities to belief. Still, the notion of acceptance, like that of degrees of belief, merely extends the standard picture, and does not replace it.

Traditionally, belief has been of epistemological interest in its propositional guise: 'S' believes that 'p', where 'p' is a proposition towards which an agent 'S' exhibits an attitude of acceptance. Not all belief is of this sort. If I trust what you say, I believe you. And someone may believe in Mr. Radek, or in a free-market economy, or in God. It is sometimes supposed that all belief is 'reducible' to propositional belief, belief-that. Thus, my believing you might be thought a matter of my believing, perhaps, that what you say is true, and your belief in free markets or in God a matter of your believing that free-market economies are desirable or that God exists.

Some philosophers have followed St. Thomas Aquinas (1225-74) in supposing that to believe in God is simply to believe that certain truths hold, while others argue that belief-in is a distinctive attitude, one that includes essentially an element of trust. More commonly, belief-in has been taken to involve a combination of propositional belief together with some further attitude.

The moral philosopher Richard Price (1723-91) defends the claim that there are different sorts of belief-in, some, but not all, reducible to beliefs-that. If you believe in God, you believe that God exists, that God is good, etc. But according to Price, your belief involves, in addition, a certain complex pro-attitude toward its object. Even so, belief-in outruns the evidence for the corresponding belief-that. Does this diminish its rationality? If belief-in presupposes belief-that, it might be thought that the evidential standards for the former must be at least as high as the standards for the latter. And any additional pro-attitude might be thought to require a further layer of justification not required for cases of belief-that.

Belief-in may be, in general, less susceptible to alteration in the face of unfavourable evidence than belief-that. A believer who encounters evidence against God's existence may remain unshaken in his belief, in part because the evidence does not bear on his pro-attitude. So long as this pro-attitude remains united with his belief that God exists, and reasonably so, the belief-in may survive such evidence in a way that an ordinary propositional belief would not.

A correlative way of elaborating on the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. In this context, the most widely discussed examples have to do with possible occult cognitive capacities, like clairvoyance. Applying the point once again to reliabilism, the claim is that a person who has no reason to think that he has such a cognitive power, and perhaps even good reasons to the contrary, is not rational or responsible and therefore not epistemically justified in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.

One sort of response to this latter sort of objection is to 'bite the bullet' and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly internalist sort, which will rule out the offending examples while stopping far short of a full internalism. But, while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, it remains doubtful whether there are not further problematic cases that they cannot handle, and also whether there is any clear motivation for the additional requirements other than the general internalist view of justification that externalists are committed to reject.

A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, though it must be objectively true that beliefs for which such a factor is available are likely to be true, that fact itself need not be in any way grasped or cognitively accessible to the believer. In effect, of the premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other need not be. The internalist will respond that this hybrid view is of no help at all in meeting the objection: the belief is neither justified nor held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true.

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified true belief account of knowledge, holding instead that knowledge is true belief that satisfies the chosen externalist condition, e.g., is a result of a reliable process (and perhaps further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.

Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction exists) that such individuals are epistemically justified in their beliefs. It is, at least, less vulnerable to internalist counter-examples of the sort discussed, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification, not knowledge?

A rather different use of the terms 'internalism' and 'externalism' has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual's mind or brain, and not at all on his physical and social environment; according to an externalist view, content is significantly affected by such external factors. Views that appeal to both internal and external elements are standardly classified as externalist.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc. that motivate the views that have come to be known as 'direct reference' theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc. - not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts 'from the inside', simply by reflection. If content depends on external factors pertaining to the environment, then knowledge of content should depend on knowledge of these factors - which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content, and the status of that content as justifying further beliefs, will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts - that only internally accessible content can be justified or justify anything else. But such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

Except for alleged cases of self-evident truths, it is often thought that anything that is known must satisfy certain criteria as well as being true. These criteria are general principles that will make a proposition evident or make accepting it warranted to some degree. Common suggestions for this role include: if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, if 'p' coheres with the bulk of one's beliefs, then 'p' is warranted. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have without criteria to other propositions like 'p'; or they might be criteria whereby purely non-epistemic considerations, e.g., facts about logical connections or about conception that need not be already evident or warranted, originally 'create' p's epistemic status. If that in turn can be 'transmitted' to other propositions, e.g., by deduction or induction, there will be criteria specifying when it is.

Nonetheless, traditional suggestions include: (1) if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, simply, (2) if we cannot conceive 'p' to be false, then 'p' is evident; or, (3) whatever we are immediately conscious of in thought or experience, e.g., that we seem to see red, is evident. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have for one without criteria to other propositions like 'p'. Alternatively, they might be criteria whereby epistemic status, e.g., p's being evident, is originally created by purely non-epistemic considerations, e.g., facts about how 'p' is conceived, which are neither self-evident nor already criterially evident.

The resulting worry is that traditional criteria do not seem to make evident propositions about anything beyond our own thoughts, experiences and necessary truths, to which deductive or inductive criteria may then be applied. Moreover, arguably, inductive criteria, including criteria warranting the best explanation of data, never make things evident or warrant their acceptance enough to count as knowledge.

Contemporary epistemologists suggest that traditional criteria may need alteration in three ways. Additional evidence may subject even our most basic judgements to rational correction, even though they count as evident on the basis of our criteria. Warrant may be transmitted other than through deductive and inductive relations between propositions. And transmission criteria might not simply 'pass' evidence on linearly from a foundation of highly evident 'premisses' to 'conclusions' that are never more evident.

An argument is a group of statements, some of which purportedly provide support for another. The statements that purportedly provide the support are the premisses, while the statement purportedly supported is the conclusion. Arguments are typically divided into two categories depending on the degree of support they purportedly provide. Deductive arguments purportedly provide conclusive support for their conclusions, while inductive arguments purportedly provide only probable support. Some, but not all, arguments succeed in providing support for their conclusions. Successful deductive arguments are valid, while successful inductive arguments are strong. An argument is valid just in case it is impossible for all its premisses to be true while its conclusion is false; an argument is strong just in case, if all its premisses are true, its conclusion is probably true. Deductive logic provides methods for ascertaining whether or not an argument is valid, whereas inductive logic provides methods for ascertaining the degree of support the premisses of an argument confer on its conclusion.
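For simple propositional arguments, this notion of validity can be checked mechanically: an argument is valid just in case no assignment of truth values makes every premiss true and the conclusion false. A minimal sketch in Python (the function and the two example argument forms are illustrative, not drawn from the text):

```python
from itertools import product

def is_valid(variables, premisses, conclusion):
    """Return True if no truth assignment makes every premiss true
    and the conclusion false (the classical definition of validity)."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premisses) and not conclusion(env):
            return False  # found a counterexample assignment
    return True

# Modus ponens: p, p -> q, therefore q  (a valid form)
mp_premisses = [lambda e: e["p"], lambda e: (not e["p"]) or e["q"]]
mp_conclusion = lambda e: e["q"]
print(is_valid(["p", "q"], mp_premisses, mp_conclusion))   # True

# Affirming the consequent: p -> q, q, therefore p  (an invalid form)
ac_premisses = [lambda e: (not e["p"]) or e["q"], lambda e: e["q"]]
ac_conclusion = lambda e: e["p"]
print(is_valid(["p", "q"], ac_premisses, ac_conclusion))   # False
```

Inductive strength, by contrast, is a matter of degree and cannot be settled by such an exhaustive check.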

Finally, a proof is a collection of considerations and reasonings that instill and sustain conviction that some proposed theorem-the theorem proved-is not only true, but could not possibly be false. A perceptual observation may instill the conviction that water is cold. But a proof that 2 + 3 = 5 must not only instill the conviction that it is true that 2 + 3 = 5, but also that 2 + 3 could not be anything but 5.
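In a modern proof assistant this demand for necessity is made concrete: the equality is established by computation that holds in every possible circumstance, not by observation. A sketch in Lean 4 (assuming Lean's standard natural-number arithmetic):

```lean
-- 2 + 3 = 5 holds by reflexivity: both sides reduce to the same
-- numeral, so the equality could not possibly be false.
example : 2 + 3 = 5 := rfl
```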

No one has succeeded in replacing this largely psychological characterization of proofs by a more objective characterization. The representation or reconstruction of proofs as mechanical and semiotical derivations in formal-logical systems all but completely fails to capture ‘proofs’ as mathematicians are quite content to give them. For example, formal-logical derivations depend solely on the logical form of the propositions considered, whereas proofs usually depend in large measure on the content of propositions, not merely on their logical form.






EVOLVING PRINCIPLES OF THOUGHT



THE HUMAN CONDITION

BOOK TWO





A geologic period is a unit of time that geologists use to divide the earth’s history. On the geologic time scale, a period is longer than an epoch and shorter than an era. The earth is about 4.5 billion years old. Earth scientists divide its age into shorter blocks of time. The largest of these are eons, of which there are three in the earth’s history. The last eon is formally divided into eras, which are made up of periods. Many periods are divided into epochs. Geological or biological events mark the beginnings and ends of some periods, but some are based on a convenient interval of time determined by radiometric dating.
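The nesting just described (eons contain eras, eras contain periods) can be sketched as a simple data structure, using the Phanerozoic names and dates given in this section (dates in millions of years before present; the helper function is purely illustrative, and modern calibrations differ slightly):

```python
# A sketch of the hierarchy described above: eras of the Phanerozoic
# Eon and their constituent periods, with spans in millions of years
# before present, as given in the text.
phanerozoic_eras = {
    "Paleozoic": {"span": (570, 240),
                  "periods": ["Cambrian", "Ordovician", "Silurian",
                              "Devonian", "Carboniferous", "Permian"]},
    "Mesozoic":  {"span": (240, 65),
                  "periods": ["Triassic", "Jurassic", "Cretaceous"]},
    "Cenozoic":  {"span": (65, 0),
                  "periods": ["Tertiary", "Quaternary"]},
}

def duration_my(era):
    """Duration of an era in millions of years."""
    start, end = phanerozoic_eras[era]["span"]
    return start - end

print(duration_my("Paleozoic"))  # 330
```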

Geologists divide much of the earth’s history into periods. Too little is known about pre-Archaean time, from the origin of the earth to 3.8 billion years ago, to divide it into units. The Archaean Eon (3.8 to 2.5 billion years before present) is not divided into periods. It marks a time in which the structure of the earth underwent many changes and the first life appeared on the earth. Rocks of the Archaean Eon contain some very simple single-celled organisms called prokaryotes and early blue-green algae colonies called stromatolites. During the Proterozoic Eon (2.5 billion to 570 million years before present) the earth was partially covered alternately by shallow seas and ice sheets. Life advanced from the most basic single-celled organisms to plants and to animals that resembled some species living today. Pre-Archaean time, the Archaean Eon, and the Proterozoic Eon make up what is called Precambrian time. The most recent eon of the earth is the Phanerozoic (570 million years before present to the present). During this eon, the earth and life on it gradually changed to their present state.

Some scientists divide the Proterozoic Eon into four eras and at least ten periods. These divisions are not universally accepted. The eras defined for the Proterozoic are the Huronian Era (2.5 billion to 2.2 billion years before present), the Animikean Era (2.2 billion to 1.65 billion years before present), the Riphean Era (1.65 billion to 800 million years before present), and the Sinian Era (800 million to 570 million years before present). The four informally recognized periods of the Huronian Era are the Elliot Lake Period, the Hough Lake Period, the Quirke Lake Period, and the Cobalt Period, from oldest to youngest. These periods correspond only to deposits in a region of Canada around Lake Superior and have no definite time correlation. A comparison of rocks of the Elliot Lake and Cobalt periods shows that oxygen levels in the atmosphere rose during the Huronian Era. The Hough Lake, Quirke Lake, and Cobalt periods all begin with times of glaciation.

The Animikean Era has only one informally recognized period, the Gunflint Period, which lasted from about 2.2 billion years before present to about two billion years before present. Rocks of the Gunflint Period contain many species of microbes and stromatolites.

The Riphean Era has three informal periods. The oldest period is the Burzian Period (1.65 billion to 1.35 billion years before present), followed by the Yurmatin Period (1.35 billion to 1.05 billion years before present) and then the Karatau Period (from 1.05 billion to 800 million years before present). All three are named from sedimentary rocks in a section of the southern Ural Mountains in Russia.

The Sinian Era is divided into two informal geologic periods-the Sturtian Period (from 800 million to 610 million years before present) and the Vendian Period (610 million to 570 million years before present). The Sturtian is named from rocks in southern Australia that show two distinct glacial episodes. The Vendian is named from rocks in the southern Ural Mountains. The Vendian Period is divided into two epochs, the Varanger Epoch (about 610 million to 590 million years before present) and the Ediacara Epoch (590 million to 570 million years before present). Rocks from the Ediacara Epoch show the first fossils of complex organisms.

The Phanerozoic Eon is the most recent eon of the earth and is divided into the Paleozoic Era (570 million to 240 million years before present), the Mesozoic Era (240 million to 65 million years before present), and the Cenozoic Era (65 million years before present to the present).

The periods of the Paleozoic Era are the Cambrian Period (570 million to 500 million years before present), the Ordovician Period (500 million to 435 million years before present), the Silurian Period (435 million to 410 million years before present), the Devonian Period (410 million to 360 million years before present), the Carboniferous Period (360 million to 290 million years before present), and the Permian Period (290 million to 240 million years before present). The rocks of the Paleozoic Era contain abundant and diverse fossils, so each period is marked by both geologic and biological events.

The rocks of the Cambrian Period contain many fossils of shelled animals such as trilobites, gastropods, and brachiopods that are not present in earlier rocks. The Ordovician Period is characterized by an abundance of extinct floating marine organisms called graptolites. One of the greatest mass extinctions of the Phanerozoic Eon occurred at the end of the Ordovician Period.

Rocks from the Silurian Period reveal the first evidence of plants and insects on land and the first fossils of fishes with jaws. In the Devonian Period, the first animals with backbones appeared on land. The Devonian was the first period to produce the substantial organic deposits that are used today as energy sources.

The rocks of the Carboniferous Period contain about one-half of the world’s coal supplies, created by the remains of the vast population of animals and plants of that period. Besides the abundance of terrestrial vegetation, the first winged insects appeared during the Carboniferous.

During the Permian Period all the continents on the earth came together to form one landmass, called Pangaea. The shallow inland seas of the Permian created an environment in which invertebrate marine life flourished. At the end of the period, one of the greatest extinctions in the earth’s history occurred, wiping out most species on the planet.

The Mesozoic Era is composed of the Triassic Period (240 million to 205 million years before present), the Jurassic Period (205 million to 138 million years before present), and the Cretaceous Period (138 million to 65 million years before present). During the Triassic Period, the super-continent of Pangaea began to break apart. Dinosaurs first appeared during the Triassic, as did the earliest mammals.

The continents continued to break apart during the Jurassic Period. Reptiles, including the dinosaurs, flourished, taking over ecological niches on the land, in the sea, and in the air, while mammals remained small and rodent-like. The continents continued to drift toward their present locations during the Cretaceous Period. Another mass extinction, which killed off the large reptiles such as the dinosaurs, occurred near the end of the Cretaceous.

The Cenozoic Era is divided into the Tertiary Period (65 million to 1.6 million years before present) and the Quaternary Period (1.6 million years before present to the present). During the Tertiary Period the continents assumed their current positions. Mammals became the dominant life forms on the planet during this period, and the direct ancestors of humans appeared at the end of the Tertiary. The most recent ice age occurred during the Quaternary Period. The first humans appeared during the Quaternary. The changing climate and melting of the glaciers, possibly combined with hunting by humans, drove many large mammals of the early Quaternary to extinction, making way for the animal life on the earth today.

The Precambrian is a time span that includes the Archaean and Proterozoic eons and reaches back practically four billion years. The Precambrian marks the first formation of continents, the oceans, the atmosphere, and life. The Precambrian represents the oldest chapter in Earth’s history that can still be studied. Notably little survives of Earth from the period of 4.6 billion to about four billion years ago, owing to the melting of rock caused by the early period of meteorite bombardment. Rocks dating from the Precambrian, however, have been found in Africa, Antarctica, Australia, Brazil, Canada, and Scandinavia. Some zircon mineral grains deposited in Australian rock layers have been dated to 4.2 billion years.

The Precambrian is also the longest chapter in Earth’s history, spanning a period of about 3.5 billion years. During this time frame, the atmosphere and the oceans formed from gases that escaped from the hot interior of the planet because of widespread volcanic eruptions. The early atmosphere consisted primarily of nitrogen, carbon dioxide, and water vapour. As Earth continued to cool, the water vapour condensed out and fell as precipitation to form the oceans. Some scientists believe that much of Earth’s water vapour originally came from comets containing frozen water that struck Earth during meteorite bombardment.

By studying 2-billion-year-old rocks found in northwestern Canada, and 2.5-billion-year-old rocks in China, scientists have found evidence that plate tectonics began shaping Earth’s surface as early as the middle Precambrian. About a billion years ago, the Earth’s plates were centred around the South Pole and formed a super-continent called Rodinia. Slowly, pieces of this super-continent broke away from the central continent and travelled north, forming smaller continents.

Life originated during the Precambrian. The earliest fossil evidence of life consists of prokaryotes, one-celled organisms that lacked a nucleus and reproduced by dividing, a process known as asexual reproduction. Asexual division meant that a prokaryote’s genetic heritage was passed on essentially unaltered. The first prokaryotes were bacteria known as archaebacteria. Scientists believe they came into existence perhaps as early as 3.8 billion years ago, and with practical certainty by some 3.5 billion years ago, and were anaerobic-that is, they did not require oxygen to produce energy. Free oxygen barely existed in the atmosphere of the early Earth.

Archaebacteria were followed about 3.46 billion years ago by another type of prokaryote known as cyanobacteria, or blue-green algae. These cyanobacteria gradually introduced oxygen into the atmosphere as a result of photosynthesis. In shallow tropical waters, cyanobacteria formed mats that grew into humps called stromatolites. Fossilized stromatolites have been found in rocks in the Pilbara region of western Australia that are more than 3.4 billion years old. Similarly, some rocks found in the Gunflint Chert region of northwestern Lake Superior are about 2.1 billion years old.

For billions of years, life existed only in the simple form of prokaryotes. Prokaryotes were eventually followed by more advanced eukaryotes, organisms that have a nucleus in their cells and that reproduce by combining or sharing their hereditary makeup rather than by simply dividing. Sexual reproduction marked a milestone in life on Earth because it created the possibility of hereditary variation and enabled organisms to adapt more easily to a changing environment. The latest part of Precambrian time, some 560 million to 545 million years ago, saw the appearance of an intriguing group of fossil organisms known as the Ediacaran fauna. First discovered in the northern Flinders Ranges region of Australia in the mid-1940s, with subsequent findings in many locations throughout the world, these strange fossils may be the precursors of many of the fossil groups that were to explode in Earth's oceans in the Paleozoic Era.

At the start of the Paleozoic Era about 543 million years ago, an enormous expansion in the diversity and complexity of life occurred. This event took place in the Cambrian Period and is called the Cambrian explosion. Nothing like it has happened since. Most of the major groups of animals known today made their initial appearance during the Cambrian explosion. Most of the different ‘body plans’ found in animals today-that is, the ways an animal’s body is designed, with heads, legs, rear ends, claws, tentacles, or antennae-also originated during this period.

Fishes first appeared during the Paleozoic Era, and multicellular plants began growing on the land. Other land animals, such as scorpions, insects, and amphibians, also originated during this time. Just as new forms of life were being created, however, other forms of life were going out of existence. Natural selection meant that some species could flourish, while others failed. In fact, mass extinctions of animal and plant species were commonplace.

Most of the early complex life forms of the Cambrian explosion lived in the sea. The creation of warm, shallow seas, along with the buildup of oxygen in the atmosphere, may have aided this explosion of life forms. The shallow seas were created by the breakup of the super-continent Rodinia. During the Ordovician, Silurian, and Devonian periods, which followed the Cambrian Period and lasted from 490 million to 354 million years ago, some continental pieces that had broken off Rodinia collided. These collisions resulted in larger continental masses in equatorial regions and in the Northern Hemisphere. The collisions built numerous mountain ranges, including parts of the Appalachian Mountains in North America and the Caledonian Mountains of northern Europe.

Toward the close of the Paleozoic Era, two large continental masses, Gondwanaland to the south and Laurasia to the north, faced each other across the equator. Their slow but eventful collision during the Permian Period of the Paleozoic Era, which lasted from 290 million to 248 million years ago, assembled the super-continent Pangaea and raised several of the grandest mountain ranges in the history of Earth. These mountains included other parts of the Appalachians and the Ural Mountains of Asia. At the close of the Paleozoic Era, Pangaea represented more than 90 percent of all the continental landmasses. Pangaea straddled the equator with a huge mouth-like opening that faced east. This opening was the Tethys Ocean, which closed as India moved northward, creating the Himalayas. The last remnants of the Tethys Ocean can be seen in today’s Mediterranean Sea.

The Paleozoic Era came to an end with a major extinction event, when perhaps as many as 90 percent of all plant and animal species died out. The reason is not known for sure, but many scientists believe that huge volcanic outpourings of lava in central Siberia, coupled with an asteroid impact, were among the contributing factors.

The Mesozoic Era, which began approximately 248 million years ago, is often characterized as the Age of Reptiles because reptiles were the dominant life forms during this era. Reptiles dominated not only on land, as dinosaurs, but also in the sea, as the plesiosaurs and ichthyosaurs, and in the air, as pterosaurs, which were flying reptiles.

The Mesozoic Era is divided into three geological periods: the Triassic, which lasted from 248 million to 206 million years ago; the Jurassic, from 206 million to 144 million years ago; and the Cretaceous, from 144 million to 65 million years ago. The dinosaurs emerged during the Triassic Period and were among the most successful animals in Earth’s history, lasting for about 180 million years before going extinct at the end of the Cretaceous Period. The first mammals and the first flowering plants also appeared during the Mesozoic Era. Before flowering plants emerged, plants with seed-bearing cones known as conifers were the dominant form of plants. Flowering plants soon replaced conifers as the dominant form of vegetation during the Mesozoic Era.

The Mesozoic was an eventful era geologically, with many changes to Earth’s surface. Pangaea continued to exist for another 50 million years during the early Mesozoic Era. By the early Jurassic Period, Pangaea began to break up. What is now South America began splitting from what is now Africa, and in the process the South Atlantic Ocean formed. As the landmass that became North America drifted away from Pangaea and moved westward, a long subduction zone extended along North America’s western margin. This subduction zone and the accompanying arc of volcanoes extended from what is now Alaska to the southern tip of South America. Much of this feature, called the American Cordillera, exists today as the eastern margin of the Pacific Ring of Fire.

During the Cretaceous Period, heat continued to be released from the margins of the drifting continents, and as they slowly sank, vast inland seas formed in much of the continental interiors. The fossilized remains of fishes and marine mollusks called ammonites can be found today in the middle of the North American continent because these areas were once underwater. Large continental masses broke off the northern part of southern Gondwanaland during this period and began to narrow the Tethys Ocean. The largest of these continental masses, present-day India, moved northward toward its collision with southern Asia. As both the North Atlantic Ocean and South Atlantic Ocean continued to open, North and South America became isolated continents for the first time in 450 million years. Their westward journey resulted in mountains along their western margins, including the Andes of South America.

Birds are members of a group of animals called vertebrates, which possess a spinal column or backbone. Other vertebrates are fish, amphibians, reptiles, and mammals. Many characteristics and behaviours of birds are distinct from those of all other animals, yet they have noticeable similarities. Like mammals, birds have four-chambered hearts and are warm-blooded-having a relatively constant body temperature that enables them to live in a variety of environments. Like reptiles, birds develop from embryos in eggs outside the mother’s body.

Birds are found worldwide in many habitats. They can fly over some of the highest mountains on earth and over both of the earth’s poles, dive through water to depths of more than 250 m (850 ft), and occupy habitats with the most extreme climates on the planet, including arctic tundra and the Sahara Desert. Certain kinds of seabirds are commonly seen over the open ocean thousands of kilometres from the nearest land, but all birds must come ashore to raise their young.

Highly developed animals, birds are sensitive and responsive, colourful and graceful, with habits that excite interest and inquiry. People have long been fascinated by birds, in part because birds are found in great abundance and variety in the same habitats in which humans thrive. Like people, most species of birds are active during daylight hours. Humans find inspiration in birds’ capacity for flight and in their musical calls. Humans also find birds useful-their flesh and eggs for food, their feathers for warmth, and their companionship. Perhaps a key basis for our rapport with birds is the similarity of our sensory worlds: Both birds and humans rely more heavily on hearing and colour vision than on smell. Birds are useful indicators of the quality of the environment, because the health of bird populations mirrors the health of our environment. The rapid decline of bird populations and the accelerating extinction rates of birds in the world’s forests, grasslands, wetlands, and islands are therefore reasons for great concern.

Birds vary in size from the tiny bee hummingbird, which measures about 57 mm (about 2.25 in) from beak tip to tail tip and weighs 1.6 g (0.06 oz), to the ostrich, which stands 2.7 m (9 ft) tall and weighs up to 156 kg (345 lb). The heaviest flying bird is the great bustard, which can weigh up to 18 kg (40 lb).

All birds are covered with feathers, collectively called plumage, which are specialized structures of the epidermis, or outer layer of skin. The main component of feathers is keratin, a flexible protein that also forms the hair and fingernails of mammals. Feathers provide the strong yet lightweight surface area needed for powered, aerodynamic flight. They also serve as insulation, trapping pockets of air to help birds conserve their body heat. The varied patterns, colours, textures, and shapes of feathers help birds to signal their age, sex, social status, and species identity to one another. Some birds have plumage that blends in with their surroundings to provide camouflage, helping these birds escape notice by their predators. Birds use their beaks to preen their feathers, often using oil from a gland at the base of their tails. Preening removes dirt and parasites and keeps feathers waterproof and supple. Because feathers are nonliving structures that cannot repair themselves when damaged or broken, they must be renewed periodically. Most adult birds molt-lose and replace their feathers-at least once a year.

Bird wings are highly modified forelimbs with a skeletal structure resembling that of arms. Wings may be long or short, round or pointed. The shape of a bird’s wings influences its style of flight, which may consist of gliding, soaring, or flapping. Wings are powered by flight muscles, which are the largest muscles in birds that fly. Flight muscles are found in the chest and are attached to the wings by large tendons. The breastbone, a large bone shaped like the keel of a boat, supports the flight muscles.

Nearly all birds have a tail, which helps them control the direction in which they fly and plays a role in landing. The paired flight feathers of the tail, called rectrices, extend from the margins of a bird’s tail. Smaller feathers called coverts lie on top of the rectrices. Tails may be square, rounded, pointed, or forked, depending on the lengths of the rectrices and the way they end. The shapes of bird tails vary more than the shapes of wings, possibly because tail shape is less critical to flight than wing shape. Many male birds, such as pheasants, have ornamental tails that they use to attract mating partners.

Birds have two legs; the lower part of each leg is called the tarsus. Most birds have four toes on each foot, and in many birds, including all songbirds, the first toe, called the hallux, points backwards. Bird toes are adapted in various species for grasping perches, climbing, swimming, capturing prey, and carrying and manipulating food.

Instead of heavy jaws with teeth, modern birds have toothless, lightweight jaws, called beaks or bills. Unlike humans or other mammals, birds can move their upper jaws independently of the rest of their heads. This helps them to open their mouths extremely wide. Beaks occur in a wide range of shapes and sizes, depending on the type of food a bird eats.

The eyes of birds are large and provide excellent vision. They are protected by three eyelids: An upper lid resembling that of humans, a lower lid that closes when a bird sleeps, and a third lid, called a nictitating membrane, that sweeps across the eye sideways, starting from the side near the beak. This lid is a thin, translucent fold of skin that moistens and cleans the eye and protects it from wind and bright light.

The ears of birds are completely internal, with openings placed just behind and below the eyes. In most birds, textured feathers called auriculars form a protective screen that prevents objects from entering the ear. Birds rely on their ears for hearing and for balance, which is especially critical during flight. Two groups of birds, cave swiftlets and oilbirds, find their way in dark places by echolocation-making clicks or rattle calls and interpreting the returning echoes to obtain clues about their environment.

The throats of nearly all birds contain a syrinx (plural, syringes), an organ that is comparable to the voice box of mammals. The syrinx has two membranes that produce sound when they vibrate. Birds classified as songbirds have a particularly well-developed syrinx. Some songbirds, such as the wood thrush, can control each membrane independently; in this way they can sing two songs simultaneously.

Birds have well-developed brains, which provide acute sensory perception, keen balance and coordination, and instinctive behaviours, along with a surprising degree of intelligence. Parts of the bird brain that are especially developed are the optic lobes, where nerve impulses from the eyes are processed, and the cerebellum, which coordinates muscle actions. The cerebral cortex, the part of the brain responsible for thought in humans, is primitive in birds. However, birds have a hyperstriatum -a forebrain component that mammals lack. This part of the brain helps songbirds to learn their songs, and scientists believe that it may also be the source of bird intelligence.

The internal body parts of all birds, including flightless ones, reflect the evolution of birds as flying creatures. Birds have lightweight skeletons in which many major bones are hollow. A unique feature of birds is the furculum, or wishbone, which is comparable to the collarbones of humans, although in birds the left and right portions are fused. The furculum absorbs the shock of wing motion and acts as a spring to help birds breathe while they fly. Several anatomical adaptations help to reduce weight and concentrate it near the centre of gravity. For example, modern birds are toothless, which helps reduce the weight of their beaks, and food grinding is carried out in the muscular gizzard, a part of the stomach near the body’s core. The egg-laying habit of birds enables their young to develop outside the body of the female, significantly lightening her load. For further weight reduction, the reproductive organs of birds atrophy, or become greatly reduced in size, except during the breeding season.

Flight, especially taking off and landing, requires a huge amount of energy-more than humans need even for running. Taking flight is less demanding for small birds than it is for large ones, but small birds need more energy to stay warm. In keeping with their enormous energy needs, birds have an extremely fast metabolism, which includes the chemical reactions involved in releasing stored energy from food. The high body temperature of birds-40° to 42° C (104° to about 108° F)-provides an environment that supports rapid chemical reactions.
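The Fahrenheit figures quoted above follow from the standard conversion F = C × 9/5 + 32; a quick check (illustrative Python, not part of the text):

```python
def celsius_to_fahrenheit(c):
    """Standard Celsius-to-Fahrenheit conversion: F = C * 9/5 + 32."""
    return c * 9 / 5 + 32

# The avian body-temperature range cited above:
print(celsius_to_fahrenheit(40))  # 104.0
print(celsius_to_fahrenheit(42))  # 107.6 (i.e., about 108, as stated)
```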

To sustain this high-speed metabolism, birds need an abundant supply of oxygen, which combines with food molecules within cells to release energy. The respiratory, or breathing, system of birds is adapted to meet their special needs. Unlike humans, birds have lungs with an opening at each end. New air enters the lungs from one end, and used air goes out the other end. The lungs are connected to a series of air sacs, which simplify the movement of air. Birds breathe faster than any other animal. For example, a flying pigeon breathes 450 times each minute, whereas a human, when running, might breathe only about 30 times each minute.

The circulatory system of birds also functions at high speed. Blood vessels pick up oxygen in the lungs and carry it, along with nutrients and other substances essential to life, to all of a bird’s body tissues. In contrast to the human heart, which beats about 160 times per minute when a person runs, a small bird’s heart beats between 400 and 1,000 times per minute. The hearts of birds are proportionately larger than the hearts of other animals. Birds that migrate and those that live at high altitudes have larger hearts, compared with their body size, than other birds.

The characteristic means of locomotion in birds is flight. However, birds are also variously adapted for movement on land, and some are excellent swimmers and divers.

Like aeroplanes, birds rely on lift-an upward force that counters gravity-to fly. Birds generate lift by pushing down on the air with their wings. This action causes the air, in return, to push the wings up. The shape of wings, which have an upper surface that is convex and a lower surface that is concave, contributes to this effect. To turn, birds often tilt so that one wing is higher than the other.

Different wing shapes adapt birds for different styles of flight. The short, rounded wings and strong breast muscles of quail are ideal for short bursts of powered flight. Conversely, the albatross’s long narrow wings enable these birds to soar effortlessly over windswept ocean surfaces. The long, broad wings of storks, vultures, and eagles provide excellent lift on rising air currents.

Feathers play a crucial role in flight. The wings and tails of birds have specialized flight feathers-the largest and strongest type of feathers-that contribute to lift. Because each of the flight feathers is connected to a muscle, birds can adjust the position of each feather individually. As a bird pushes down on the air with its wings, its flight feathers overlap to prevent air from passing through. The same feathers twist open on the upstroke, so that air flows between them and less effort is needed to lift the wings.

Feathers also help to reduce drag, a force of resistance that acts on solid bodies moving through air. Contour feathers, which are the most abundant type of feather, fill and cover angular parts of a bird’s body, giving birds a smooth, aerodynamic form.

Bird tails are also important to flight. Birds tip their tail feathers in different directions to achieve stability and to help change direction while flying. When soaring, birds spread their tail feathers to obtain more lift. When landing, birds turn their tails downward, so that their tails act like brakes.

Most birds can move their legs alternately to walk and run, and some birds are adept at climbing trees. Birds' agility on land varies widely among different species. The American robin both hops and walks, while the starling usually walks. The ostrich can run as fast as 64 km./h. (40 mph.). Swifts, however, can neither hop nor run; their weak feet are useful only for clinging to vertical surfaces, such as the walls of caves and houses.

Birds that walk in shallow water, such as herons and stilts, have long legs that make wading easier. Jacanas, which walk on lily pads and mud, have long toes and nails that disperse their weight to help prevent them from sinking. Penguins have stubby legs placed far back from their centre of gravity. As a result, they can walk only with an upright posture and a short-stepping gait. When penguins need to move quickly, they 'toboggan' on their bellies, propelling themselves across ice with their wings and feet.

Many birds are excellent swimmers and divers, including such distantly related types of birds as grebes, loons, ducks, auks, cormorants, penguins, and diving petrels. Most of these birds have webbed or lobed toes that act as paddles, which they use to propel themselves underwater. Others, including auks and penguins, use their wings to propel themselves through the water. Swimming birds have broad, raft-like bodies that provide stability. They have dense feather coverings that hold pockets of air for warmth, but they can compress the air out of these pockets to reduce buoyancy when diving.

Many fish-catching birds can dive to great depths, either from the air or from the water's surface. The emperor penguin can dive to depths of more than 250 m. (850 ft.) and remain submerged for about 12 minutes. Some ducks, swans, and geese perform an action called dabbling, in which they tip their tails up and reach down with their beaks to forage on the mud beneath shallow water.

Like other animals, birds must eat, rest, and defend themselves against predators to survive. They must also reproduce and raise their young to contribute to the survival of their species. For many bird species, migration is an essential part of survival. Birds have acquired remarkably diverse and effective strategies for achieving these ends.

Birds spend much of their time feeding and searching for food. Most birds cannot store large reserves of food internally, because the extra weight would prevent them from flying. Small birds need to eat even more frequently than large ones, because they have a greater surface area in proportion to their weight and therefore lose their body heat more quickly. Some extremely small birds, such as hummingbirds, have so little food in reserve that they enter a state resembling hibernation during the night and rely on the warmth of the sun to energize them in the morning.

Depending on the species, birds eat insects, fish, meat, seeds, nectar, and fruit. Most birds are either carnivorous, meaning they eat other animals, or herbivorous, meaning they eat plant material. Many birds, including crows and gulls, are omnivorous, eating almost anything. Many herbivorous birds feed protein-rich animal material to their developing young. Some bird species have highly specialized diets, such as the Everglade kite, which feeds exclusively on snails.

Two unusual internal organs help birds to process food. The gizzard, which is part of a bird’s stomach, has thick muscular walls with hard inner ridges. It can crush large seeds and even shellfish. Some seed-eating birds swallow small stones so that the gizzard will grind food more efficiently. Birds that feed on nectar and soft fruits have poorly developed gizzards.

Most birds have a crop-a sac-like extension of the esophagus, the tubular organ through which food passes after leaving the mouth. Some birds store food in their crops and transport it to the place where they sleep. Others use the crop to carry food that they will later regurgitate to their offspring.

The bills of birds are modified in ways that help birds obtain and handle food. Nectar-feeders, such as hummingbirds, have long thin bills, which they insert into flowers, and specialized extensible or brushlike tongues, through which they draw up nectar. Meat-eating birds, including hawks, owls, and shrikes, have strong, hooked bills that can tear flesh. Many fish-eating birds, such as merganser ducks, have tooth-like ridges on their bills that help them to hold their slippery prey. The thick bills and strong jaw muscles of various finches and sparrows are ideal for crushing seeds. Woodpeckers use their bills as chisels, working into dead or living wood to find insect larvae and excavate nest cavities.

At least two species of birds use tools in obtaining food. One is the woodpecker finch, which uses twigs or leaf stalks to extract insects from narrow crevices in trees. The other is the Egyptian vulture, which picks up large stones in its bill and throws them at ostrich eggs to crack them open.

Birds need far less sleep than humans do. Birds probably sleep to relax their muscles and conserve energy rather than to refresh their brains. Many seabirds, in particular, sleep very little. For example, the sooty tern, which rarely settles on the water, may fly for several years with only brief periods of sleep lasting a few seconds each. Flight is so effortless for the sooty tern and other seabirds that it demands very little energy.

Most birds are active during the day and sleep at night. Exceptions are birds that hunt at night, such as owls and night jars. Birds use nests for sleeping only during the breeding season. The rest of the year, birds sleep in shrubs, on tree branches, in holes in trees, and on the bare ground. Most ducks sleep on the water. Many birds stand while they sleep, and some sleep while perched on a branch, sometimes on only one foot. These birds can avoid falling because of a muscle arrangement that causes their claws to tighten when they bend their legs to relax.

To reproduce, birds must find a suitable mate, or mates, and the necessary resources-food, water, and nesting materials-for caring for their eggs and raising the hatched young to independence. Most birds mate during a specific season in a particular habitat, although some birds may reproduce in varied places and seasons, provided environmental conditions are suitable.

Most birds have monogamous mating patterns, meaning that one male and one female mate exclusively with each other for at least one season. However, some bird species are either polygynous, meaning the males mate with more than one female, or polyandrous, in which case the females mate with more than one male. Among many types of birds, including some jays, several adults, rather than a single breeding pair, often help to raise the young within an individual nest.

Birds rely heavily on their two main senses, vision and hearing, in courtship and breeding. Among most songbirds, including the nightingale and the sky lark, males use song to establish breeding territories and attract mates. In many species, female songbirds may be attracted to males that sing the loudest, longest, or most varied songs. Many birds, including starlings, mimic the sounds of other birds. This may help males to achieve sufficiently varied songs to attract females.

Many birds rely on visual displays of their feathers to obtain a mating partner. For example, the blue bird of paradise hangs upside down from a tree branch to show off the dazzling feathers of its body and tail. A remarkable courtship strategy is exhibited by male bowerbirds of Australia and New Guinea. These birds attract females by building elaborate structures called bowers, which they decorate with colourful objects such as flower petals, feathers, fruit, and even human-made items such as ribbons and tinfoil.

Among some grouse, cotingas, the small wading birds called shorebirds, hummingbirds, and other groups, males gather in areas called leks to attract mates through vocal and visual displays. Females visiting the leks select particularly impressive males, and often only one or a very few males effectively mate. Among western grebes, both males and females participate in a dramatic courtship ritual called rushing, in which mating partners lift their upper bodies far above the water and paddle rapidly to race side by side over the water’s surface. Although male birds usually court females, there are some types of birds, including the phalaropes, in which females court males.

Many birds establish breeding territories, which they defend from rivals of the same species. In areas where suitable nesting habitat is limited, birds may nest in large colonies. An example is the crab plover, which sometimes congregates by the thousands in areas of only about 0.6 hectares (about 1.5 acres).

For breeding, most birds build nests, which help them to incubate, or warm, the developing eggs. Nests sometimes offer camouflage from predators and physical protection from the elements. Nests may be elaborate constructions or mere scrapes in the ground. Some birds, including many shorebirds, incubate their eggs without any type of nest at all. The male emperor penguin of icy Antarctica incubates its single egg on top of its feet under a fold of skin.

Bird nests range in size from the tiny cups of hummingbirds to the huge stick nests of eagles, which may weigh a ton or more. Some birds, such as the mallee-fowl of southern Australia, use external heat sources, such as decaying plant material, to incubate their eggs. Many birds, including woodpeckers, use tree cavities for nests. Others, such as cowbirds and cuckoos, are brood parasites; they neither build nests nor care for their young. Instead, females of these species lay their eggs in the nests of birds of other species, so that the eggs are incubated, and the hatchlings raised, by the host parents.

Incubation by one or both parents works with the nest structure to provide an ideal environment for the eggs. The attending parent may warm the eggs with a part of its belly called the brood patch. Bird parents may also wet or shade the eggs to prevent them from overheating.

The size, shape, colour, and texture of a bird egg are specific to each species. Eggs provide an ideal environment for the developing embryo. The shells of eggs are made of calcium carbonate. They contain thousands of pores through which water can evaporate and air can seep in, enabling the developing embryo to breathe. The number of eggs in a clutch (the egg or eggs laid by a female bird in one nesting effort) may be 15 or more for some birds, including pheasants. In contrast, some large birds, such as condors and albatrosses, may lay only a single egg every two years. The eggs of many songbirds hatch after developing for as few as ten days, whereas those of albatrosses and kiwis may require 80 days or more.

Among some birds, including songbirds and pelicans, newly hatched young are featherless, blind, and incapable of regulating their body temperature. Many other birds, such as ducks, are born covered with down and can feed themselves within hours after hatching. Depending on the species, young birds may remain in the nest for as little as part of a day or as long as several months. Fledglings (young that have left the nest) may still rely on parental care for many days or weeks. Only about 10 percent of birds survive their first year of life; the rest die of starvation, disease, predators, or inexperience with the behaviours necessary for survival. The age at which birds begin to breed varies from less than a year in many songbirds and some quail to ten years or more in some albatrosses. The life spans of birds in the wild are poorly known. Many small songbirds live only three to five years, whereas some albatrosses are known to have survived more than 60 years in the wild.

The keen eyesight and acute hearing of birds help them react quickly to predators, which may be other birds, such as falcons and hawks, or other types of animals, such as snakes and weasels. Many small birds feed in flocks, where they benefit from the vigilance of many pairs of eyes. The first bird in a flock to spot a predator usually warns the others with an alarm call.

Birds that feed alone commonly rely on camouflage and rapid flight as means of evading predators. Many birds have highly specific and unusual defence strategies. The burrowing owl in North America, which lives in the burrows of ground squirrels, frightens away predators by making a call that sounds much like a rattlesnake. The snipe, a wading bird, flees from its enemies with a zigzag flight pattern that is hard for other birds to follow.

Many bird species undergo annual migrations, travelling between seasonally productive habitats. Migration helps birds to have continuous sources of food and water, and to avoid environments that are too hot or too cold. The most spectacular bird migrations are made by seabirds, which fly across oceans and along coastlines, sometimes travelling 32,000 km. (20,000 mi.) or more in a single year. Migrating birds use a variety of cues to find their way. These include the positions of the sun during the day and the stars at night; the earth's magnetic field; and visual, olfactory, and auditory landmarks. The strict formations in which many birds fly help them on the journey. For example, migrating geese travel in a V-shaped formation, which enables all of the geese except the leader to take advantage of the updrafts generated by the flapping wings of the goose in front. Young birds of many species undertake their first autumn migration with no guidance from experienced adults. These inexperienced birds do not necessarily reach their destinations; many stray in the wrong direction and are sometimes observed thousands of kilometres away from their normal route.

There are nearly 10,000 known species of modern or recently extinct birds. Traditionally, taxonomists (those who classify living things based on evolutionary relationships) have looked at bird characteristics such as skeletal structure, plumage, and bill shape to determine which birds have a shared evolutionary history. More recently, scientists have turned to deoxyribonucleic acid (DNA) - the genetic information found in the cells of all living organisms - for clues about relationships among birds. DNA is useful to bird taxonomists because closely related birds have more similar DNA than do groups of birds that are distantly related. DNA comparisons have challenged some of scientists' previous ideas about relationships among birds. For example, these studies have revealed that vultures of the Americas are more closely related to storks than to the vultures of Europe, Asia, or Africa.

Another method of categorizing birds focuses on adaptive types, or lifestyles. This system groups together birds that live in similar environments or have similar methods for obtaining food. Even within a given adaptive type, birds show tremendous diversity.

Aquatic birds obtain most or all of their food from the water. All aquatic birds that live in saltwater environments have salt glands, which enable them to drink seawater and excrete the excess salt. Albatross, shearwaters, storm petrels, and diving petrels are considered the most exclusively aquatic of all birds. These birds spend much of their time over the open ocean, well away from land.

Many other birds have aquatic lifestyles but live closer to land. Among these are penguins, which live in the southernmost oceans near the Antarctic. Some species of penguins spend most of their lives in the water, coming on land only to reproduce and molt. Grebes and divers, or loons, are found on or near lakes. Grebes are unusual among birds because they make their nests on the water, using floating plant materials that they hide among reeds. Pelicans, known for their long bills and huge throat pouches, often switch between salt water and fresh water habitats during the year. Gulls are generalists among the aquatic birds, feeding largely by scavenging over open water, along shores, or even in inland areas. Waterfowl, a group that includes ducks, geese, and swans, often breed on freshwater lakes and marshes, although they sometimes make their homes in marine habitats.

Many long-legged, long-billed birds are adapted to live at the junction of land and water. Large wading birds, including herons, storks, ibises, spoonbills, and flamingoes, are found throughout the world, except near the poles. These birds wade in shallow water or across mudflats, wet fields, or similar environments to find food. Depending on the species, large wading birds may eat fish, frogs, shrimp, or microscopic marine life. Many large wading birds gather in enormous groups to feed, sleep, or nest. Shorebirds often inhabit puddles or other shallow bodies of water. The diversity of shorebirds is reflected in their varied bill shapes and leg lengths. The smallest North American shorebirds, called stints or peeps, have short, thin bills that enable them to pick at surface prey, whereas curlews probe with their long bills for burrowing shellfish and marine worms that are beyond the reach of most other shore feeders. Avocets and stilts have long legs and long bills, both of which help them to feed in deeper water.

Among the best-known birds are the birds of prey. Some, including hawks, eagles, and falcons, are active during the daytime. Others, notably owls, are nocturnal, or active at night. Birds of prey have hooked beaks, strong talons or claws on their feet, and keen eyesight and hearing. The larger hawks and eagles prey on small mammals, such as rodents, and on other vertebrates. Some birds of prey, such as the osprey and many eagles, eat fish. Falcons eat mainly insects, and owls, depending on the species, have diets ranging from insects to fish and mammals. Scavengers that feed on dead animals are also considered birds of prey. These include relatives of eagles called Old World vultures, which live in Eurasia and Africa, and the condors and vultures of North and South America.

Some birds, including the largest of all living birds, have lost the ability to fly. The ostriches and their relatives - rheas, emus, cassowaries, and kiwis - are flightless birds living in Africa, South America, and Australasia, including New Guinea and New Zealand. The tinamous of Central and South America are related to the ostrich group, but they have a limited ability to fly. Other birds that feed primarily on the ground and excel as runners include the bustards (relatives of the cranes) and megapodes, members of a group of chicken-like birds that includes quail, turkeys, pheasants, and grouse. Vegetation is an important part of the diets of running birds.

More than half of all living species of birds are perching birds. Perching birds have been successful in all terrestrial habitats. Typically small birds, perching birds have a distinctive arrangement of toes and leg tendons that enables them to perch acrobatically on small twigs. They have the most highly developed and complex vocalizations of all birds. They are divided into two main groups: the sub-oscines, which are mainly tropical and include tyrant flycatchers, antbirds, and oven-birds, and the oscines or songbirds, which make up about 80 percent of all perching bird species, among them the familiar sparrows, finches, warblers, crows, blackbirds, thrushes, and swallows. Some birds of this group catch and feed upon flying insects. An example is the swallow, which opens its mouth in a wide, trap-like gape to gather food. One group, the dippers, is aquatic; its members obtain their food during short dives in streams and rivers.

Many other groups of birds thrive in terrestrial habitats. Parrots, known for their brilliantly coloured plumage, form a distinctive group of tropical and southern temperate birds that inhabit woodlands and grasslands. Doves and pigeons, like parrots, are seed and fruit eaters but are more widespread and typically more subdued in colour. The cuckoos - including tree-dwelling species such as the European cuckoo, whose call is mimicked by the cuckoo clock, and ground-inhabiting species, such as roadrunners - are land birds. Hummingbirds are a group of nectar- and insect-feeding land birds whose range extends from Alaska to the tip of South America. Woodpeckers and their relatives thrive in forests. Kingfishers are considered land birds despite their habit of eating fish.

Although birds collectively occupy most of the earth's surface, most individual species are found only in particular regions and habitats. Some species are quite restricted, occurring only on a single oceanic island or an isolated mountaintop, whereas others are cosmopolitan, living in suitable habitats on most continents. The greatest species diversity occurs in the American tropics, extending from Mexico through South America. This part of the world is especially rich in tyrant flycatchers, oven-birds, antbirds, tanagers, and hummingbirds. The Australia and New Guinea region has perhaps the most distinctive groups of birds, because its birds have long been isolated from those of the rest of the world. Emus, cassowaries, and several songbird groups, including birds-of-paradise, are found nowhere else. Africa is the exclusive home of several bird families, including turacos, secretary birds, and helmet-shrikes. Areas farther from the equator have less diverse birdlife. For example, about 225 bird species breed in the British Isles - approximately half the number of breeding species that inhabit a single reserve in Ecuador or Peru. Despite the abundance of seabirds at its fringes, Antarctica is the poorest bird continent, with only about 20 species.

The habitats occupied by birds are also diverse. Tropical rain forests have high species diversity, as do savannas and wetlands. Fewer species generally occupy extremely arid habitats and very high elevations. A given species might be a habitat specialist, such as the marsh wren, which lives only in marshes of cattails or tules, or a generalist, such as the house sparrow, which can thrive in a variety of environments.

Many habitats are only seasonally productive for birds. The arctic tundra, for example, teems with birds during the short summer season, when food and water are plentiful. In the winter, however, this habitat is too cold and dry for all but a few species. Many bird species respond to such seasonal changes by undergoing annual migrations. Many bird species that breed in the United States and Canada move south to winter in Central America or northern South America. Similar migrations from temperate regions to tropical ones occur between Europe and Africa, between northeastern Asia and both Southeast Asia and India, and, to a lesser degree, from southern Africa and southern South America to the equatorial parts of those continents.

Scientists disagree about many aspects of the evolution of birds. Many paleontologists (scientists who study fossils to learn about prehistoric life) believe that birds evolved from small, predatory dinosaurs called theropods. These scientists say that many skeletal features of birds, such as light, hollow bones and a furcula (wishbone), were present in theropod dinosaurs before the evolution of birds. Others, however, think that birds evolved from an earlier type of reptile called thecodonts - a group that ultimately produced dinosaurs, crocodiles, and the flying reptiles known as pterosaurs. These scientists assert that similarities between birds and theropod dinosaurs are due to a phenomenon called convergent evolution - the evolution of similar traits among groups of organisms that are not necessarily related.

Scientists also disagree about how flight evolved. Some scientists believe that flight first occurred when the ancestors of birds climbed trees and glided down from branches. Others theorize that bird flight began from the ground up, when dinosaurs or reptiles ran along the ground and leaped into the air to catch insects or to avoid predators. Continued discovery and analysis of fossils will help clarify the origins of birds.

Despite uncertainties about bird evolution, scientists do know that many types of birds lived during the Cretaceous Period, which lasted from about 138 million to 65 million years ago. Among these birds were Ichthyornis victor, which resembled a gull and had vertebrae similar to those of a fish, and Hesperornis regalis, which was nearly wingless and had vertebrae like those of today's birds. Most birds of the Cretaceous Period are thought to have died out in the mass extinctions (large-scale deaths of many animal species) that took place at the end of the Cretaceous Period.

The remains of prehistoric plants and animals, buried and preserved in sedimentary rock or trapped in amber or other deposits of ancient organic matter, provide a record of the history of life on Earth. Scientists who study this fossil record are called paleontologists, and the record they have assembled shows that extinction is an ongoing phenomenon. Of the hundreds of millions of species that have lived on Earth over the past 3.8 billion years, more than 99 percent are already extinct. Some extinction happens as the natural result of competition between species and is explained by natural selection. According to natural selection, living things must compete for food and space. They must evade the ravages of predators and disease while dealing with unpredictable shifts in their environment. Those species incapable of adapting face extinction. This constant rate of extinction, sometimes called background extinction, is like a slowly ticking clock. First one species, then another becomes extinct, and new species appear almost at random as geological time goes by. Normal rates of background extinction are usually about five families of organisms lost per million years.

More recently, paleontologists have discovered that not all extinction is slow and gradual. At various times in the fossil record, many different, unrelated species became extinct at nearly the same time. The cause of these large-scale extinctions is always dramatic environmental change that produces conditions too severe for organisms to endure. Environmental changes of this caliber result from extreme climatic change, such as the global cooling observed during the ice ages, or from catastrophic events, such as meteorite impacts or widespread volcanic activity. Whatever their causes, these events dramatically alter the composition of life on Earth, as entire groups of organisms disappear and entirely new groups rise to take their place.

In its most general sense, the term mass extinction refers to any episode in which many species are lost. Nonetheless, the term is generally reserved for truly global extinction events - events in which extensive species loss occurs in all ecosystems, on land and in the sea, affecting every part of the Earth's surface. Scientists recognize five such mass extinctions in the past 500 million years. The first occurred around 438 million years ago in the Ordovician Period. At this time, more than 85 percent of the species on Earth became extinct. The second took place 367 million years ago, near the end of the Devonian Period, when 82 percent of all species were lost. The third and greatest mass extinction to date occurred 245 million years ago at the end of the Permian Period. In this mass extinction, as many as 96 percent of all species on Earth were lost. The devastation was so great that paleontologists use this event to mark the end of the ancient, or Paleozoic Era, and the beginning of the middle, or Mesozoic Era, when many new groups of animals evolved.

About 208 million years ago, near the end of the Triassic Period, the fourth mass extinction claimed 76 percent of the species alive at the time, including many species of amphibians and reptiles. The fifth and most recent mass extinction occurred about 65 million years ago at the end of the Cretaceous Period and resulted in the loss of 76 percent of all species, most notably the dinosaurs.

Many geologists and paleontologists speculate that this fifth mass extinction occurred when one or more meteorites struck the Earth. They believe the impact created a dust cloud that blocked much of the sunlight-seriously altering global temperatures and disrupting photosynthesis, the process by which plants derive energy. As plants died, organisms that relied on them for food also disappeared. Supporting evidence for this theory comes from a buried impact crater in the Yucatán Peninsula of Mexico. Measured at 200 km. (124 mi.) in diameter, this huge crater is thought to be the result of a large meteorite striking the Earth. A layer of the element iridium in the geologic sediment from this time provides additional evidence. Unusual in such quantities on Earth, iridium is common in extraterrestrial bodies, and theory supporters suggest iridium travelled to Earth on a meteorite.

Other scientists suspect that widespread volcanic activity in what is now India and the Indian Ocean may have been the source of the atmospheric gases and dust that blocked sunlight. Ancient volcanoes could have been the source of the unusually high levels of iridium, and advocates of this theory point out that iridium is still being released today by at least one volcano in the Indian Ocean. No matter what the cause, the extinction at the end of the Cretaceous Period was so great that scientists use this point in time to divide the Mesozoic Era (also called the Age of Reptiles) from the Cenozoic Era (otherwise known as the Age of Mammals).

Historically, biologists - most famous among them British naturalist Charles Darwin - assumed that extinction is the natural outcome of competition between newly evolved, adaptively superior species and their older, more primitive ancestors. These scientists believed that newer, better-adapted species simply drove less well-adapted species to extinction. That is, historically, extinction was thought to result from evolution. It was also thought that this process happens in a slow and regular manner and occurs at different times in different groups of organisms.

In the case of background extinction, this holds true. An average of three species becomes extinct every million years, usually because of the forces of natural selection. When this happens, new species - which characteristically differ only slightly from the organisms that disappeared - rise to take their places, creating evolutionary lineages of related species. The modern horse, for example, comes from a long evolutionary lineage of related, but now extinct, species. The earliest known horse had four toes on its front feet, three toes on its rear feet, and weighed just 36 kg. (80 lb.). About 45 million years ago, this horse became extinct. It was succeeded by other types of horses with different characteristics, such as teeth better shaped for eating different plants, which made them better suited to their changing environments. This pattern of extinction and the ensuing rise of related species continued for more than 55 million years, ultimately resulting in the modern horse and its relatives the zebras and asses.

In mass extinctions, entire groups of species-such as families, orders, and classes-die out, creating opportunities for the survivors to exploit new habitats. In their new niches, the survivors evolve new characteristics and habits and, consequently, develop into entirely new species. What this course of events means is that mass extinctions are not the result of the evolution of new species, but actually a cause of evolution. Fossils from periods of mass extinction suggest that most new species evolve after waves of extinction. Mass extinctions cause periodic spurts of evolutionary change that shake up the dynamics of life on Earth.

This is perhaps best shown in the development of our own ancestors, the early mammals. Before the fall of the dinosaurs, which had dominated Earth for more than 150 million years, mammals were small, nocturnal, and secretive. They devoted much of their time and energy to evading meat-eating dinosaurs. With the extinction of dinosaurs, the remaining mammals moved into habitats and ecological niches previously dominated by the dinosaurs. Over the next 65 million years, those early mammals evolved into a variety of species, assuming many ecological roles and rising to dominate the Earth as the dinosaurs had before them.

Most scientists agree that life on Earth is now faced with the most severe extinction episode since the event that drove the dinosaurs extinct. No one knows exactly how many species are being lost because no one knows exactly how many species exist on Earth. Estimates vary, but the most widely accepted figure lies between 10 and 13 million species. Of these, biologists estimate that as many as 27,000 species are becoming extinct each year. This translates into an astounding three species every hour.
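
The hourly figure follows directly from the annual estimate. A quick arithmetic sketch, using the 27,000-species-per-year estimate quoted above:

```python
# Convert the annual extinction estimate quoted in the text into an
# hourly rate.
SPECIES_PER_YEAR = 27_000
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

species_per_hour = SPECIES_PER_YEAR / HOURS_PER_YEAR
print(round(species_per_hour, 1))  # roughly three species every hour
```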

Instead of global climate change, humans are the cause of this latest mass extinction. With the invention of agriculture some 10,000 years ago, humans began destroying the world's terrestrial ecosystems to produce farmland. Today pollution destroys ecosystems even in remote deserts and in the world’s deepest oceans. In addition, we have cleared forests for lumber, pulp, and firewood. We have harvested the fish and shellfish of the world's largest lakes and oceans in volumes that make it impossible for populations to recover fast enough to meet our harvesting needs. Everywhere we go, whether on purpose or by accident, we have brought along species that disrupt local ecosystems and, in many cases, drive native species extinct. For instance, Nile perch were intentionally introduced to Lake Victoria for commercial fishing in 1959. This fish proved to be an efficient predator, driving 200 rare species of cichlid fishes to extinction.

This sixth extinction, as it has become known, poses a great threat to our continued existence on the planet. The sum of all species living in the world's ecosystems is known as biodiversity, and in losing it we lose much of the natural wealth on which we depend. Humans use at least 40,000 different plant, animal, fungus, bacterium, and virus species for food, clothing, shelter, and medicines. In addition, the fresh air we breathe; the water we drink, cook, and wash with; and many chemical cycles-including the nitrogen cycle and the carbon cycle, so vital to sustaining life-depend on the continued health of ecosystems and the species within them.

The list of victims of the sixth extinction grows by the year. Forever lost are the penguin-like great auk, the passenger pigeon, the zebra-like quagga, the thylacine, the Balinese tiger, the ostrich-like moa, and the tarpan, a small species of wild horse, to name but a few. More than 1,000 plants and animals are threatened by extinction. Each of these organisms has unique attributes-some of which may hold the secrets to increasing world food supplies, eradicating water pollution, or curing disease. A subspecies of the endangered chimpanzee, for example, has recently been identified as the probable origin of the human immunodeficiency virus, the virus that causes acquired immunodeficiency syndrome (AIDS). Yet these animals are widely hunted in their west African habitat, and just as researchers learn of their significance to the AIDS epidemic, the animals face extinction. If they become extinct, they will take with them many secrets surrounding this devastating disease.

In the United States, legislation to protect endangered species from impending extinction includes the Endangered Species Act of 1973. The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), established in 1975, enforces the prohibition of trading of threatened plants and animals between countries. The Convention on Biological Diversity, an international treaty developed in 1992 at the United Nations Conference on the Environment and Development, obligates more than 160 countries to take action to protect plant and animal species.

Scientists meanwhile are intensifying their efforts to describe the species of the world. So far biologists have identified and named around 1.75 million species-a mere fraction of the species believed to exist today. Of those identified, special attention is given to species at or near the brink of extinction. The World Conservation Union (IUCN) maintains an active list of endangered plants and animals called the Red List. In addition, captive breeding programs at zoos and private laboratories are dedicated to the preservation of endangered species. Participants in these programs breed members of different populations of endangered species to increase their genetic diversity, better enabling the species to cope with further threats to their numbers.

All these programs together have had some notable successes. The peregrine falcon, nearly extinct in the United States due to the widespread use of the pesticide DDT, rebounded strongly after DDT was banned in 1973. The brown pelican and the bald eagle offer similar success stories. The California condor, a victim of habitat destruction, was bred in captivity, and small numbers of them are now being released back into the wild.

Growing numbers of legislators and conservation biologists, scientists who specialize in preserving and nurturing biodiversity, are realizing that the primary cause of the current wave of extinction is habitat destruction. Efforts have accelerated to identify ecosystems at greatest risk, including those with high numbers of critically endangered species. Programs to set aside large tracts of habitats, often interconnected by narrow zones or corridors, offer the best hope yet of sustaining ecosystems, and with them most of the world's species.

The Tertiary Period, directly following the Cretaceous, witnessed an explosive evolution of birds. One bird that lived during the Tertiary Period was Diatryma, which stood 1.8 to 2.4 m. (about 6 to 8 ft.) tall and had massive legs, a huge bill, and very small, underdeveloped wings. Most modern families of birds can be traced back in the fossil record to the early or mid-Eocene Epoch-a stage within the Tertiary Period that occurred about 50 million years ago. Perching birds, called passerines, experienced a tremendous growth in species diversity in the latter part of the Tertiary; today this group is the most diverse order of birds.

During the Pleistocene Epoch, from 1.6 million to 10,000 years ago, also known as the Ice Age, glacier ice spread over more than one-fourth of the land surfaces of the earth. These glaciers isolated many groups of birds from other groups with which they had previously interbred. Scientists have long assumed that the resulting isolated breeding groups evolved into the species of birds that exist today. This assumption has been modified from studies involving bird DNA within cellular components called mitochondria. Pairs of species that only recently diverged from a shared ancestry are expected to have more similar mitochondrial DNA than are pairs that diverged in the more distant past. Because mutations in mitochondrial DNA are thought to occur at a fixed rate, some scientists believe that this DNA can be interpreted as a molecular clock that reveals the approximate amount of time that has elapsed since two species diverged from one another. Studies of North American songbirds based on this approach suggest that only the earliest glaciers of the Pleistocene are likely to have played a role in shaping bird species.
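
The molecular-clock reasoning above can be sketched as a small calculation. The 2%-per-million-years pairwise divergence rate and the sequence fragments below are illustrative assumptions for the sketch, not values taken from this text:

```python
# Molecular-clock sketch: if mitochondrial DNA accumulates mutations at a
# roughly fixed rate, the sequence divergence between two species estimates
# the time since they split. The 2%-per-million-years pairwise rate used
# here is an assumed calibration for illustration only.
DIVERGENCE_RATE = 0.02  # fraction of sites differing per million years

def divergence_time_mya(seq_a: str, seq_b: str) -> float:
    """Estimate millions of years since two aligned sequences diverged."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    differences = sum(a != b for a, b in zip(seq_a, seq_b))
    return (differences / len(seq_a)) / DIVERGENCE_RATE

# Hypothetical aligned fragments differing at 2 of 100 sites (2% divergence):
a = "ACGT" * 25
b = "TCGA" + "ACGT" * 24
print(divergence_time_mya(a, b))  # 1.0, i.e. about one million years
```

Under this reading, two songbird species whose mitochondrial DNA differs by only a percent or two would have diverged within the Pleistocene, which is how such studies tie speciation to particular glaciations.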

The evolution of birds has not ended with the birds that we know today. Some bird species are dying out. In addition, the process of speciation-the evolutionary change that produces new species-continues.

Birds have been of ecological and economic importance to humans for thousands of years. Archaeological sites reveal that prehistoric people used many kinds of birds for food, ornamentation, and other cultural purposes. The earliest domesticated bird was probably the domestic fowl or chicken, derived from jungle fowls of Southeast Asia. Domesticated chickens existed even before 3000 BC. Other long-domesticated birds are ducks, geese, turkeys, guinea fowl, and pigeons.

Today the adults, young, and eggs of both wild and domesticated birds provide humans with food. People in many parts of Asia even eat nests that certain swiftlets in southeastern Asia construct out of saliva. Birds give us companionship as pets, assume religious significance in many cultures, and, with hawks and falcons, perform work for us as hunters. People in maritime cultures have learned to monitor seabird flocks to find fish, sometimes even using cormorants to do the fishing.

Birds are good indicators of the quality of our environment. In the 19th century, coal miners brought caged canaries with them into the mines, knowing that if the birds stopped singing, dangerous mine gases had escaped into the air and poisoned them. Birds provided a comparable warning to humans in the early 1960s, when the numbers of peregrine falcons in the United Kingdom and raptors in the United States suddenly declined. This decline was caused by organochlorine pesticides, such as DDT, which were accumulating in the birds and causing them to produce eggs with overly fragile shells. This decline in the bird populations alerted humans to the possibility that pesticides can harm people as well. Today certain species of birds are considered indicators of the environmental health of their habitats. An example of an indicator bird is the northern spotted owl, which can only reproduce within old growth forests in the Pacific Northwest.

Many people enjoy bird-watching. Equipped with binoculars and field guides, they identify birds and their songs, often keeping lists of the various species they have witnessed. Scientists who study birds are known as ornithologists. These experts investigate the behaviour, evolutionary history, ecology, classification, and distribution of both domesticated and wild birds.

Overall, birds pose little direct danger to humans. A few birds, such as the cassowaries of New Guinea and northeastern Australia, can kill humans with their strong legs and bladelike claws, but actual attacks are extremely rare. Many birds become quite aggressive when defending a nest site; humans are routinely attacked, and occasionally killed, by hawks engaging in such defence. Birds pose a greater threat to human health as carriers of diseases. Diseases carried by birds that can affect humans include influenza and psittacosis.

Negative impacts by birds on humans are primarily economic. Blackbirds, starlings, sparrows, weavers, crows, parrots, and other birds may seriously deplete crops of fruit and grain. Similarly, fish-eating birds, such as cormorants and herons, may adversely influence aquacultural production. However, the economic benefits of wild birds to humans are well documented. Many birds help humans, especially farmers, by eating insects, weeds, slugs, and rodents.

Although birds, with some exceptions, are tremendously beneficial to humans, humans have a long history of causing harm to birds. Studies of bone deposits on some Pacific islands, including New Zealand and Polynesia, suggest that early humans hunted many hundreds of bird species to extinction. Island birds have always been particularly susceptible to predation by humans. Because these birds have largely evolved without land-based predators, they are tame and in many cases are flightless. They are therefore easy prey for humans and the animals that accompany them, such as rats. The dodo, a flightless pigeon-like bird on the island of Mauritius in the Indian Ocean, was hunted to extinction by humans in the 1600s.

With colonial expansion and the technological advances of the 18th and 19th centuries, humans hunted birds on an unprecedented scale. This time period witnessed the extinction of the great auk, a large flightless seabird of the North Atlantic Ocean that was easily killed by sailors for food and oil. The Carolina parakeet also became extinct during this period, although the last one of these birds survived in the Cincinnati Zoo until 1918.

In the 20th century, a time of explosive growth in human populations, the major threats to birds have been the destruction and modification of their habitats. The relentless clearing of hardwood forests outweighed even relentless hunting as the cause of the extinction of the famous passenger pigeon, whose eastern North American populations may have once numbered in the billions. The fragmentation of habitats into small parcels is also harmful to birds, because it increases their vulnerability to predators and parasites.

Habitat fragmentation and reduction particularly affect songbirds that breed in North America in the summer and migrate to Mexico, the Caribbean, Central America, and Colombia for the winter. In North America, these birds suffer from forest fragmentation caused by the construction of roads, housing developments, and shopping malls. In the southern part of their range, songbirds are losing traditional nesting sites as tropical forests are destroyed and shade trees are removed from coffee plantations.

Pesticides, pollution, and other poisons also threaten today’s birds. These substances may kill birds outright, limit their ability to reproduce, or diminish their food supplies. Oil spills have killed thousands of aquatic birds, because birds with oil-drenched feathers cannot fly, float, or stay warm. Acid rain, caused by chemical reactions between airborne pollutants, water, and oxygen in the atmosphere, has decreased the food supply of many birds that feed on fish or other aquatic life in polluted lakes. Many birds are thought to be harmed by selenium, mercury, and other toxic elements present in agricultural runoff and in drainage from mines and power plants. For example, loons in the state of Maine may be in danger due to mercury that drifts into the state from unregulated coal-fired power plants in the Midwest and other sources. Global warming, an increase in the earth’s temperature due to a buildup of greenhouse gases, is another potential threat to birds.

Sanctuaries for birds exist all over the world-two examples are the Bharatpur Bird Sanctuaries in India’s Keoladeo National Park, which protects painted storks, gray herons, and many other bird species, and the National Wildlife Refuge system of the United States. In North America, some endangered birds are bred in settings such as zoos and specialized animal clinics and later released into the wild. Such breeding programs have added significantly to the numbers of whooping cranes, peregrine falcons, and California condors. Many countries, including Costa Rica, are finding they can reap economic benefits, including the promotion of tourism, by protecting the habitats of birds and other wildlife.

The protection of the earth’s birds will require more than a single strategy. Many endangered birds need a combination of legal protections, habitat management, and control of predators and competitors. Ultimately, humans must decide that the birds’ world is worth preserving along with our own.

Most people did not understand the true nature of fossils until the beginning of the 19th century, when the basic principles of modern geology were established. Beginning about AD 1500, scholars engaged in a bitter controversy over the origin of fossils. One group held that fossils are the remains of prehistoric plants and animals. This group was opposed by another, which declared that fossils were either freaks of nature or creations of the devil. During the 18th century, many people believed that all fossils were relics of the great flood recorded in the Bible.

Paleontologists gain most of their information by studying deposits of sedimentary rocks that formed in strata over millions of years. Most fossils are found in sedimentary rock. Paleontologists use fossils and other qualities of the rock to compare strata around the world. By comparing, they can determine whether strata developed during the same time or in the same type of environment. This helps them assemble a general picture of how the earth evolved. The study and comparison of different strata are called stratigraphy.

Fossils supply most of the data on which strata are compared. Some fossils, called index fossils, are especially useful because they have a broad geographic range but a narrow temporal one-that is, they represent a species that was widespread but existed for a brief period of time. The best index fossils tend to be marine creatures. These animals evolved rapidly and spread over large areas of the world. Paleontologists divide the last 570 million years of the earth's history into eras, periods, and epochs. The part of the earth's history before about 570 million years ago is called Precambrian time, which began with the earth's birth, probably more than four billion years ago.
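
The era divisions described above can be expressed as a simple lookup. This sketch uses the boundary dates given in this text; the `era_for_age` helper is illustrative, and published boundary dates vary between sources:

```python
# Geologic eras with their starting dates (millions of years ago),
# using the boundaries given in this text, from oldest to youngest.
ERAS = [
    (570, "Paleozoic"),  # 570-240 Mya
    (240, "Mesozoic"),   # 240-65 Mya
    (65, "Cenozoic"),    # 65 Mya to the present
]

def era_for_age(age_mya: float) -> str:
    """Return the era containing a given age, in millions of years ago."""
    if age_mya > 570:
        return "Precambrian"  # everything before the Paleozoic
    era = "Paleozoic"
    for start, name in ERAS:
        if age_mya <= start:
            era = name  # keep the youngest era old enough to contain this age
    return era

print(era_for_age(500))  # Paleozoic (Cambrian period)
print(era_for_age(150))  # Mesozoic (Jurassic period)
```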

The earliest evidence of life consists of microscopic fossils of bacteria that lived as early as 3.6 billion years ago. Most Precambrian fossils are very tiny. Most species of larger animals that lived in later Precambrian time had soft bodies, without shells or other hard body parts that would create lasting fossils; the first abundant fossils of larger animals date from around 600 million years ago. The Paleozoic era lasted about 330 million years. It includes the Cambrian, Ordovician, Silurian, Devonian, Carboniferous, and Permian periods. Index fossils of the first half of the Paleozoic era are those of the invertebrates, such as trilobites, graptolites, and crinoids. Remains of plants and such vertebrates as fish and reptiles make up the index fossils of the second half of this era.

At the beginning of the Cambrian period (570 million to 500 million years ago) animal life was entirely confined to the seas. By the end of the period, all the phyla of the animal kingdom existed, except vertebrates. The characteristic animals of the Cambrian period were the trilobites, a primitive form of arthropod, which reached their fullest development in this period and became extinct by the end of the Paleozoic era. The earliest snails appeared in this period, as did the cephalopod mollusks. Other groups represented in the Cambrian period were brachiopods, bryozoans, and foraminifers. Plants of the Cambrian period included seaweeds in the oceans and lichens on land.

The most characteristic animals of the Ordovician period (500 million to 435 million years ago) were the graptolites, which were small, colonial hemichordates (animals possessing an anatomical structure suggesting part of a spinal cord). The first vertebrates-primitive fish-and the earliest corals emerged during the Ordovician period. The largest animal of this period was a cephalopod mollusk that had a shell about 3 m. (about 10 ft.) in length. Plants of this period resembled those of the Cambrian period.

The most important evolutionary development of the Silurian period (435 million to 410 million years ago) was that of the first air-breathing animal, a scorpion. Fossils of this creature have been found in Scandinavia and Great Britain. The first fossil records of vascular plants-that is, land plants with tissue that carries food-appeared in the Silurian period. They were simple plants that had not developed separate stems and leaves.

The dominant forms of animal life in the Devonian period (410 million to 360 million years ago) were fish of various types, including sharks, lungfish, armoured fish, and primitive forms of ganoid (hard-scaled) fish that were probably the evolutionary ancestors of amphibians. Fossil remains found in Pennsylvania and Greenland suggest that early forms of amphibia may already have existed during the Devonian period. Early animal forms included corals, starfish, sponges, and trilobites. The earliest known insect was found in Devonian rock.

The Devonian is the first period from which any considerable number of fossilized plants have been preserved. During this period, the first woody plants developed, and by the end of the period, land-growing forms included seed ferns, ferns, scouring rushes, and scale trees, the modern relative of club moss. Although the present-day equivalents of these groups are mostly small plants, they developed into treelike forms in the Devonian period. Fossil evidence shows that forests existed in Devonian times, and petrified stumps of some larger plants from the period measure about 60 cm. (about 24 in.) in diameter.

The Carboniferous period lasted from 360 million to 290 million years ago. During the first part of this period, sometimes called the Mississippian period (360 million to 330 million years ago), the seas contained a variety of echinoderms and foraminifers, and most forms of animal life that appeared in the Devonian. A group of sharks, the Cestraciontes, or shell-crushers, was dominant among the larger marine animals. The predominant group of land animals was the Stegocephalia, an order of primitive, lizard-like amphibians that developed from the lungfish. The various forms of land plants became diversified and grew larger, particularly those that grew in low-lying swampy areas.

The second part of the Carboniferous, sometimes called the Pennsylvanian period (330 million to 290 million years ago), saw the evolution of the first reptiles, a group that developed from the amphibians and lived entirely on land. Other land animals included spiders, snails, scorpions, more than 800 species of cockroaches, and the largest insect ever evolved, a species resembling the dragonfly, with a wingspread of about 74 cm. (about 29 in.). The largest plants were the scale trees, which had tapered trunks that measured as much as 1.8 m. (6 ft.) in diameter at the base and 30 m. (100 ft.) in height. Primitive gymnosperms known as cordaites, which had pithy stems surrounded by a woody shell, were more slender but even taller. The first true conifers, forms of advanced gymnosperms, also developed during the Pennsylvanian period.

The chief events of the Permian period (290 million to 240 million years ago) were the disappearance of many forms of marine animals and the rapid spread and evolution of the reptiles. Permian reptiles were generally of two types: lizard-like reptiles that lived entirely on land, and sluggish, semiaquatic types. A comparatively small group of reptiles that evolved in this period, the Theriodontia, were the ancestors of mammals. Most vegetation of the Permian period was composed of ferns and conifers.

The Mesozoic era is often called the Age of Reptiles, because the reptile class was dominant on land throughout the age. The Mesozoic era lasted about 175 million years and includes the Triassic, Jurassic, and Cretaceous periods. Index fossils from this era include a group of extinct cephalopods called ammonites, and extinct forms of sand dollars and sea urchins.

The most notable of the Mesozoic reptiles, the dinosaur, first evolved in the Triassic period (240 million to 205 million years ago). The Triassic dinosaurs were not as large as their descendants in later Mesozoic times. They were comparatively slender animals that ran on their hind feet, balancing their bodies with heavy, fleshy tails, and seldom exceeded 4.5 m. (15 ft.) in length. Other reptiles of the Triassic period included such aquatic creatures as the ichthyosaurs, and a group of flying reptiles, the pterosaurs.

The first mammals also appeared during this period. The fossil remains of these animals are fragmentary, but the animals were apparently small and reptilian in appearance. In the sea, Teleostei, the first ancestors of the modern bony fishes, made their appearance. The plant life of the Triassic seas included a large variety of marine algae. On land, the dominant vegetation included various evergreens, such as ginkgos, conifers, and palms. Small scouring rushes and ferns still existed, but the larger members of these groups had become extinct.

During the Jurassic period (205 million to 138 million years ago), dinosaurs continued to evolve in a wide range of size and diversity. Types included heavy four-footed sauropods, such as Apatosaurus (formerly Brontosaurus); two-footed carnivorous dinosaurs, such as Allosaurus; two-footed vegetarian dinosaurs, such as Camptosaurus; and four-footed armoured dinosaurs, such as Stegosaurus. Winged reptiles included the pterodactyl, which, during this period, ranged in size from extremely small species to those with wingspreads of 1.2 m. (4 ft.). Marine reptiles included the plesiosaurs, which had broad, flat bodies like those of turtles, with long necks and large flippers for swimming; the ichthyosaurs, which resembled dolphins; and primitive crocodiles.

The mammals of the Jurassic period consisted of four orders, all of which were smaller than small modern dogs. Many insects of the modern orders, including moths, flies, beetles, grasshoppers, and termites appeared during the Jurassic period. Shellfish included lobsters, shrimp, and ammonites, and the extinct group of belemnites, which resembled squid and had cigar-shaped internal shells. Plant life of the Jurassic period was dominated by the cycads, which resembled thick-stemmed palms. Fossils of most species of Jurassic plants are widely distributed in temperate zones and polar regions, suggesting that the climate was uniformly mild.

The reptiles were still the dominant form of animal life in the Cretaceous period (138 million to 65 million years ago). The four types of dinosaurs found in the Jurassic also lived during this period, and a fifth type, the horned dinosaurs, also appeared. By the end of the Cretaceous, about 65 million years ago, all these creatures had become extinct. The largest of the pterodactyls lived during this period. Pterodactyl fossils discovered in Texas have wingspreads of up to 15.5 m. (50 ft.). Other reptiles of the period include the first snakes and lizards. Several types of Cretaceous birds have been discovered, including Hesperornis, a diving bird about 1.8 m. (about 6 ft.) in length, which had only vestigial wings and was unable to fly. Mammals of the period included the first marsupials, which strongly resembled the modern opossum, and the first placental mammals, which belonged to the group of insectivores. The first crabs developed during this period, and several modern varieties of fish also evolved.

The most important evolutionary advance in the plant kingdom during the Cretaceous period was the development of deciduous plants, the earliest fossils of which appear in early Cretaceous rock formations. By the end of the period, many modern varieties of trees and shrubs had made their appearance. They represented more than 90 percent of the known plants of the period. Mid-Cretaceous fossils include remains of beech, holly, laurel, maple, oak, plane tree, and walnut. Some paleontologists believe that these deciduous woody plants first evolved in Jurassic times but grew only in upland areas, where conditions were unfavourable for fossil preservation.

The Cenozoic era (65 million years ago to the present time) is divided into the Tertiary period (65 million to 1.6 million years ago) and the Quaternary period (1.6 million years ago to the present). However, because scientists have so much more information about this era, they tend to focus on the epochs that make up each period. During the first part of the Cenozoic era, an abrupt transition from the Age of Reptiles to the Age of Mammals occurred, when the large dinosaurs and other reptiles that had dominated the life of the Mesozoic era disappeared.

The Paleocene epoch (65 million to 55 million years ago) marks the beginning of the Cenozoic era. Seven groups of Paleocene mammals are known. All of them appear to have developed in northern Asia and to have migrated to other parts of the world. These primitive mammals had many features in common. They were small, with no species exceeding the size of a small modern bear. They were four-footed, with five toes on each foot, and they walked on the soles of their feet. Most of them had slim heads with narrow muzzles and small brain cavities. The predominant mammals of the period were members of three groups that are now extinct. They were the creodonts, which were the ancestors of modern carnivores; the amblypods, which were small, heavy-bodied animals; and the condylarths, which were light-bodied herbivorous animals with small brains. The Paleocene groups that have survived are the marsupials, the insectivores, the primates, and the rodents.

During the Eocene epoch (55 million to 38 million years ago), several direct evolutionary ancestors of modern animals appeared. Among these animals-all of which were small in stature-were the horse, rhinoceros, camel, rodent, and monkey. The creodonts and amblypods continued to develop during the epoch, but the condylarths became extinct before it ended. The first aquatic mammals, ancestors of modern whales, also appeared in Eocene times, as did such modern birds as eagles, pelicans, quail, and vultures. Changes in vegetation during the Eocene epoch were limited chiefly to the migration of types of plants in response to climate changes.

During the Oligocene epoch (38 million to 24 million years ago), most of the archaic mammals from earlier epochs of the Cenozoic era disappeared. In their place appeared representatives of several modern mammalian groups. The creodonts became extinct, and the first true carnivores, resembling dogs and cats, evolved. The first anthropoid apes also lived during this time, but they became extinct in North America by the end of the epoch. Two groups of animals that are now extinct flourished during the Oligocene epoch: the titanotheres, which are related to the rhinoceros and the horse; and the oreodonts, which were small, dog-like, grazing animals.

The development of mammals during the Miocene epoch (24 million to five million years ago) was influenced by an important evolutionary development in the plant kingdom: the first appearance of grasses. These plants, which were ideally suited for forage, encouraged the growth and development of grazing animals such as horses, camels, and rhinoceroses, which were abundant during the epoch. During the Miocene epoch, the mastodon evolved, and in Europe and Asia a gorilla-like ape, Dryopithecus, was common. Various types of carnivores, including cats and wolflike dogs, ranged over many parts of the world.

The paleontology of the Pliocene epoch (five million to 1.6 million years ago) does not differ much from that of the Miocene, although the period is regarded by many zoologists as the climax of the Age of Mammals. The Pleistocene Epoch (1.6 million to 10,000 years ago) in both Europe and North America was marked by an abundance of large mammals, most of which were essentially modern in type. Among them were buffalo, elephants, mammoths, and mastodons. Mammoths and mastodons became extinct before the end of the epoch. In Europe, antelope, lions, and hippopotamuses also appeared. Carnivores included badgers, foxes, lynx, otters, pumas, and skunks, and now-extinct species such as the giant saber-toothed tigers. In North America, the first bears made their appearance as migrants from Asia. The armadillo and ground sloth migrated from South America to North America, and the musk-ox ranged southward from the Arctic regions. Modern human beings also emerged during this epoch.

Earth is one of nine planets in the solar system, the only planet known to harbor life, and the ‘home’ of human beings. From space Earth resembles a big blue marble with swirling white clouds floating above blue oceans. About 71 percent of Earth’s surface is covered by water, which is essential to life. The rest is land, mostly as continents that rise above the oceans.

Earth’s surface is surrounded by a layer of gases known as the atmosphere, which extends upward from the surface, slowly thinning out into space. Below the surface is a hot interior of rocky material and two core layers composed of the metals nickel and iron in solid and liquid form.

Unlike the other planets, Earth has a unique set of characteristics ideally suited to supporting life as we know it. It is neither too hot, like Mercury, the closest planet to the Sun, nor too cold, like distant Mars and the even more distant outer planets-Jupiter, Saturn, Uranus, Neptune, and tiny Pluto. Earth’s atmosphere includes just the right amounts of gases that trap heat from the Sun, resulting in a moderate climate suitable for water to exist in liquid form. The atmosphere also helps block radiation from the Sun that would be harmful to life. Earth’s atmosphere distinguishes it from the planet Venus, which is otherwise much like Earth. Venus is about the same size and mass as Earth and is neither much farther from nor much nearer to the Sun. Nevertheless, because Venus has too much heat-trapping carbon dioxide in its atmosphere, its surface is extremely hot-462°C (864°F)-hot enough to melt lead and too hot for life to exist.

Although Earth is the only planet known to have life, scientists do not rule out the possibility that life may once have existed on other planets or their moons, or may exist today in primitive form. Mars, for example, has many features that resemble river channels, suggesting that liquid water once flowed on its surface. If so, life may also have evolved there, and evidence for it may one day be found in fossil form. Water still exists on Mars, but it is frozen in polar ice caps, in permafrost, and possibly in rocks below the surface.

For thousands of years, human beings could only wonder about Earth and the other observable planets in the solar system. Many early ideas, such as the notions that Earth was a sphere and that it travelled around the Sun, were based on brilliant reasoning. However, it was only with the development of the scientific method and scientific instruments, especially in the 18th and 19th centuries, that humans began to gather data that could be used to verify theories about Earth and the rest of the solar system. By studying fossils found in rock layers, for example, scientists realized that the Earth was much older than previously believed. With the use of telescopes, new planets such as Uranus, Neptune, and Pluto were discovered.

In the second half of the 20th century, more advances in the study of Earth and the solar system occurred due to the development of rockets that could send spacecraft beyond Earth. Human beings can study and observe Earth from space with satellites equipped with scientific instruments. Astronauts landed on the Moon and gathered ancient rocks that revealed much about the early solar system. During this remarkable advancement in human history, humans also sent unmanned spacecraft to the other planets and their moons. Spacecraft have now visited all of the planets except Pluto. The study of other planets and moons has provided new insights about Earth, just as the study of the Sun and other stars like it has helped shape new theories about how Earth and the rest of the solar system formed.

From this recent space exploration, we now know that Earth is one of the most geologically active of all the planets and moons in the solar system. Earth is constantly changing. Over long periods land is built up and worn away, oceans are formed and re-formed, and continents move around, break up, and merge.

Life itself contributes to changes on Earth, especially in the way living things can alter Earth’s atmosphere. For example, Earth once had the same amount of carbon dioxide in its atmosphere as Venus now has, but early forms of life helped remove this carbon dioxide over millions of years. These life forms also added oxygen to Earth’s atmosphere and made it possible for animal life to evolve on land.

A variety of scientific fields have broadened our knowledge about Earth, including biogeography, climatology, geology, geophysics, hydrology, meteorology, oceanography, and zoogeography. Collectively, these fields are known as Earth science. By studying Earth’s atmosphere, its surface, and its interior and by studying the Sun and the rest of the solar system, scientists have learned much about how Earth came into existence, how it changed, and why it continues to change.

Earth is the third planet from the Sun, after Mercury and Venus. The average distance between Earth and the Sun is 150 million km (93 million mi). Earth and all the other planets in the solar system revolve, or orbit, around the Sun due to the force of gravitation. The Earth travels at a velocity of about 107,000 km/h (about 67,000 mph) as it orbits the Sun. All but one of the planets orbit the Sun in the same plane-that is, if an imaginary line were extended from the centre of the Sun to the outer regions of the solar system, the orbital paths of the planets would intersect that line. The exception is Pluto, which has an eccentric (unusual) orbit.

Earth’s orbital path is not quite a perfect circle but instead is elliptical (oval-shaped). For example, at maximum distance Earth is about 152 million km (about 95 million mi) from the Sun; at minimum distance Earth is about 147 million km (about 91 million mi) from the Sun. If Earth orbited the Sun in a perfect circle, it would always be the same distance from the Sun.
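As a rough arithmetic sketch (using the approximate round figures quoted above, and treating the orbit as a circle of the mean radius), the maximum and minimum distances imply a small orbital eccentricity, and the mean distance together with the length of the year roughly reproduces the quoted orbital speed:

```python
import math

# Approximate Earth-Sun distances quoted above (assumed round values).
aphelion_km = 152e6    # maximum distance
perihelion_km = 147e6  # minimum distance
mean_km = 150e6        # average distance

# Eccentricity of an ellipse from its extreme distances.
ecc = (aphelion_km - perihelion_km) / (aphelion_km + perihelion_km)

# Approximate the orbit as a circle of the mean radius.
year_hours = 365.2422 * 24
speed_kmh = 2 * math.pi * mean_km / year_hours

print(round(ecc, 3))     # ~0.017: nearly, but not quite, circular
print(round(speed_kmh))  # ~107,500 km/h, close to the quoted 107,000
```

The tiny eccentricity is why the orbit looks circular to the eye even though it is technically an ellipse.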

The solar system, in turn, is part of the Milky Way Galaxy, a collection of billions of stars bound together by gravity. The Milky Way has arm-like discs of stars that spiral out from its centre. The solar system is found in one of these spiral arms, known as the Orion arm, which is about two-thirds of the way from the centre of the Galaxy. In most parts of the Northern Hemisphere, this disc of stars is visible on a summer night as a dense band of light known as the Milky Way.

Earth is the fifth largest planet in the solar system. Its diameter, measured around the equator, is 12,756 km (7,926 mi). Earth is not a perfect sphere but is slightly flattened at the poles. Its polar diameter, measured from the North Pole to the South Pole, is slightly less than the equatorial diameter because of this flattening. Although Earth is the largest of the four planets-Mercury, Venus, Earth, and Mars-that make up the inner solar system (the planets closest to the Sun), it is small compared with the giant planets of the outer solar system-Jupiter, Saturn, Uranus, and Neptune. For example, the largest planet, Jupiter, has a diameter at its equator of 143,000 km (89,000 mi), 11 times greater than that of Earth. A famous atmospheric feature on Jupiter, the Great Red Spot, is so large that three Earths would fit inside it.

Earth has one natural satellite, the Moon. The Moon orbits the Earth, completing one revolution along an elliptical path in 27 days 7 hr 43 min 11.5 sec. The Moon orbits the Earth because of the force of Earth’s gravity. However, the Moon also exerts a gravitational force on the Earth. Evidence for the Moon’s gravitational influence can be seen in the ocean tides. A popular theory suggests that the Moon split off from Earth more than four billion years ago when a large meteorite or small planet struck the Earth.

As Earth revolves around the Sun, it rotates, or spins, on its axis, an imaginary line that runs between the North and South poles. The period of one complete rotation is defined as a day and takes 23 hr 56 min 4.1 sec. The period of one revolution around the Sun is defined as a year, or 365.2422 solar days, or 365 days 5 hr 48 min 46 sec. Earth also moves along with the Milky Way Galaxy as the Galaxy rotates and moves through space. It takes more than 200 million years for the stars in the Milky Way to complete one revolution around the Galaxy’s centre.
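The two ways the text states the length of the year can be checked against each other, and the quoted rotation period can be compared with the familiar 24-hour day; a quick sketch:

```python
# The year is given both as 365.2422 solar days and as
# 365 days 5 hr 48 min 46 sec; confirm the two figures agree.
extra_seconds = 5 * 3600 + 48 * 60 + 46   # 5 hr 48 min 46 sec
year_days = 365 + extra_seconds / 86400   # 86,400 seconds per solar day
print(round(year_days, 4))  # 365.2422

# The rotation period (23 hr 56 min 4.1 sec) falls about
# 4 minutes short of the 24-hour solar day.
sidereal_seconds = 23 * 3600 + 56 * 60 + 4.1
print(round(86400 - sidereal_seconds, 1))  # 235.9 seconds
```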

Earth’s axis of rotation is inclined (tilted) 23.5° to its plane of revolution around the Sun. This inclination of the axis creates the seasons and causes the height of the Sun in the sky at noon to increase and decrease as the seasons change. The Northern Hemisphere receives the most energy from the Sun when it is tilted toward the Sun. This orientation corresponds to summer in the Northern Hemisphere and winter in the Southern Hemisphere. The Southern Hemisphere receives maximum energy when it is tilted toward the Sun, corresponding to summer in the Southern Hemisphere and winter in the Northern Hemisphere. Fall and spring occur between these orientations.
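The effect of the tilt on the noon Sun can be sketched with a standard simplification (this formula is an assumption for illustration, not from the text): at mid-northern latitudes the noon elevation is roughly 90° minus the difference between the latitude and the Sun's seasonal declination, which swings between +23.5° and -23.5° because of the axial tilt:

```python
# Simplified noon solar elevation (assumed standard approximation):
# elevation = 90 deg - |latitude - solar declination|.
def noon_sun_elevation(latitude_deg, declination_deg):
    return 90.0 - abs(latitude_deg - declination_deg)

# At 40 deg N the noon Sun stands much higher at the June solstice
# (declination +23.5 deg) than at the December solstice (-23.5 deg).
print(noon_sun_elevation(40, 23.5))   # 73.5 degrees
print(noon_sun_elevation(40, -23.5))  # 26.5 degrees
```

The 47-degree swing between the two solstice elevations is exactly twice the 23.5° tilt, which is why the Sun's noon height tracks the seasons so directly.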

The atmosphere is a layer of different gases that extends from Earth’s surface to the exosphere, the outer limit of the atmosphere, about 9,600 km (6,000 mi) above the surface. Near Earth’s surface, the atmosphere consists almost entirely of nitrogen (78 percent) and oxygen (21 percent). The remaining 1 percent of atmospheric gases consists of argon (0.9 percent); carbon dioxide (0.03 percent); varying amounts of water vapour; and trace amounts of hydrogen, nitrous oxide, ozone, methane, carbon monoxide, helium, neon, krypton, and xenon.

The layers of the atmosphere are the troposphere, the stratosphere, the mesosphere, the thermosphere, and the exosphere. The troposphere is the layer in which weather occurs and extends from the surface to about 16 km (about 10 mi) above sea level at the equator. Above the troposphere is the stratosphere, which has an upper boundary of about 50 km (about 30 mi) above sea level. The layer from 50 to 90 km (30 to 60 mi) is called the mesosphere. At an altitude of about 90 km, temperatures begin to rise. The layer that begins at this altitude is called the thermosphere because of the high temperatures that can be reached in this layer (about 1200°C, or about 2200°F). The region beyond the thermosphere is called the exosphere. The thermosphere and the exosphere overlap with another region of the atmosphere known as the ionosphere, a layer or layers of ionized air extending from almost 60 km (about 40 mi) above Earth’s surface to altitudes of 1,000 km (600 mi) and more.
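The layer boundaries just described can be collected into a small lookup sketch (altitudes in km are the approximate figures above; the 600 km top for the thermosphere is an assumed illustrative value, since the text gives none):

```python
# Approximate upper boundaries of the atmospheric layers, in km.
# The thermosphere's 600 km top is an assumed value for illustration.
LAYERS = [
    ("troposphere", 16),    # weather occurs here (equatorial figure)
    ("stratosphere", 50),   # contains the ozone layer
    ("mesosphere", 90),     # temperatures begin to rise above this
    ("thermosphere", 600),  # very high temperatures
]

def layer_at(altitude_km):
    """Return the atmospheric layer containing a given altitude."""
    for name, top in LAYERS:
        if altitude_km < top:
            return name
    return "exosphere"  # no sharp upper limit; thins into space

print(layer_at(10))   # troposphere
print(layer_at(30))   # stratosphere
print(layer_at(75))   # mesosphere
```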

Earth’s atmosphere and the way it interacts with the oceans and radiation from the Sun are responsible for the planet’s climate and weather. The atmosphere plays a key role in supporting life. Most life on Earth uses atmospheric oxygen for energy in a process known as cellular respiration, which is essential to life. The atmosphere also helps moderate Earth’s climate by trapping radiation from the Sun that is reflected from Earth’s surface. Water vapour, carbon dioxide, methane, and nitrous oxide in the atmosphere act as ‘greenhouse gases’. Like the glass in a greenhouse, they trap infrared, or heat, radiation from the Sun in the lower atmosphere, thereby helping to warm Earth’s surface. Without this greenhouse effect, heat radiation would escape into space, and Earth would be too cold to support most forms of life.

Other gases in the atmosphere are also essential to life. The trace amount of ozone found in Earth’s stratosphere blocks harmful ultraviolet radiation from the Sun. Without the ozone layer, life as we know it could not survive on land. Earth’s atmosphere is also an important part of a phenomenon known as the water cycle or the hydrologic cycle.

The water cycle simply means that Earth’s water is continually recycled between the oceans, the atmosphere, and the land. All of the water that exists on Earth today has been used and reused for billions of years. Very little water has been created or lost during this period of time. Water is always shifting on the Earth’s surface and changing back and forth between ice, liquid water, and water vapour.

The water cycle begins when the Sun heats the water in the oceans and causes it to evaporate and enter the atmosphere as water vapour. Some of this water vapour falls as precipitation directly back into the oceans, completing a short cycle. Some water vapour, however, reaches land, where it may fall as snow or rain. Melted snow or rain enters rivers or lakes on the land. Due to the force of gravity, the water in the rivers eventually empties back into the oceans. Melted snow or rain also may enter the ground. Groundwater may be stored for hundreds or thousands of years, but it will eventually reach the surface as springs or small pools known as seeps. Even snow that forms glacial ice or becomes part of the polar caps and is kept out of the cycle for thousands of years eventually melts or is warmed by the Sun and turned into water vapour, entering the atmosphere and falling again as precipitation. All water that falls on land eventually returns to the ocean, completing the water cycle.

The hydrosphere consists of the bodies of water that cover 71 percent of Earth’s surface. The largest of these are the oceans, which hold more than 97 percent of all water on Earth. Glaciers and the polar ice caps hold just over 2 percent of Earth’s water as solid ice. Only about 0.6 percent is under the surface as groundwater. Nevertheless, groundwater is 36 times more plentiful than water found in lakes, inland seas, rivers, and in the atmosphere as water vapour. Only 0.017 percent of all the water on Earth is found in lakes and rivers. A mere 0.001 percent is found in the atmosphere as water vapour. Most of the water in glaciers, lakes, inland seas, rivers, and groundwater is fresh and can be used for drinking and agriculture. The water in the oceans, however, contains about 3.5 percent dissolved salts, making it unsuitable for drinking or agriculture unless it is treated to remove the salts.
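A quick consistency sketch of the percentages above (the combined surface-and-atmosphere figure is an assumption built from the quoted 0.017 and 0.001 percent values):

```python
# Shares of Earth's water, as percentages quoted in the text.
groundwater = 0.6
lakes_rivers = 0.017
atmosphere = 0.001

# Groundwater compared with lakes, rivers, and atmospheric vapour.
ratio = groundwater / (lakes_rivers + atmosphere)
print(round(ratio))  # ~33, roughly the quoted factor of 36
```

The rough agreement suggests the "36 times" figure was computed against a slightly different breakdown (for example, counting inland seas separately), but the order of magnitude clearly holds.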

The crust consists of the continents, other land areas, and the basins, or floors, of the oceans. The dry land of Earth’s surface is called the continental crust. It is about 15 to 75 km (9 to 47 mi) thick. The oceanic crust is thinner than the continental crust. Its average thickness is 5 to 10 km (3 to 6 mi). The crust has a definite lower boundary called the Mohorovicic discontinuity, or simply the Moho. This boundary separates the crust from the underlying mantle, which is much thicker and is part of Earth’s interior.

Oceanic crust and continental crust differ in the type of rocks they contain. There are three main types of rocks: igneous, sedimentary, and metamorphic. Igneous rocks form when molten rock, called magma, cools and solidifies. Sedimentary rocks are usually created by the breakdown of igneous rocks. They have a tendency to form in layers as small particles of other rocks or as the mineralized remains of dead animals and plants that have fused over time. The remains of dead animals and plants occasionally become mineralized in sedimentary rock and are recognizable as fossils. Metamorphic rocks form when sedimentary or igneous rocks are altered by heat and pressure deep underground.

Oceanic crust consists of dark, dense igneous rocks, such as basalt and gabbro. Continental crust consists of lighter coloured, less dense igneous rock, such as granite and diorite. Continental crust also includes metamorphic rocks and sedimentary rocks.

The biosphere is the part of Earth that can support life. It ranges from about 10 km (about 6 mi) into the atmosphere to the deepest ocean floor. For a long time, scientists believed that all life depended on energy from the Sun and consequently could only exist where sunlight penetrated. In the 1970s, however, scientists discovered various forms of life around hydrothermal vents on the floor of the Pacific Ocean where no sunlight penetrated. They learned that primitive bacteria formed the basis of this living community and that the bacteria derived their energy from a process called chemosynthesis that did not depend on sunlight. Some scientists believe that the biosphere may extend deeply into the Earth’s crust. They have recovered what they believe are primitive bacteria from deeply drilled holes below the surface.

Earth’s surface has been constantly changing ever since the planet formed. Most of these changes have been gradual, taking place over millions of years. Nevertheless, these gradual changes have resulted in radical modifications, involving the formation, erosion, and re-formation of mountain ranges, the movement of continents, the creation of huge super-continents, and the breakup of super-continents into smaller continents.

The weathering and erosion that result from the water cycle are among the principal factors responsible for changes to Earth’s surface. Another principal factor is the movement of Earth’s continents and sea-floors and the buildup of mountain ranges due to a phenomenon known as plate tectonics. Heat is the basis for all these changes. Heat in Earth’s interior is believed to be responsible for continental movement, mountain building, and the creation of new sea-floor in ocean basins. Heat from the Sun is responsible for the evaporation of ocean water and the resulting precipitation that causes weathering and erosion. In effect, heat in Earth’s interior helps build up Earth’s surface while heat from the Sun helps wear down the surface.

Weathering is the breakdown of rock at and near the surface of Earth. Most rocks originally formed in a hot, high-pressure environment below the surface where there was little exposure to water. Once the rocks reached Earth’s surface, however, they were subjected to temperature changes and exposed to water. When rocks are subjected to these kinds of surface conditions, the minerals they contain tend to change. These changes make up the process of weathering. There are two types of weathering: physical weathering and chemical weathering.

Physical weathering involves a decrease in the size of rock material. Freezing and thawing of water in rock cavities, for example, splits rock into small pieces because water expands when it freezes.

Chemical weathering involves a chemical change in the composition of rock. For example, feldspar, a common mineral in granite and other rocks, reacts with water to form clay minerals, resulting in a new substance with totally different properties than the parent feldspar. Chemical weathering is of significance to humans because it creates the clay minerals that are important components of soil, the basis of agriculture. Chemical weathering also releases dissolved forms of sodium, calcium, potassium, magnesium, and other chemical elements into surface water and groundwater. These elements are carried by surface water and groundwater to the sea and are the sources of dissolved salts in the sea.

Erosion is the process that removes loose and weathered rock and carries it to a new site. Water, wind, and glacial ice combined with the force of gravity can cause erosion.

Erosion by running water is by far the most common process of erosion. It takes place over a longer period of time than other forms of erosion. When water from rain or melted snow moves downhill, it can carry loose rock or soil with it. Erosion by running water forms the familiar gullies and V-shaped valleys that cut into most landscapes. The force of the running water removes loose particles formed by weathering. In the process, gullies and valleys are lengthened, widened, and deepened. Often, water overflows the banks of the gullies or river channels, resulting in floods. Each new flood carries more material away to increase the size of the valley. Meanwhile, weathering loosens ever more material, so the process continues.

Erosion by glacial ice is less common, but it can cause the greatest landscape changes in the shortest amount of time. Glacial ice forms in a region where snow fails to melt in the spring and summer and instead accumulates, building up a thickening mass of ice. For major glaciers to form, this lack of snowmelt has to occur for many years in areas with high precipitation. As ice accumulates and thickens, it flows as a solid mass. As it flows, it has a tremendous capacity to erode soil and even solid rock. Ice is a major factor in shaping some landscapes, especially mountainous regions. Glacial ice provides much of the spectacular scenery in these regions. Features such as horns (sharp mountain peaks), arêtes (sharp ridges), glacially formed lakes, and U-shaped valleys are all the results of glacial erosion.

Wind is an important cause of erosion only in arid (dry) regions. Wind carries sand and dust, which can scour even solid rock. Many factors determine the rate and kind of erosion that occurs in a given area. The climate of an area determines the distribution, amount, and kind of precipitation that the area receives and thus the type and rate of weathering. An area with an arid climate erodes differently than an area with a humid climate. The elevation of an area also plays a role by determining the potential energy of running water. The higher the elevation, the more energetically water will flow due to the force of gravity. The type of bedrock in an area (sandstone, granite, or shale) can determine the shapes of valleys and slopes, and the depth of streams.

A landscape’s geologic age-that is, how long current conditions of weathering and erosion have affected the area-determines its overall appearance. Younger landscapes tend to be more rugged and angular in appearance. Older landscapes tend to have more rounded slopes and hills. The oldest landscapes tend to be low-lying with broad, open river valleys and low, rounded hills. The overall effect of the wearing down of an area is to level the land; the tendency is toward the reduction of all land surfaces to sea level.

Opposing this tendency toward levelling is a force responsible for raising mountains and plateaus and for creating new landmasses. These changes to Earth’s surface occur in the outermost solid portion of Earth, known as the lithosphere. The lithosphere consists of the crust and another region known as the upper mantle and is approximately 65 to 100 km (40 to 60 mi) thick. Compared with the interior of the Earth, however, this region is relatively thin. The lithosphere is thinner in proportion to the whole Earth than the skin of an apple is to the whole apple.

Scientists believe that the lithosphere is broken into a series of plates, or segments. According to the theory of plate tectonics, these plates move around on Earth’s surface over long periods. Tectonics comes from the Greek word tektonikos, which means ‘builder’.

According to the theory, the lithosphere is divided into large and small plates. The largest plates include the Pacific plate, the North American plate, the Eurasian plate, the Antarctic plate, the Indo-Australian plate, and the African plate. Smaller plates include the Cocos plate, the Nazca plate, the Philippine plate, and the Caribbean plate. Plate sizes vary a great deal. The Cocos plate is 2,000 km (1,000 mi) wide, while the Pacific plate is nearly 14,000 km (nearly 9,000 mi) wide.

These plates move in three different ways in relation to each other. They pull apart or move away from each other, they collide or move against each other, or they slide past each other as they move sideways. The movement of these plates helps explain many geological events, such as earthquakes and volcanic eruptions and mountain building and the formation of the oceans and continents.

When the plates pull apart, two types of phenomena come about, depending on whether the movement takes place in the oceans or on land. When plates pull apart on land, deep valleys known as rift valleys form. An example of a rift valley is the Great Rift Valley that extends from Syria in the Middle East to Mozambique in Africa. When plates pull apart in the oceans, long, sinuous chains of volcanic mountains called mid-ocean ridges form, and new sea-floor is created at the site of these ridges. Rift valleys are also present along the crests of the mid-ocean ridges.

Most scientists believe that gravity and heat from the interior of the Earth cause the plates to move apart and to create new sea-floor. According to this explanation, molten rock known as magma rises from Earth’s interior to form hot spots beneath the ocean floor. As two oceanic plates pull apart from each other in the middle of the oceans, a crack, or rupture, appears and forms the mid-ocean ridges. These ridges exist in all the world’s ocean basins and resemble the seams of a baseball. The molten rock rises through these cracks and creates new sea-floor.

When plates collide or push against each other, regions called convergent plate margins form. Along these margins, one plate is usually forced to dive below the other. As that plate dives, it triggers the melting of the surrounding lithosphere and of a region just below it known as the asthenosphere. These pockets of molten crust rise behind the margin through the overlying plate, creating curved chains of volcanoes known as arcs. This process is called subduction.

If one plate consists of oceanic crust and the other consists of continental crust, the denser oceanic crust will dive below the continental crust. If both plates are oceanic crust, then either may be subducted. If both are continental crust, subduction can continue for a brief while but will eventually end because continental crust is not dense enough to be forced very far into the upper mantle.

The results of this subduction process are readily visible on a map showing that 80 percent of the world’s volcanoes rim the Pacific Ocean, where plates are colliding against each other. The subduction zone created by the collision of two oceanic plates-the Pacific plate and the Philippine plate-can also create a trench. Such a trench resulted in the formation of the deepest point on Earth, the Mariana Trench, which is estimated to be 11,033 m (36,198 ft) below sea level.

On the other hand, when two continental plates collide, mountain building occurs. The collision of the Indo-Australian plate with the Eurasian plate has produced the Himalayan Mountains. This collision resulted in the highest point of Earth, Mount Everest, which is 8,850 m (29,035 ft) above sea level.

Finally, some of Earth’s plates neither collide nor pull apart but instead slide past each other. These regions are known as transform margins. Few volcanoes occur in these areas because neither plate is forced down into Earth’s interior and little melting occurs. Earthquakes, however, are abundant as the two rigid plates slide past each other. The San Andreas Fault in California is a well-known example of a transform margin.

The movement of plates occurs at a slow pace, at an average rate of only 2.5 cm (one in) per year. Still, over millions of years this gradual movement results in radical changes. Current plate movement is making the Pacific Ocean and Mediterranean Sea smaller, the Atlantic Ocean larger, and the Himalayan Mountains higher.
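How the quoted average rate adds up over geologic time can be sketched directly:

```python
# Cumulative plate drift at the quoted average rate of 2.5 cm/year.
def plate_drift_km(rate_cm_per_year, years):
    return rate_cm_per_year * years / 100 / 1000  # cm -> m -> km

# Over 100 million years a plate can travel thousands of kilometres,
# enough to reshape ocean basins and raise mountain ranges.
print(plate_drift_km(2.5, 100_000_000))  # 2500.0 km
```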

The interior of Earth plays an important role in plate tectonics. Scientists believe it is also responsible for Earth’s magnetic field. This field is vital to life because it shields the planet’s surface from harmful cosmic rays and from a steady stream of energetic particles from the Sun known as the solar wind.

Earth’s interior consists of the mantle and the core. The mantle and core make up by far the largest part of Earth’s mass. The distance from the base of the crust to the centre of the core is about 6,400 km (about 4,000 mi).

Scientists have learned about Earth’s interior by studying rocks that formed in the interior and rose to the surface. The study of meteorites, which are believed to be made of the same material that formed the Earth and its interior, has also offered clues about Earth’s interior. Finally, seismic waves generated by earthquakes provide geophysicists with information about the composition of the interior. The sudden movement of rocks during an earthquake causes vibrations that transmit energy through the Earth as waves. The way these waves travel through the interior of Earth reveals the nature of materials inside the planet.

The mantle consists of three parts: the lower part of the lithosphere, the region below it known as the asthenosphere, and the region below the asthenosphere called the lower mantle. The entire mantle extends from the base of the crust to a depth of about 2,900 km (about 1,800 mi). Scientists believe the asthenosphere is made up of mushy plastic-like rock with pockets of molten rock. The term asthenosphere is derived from Greek and means ‘a weak layer’. The asthenosphere’s soft, plastic quality allows plates in the lithosphere above it to shift and slide on top of the asthenosphere. This shifting of the lithosphere’s plates is the source of most tectonic activity. The asthenosphere is also the source of the basaltic magma that makes up much of the oceanic crust and rises through volcanic vents on the ocean floor.

The mantle consists mostly of solid iron-magnesium silicate rock mixed with many other minor components, including radioactive elements. However, even this solid rock can flow like a ‘sticky’ liquid when it is subjected to enough heat and pressure.

The core is divided into two parts, the outer core and the inner core. The outer core is about 2,260 km (about 1,404 mi) thick. The outer core is a liquid region composed mostly of iron, with smaller amounts of nickel and sulfur in liquid form. The inner core is about 1,220 km (about 758 mi) thick. The inner core is solid and is composed of iron, nickel, and sulfur in solid form. The inner core and the outer core also contain a small percentage of radioactive material. The existence of radioactive material is one source of heat in Earth’s interior because as radioactive material decays, it gives off heat. Temperatures in the inner core may be as high as 6650°C (12,000°F).

Scientists believe that Earth’s liquid iron core helps generate the magnetic field that surrounds Earth and shields the planet from harmful cosmic rays and the Sun’s solar wind. The idea that Earth is like a giant magnet was first proposed in 1600 by English physician and natural philosopher William Gilbert. Gilbert proposed the idea to explain why the magnetized needle in a compass points north. According to Gilbert, Earth’s magnetic field creates a magnetic north pole and a magnetic south pole. The magnetic poles do not correspond to the geographic North and South poles, however. Moreover, the magnetic poles wander and are not always in the same place. The north magnetic pole is currently close to Ellef Ringnes Island in the Queen Elizabeth Islands near the boundary of Canada’s Northwest Territories with Nunavut. The south magnetic pole lies just off the coast of Wilkes Land, Antarctica.

Not only do the magnetic poles wander, but they also reverse their polarity-that is, the north magnetic pole becomes the south magnetic pole and vice versa. Magnetic reversals have occurred at least 170 times over the past 100 million years. The reversals occur on average about every 200,000 years and take place gradually over a period of several thousand years. Scientists still do not understand why these magnetic reversals occur but think they may be related to Earth’s rotation and changes in the flow of liquid iron in the outer core.

Some scientists theorize that the flow of liquid iron in the outer core sets up electrical currents that produce Earth’s magnetic field. Known as the dynamo theory, this theory may be the best explanation yet for the origin of the magnetic field. Earth’s magnetic field operates in a region above Earth’s surface known as the magnetosphere. The magnetosphere is shaped in some respects like a teardrop with a long tail that trails away from the Earth due to the force of the solar wind.

Inside the magnetosphere are the Van Allen radiation belts, named for the American physicist James A. Van Allen, who discovered them in 1958. The Van Allen belts are regions where charged particles from the Sun and from cosmic rays are trapped and sent into spiral paths along Earth’s magnetic field lines. The radiation belts thereby shield Earth’s surface from these highly energetic particles. Occasionally, however, due to extremely strong magnetic fields on the Sun’s surface, which are visible as sunspots, a brief burst of highly energetic particles streams along with the solar wind. Because Earth’s magnetic field lines converge and are closest to the surface at the poles, some of these energetic particles sneak through and interact with Earth’s atmosphere, creating the phenomenon known as an aurora.

Most scientists believe that the Earth, Sun, and all of the other planets and moons in the solar system took form about 4.6 billion years ago, originating from an enormous cloud of dust and gas known as the solar nebula. The gas and dust in this solar nebula originated in a star that ended its life in an explosion known as a supernova. The solar nebula consisted principally of hydrogen, the lightest element, but the nebula was also seeded with a smaller percentage of heavier elements, such as carbon and oxygen. All of the chemical elements we know were originally made in the star that became a supernova. Our bodies are made of these same chemical elements. Therefore, all of the elements in our solar system, including all of the elements in our bodies, originally came from this star-seeded solar nebula.

Due to the force of gravity, tiny clumps of gas and dust began to form in the early solar nebula. As these clumps came together and grew larger, they caused the solar nebula to contract in on itself. The contraction caused the cloud of gas and dust to flatten in the shape of a disc. As the clumps continued to contract, they became very dense and hot. Eventually the nuclei of hydrogen became so dense that they began to fuse in the innermost part of the cloud, and these nuclear reactions gave birth to the Sun. The fusion of hydrogen nuclei in the Sun is the source of its energy.

Many scientists favour the planetesimal theory for how the Earth and other planets formed out of this solar nebula. This theory helps explain why the inner planets became rocky while the outer planets, except Pluto, are made up mostly of gases. The theory also explains why all of the planets orbit the Sun in the same plane.

According to this theory, temperatures decreased with increasing distance from the centre of the solar nebula. In the inner region, where Mercury, Venus, Earth, and Mars formed, temperatures were low enough that certain heavier elements, such as iron and the other heavy compounds that make up rock, could condense, that is, change from a gas to a solid or liquid. Due to the force of gravity, small clumps of this rocky material eventually came together with the dust in the original solar nebula to form protoplanets, or planetesimals (small rocky bodies). These planetesimals collided, broke apart, and re-formed until they became the four inner rocky planets. The inner region, however, was still too hot for other light elements, such as hydrogen and helium, to be retained. These elements could only exist in the outermost part of the disc, where temperatures were lower. As a result, two of the outer planets, Jupiter and Saturn, are by and large made of hydrogen and helium, which are also the dominant elements in the atmospheres of Uranus and Neptune.

Within the planetesimal Earth, heavier matter sank to the centre and lighter matter rose toward the surface. Most scientists believe that Earth was never truly molten and that this transfer of matter took place in the solid state. Much of the matter that went toward the centre contained radioactive material, an important source of Earth’s internal heat. As heavier material moved inward, lighter material moved outward, the planet became layered, and the layers of the core and mantle were formed. This process is called differentiation.

Not long after they formed, more than four billion years ago, the Earth and the Moon underwent a period when they were bombarded by meteorites, the rocky debris left over from the formation of the solar system. The impact craters created during this period of heavy bombardment are still visible on the Moon’s surface, which is unchanged. Earth’s craters, however, were long ago erased by weathering, erosion, and mountain building. Because the Moon has no atmosphere, its surface has not been subjected to weathering or erosion. Thus, the evidence of meteorite bombardment remains.

Energy released from the meteorite impacts created extremely high temperatures on Earth that melted the outer part of the planet and created the crust. By four billion years ago, both the oceanic and continental crust had formed, and the oldest rocks were created. These rocks are known as the Acasta Gneiss and are found in the Canadian territory of Nunavut. Due to the meteorite bombardment, the early Earth was too hot for liquid water to exist, making life impossible.

Geologists divide the history of the Earth into three eons: the Archaean Eon, which lasted from around four billion to 2.5 billion years ago; the Proterozoic Eon, which lasted from 2.5 billion to 543 million years ago; and the Phanerozoic Eon, which lasted from 543 million years ago to the present. Each eon is subdivided into different eras. For example, the Phanerozoic Eon includes the Paleozoic Era, the Mesozoic Era, and the Cenozoic Era. In turn, eras are further divided into periods. For example, the Paleozoic Era includes the Cambrian, Ordovician, Silurian, Devonian, Carboniferous, and Permian Periods.

The Archaean Eon is subdivided into four eras, the Eoarchean, the Paleoarchean, the Mesoarchean, and the Neoarchean. The beginning of the Archaean is generally dated as the age of the oldest terrestrial rocks, which are about four billion years old. The Archaean Eon came to an end 2.5 billion years ago when the Proterozoic Eon began. The Proterozoic Eon is subdivided into three eras: the Paleoproterozoic Era, the Mesoproterozoic Era, and the Neoproterozoic Era. The Proterozoic Eon lasted from 2.5 billion years ago to 543 million years ago when the Phanerozoic Eon began. The Phanerozoic Eon is subdivided into three eras: the Paleozoic Era from 543 million to 248 million years ago, the Mesozoic Era from 248 million to 65 million years ago, and the Cenozoic Era from 65 million years ago to the present.
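The eon and era divisions listed above can be sketched as a small lookup table. The dates (in millions of years ago) are those given in the text; the table and function names are invented for illustration:

```python
# Eons with their time spans (millions of years ago) and constituent eras,
# using the boundary dates quoted in the text.
GEOLOGIC_TIME = {
    "Archaean": {
        "span_mya": (4000, 2500),
        "eras": ["Eoarchean", "Paleoarchean", "Mesoarchean", "Neoarchean"],
    },
    "Proterozoic": {
        "span_mya": (2500, 543),
        "eras": ["Paleoproterozoic", "Mesoproterozoic", "Neoproterozoic"],
    },
    "Phanerozoic": {
        "span_mya": (543, 0),
        "eras": ["Paleozoic", "Mesozoic", "Cenozoic"],
    },
}

def eon_for(age_mya: float) -> str:
    """Return the eon containing a given age (in millions of years ago)."""
    for eon, info in GEOLOGIC_TIME.items():
        start, end = info["span_mya"]
        if start >= age_mya >= end:
            return eon
    raise ValueError("age predates the oldest terrestrial rocks")

print(eon_for(1000))  # Proterozoic
print(eon_for(100))   # Phanerozoic
```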

Geologists base these divisions on the study and dating of rock layers, or strata, including the fossilized remains of plants and animals found in those layers. Until the late 1800s, scientists could only determine the relative ages of rock strata. They knew that overall the top layers of rock were the youngest and formed most recently, while deeper layers of rock were older. The field of stratigraphy shed much light on the relative ages of rock layers.

The study of fossils also enabled geologists to set the relative ages of different rock layers. The fossil record helped scientists determine how organisms evolved or when they became extinct. By studying rock layers around the world, geologists and paleontologists saw that the remains of certain animal and plant species occurred in the same layers, but were absent or altered in other layers. They soon developed a fossil index that also helped determine the relative ages of rock layers.

Beginning in the 1890s, scientists learned that radioactive elements in rock decay at a known rate. By studying this radioactive decay, they could determine an absolute age for rock layers. This type of dating, known as radiometric dating, confirmed the relative ages determined through stratigraphy and the fossil index and assigned absolute ages to the various strata. As a result, scientists were able to assemble Earth’s geologic time scale from the Archaean Eon to the present.
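The principle behind radiometric dating can be sketched as a half-life calculation: the surviving fraction of a parent isotope fixes the rock's age. The potassium-40 half-life below is the commonly cited value; the 25% surviving fraction is an invented example, not a measurement:

```python
import math

def age_from_fraction(fraction_remaining: float, half_life_years: float) -> float:
    """Age implied by the fraction of the parent isotope still present.

    From the decay law N/N0 = (1/2)**(t / T), solving for t gives
    t = T * log2(N0 / N).
    """
    return half_life_years * math.log2(1.0 / fraction_remaining)

K40_HALF_LIFE = 1.25e9  # years, the commonly cited half-life of potassium-40

# A rock retaining 25% of its original K-40 is two half-lives old.
print(age_from_fraction(0.25, K40_HALF_LIFE))  # 2.5e9 years
```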

The Precambrian is a time span that includes the Archaean and Proterozoic eons and began roughly four billion years ago. The Precambrian marks the first formation of continents, the oceans, the atmosphere, and life. The Precambrian represents the oldest chapter in Earth’s history that can still be studied. Very little remains of Earth from the period of 4.6 billion to about four billion years ago due to the melting of rock caused by the early period of meteorite bombardment. Rocks dating from the Precambrian, however, have been found in Africa, Antarctica, Australia, Brazil, Canada, and Scandinavia. Some zircon mineral grains deposited in Australian rock layers have been dated to 4.2 billion years ago.

The Precambrian is also the longest chapter in Earth’s history, spanning a period of about 3.5 billion years. During this time frame, the atmosphere and the oceans formed from gases that escaped from the hot interior of the planet because of widespread volcanic eruptions. The early atmosphere consisted primarily of nitrogen, carbon dioxide, and water vapour. As Earth continued to cool, the water vapour condensed out and fell as precipitation to form the oceans. Some scientists believe that much of Earth’s water vapour originally came from comets containing frozen water that struck Earth during meteorite bombardment.

By studying 2-billion-year-old rocks found in northwestern Canada, as well as 2.5-billion-year-old rocks in China, scientists have found evidence that plate tectonics began shaping Earth’s surface as early as the middle Precambrian. About a billion years ago, the Earth’s plates were centred around the South Pole and formed a super-continent called Rodinia. Slowly, pieces of this super-continent broke away from the central continent and travelled north, forming smaller continents.

Life originated during the Precambrian. The earliest fossil evidence of life consists of Prokaryotes, one-celled organisms that lacked a nucleus and reproduced by dividing, a process known as asexual reproduction. Asexual division meant that a prokaryote’s hereditary material was copied unchanged. The first Prokaryotes were bacteria known as archaebacteria. Scientists believe they came into existence perhaps as early as 3.8 billion years ago, and certainly by 3.5 billion years ago, and were anaerobic, that is, they did not require oxygen to produce energy. Free oxygen barely existed in the atmosphere of the early Earth.

Archaebacteria were followed about 3.46 billion years ago by another type of prokaryote known as Cyanobacteria, or blue-green algae. These Cyanobacteria gradually introduced oxygen into the atmosphere through photosynthesis. In shallow tropical waters, Cyanobacteria formed mats that grew into humps called stromatolites. Fossilized stromatolites have been found in rocks in the Pilbara region of western Australia that are more than 3.4 billion years old and in rocks of the Gunflint Chert region of northwest Lake Superior that are about 2.1 billion years old.

For billions of years, life existed only in the simple form of Prokaryotes. Prokaryotes were followed by the relatively more advanced eukaryotes, organisms that have a nucleus in their cells and that reproduce by combining or sharing their hereditary makeup rather than by simply dividing. Sexual reproduction marked a milestone in life on Earth because it created the possibility of hereditary variation and enabled organisms to adapt more easily to a changing environment. The final stretch of Precambrian time, some 560 million to 545 million years ago, saw the appearance of an intriguing group of fossil organisms known as the Ediacaran fauna. First discovered in the northern Flinders Range region of Australia in the mid-1940s and subsequently found in many locations throughout the world, these strange fossils may be the precursors of many fossil groups that were to explode in Earth's oceans in the Paleozoic Era.

At the start of the Paleozoic Era about 543 million years ago, an enormous expansion in the diversity and complexity of life occurred. This event took place in the Cambrian Period and is called the Cambrian explosion. Nothing like it has happened since. Most of the major groups of animals we know today made their first appearance during the Cambrian explosion. Most of the different ‘body plans’ found in animals today, that is, the way an animal’s body is designed, with heads, legs, rear ends, claws, tentacles, or antennae, also originated during this period.

Fishes first appeared during the Paleozoic Era, and multicellular plants began growing on the land. Other land animals, such as scorpions, insects, and amphibians, also originated during this time. Just as new forms of life were being created, however, other forms of life were going out of existence. Natural selection meant that some species flourished while others failed. In fact, mass extinctions of animal and plant species were commonplace.

Most of the early complex life forms of the Cambrian explosion lived in the sea. The creation of warm, shallow seas, along with the buildup of oxygen in the atmosphere, may have aided this explosion of life forms. The shallow seas were created by the breakup of the super-continent Rodinia. During the Ordovician, Silurian, and Devonian periods, which followed the Cambrian Period and lasted from 490 million to 354 million years ago, some continental pieces that had broken off Rodinia collided. These collisions resulted in larger continental masses in equatorial regions and in the Northern Hemisphere. The collisions built several mountain ranges, including parts of the Appalachian Mountains in North America and the Caledonian Mountains of northern Europe.

Toward the close of the Paleozoic Era, two large continental masses, Gondwanaland to the south and Laurasia to the north, faced each other across the equator. Their slow but eventful collision during the Permian Period of the Paleozoic Era, which lasted from 290 million to 248 million years ago, assembled the super-continent Pangaea and resulted in some of the grandest mountains in the history of Earth. These mountains included other parts of the Appalachians and the Ural Mountains of Asia. At the close of the Paleozoic Era, Pangaea represented more than 90 percent of all the continental landmasses. Pangaea straddled the equator with a huge mouth-like opening that faced east. This opening was the Tethys Ocean, which closed as India moved northward creating the Himalayas. The last remnants of the Tethys Ocean can be seen in today’s Mediterranean Sea.

The Paleozoic ended with a major extinction event, when perhaps as many as 90 percent of all plant and animal species died out. The reason is not known for sure, but many scientists believe that huge volcanic outpourings of lavas in central Siberia, coupled with an asteroid impact, were joint contributing factors.

The Mesozoic Era, beginning 248 million years ago, is often characterized as the Age of Reptiles because reptiles were the dominant life forms during this era. Reptiles dominated not only on land, as dinosaurs, but also in the sea, as the plesiosaurs and ichthyosaurs, and in the air, as pterosaurs, which were flying reptiles.

The Mesozoic Era is divided into three geological periods: the Triassic, which lasted from 248 million to 206 million years ago; the Jurassic, from 206 million to 144 million years ago; and the Cretaceous, from 144 million to 65 million years ago. The dinosaurs emerged during the Triassic Period and were among the most successful animals in Earth’s history, lasting for about 180 million years before going extinct at the end of the Cretaceous Period. The first birds and mammals and the first flowering plants also appeared during the Mesozoic Era. Before flowering plants emerged, plants with seed-bearing cones known as conifers were the dominant form of plants. Flowering plants soon replaced conifers as the dominant form of vegetation during the Mesozoic Era.

The Mesozoic was an eventful era geologically, with many changes to Earth’s surface. Pangaea continued to exist for another 50 million years during the early Mesozoic Era. By the early Jurassic Period, Pangaea began to break up. What is now South America began splitting from what is now Africa, and in the process the South Atlantic Ocean formed. As the landmass that became North America drifted away from Pangaea and moved westward, a long subduction zone extended along North America’s western margin. This subduction zone and the accompanying arc of volcanoes extended from what is now Alaska to the southern tip of South America. Much of this feature, known as the American Cordillera, exists today as the eastern margin of the Pacific Ring of Fire.

During the Cretaceous Period, heat continued to be released from the margins of the drifting continents, and as they slowly sank, vast inland seas formed in much of the continental interiors. The fossilized remains of fishes and marine mollusks called ammonites can be found today in the middle of the North American continent because these areas were once underwater. Large continental masses broke off the northern part of southern Gondwanaland during this period and began to narrow the Tethys Ocean. The largest of these continental masses, present-day India, moved northward toward its collision with southern Asia. As both the North Atlantic Ocean and South Atlantic Ocean continued to open, North and South America became isolated continents for the first time in 450 million years. Their westward journey resulted in mountains along their western margins, including the Andes of South America.

The Cenozoic Era, beginning about 65 million years ago, is the period when mammals became the dominant form of life on land. Human beings first appeared in the later stages of the Cenozoic Era. In short, the modern world as we know it, with its characteristic geographical features and its animals and plants, came into being. All of the continents that we know today took shape during this era.

A single catastrophic event may have been responsible for this relatively abrupt change from the Age of Reptiles to the Age of Mammals. Most scientists now believe that a huge asteroid or comet struck the Earth at the end of the Mesozoic and the beginning of the Cenozoic eras, causing the extinction of many forms of life, including the dinosaurs. Evidence of this collision came with the discovery of a large impact crater off the coast of Mexico’s Yucatán Peninsula and the worldwide finding of iridium, a metallic element rare on Earth but abundant in meteorites, in rock layers dated from the end of the Cretaceous Period. The extinction of the dinosaurs opened the way for mammals to become the dominant land animals.

The Cenozoic Era is divided into the Tertiary and the Quaternary periods. The Tertiary Period lasted from about 65 million to about 1.8 million years ago. The Quaternary Period began about 1.8 million years ago and continued to the present day. These periods are further subdivided into epochs, such as the Pleistocene, from 1.8 million to 10,000 years ago, and the Holocene, from 10,000 years ago to the present.

Early in the Tertiary Period, Pangaea was completely disassembled, and the modern continents were all clearly outlined. India and other continental masses began colliding with southern Asia to form the Himalayas. Africa and a series of smaller micro-continents began colliding with southern Europe to form the Alps. The Tethys Ocean was nearly closed and began to resemble today’s Mediterranean Sea. As the Tethys continued to narrow, the Atlantic continued to open, becoming an ever-wider ocean. Iceland appeared as a new island in later Tertiary time, and its active volcanism today shows that sea-floor spreading is still causing the country to grow.

Late in the Tertiary Period, about six million years ago, humans began to evolve in Africa. These early humans began to migrate to other parts of the world between two million and 1.7 million years ago.

The Quaternary Period marks the onset of the great ice ages. Many times, perhaps at least once every 100,000 years on average, vast glaciers 3 km (2 mi) thick invaded much of North America, Europe, and parts of Asia. The glaciers eroded considerable amounts of material that stood in their paths, gouging out U-shaped valleys. Anatomically modern human beings, known as Homo sapiens, became the dominant form of life in the Quaternary Period. Most anthropologists (scientists who study human life and culture) believe that anatomically modern humans originated only recently in Earth’s 4.6-billion-year history, within the past 200,000 years.

With the rise of human civilization about 8,000 years ago and especially since the Industrial Revolution in the mid-1700s, human beings began to alter the surface, water, and atmosphere of Earth. In doing so, they have become active geological agents, not unlike other forces of change that influence the planet. As a result, Earth’s immediate future depends largely on the behaviour of humans. For example, the widespread use of fossil fuels is releasing carbon dioxide and other greenhouse gases into the atmosphere and threatens to warm the planet’s surface. This global warming could melt glaciers and the polar ice caps, which could flood coastlines around the world and many island nations. In effect, the carbon dioxide removed from Earth’s early atmosphere by the oceans and by primitive plant and animal life, and subsequently buried as fossilized remains in sedimentary rock, is being released back into the atmosphere and is threatening the existence of living things.

Even without human intervention, Earth will continue to change because it is geologically active. Many scientists believe that some of these changes can be predicted. For example, based on studies of the rate that the sea-floor is spreading in the Red Sea, some geologists predict that in 200 million years the Red Sea will be the same size as the Atlantic Ocean is today. Other scientists predict that the continent of Asia will break apart millions of years from now, and as it does, Lake Baikal in Siberia will become a vast ocean, separating two landmasses that once made up the Asian continent.
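The Red Sea extrapolation above amounts to simple arithmetic: a steady spreading rate multiplied by elapsed time. The widths and rate below are assumed round numbers chosen for illustration, not the measured values geologists actually use:

```python
# Assumed round numbers for illustration only.
RED_SEA_WIDTH_KM = 300          # assumed present width of the Red Sea
ATLANTIC_WIDTH_KM = 5000        # assumed present width of the Atlantic
SPREADING_RATE_CM_PER_YR = 2.4  # assumed steady full spreading rate

def width_after(years: float, width_km: float, rate_cm_yr: float) -> float:
    """Basin width in km after `years` of steady sea-floor spreading."""
    return width_km + years * rate_cm_yr / 1e5  # 1 km = 1e5 cm

future = width_after(200e6, RED_SEA_WIDTH_KM, SPREADING_RATE_CM_PER_YR)
print(f"Red Sea in 200 million years: about {future:.0f} km wide")
# With these assumptions the result is about 5,100 km, roughly Atlantic-sized.
```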

In the far, far distant future, however, scientists believe that Earth will become an uninhabitable planet, scorched by the Sun. Knowing the rate at which nuclear fusion occurs in the Sun and knowing the Sun’s mass, astrophysicists (scientists who study stars) have calculated that the Sun will become brighter and hotter about three billion years from now, when it will be hot enough to boil Earth’s oceans away. Based on studies of how other Sun-like stars have evolved, scientists predict that the Sun will become a red giant, a star with a very large, hot atmosphere, about seven billion years from now. As a red giant the Sun’s outer atmosphere will expand until it engulfs the planet Mercury. The Sun will then be 2,000 times brighter than it is now and so hot it will melt Earth’s rocks. Earth will end its existence as a burnt cinder.

Three billion years is the life span of millions of human generations, however. Perhaps by then, humans will have learned how to journey through and beyond the solar system, begin to colonize other planets in our galaxy, and find yet another place to call ‘home’.

The Cenozoic era (65 million years ago to the present time) is divided into the Tertiary period (65 million to 1.6 million years ago) and the Quaternary period (1.6 million years ago to the present). However, because scientists have so much more information about this era, they tend to focus on the epochs that make up each period. During the first part of the Cenozoic era, an abrupt transition from the Age of Reptiles to the Age of Mammals occurred, when the large dinosaurs and other reptiles that had dominated the life of the Mesozoic era disappeared.

Index fossils of the Cenozoic tend to be microscopic, such as the tiny shells of foraminifera. They are commonly used, along with varieties of pollen fossils, to date the different rock strata of the Cenozoic era.

The Paleocene epoch (65 million to 55 million years ago) marks the beginning of the Cenozoic era. Seven groups of Paleocene mammals are known. All of them appear to have developed in northern Asia and to have migrated to other parts of the world. These primitive mammals had many features in common. They were small, with no species exceeding the size of a small modern bear. They were four-footed, with five toes on each foot, and they walked on the soles of their feet. Most of them had slim heads with narrow muzzles and small brain cavities. The predominant mammals of the period were members of three groups that are now extinct. They were the creodonts, which were the ancestors of modern carnivores; the amblypods, which were small, heavy-bodied animals; and the condylarths, which were light-bodied herbivorous animals with small brains. The Paleocene groups that have survived are the marsupials, the insectivores, the primates, and the rodents.

During the Eocene epoch (55 million to 38 million years ago), most direct evolutionary ancestors of modern animals appeared. Among these animals, all of which were small in stature, were the horse, rhinoceros, camel, rodent, and monkey. The creodonts and amblypods continued to develop during the epoch, but the condylarths became extinct before it ended. The first aquatic mammals, ancestors of modern whales, also appeared in Eocene times, as did such modern birds as eagles, pelicans, quail, and vultures. Changes in vegetation during the Eocene epoch were limited chiefly to the migration of types of plants in response to climate changes.

During the Oligocene epoch (38 million to 24 million years ago), most of the archaic mammals from earlier epochs of the Cenozoic era disappeared. In their place appeared representatives of many of modern mammalian groups. The creodonts became extinct, and the first true carnivores, resembling dogs and cats, evolved. The first anthropoid apes also lived during this time, but they became extinct in North America by the end of the epoch. Two groups of animals that are now extinct flourished during the Oligocene epoch: the titanotheres, which are related to the rhinoceros and the horse; and the oreodonts, which were small, dog-like, grazing animals.

The development of mammals during the Miocene epoch (24 million to five million years ago) was influenced by an important evolutionary development in the plant kingdom: the first appearance of grasses. These plants, which were ideally suited for forage, encouraged the growth and development of grazing animals such as horses, camels, and rhinoceroses, which were abundant during the epoch. During the Miocene epoch, the mastodon evolved, and in Europe and Asia a gorilla-like ape, Dryopithecus, was common. Various types of carnivores, including cats and wolflike dogs, ranged over many parts of the world.

The paleontology of the Pliocene epoch (five million to 1.6 million years ago) does not differ much from that of the Miocene, although the period is regarded by many zoologists as the climax of the Age of Mammals. The Pleistocene Epoch (1.6 million to 10,000 years ago) in both Europe and North America was marked by an abundance of large mammals, most of which were basically modern in type. Among them were buffalo, elephants, mammoths, and mastodons. Mammoths and mastodons became extinct before the end of the epoch. In Europe, antelope, lions, and hippopotamuses also appeared. Carnivores included badgers, foxes, lynx, otters, pumas, and skunks, as well as now-extinct species such as the giant saber-toothed tiger. In North America, the first bears made their appearance as migrants from Asia. The armadillo and ground sloth migrated from South America to North America, and the musk-ox ranged southward from the Arctic regions. Modern human beings also emerged during this epoch.

Most biologists agree that animals evolved from simpler single-celled organisms. Exactly how this happened is unclear, because few fossils have been left to record the sequence of events. Faced with this lack of fossil evidence, researchers have attempted to piece together animal origins by examining the single-celled organisms alive today.

Modern single-celled organisms are classified into two kingdoms: the Prokaryotes and protists. Prokaryotes, which include bacteria, are very simple organisms, and lack many features seen in animal cells. Protists, on the other hand, are more complex, and their cells contain all the specialized structures, or organelles, found in the cells of animals. One protist group, the choanoflagellates or collar flagellates, contains organisms that bear a striking resemblance to cells that are found in sponges. Most choanoflagellates live on their own, but significantly, some form permanent groups or colonies.

This tendency to form colonies is widely believed to have been an important stepping stone on the path to animal life. The next step in evolution would have involved a transition from colonies of independent cells to colonies containing specialized cells that were dependent on each other for survival. Once this development had occurred, such colonies would have effectively become single organisms. Increasing specialization among groups of cells could then have created tissues, triggering the long and complex evolution of animal bodies.

This conjectural sequence of events probably occurred along several parallel paths. One path led to the sponges, which retain a collection of primitive features that set them apart from all other animals. Another path led to two major subdivisions of the animal kingdom: the protostomes, which include arthropods, annelid worms, mollusks, and cnidarians; and the deuterostomes, which include echinoderms and chordates. Protostomes and deuterostomes differ fundamentally in the way they develop as embryos, strongly suggesting that they split from each other a long time ago.

Animal life first appeared perhaps a billion years ago, but for a long time after this, the fossil record remained almost blank. Fossils exist that seem to show burrows and other indirect evidence for animal life, but the first direct evidence of animals themselves appears about 650 million years ago, toward the end of Precambrian time. At this time, the animal kingdom stood on the threshold of a great explosion in diversity. By the end of the Cambrian Period, 150 million years later, all of the main types of animal life existing today had become established.

When the first animals evolved, dry land was probably without any kind of life, except possibly bacteria. Without terrestrial plants, land-based animals would have had nothing to eat. However, when plants took up life on land more than 400 million years ago, the situation changed, and animals evolved that could use this new source of food. The first land animals included primitive wingless insects and probably a range of soft-bodied invertebrates that have not left fossil remains. The first vertebrates to move onto land were the amphibians, which appeared about 370 million years ago.

For all animals, life on land involved meeting some major challenges. Foremost among these were the need to conserve water and the need to extract oxygen from the air. Another problem concerned the effects of gravity. Water buoys up living things, but air, which is about 750 times less dense than water, generates almost no buoyancy at all. To function effectively on land, animals needed support.

In soft-bodied land animals such as earthworms, this support is provided by a hydrostatic skeleton, which works by internal pressure. The animal's body fluids press out against its skin, giving the animal its shape. In insects and other arthropods, support is provided by exoskeletons (external skeletons), while in vertebrates it is provided by bones. Exoskeletons can play a double role by helping animals to conserve water, but they have one important disadvantage: unlike an internal bony skeleton, their weight increases very rapidly as they get bigger, eventually making them too heavy to move. This explains why insects have remained relatively small, while some vertebrates have reached very large sizes.

Like other living things, animals evolve by adapting to and exploiting their surroundings. In the billion-year history of animal life, this process has produced a vast number of species, each using resources in a different way. Some of these species survive today, but they are a minority; a far greater number are extinct, having lost the struggle for survival.

Speciation, the birth of new species, usually occurs when a group of living things becomes isolated from others of their kind. Once this has occurred, the members of the group follow their own evolutionary path and adapt in ways that make them increasingly distinct. After a long period, typically thousands of years, their unique features mean that they can no longer breed with their former relatives. At this point, a new species comes into being.

In animals, this isolation can come about in several different ways. The simplest form, geographical isolation, occurs when members of an original species become separated by a physical barrier. One example of such a barrier is the open sea, which isolates animals that have been accidentally stranded on remote islands. As the new arrivals adapt to their adopted home, they become ever more distinct from their mainland relatives. Sometimes the result is a burst of adaptive radiation, which produces several different species. In the Hawaiian Islands, for example, 22 species of honey-creepers have evolved from a single pioneering species of finch-like bird.

Another type of isolation is thought to occur where there is no physical separation. Here, differences in behaviour, such as mate selection, may sometimes help to split a single species into distinct groups. If these differences persist for long enough, new species are created.

The fate of a new species depends very much on the environment in which it evolved. If the environment is stable and no new competitors appear on the scene, an animal species may change very little in hundreds of thousands of years. However, if the environment changes rapidly and competitors arrive from outside, the struggle for survival is much more intense. In these conditions, either a species changes, or it eventually becomes extinct.

During the history of animal life, on at least five occasions, sudden environmental change has triggered simultaneous extinction on a massive scale. One of these mass extinctions occurred at the end of the Cretaceous Period, about 65 million years ago, killing all dinosaurs and perhaps two-thirds of marine species. An even greater mass extinction took place at the end of the Permian Period, about 250 million years ago. Many biologists believe that we are at present living in a sixth period of mass extinction, this time triggered by human beings.

Compared with plants, animals make up only a small part of the total mass of living matter on earth. Despite this, they play an important part in shaping and maintaining natural environments.

Many habitats are directly influenced by the way animals live. Grasslands, for example, exist partly because grasses and grazing animals have evolved a close partnership, which prevents other plants from taking hold. Tropical forests also owe their existence to animals, because most of their trees rely on animals to distribute their pollen and seeds. Soil is partly the result of animal activity, because earthworms and other invertebrates help to break down dead remains and recycle the nutrients that they contain. Without its animal life, the soil would soon become compacted and infertile.

By preying on each other, animals also help to keep their own numbers in check. This prevents abrupt population peaks and crashes and helps to give living systems a built-in stability. On a global scale, animals also influence some of the nutrient cycles on which almost all life depends. They distribute essential mineral elements in their waste, and they help to replenish the atmosphere's carbon dioxide when they breathe. This carbon dioxide is then used by plants as they grow.

Until relatively recently in human history, people existed as nomadic hunter-gatherers. They used animals primarily as a source of food and for raw materials that could be used for making tools and clothes. By today's standards, hunter-gatherers were equipped with rudimentary weapons, but they still had a major impact on the numbers of some species. Many scientists believe, for example, that humans were involved in a cluster of extinctions that occurred about 12,000 years ago in North America. In less than a millennium, two-thirds of the continent's large mammal species disappeared.

This simple relationship between people and animals changed with domestication, which also began about 12,000 years ago. Instead of being actively hunted, domesticated animals were slowly brought under human control. Some were kept for food or for clothing, others for muscle power, and some simply for companionship.

The first animal to be domesticated was almost certainly the dog, which was bred from wolves. It was followed by species such as the cat, horse, camel, llama, and aurochs (a species of wild cattle), and by the Asian jungle fowl, which is the ancestor of today's chickens. Through selective breeding, each of these animals has been turned into forms that are particularly suitable for human use. Today, many domesticated animals, including chickens, vastly outnumber their wild counterparts. Sometimes, as with the horse, the original wild species has died out altogether.

Over the centuries, many domesticated animals have been introduced into different parts of the world only to escape and establish themselves in the wild. Together with stowaway pests such as rats, these ‘feral’ animals have often affected native wildlife. Cats, for example, have inflicted great damage on Australia's smaller marsupials, and feral pigs and goats continue to be serious problems for the native wildlife of the Galápagos Islands.

Despite the growth of domestication, humans continue to hunt some wild animals. Some forms of hunting are carried out mainly for sport, but others provide food or animal products. Until recently, one of the most significant of these forms of hunting was whaling, which reduced many whale stocks to the brink of extinction. Today, highly efficient sea fishing threatens some species of fish with the same fate. Since the beginning of agriculture, the human population has increased more than two thousand times. To provide the land needed for growing food and housing people, large areas of the earth's landscapes have been completely transformed. Forests have been cut down, wetlands drained, and deserts irrigated, reducing these natural habitats to a fraction of their former extent.

Some species of animals have managed to adapt to these changes. A few, such as the brown rat, raccoon, and house sparrow, have benefited by exploiting the new opportunities that have opened and have successfully taken up life on farms, or in towns and cities. Nonetheless, most animals have specialized ways of life that make them dependent on a particular kind of habitat. With the destruction of their habitats, their numbers inevitably decline.

In the 20th century, animals have also had to face additional threats from human activities. Foremost among these are environmental pollution and the increasing demand for resources such as timber and fresh water. For some animals, the combination of these changes has proved so damaging that their numbers are now below the level needed to guarantee survival.

Across the world, efforts are currently underway to address this urgent problem. In the most extreme cases, gravely threatened animals can be helped by taking them into captivity and then releasing them once breeding programs have increased their numbers. One species saved in this way is the Hawaiian goose, or nēnē. In 1951, its population had been reduced to just 33. Captive breeding has since increased the population to more than 2,500, removing the immediate threat of extinction.

While captive breeding is a useful emergency measure, it cannot assure the long-term survival of a species. Today animal protection focuses primarily on the preservation of entire habitats, an approach that maintains the necessary links between the different species the habitats support. With the continued growth in the world's human population, habitat preservation will require a sustained reduction in our use of the world's resources to minimize our impact on the natural world.

Paleontologists gain most of their information by studying deposits of sedimentary rocks that formed in strata over millions of years. Most fossils are found in sedimentary rock. Paleontologists use fossils and other qualities of the rock to compare strata around the world. By comparing, they can determine whether strata developed during the same time or in the same type of environment. This helps them assemble a general picture of how the earth evolved. The study and comparison of different strata is called stratigraphy.

Fossils provide most of the data by which strata are compared. Some fossils, called index fossils, are especially useful because they have a broad geographic range but a narrow temporal one; that is, they represent a species that was widespread but existed for a brief period of time. The best index fossils tend to be marine creatures. These animals evolved rapidly and spread over large areas of the world. Paleontologists divide the last 570 million years of the earth's history into eras, periods, and epochs. The part of the earth's history before about 570 million years ago is called Precambrian time, which began with the earth's birth, probably more than four billion years ago.

The earliest evidence of life consists of microscopic fossils of bacteria that lived as early as 3.6 billion years ago. Most Precambrian fossils are very tiny. Most species of larger animals that lived in later Precambrian time had soft bodies, without shells or other hard body parts that would create lasting fossils. The first abundant fossils of larger animals date from about 600 million years ago.

At first glance, the sudden jump from 8000 BC to 10,000 years ago looks peculiar. On reflection, however, the time-line has clearly not lost 2,000 years. Rather, the time-line has merely shifted from one convention of measuring time to another. To understand the reasons for this shift, it will help to understand some of the different conventions used to measure time.

All human societies have faced the need to measure time. Today, for most practical purposes, we keep track of time with the aid of calendars, which are widely and readily available in printed and computerized forms throughout the world. However, long before humans developed any formal calendar, they measured time based on natural cycles: the seasons of the year, the waxing and waning of the moon, the rising and setting of the sun. Understanding these rhythms of nature was necessary for humans so they could be successful in hunting animals, catching fish, and collecting edible nuts, berries, roots, and vegetable matter. The availability of these animals and plants varied with the seasons, so early humans needed at least a practical working knowledge of the seasons to eat. When humans eventually developed agricultural societies, it became crucial for farmers to know when to plant their seeds and harvest their crops. To ensure that farmers had access to reliable knowledge of the seasons, early agricultural societies in Mesopotamia, Egypt, China, and other lands supported specialists who kept track of the seasons and created the world’s first calendars. The earliest surviving calendars date from around 2400 BC.

As societies became more complex, they required increasingly precise ways to measure and record increments of time. For example, some of the earliest written documents recorded tax payments and sales transactions, and indicating when they took place was important. Otherwise, anyone reviewing the documents later would find it impossible to determine the status of an individual account. Without any general convention for measuring time, scribes (persons who wrote documents) often dated events by the reigns of local rulers. In other words, a scribe might indicate that an individual’s tax payment arrived in the third year of the reign (or third regnal year) of the Assyrian ruler Tiglath-Pileser. By consulting and comparing such records, authorities could determine whether the individual was up to date in tax payments.

These days, scholars and the public alike refer to time on many different levels, and they consider events and processes that took place at any time, from the big bang to the present. Meaningful discussion of the past depends on some generally observed frames of reference that organize time coherently and allow us to understand the chronological relationships between historical events and processes.

For contemporary events, the most common frame of reference is the Gregorian calendar, which organizes time around the supposed birth date of Jesus of Nazareth. This calendar refers to dates before Jesus’ birth as BC (‘before Christ’) and those afterwards as AD (anno Domini, Latin for ‘in the year of the Lord’). Scholars now believe that Jesus was born four to six years before the year recognized as AD 1 in the Gregorian calendar, so this division of time is probably off its intended mark by a few years. Nonetheless, even overlooking this point, the Gregorian calendar is not meaningful or useful for references to events in the so-called deep past, a period so long ago that to be very precise about dates is impossible. Saying that the big bang took place in the year 15,000,000,000 BC would be misleading, for example. No one knows exactly when the big bang took place, and even if someone did, there would be little point in dating that moment and everything that followed from it according to an event that took place some 14,999,998,000 years later. For purposes of dating events and processes in the deep past and remote prehistory, then, scientists and historians have adopted different principles of measuring time.

In conventional usage, prehistory refers to the period before humans developed systems for writing, while the historical era refers to the period after written documents became available. This usage became common in the 19th century, when professional historians began to base their studies of the past largely on written documentation. Historians regarded written source materials as more reliable than the artistic and artifactual evidence studied by archaeologists working on prehistoric times. Recently, however, the distinction between prehistory and the historical era has become much more blurred than it was in the 19th century. Archaeologists have unearthed rich collections of artifacts that throw considerable light on so-called prehistoric societies. Moreover, contemporary historians realize much better than their predecessors did that written documentary evidence raises as many questions as it answers. In any case, written documents illuminate only selected dimensions of experience. Despite these nuances of historical scholarship, for purposes of dating events and processes in times past, the distinction between the terms prehistory and the historical era remains useful. For the deep past and prehistory, establishing precise dates is rarely possible. Only in the cases of a few natural and celestial phenomena, such as eclipses and appearances of comets, are scientists able to infer relatively precise dates. For the historical era, on the other hand, precise dates can be established for many events and processes, although certainly not for all.

Since the Gregorian calendar is not especially useful for dating events in the distant period long before the historical era, many scientists who study the deep past refer not to years ‘BC’ or ‘AD’ but to years ‘before the present’. Astronomers and physicists, for example, believe the big bang took place between 10 billion and 20 billion years ago, and that planet Earth came into being about 4.65 billion years ago. When dealing with Earth’s physical history and life forms, geologists often dispense with year references altogether and divide time into named spans. These time spans are conventionally called eons (the longest span), eras, periods, and epochs (the shortest span). Since obtaining precise dates for distant times is impossible, they simply refer to the Proterozoic Eon (2.5 billion to 570 million years ago), the Mesozoic Era (240 million to 65 million years ago), the Jurassic Period (205 million to 138 million years ago), or the Pleistocene Epoch (1.6 million to 10,000 years ago).

Because the Pleistocene Epoch is a comparatively recent time span, archaeologists and prehistorians are frequently able to assign at least approximate year dates to artifacts from that period. As with all dates in the distant past, however, it would be misleading to follow the principles of the Gregorian calendar and refer to dates BC. As a result, archaeologists and prehistorians often call these dates BP (‘before the present’), with the understanding that all dates BP are approximate. Thus, scholars date the evolution of Homo sapiens to about 130,000 BP and the famous cave paintings at Lascaux in southern France to about 15,000 BP.

The Dynamic Timeline refers to all dates before 8000 BC as dates before the present, and categorizes all dates since 8000 BC according to the Gregorian calendar. Thus, a backward scroll in the time-line will take users from 7700 BC to 7800 BC, 7900 BC, and 8000 BC to 10,000 years ago. Note that the time-line has not lost 2,000 years! To date events this far back in time, the Dynamic Timeline has simply switched to a different convention of designating the dates of historical events.
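The convention switch is simple arithmetic: subtracting roughly 2,000 from a years-before-present figure gives the corresponding BC year. A minimal Python sketch of the logic (an illustration of the convention, not the Dynamic Timeline's actual code; taking the year 2000 as 'the present' is an assumption made here for round numbers):

```python
# Sketch of the BC / years-before-present convention switch described above.
# Assumption: "the present" is taken as the year 2000 so the arithmetic is round.
PRESENT_YEAR = 2000
CUTOFF_BC = 8000  # dates back to 8000 BC use the Gregorian "BC" label

def timeline_label(years_ago: int) -> str:
    """Label a date given as (approximate) years before the present."""
    year_bc = years_ago - PRESENT_YEAR  # convert a years-ago figure to a BC year
    if 0 < year_bc <= CUTOFF_BC:
        return f"{year_bc} BC"
    return f"{years_ago:,} years ago"

print(timeline_label(9700))   # 7700 BC
print(timeline_label(10000))  # 8000 BC
print(timeline_label(10100))  # 10,100 years ago
```

Scrolling one step back past 8000 BC (10,000 years ago) simply flips the label to the years-ago form, exactly as the paragraph above describes.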

Written documentation enables historians to establish relatively precise dates of events in the historical era. However, placing these events in chronological order requires some agreed upon starting points for a frame of reference. For purposes of maintaining proper tax accounts in a Mesopotamian city-state, dating an event in relation to the first year of a king’s reign might be sufficient. For purposes of understanding the development of entire peoples or societies or regions, however, a collection of dates according to the regnal years of many different local rulers would quickly become confusing. Within a given region there might be many different local rulers, so efforts to establish the chronological relationship between events may entail an extremely tedious collation of all the rulers’ regnal years. Thus, to facilitate the understanding of chronological relationships between events in different jurisdictions, some larger frame of reference is necessary. Most commonly these larger frames of reference take the form of calendars, which not only make it possible to predict changes in the seasons but also enable users to organize their understanding of time and appreciate the relationships between datable events.

Different civilizations have devised thousands of different calendars. Of the 40 or so calendars employed in the world today, the most widely used is the Gregorian calendar, introduced in 1582 by Pope Gregory XIII. The Gregorian calendar revised the Julian calendar, instituted by Julius Caesar in 45 BC, to bring it closer in line with the seasons. Most Roman Catholic lands accepted the Gregorian calendar upon its promulgation by Gregory in 1582, but other lands adopted it much later: Britain in 1752, Russia in 1918, and Greece in 1923. During the 20th century it became the dominant calendar throughout the world, especially for purposes of international business and diplomacy.

Despite the prominence of the Gregorian calendar in the modern world, millions of people use other calendars as well. The oldest calendar still in use is the Jewish calendar, which dates time from the creation of the world in the (Gregorian) year 3761 BC, according to the Hebrew scriptures. The year AD 2000 in the Gregorian calendar thus corresponds to the year AM 5761 in the Jewish calendar (AM stands for anno mundi, Latin for ‘the year of the world’). The Jewish calendar is the official calendar of Israel, and it also serves as a religious calendar for Jews worldwide.

The Chinese use another calendar, which, according to tradition, takes its point of departure in the year 2697 BC, in honour of a beneficent ruler. The year AD 2000 of the Gregorian calendar thus corresponds to the year 4697 in the Chinese calendar. The Maya calendar began even earlier than the Chinese, on August 11, 3114 BC. Maya scribes calculated that this is when the cycle of time began. The Maya actually used two interlocking calendars: one a 365-day calendar based on the cycles of the sun, the other a sacred almanac used to calculate auspicious or unlucky days. Despite the importance of these calendars to the Maya civilization, the calendars passed out of general use after the Spanish conquest of Mexico in the 16th century AD.

The youngest calendar in widespread use today is the Islamic lunar calendar, which begins the day after the Hegira, Muhammad’s migration from Mecca to Medina in AD 622. The Islamic calendar is the official calendar in many Muslim lands, and it governs religious observances for Muslims worldwide. Since it reckons time according to lunar rather than solar cycles, the Islamic calendar does not neatly correspond to the Gregorian and other solar calendars. For example, although there were 1,378 solar years between Muhammad’s Hegira and AD 2000, that year corresponds to the year 1420 in the Islamic calendar. Like the Gregorian calendar, and despite their many differences, the Jewish, Chinese, and Islamic calendars all make it possible to place individual datable events in proper chronological order.
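The conversions cited in the last three paragraphs reduce to a fixed offset for the Jewish and Chinese calendars, plus a solar-to-lunar rescaling for the Islamic calendar. A rough Python sketch using only the offsets and the worked example (AD 2000 giving AM 5761, Chinese 4697, and Islamic 1420) from the text; real calendar conversion also depends on the month and day, which this deliberately ignores:

```python
# Approximate year-level conversions from the Gregorian calendar, using the
# epoch offsets given in the text. Illustrative only, not exact calendrics.
SOLAR_YEAR = 365.25  # average Gregorian year, in days
LUNAR_YEAR = 354.37  # approximate Islamic lunar year, in days

def jewish_year(gregorian_ad: int) -> int:
    # The Jewish calendar counts from 3761 BC (anno mundi).
    return gregorian_ad + 3761

def chinese_year(gregorian_ad: int) -> int:
    # The traditional Chinese calendar counts from 2697 BC.
    return gregorian_ad + 2697

def islamic_year(gregorian_ad: int) -> int:
    # Solar years elapsed since the Hegira (AD 622), rescaled to lunar years.
    solar_elapsed = gregorian_ad - 622
    return round(solar_elapsed * SOLAR_YEAR / LUNAR_YEAR)

print(jewish_year(2000))   # 5761
print(chinese_year(2000))  # 4697
print(islamic_year(2000))  # 1420
```

The lunar rescaling is why 1,378 solar years after the Hegira lands in Islamic year 1420 rather than 1378: about 11 extra days accumulate every lunar year.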

Recently, controversies have arisen concerning the Gregorian calendar’s designation of BC and AD to indicate years before and after the birth of Jesus Christ. This practice originated in the 6th century AD with a Christian monk named Dionysius Exiguus. Like other devout Christians, Dionysius regarded the birth of Jesus as the singular turning point of history. Accordingly, he introduced a system that referred to events in time based on the number of years they occurred before or after Jesus’ birth. The system caught on very slowly. Saint Bede the Venerable, a prominent English monk and historian, employed the system in his own works in the 8th century AD, but the system came into general use only about AD 1400. (Until then, Christians generally calculated time according to regnal years of prominent rulers.) When Pope Gregory XIII ordered the preparation of a new calendar in the 16th century, he intended it to serve as a religious calendar as well as a tool for predicting seasonal changes. As leader of the Roman Catholic Church, Pope Gregory considered it proper to continue recognizing Jesus’ birth as the turning point of history.

As lands throughout the world adopted the Gregorian calendar, however, the specifically Christian implications of the terms BC and AD did not seem appropriate for use by non-Christians. Indeed, they did not even seem appropriate to many Christians when dates referred to events in non-Christian societies. Why should Buddhists, Hindus, Muslims, or others date time according to the birth of Jesus? To preserve the Gregorian calendar as a widely observed international standard for reckoning time, while avoiding the specifically Christian implications of the qualifications BC and AD, scholars replaced the birth of Jesus with the notion of ‘the common era’ and began to qualify dates as BCE (‘before the common era’) or CE (‘in the common era’). For the practical purpose of organizing time, BCE is the exact equivalent of BC, and CE is the exact equivalent of AD, but the terms BCE and CE have very different connotations than do BC and AD.

The qualifications BCE and CE first came into general use after World War II (1939-1945) among biblical scholars, particularly those who studied Judaism and early Christianity in the period from the 1st century BC (or BCE) to the 1st century AD (or CE). From their viewpoint, this “common era” was an age when proponents of Jewish, Christian, and other religious faiths intensively interacted and debated with one another. Using the designations BCE and CE enabled them to continue employing a calendar familiar to them all while avoiding the suggestion that all historical time revolved around the birth of Jesus Christ. As the Gregorian calendar became prominent throughout the world in the 20th century, many peoples were eager to find terms more appealing to them than BC and AD, and accordingly, the BCE and CE usage became increasingly popular. This usage represents only the most recent of many efforts by the world’s peoples to devise meaningful frameworks of time.

Most scientists believe that the Earth, Sun, and all of the other planets and moons in the solar system formed about 4.6 billion years ago from a giant cloud of gas and dust known as the solar nebula. The gas and dust in this solar nebula originated in a star that ended its life in an explosion known as a supernova. The solar nebula consisted principally of hydrogen, the lightest element, but the nebula was also seeded with a smaller percentage of heavier elements, such as carbon and oxygen. All of the chemical elements we know were originally made in the star that became a supernova. Our bodies are made of these same chemical elements. Therefore, all of the elements in our solar system, including all of the elements in our bodies, originally came from this star-seeded solar nebula.

Due to the force of gravity, tiny clumps of gas and dust began to form in the early solar nebula. As these clumps came together and grew larger, they caused the solar nebula to contract in on itself. The contraction caused the cloud of gas and dust to flatten into the shape of a disc. As the clumps continued to contract, they became very dense and hot. Eventually the atoms of hydrogen became so dense that they began to fuse in the innermost part of the cloud, and these nuclear reactions gave birth to the Sun. The fusion of hydrogen atoms in the Sun is the source of its energy.

Many scientists favour the planetesimal theory for how the Earth and other planets formed out of this solar nebula. This theory helps explain why the inner planets became rocky while the outer planets, except Pluto, are made up mostly of gases. The theory also explains why all of the planets orbit the Sun in the same plane.

According to this theory, temperatures decreased with increasing distance from the centre of the solar nebula. In the inner region, where Mercury, Venus, Earth, and Mars formed, temperatures were low enough that certain heavier elements, such as iron and the other heavy compounds that make up rock, could condense out; that is, could change from a gas to a solid or liquid. Due to the force of gravity, small clumps of this rocky material eventually came together with the dust in the original solar nebula to form protoplanets, or planetesimals (small rocky bodies). These planetesimals collided, broke apart, and re-formed until they became the four inner rocky planets. The inner region, however, was still too hot for other light elements, such as hydrogen and helium, to be retained. These elements could only exist in the outermost part of the disc, where temperatures were lower. As a result, two of the outer planets, Jupiter and Saturn, are mostly made of hydrogen and helium, which are also the dominant elements in the atmospheres of Uranus and Neptune.

Within the planetesimal Earth, heavier matter sank to the centre and lighter matter rose toward the surface. Most scientists believe that Earth was never truly molten and that this transfer of matter took place in the solid state. Much of the matter that moved toward the centre contained radioactive material, an important source of Earth’s internal heat. As heavier material moved inward and lighter material moved outward, the planet became layered, and the layers of the core and mantle were formed. This process is called differentiation.

Not long after they formed, more than four billion years ago, the Earth and the Moon underwent a period when they were bombarded by meteorites, the rocky debris left over from the formation of the solar system. The impact craters created during this period of heavy bombardment are still visible on the Moon’s surface. Earth’s craters, however, were long ago erased by weathering, erosion, and mountain building. Because the Moon has no atmosphere, its surface has not been subjected to weathering or erosion. Thus, the evidence of meteorite bombardment remains.

Energy released from the meteorite impacts created extremely high temperatures on Earth that melted the outer part of the planet and created the crust. By four billion years ago, both the oceanic and continental crust had formed, and the oldest rocks were created. These rocks are known as the Acasta Gneiss and are found in the Canadian territory of Nunavut. Due to the meteorite bombardment, the early Earth was too hot for liquid water to exist, making life impossible.

Geologists divide the history of the Earth into three eons: the Archaean Eon, which lasted from around four billion to 2.5 billion years ago; the Proterozoic Eon, which lasted from 2.5 billion to 543 million years ago; and the Phanerozoic Eon, which lasted from 543 million years ago to the present. Each eon is subdivided into different eras. For example, the Phanerozoic Eon includes the Paleozoic Era, the Mesozoic Era, and the Cenozoic Era. In turn, eras are further divided into periods. For example, the Paleozoic Era includes the Cambrian, Ordovician, Silurian, Devonian, Carboniferous, and Permian Periods.

The Archaean Eon is subdivided into four eras, the Eoarchean, the Paleoarchean, the Mesoarchean, and the Neoarchean. The beginning of the Archaean is generally dated as the age of the oldest terrestrial rocks, which are about four billion years old. The Archaean Eon ended 2.5 billion years ago when the Proterozoic Eon began. The Proterozoic Eon is subdivided into three eras: the Paleoproterozoic Era, the Mesoproterozoic Era, and the Neoproterozoic Era. The Proterozoic Eon lasted from 2.5 billion years ago to 543 million years ago when the Phanerozoic Eon began. The Phanerozoic Eon is subdivided into three eras: the Paleozoic Era from 543 million to 248 million years ago, the Mesozoic Era from 248 million to 65 million years ago, and the Cenozoic Era from 65 million years ago to the present.
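The eon, era, and date boundaries listed above form a simple hierarchy, which can be sketched as a small lookup structure. This is only an illustrative sketch built from the dates given in the text (spans in millions of years ago); the function name `eon_of` is assumed for the example.

```python
# Eon -> era hierarchy from the text; "span" is (start, end) in
# millions of years ago (mya), with larger numbers meaning older.
GEOLOGIC_TIME = {
    "Archaean": {
        "span": (4000, 2500),
        "eras": ["Eoarchean", "Paleoarchean", "Mesoarchean", "Neoarchean"],
    },
    "Proterozoic": {
        "span": (2500, 543),
        "eras": ["Paleoproterozoic", "Mesoproterozoic", "Neoproterozoic"],
    },
    "Phanerozoic": {
        "span": (543, 0),
        "eras": ["Paleozoic", "Mesozoic", "Cenozoic"],
    },
}

def eon_of(mya):
    """Return the eon containing a date given in millions of years ago."""
    for eon, info in GEOLOGIC_TIME.items():
        start, end = info["span"]
        # A date belongs to an eon if it falls between that eon's
        # older (start) and younger (end) boundaries.
        if start >= mya > end or (end == 0 and mya == 0):
            return eon
    raise ValueError("date precedes the oldest terrestrial rocks")
```

For example, a 3,000-million-year-old rock falls in the Archaean, while a 100-million-year-old (Cretaceous) rock falls in the Phanerozoic.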

Geologists base these divisions on the study and dating of rock layers or strata, including the fossilized remains of plants and animals found in those layers. Until the late 1800s scientists could only determine the relative age of rock strata, or layering. They knew that overall the top layers of rock were the youngest and formed most recently, while deeper layers of rock were older. The field of stratigraphy shed much light on the relative ages of rock layers.

The study of fossils also enabled geologists to determine the relative ages of different rock layers. The fossil record helped scientists determine how organisms evolved or when they became extinct. By studying rock layers around the world, geologists and paleontologists saw that the remains of certain animal and plant species occurred in the same layers, but were absent or altered in other layers. They soon developed a fossil index that also helped determine the relative ages of rock layers.

Beginning in the 1890s, scientists learned that radioactive elements in rock decay at a known rate. By studying this radioactive decay, they could determine an absolute age for rock layers. This type of dating, known as radiometric dating, confirmed the relative ages determined through stratigraphy and the fossil index and assigned absolute ages to the various strata. As a result scientists were able to assemble Earth’s geologic time scale from the Archaean Eon to the present.
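The arithmetic behind radiometric dating follows directly from the known decay rate: if a fraction N/N0 of the parent isotope remains, the age is t = half-life × log2(N0/N). A minimal sketch, assuming the uranium-238 half-life of roughly 4.47 billion years as an illustrative example (the specific isotope and function name are not from the text):

```python
import math

def radiometric_age(parent_fraction, half_life_years):
    """Age of a sample from the fraction of parent isotope remaining.

    Radioactive decay obeys N/N0 = (1/2)**(t / half_life), so
    t = half_life * log2(N0 / N).
    """
    if not 0 < parent_fraction <= 1:
        raise ValueError("parent_fraction must be in (0, 1]")
    return half_life_years * math.log2(1 / parent_fraction)

# A mineral grain retaining half of its original uranium-238
# (half-life ~4.47 billion years) is one half-life old.
age = radiometric_age(0.5, 4.47e9)  # ~4.47 billion years
```

A sample retaining a quarter of its parent isotope is two half-lives old, which is how stratigraphic layers were assigned the absolute ages described above.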

The Precambrian is a time span that includes the Archaean and Proterozoic eons and began about four billion years ago. The Precambrian marks the first formation of continents, the oceans, the atmosphere, and life. The Precambrian represents the oldest chapter in Earth’s history that can still be studied. Very little remains of Earth from the period of 4.6 billion to about four billion years ago due to the melting of rock caused by the early period of meteorite bombardment. Rocks dating from the Precambrian, however, have been found in Africa, Antarctica, Australia, Brazil, Canada, and Scandinavia. Some zircon mineral grains deposited in Australian rock layers have been dated to 4.2 billion years.

The Precambrian is also the longest chapter in Earth’s history, spanning a period of about 3.5 billion years. During this time frame, the atmosphere and the oceans formed from gases that escaped from the hot interior of the planet because of widespread volcanic eruptions. The early atmosphere consisted primarily of nitrogen, carbon dioxide, and water vapour. As Earth continued to cool, the water vapour condensed out and fell as precipitation to form the oceans. Some scientists believe that much of Earth’s water vapour originally came from comets containing frozen water that struck Earth during meteorite bombardment.

By studying 2-billion-year-old rocks found in northwestern Canada, as well as 2.5-billion-year-old rocks in China, scientists have found evidence that plate tectonics began shaping Earth’s surface as early as the middle Precambrian. About a billion years ago, the Earth’s plates were centred around the South Pole and formed a super-continent called Rodinia. Slowly, pieces of this super-continent broke away from the central continent and travelled north, forming smaller continents.

Life originated during the Precambrian. The earliest fossil evidence of life consists of prokaryotes, one-celled organisms that lacked a nucleus and reproduced by dividing, a process known as asexual reproduction. Asexual division meant that a prokaryote’s hereditary material was copied unchanged. The first prokaryotes were bacteria known as archaebacteria. Scientists believe they came into existence perhaps as early as 3.8 billion years ago, but certainly by 3.5 billion years ago, and were anaerobic, that is, they did not require oxygen to produce energy. Free oxygen barely existed in the atmosphere of the early Earth.

Archaebacteria were followed about 3.46 billion years ago by another type of prokaryote known as cyanobacteria, or blue-green algae. These cyanobacteria gradually introduced oxygen into the atmosphere through photosynthesis. In shallow tropical waters, cyanobacteria formed mats that grew into humps called stromatolites. Fossilized stromatolites have been found in rocks in the Pilbara region of western Australia that are more than 3.4 billion years old and in rocks of the Gunflint Chert region of northwest Lake Superior that are about 2.1 billion years old.

The colonization of Australia/New Guinea was not achieved until around 50,000 years ago. Another extension of human range that soon followed was the one into the coldest parts of Eurasia. While Neanderthals lived in glacial times and were adapted to the cold, they penetrated no farther north than Germany and Kiev. That’s not surprising, since Neanderthals apparently lacked needles, sewn clothing, warm houses, and other technological essentials of survival in the coldest climates. Anatomically modern peoples who possessed such technology had expanded into Siberia by around 20,000 years ago (there are the usual much older disputed claims). That expansion may have been responsible for the extinction of Eurasia’s woolly mammoths and woolly rhinoceroses.

With the settlement of Australia/New Guinea, humans now occupied three of the five habitable continents. Antarctica, however, was not reached by humans until the 19th century. That left only two continents, North America and South America. They were surely the last ones settled, for the obvious reason that reaching the Americas from the Old World required either boats (for which there is no evidence even in Indonesia until 40,000 years ago and none in Europe until much later) in order to cross by sea, or else it required the occupation of Siberia (unoccupied until about 20,000 years ago) in order to cross the Bering land bridge.

However, it is uncertain when, between about 14,000 and 35,000 years ago, the Americas were first colonized. The oldest unquestionable human remains in the Americas are at sites in Alaska dated around 12,000 BC, followed by a profusion of sites in the United States south of the Canadian border and in Mexico in the centuries just before 11,000 BC. The latter sites are called Clovis sites, named after the type site near the town of Clovis, New Mexico, where their characteristic large stone spearpoints were first recognized. Hundreds of Clovis sites are now known, blanketing all 48 of the lower U.S. states south into Mexico, and unquestioned sites soon appeared as far south as Patagonia. These facts suggest the interpretation that Clovis sites document the Americas’ first colonization by people, who quickly multiplied, expanded, and filled the two continents.

Nevertheless, it may be all the same that differences between the long-term histories of peoples of the different continents have been due not to innate differences in the peoples themselves but to differences in their environments. That is to say, if the populations of Aboriginal Australia and Eurasia could have been interchanged during the Late Pleistocene, the original Aboriginal Australians would now be the ones occupying most of the Americas and Australia, as well as Eurasia, while the original Aboriginal Eurasians would be the ones now reduced to a downtrodden population fragment in Australia. One might at first be inclined to dismiss this assertion as meaningless, because the experiment is imaginary and its outcome cannot be verified, but historians are nonetheless able to evaluate related hypotheses by retrospective tests. For instance, one can examine what did happen when European farmers were transplanted to Greenland or the U.S. Great Plains, and when farmers stemming ultimately from China emigrated to the Chatham Islands, the rain forests of Borneo, or the volcanic soils of Java or Hawaii. These tests confirm that the same ancestral peoples either ended up extinct, or returned to living as hunter-gatherers, or went on to build complex states, depending on their environments. Similarly, Aboriginal Australian hunter-gatherers, variously transplanted to Flinders Island, Tasmania, or southeastern Australia, ended up extinct or as canal builders intensively managing a productive fishery, depending on their environments.

Of course, the continents differ in innumerable environmental features affecting trajectories of human societies. But a mere laundry list of every possible difference does not constitute an answer. Just four sets of differences appear to be the most important ones.

The first set consists of continental differences in the wild plant and animal species available as starting materials for domestication. That’s because food production was critical for the accumulation of the food surpluses that could feed non-food-producing specialists, and for the buildup of large populations enjoying a military advantage through mere numbers even before they had developed any technological or political advantage.

On each continent, animal and plant domestication was concentrated in a few especially favourable homelands accounting for only a small fraction of the continent’s total area. In the case of technological innovations and political institutions as well, most societies acquire much more from other societies than they invent themselves. Thus diffusion and migration within a continent contribute importantly to the development of its societies, which tend in the long run to share each other’s developments (insofar as environments permit) because of the processes illustrated in miniature by Maori New Zealand’s Musket Wars. That is, societies initially lacking an advantage either acquire it from societies possessing it or (if they fail to do so) are replaced by those other societies.

Even so, for billions of years, life existed only in the simple form of prokaryotes. Prokaryotes were followed by the relatively more advanced eukaryotes, organisms that have a nucleus in their cells and that reproduce by combining or sharing their hereditary makeup rather than by simply dividing. Sexual reproduction marked a milestone in life on Earth because it created the possibility of hereditary variation and enabled organisms to adapt more easily to a changing environment. The latest part of Precambrian time, some 560 million to 545 million years ago, saw the appearance of an intriguing group of fossil organisms known as the Ediacaran fauna. First discovered in the northern Flinders Range region of Australia in the mid-1940s and subsequently found in many locations throughout the world, these strange fossils are the precursors of many fossil groups that were to explode in Earth's oceans in the Paleozoic Era.

At the start of the Paleozoic Era about 543 million years ago, an enormous expansion in the diversity and complexity of life occurred. This event took place in the Cambrian Period and is called the Cambrian explosion. Nothing like it has happened since. Almost all of the major groups of animals we know today made their first appearance during the Cambrian explosion. Almost all of the different ‘body plans’ found in animals today, that is, the way an animal’s body is designed, with heads, legs, rear ends, claws, tentacles, or antennae, also originated during this period.

Fishes first appeared during the Paleozoic Era, and multicellular plants began growing on the land. Other land animals, such as scorpions, insects, and amphibians, also originated during this time. Just as new forms of life were being created, however, other forms of life were going out of existence. Natural selection meant that some species were able to flourish, while others failed. In fact, mass extinctions of animal and plant species were commonplace.

Most of the early complex life forms of the Cambrian explosion lived in the sea. The creation of warm, shallow seas, along with the buildup of oxygen in the atmosphere, may have aided this explosion of life forms. The shallow seas were created by the breakup of the super-continent Rodinia. During the Ordovician, Silurian, and Devonian periods, which followed the Cambrian Period and lasted from 490 million to 354 million years ago, some of the continental pieces that had broken off Rodinia collided. These collisions resulted in larger continental masses in equatorial regions and in the Northern Hemisphere. The collisions built many mountain ranges, including parts of the Appalachian Mountains in North America and the Caledonian Mountains of northern Europe.

Toward the close of the Paleozoic Era, two large continental masses, Gondwanaland to the south and Laurasia to the north, faced each other across the equator. Their slow but eventful collision during the Permian Period of the Paleozoic Era, which lasted from 290 million to 248 million years ago, assembled the super-continent Pangaea and resulted in some of the grandest mountains in the history of Earth. These mountains included other parts of the Appalachians and the Ural Mountains of Asia. At the close of the Paleozoic Era, Pangaea represented more than 90 percent of all the continental landmasses. Pangaea straddled the equator with a huge mouth-like opening that faced east. This opening was the Tethys Ocean, which closed as India moved northward creating the Himalayas. The last remnants of the Tethys Ocean can be seen in today’s Mediterranean Sea.

The Paleozoic ended with a major extinction event, when perhaps as many as 90 percent of all plant and animal species died out. The reason is not known for sure, but many scientists believe that huge volcanic outpourings of lavas in central Siberia, coupled with an asteroid impact, were joint contributing factors.

The most notable of the Mesozoic reptiles, the dinosaurs, first evolved in the Triassic period (240 million to 205 million years ago). The Triassic dinosaurs were not as large as their descendants in later Mesozoic times. They were comparatively slender animals that ran on their hind feet, balancing their bodies with heavy, fleshy tails, and seldom exceeded 4.5 m (15 ft) in length. Other reptiles of the Triassic period included such aquatic creatures as the ichthyosaurs, and a group of flying reptiles, the pterosaurs.

The first mammals also appeared during this period. The fossil remains of these animals are fragmentary, but the animals were apparently small in size and reptilian in appearance. In the sea, Teleostei, the first ancestors of the modern bony fishes, made their appearance. The plant life of the Triassic seas included a large variety of marine algae. On land, the dominant vegetation included various evergreens, such as ginkgos, conifers, and palms. Small scouring rushes and ferns still existed, but the larger members of these groups had become extinct.

The Mesozoic Era is divided into three geological periods: the Triassic, which lasted from 248 million to 206 million years ago; the Jurassic, from 206 million to 144 million years ago; and the Cretaceous, from 144 million to 65 million years ago. The dinosaurs emerged during the Triassic Period and were among the most successful animals in Earth’s history, lasting for about 180 million years before going extinct at the end of the Cretaceous Period. The first mammals and the first flowering plants also appeared during the Mesozoic Era. Before flowering plants emerged, plants with seed-bearing cones known as conifers were the dominant form of vegetation. Flowering plants soon replaced conifers as the dominant form of vegetation during the Mesozoic Era.

The Mesozoic was an eventful era geologically, with many changes to Earth’s surface. Pangaea continued to exist for another 50 million years during the early Mesozoic Era. By the early Jurassic Period, Pangaea began to break up. What is now South America began splitting from what is now Africa, and in the process the South Atlantic Ocean formed. As the landmass that became North America drifted away from Pangaea and moved westward, a long subduction zone extended along North America’s western margin. This subduction zone and the accompanying arc of volcanoes extended from what is now Alaska to the southern tip of South America. A great deal of this feature, called the American Cordillera, exists today as the eastern margin of the Pacific Ring of Fire.

During the Cretaceous Period, heat continued to be released from the margins of the drifting continents, and as they slowly sank, vast inland seas formed in much of the continental interiors. The fossilized remains of fishes and marine mollusks called ammonites can be found today in the middle of the North American continent because these areas were once underwater. Large continental masses broke off the northern part of southern Gondwanaland during this period and began to narrow the Tethys Ocean. The largest of these continental masses, present-day India, moved northward toward its collision with southern Asia. As both the North Atlantic Ocean and South Atlantic Ocean continued to open, North and South America became isolated continents for the first time in 450 million years. Their westward journey resulted in mountains along their western margins, including the Andes of South America.

The Cenozoic Era, beginning about 65 million years ago, is the period when mammals became the dominant form of life on land. Human beings first appeared in the later stages of the Cenozoic Era. In short, the modern world as we know it, with its characteristic geographical features and its animals and plants, came into being. All of the continents that we know today took shape during this era.

A single catastrophic event may have been responsible for this relatively abrupt change from the Age of Reptiles to the Age of Mammals. Most scientists now believe that a huge asteroid or comet struck the Earth at the end of the Mesozoic and the beginning of the Cenozoic eras, causing the extinction of many forms of life, including the dinosaurs. Evidence of this collision came with the discovery of a large impact crater off the coast of Mexico’s Yucatán Peninsula and the worldwide finding of iridium, a metallic element rare on Earth but abundant in meteorites, in rock layers dated from the end of the Cretaceous Period. The extinction of the dinosaurs opened the way for mammals to become the dominant land animals.

The Cenozoic Era is divided into the Tertiary and the Quaternary periods. The Tertiary Period lasted from about 65 million to about 1.8 million years ago. The Quaternary Period began about 1.8 million years ago and continued to the present day. These periods are further subdivided into epochs, such as the Pleistocene, from 1.8 million to 10,000 years ago, and the Holocene, from 10,000 years ago to the present.

Early in the Tertiary Period, Pangaea was completely disassembled, and the modern continents were all clearly outlined. India and other continental masses began colliding with southern Asia to form the Himalayas. Africa and a series of smaller micro-continents began colliding with southern Europe to form the Alps. The Tethys Ocean was nearly closed and began to resemble today’s Mediterranean Sea. As the Tethys continued to narrow, the Atlantic continued to open, becoming an ever-wider ocean. Iceland appeared as a new island in later Tertiary time, and its active volcanism today indicates that sea-floor spreading is still causing the country to grow.

Late in the Tertiary Period, about six million years ago, humans began to evolve in Africa. These early humans began to migrate to other parts of the world between two million and 1.7 million years ago.

The Quaternary Period marks the onset of the great ice ages. Many times, perhaps at least once every 100,000 years on average, vast glaciers 3 km (2 mi) thick invaded much of North America, Europe, and parts of Asia. The glaciers eroded considerable amounts of material that stood in their paths, gouging out U-shaped valleys. Anatomically modern human beings, known as Homo sapiens, became the dominant form of life in the Quaternary Period. Most anthropologists (scientists who study human life and culture) believe that anatomically modern humans originated only recently in Earth’s 4.6-billion-year history, within the past 200,000 years.

With the rise of human civilization about 8,000 years ago and especially since the Industrial Revolution in the mid 1700s, human beings began to alter the surface, water, and atmosphere of Earth. In doing so, they have become active geological agents, not unlike other forces of change that influence the planet. As a result, Earth’s immediate future depends largely on the behaviour of humans. For example, the widespread use of fossil fuels is releasing carbon dioxide and other greenhouse gases into the atmosphere and threatens to warm the planet’s surface. This global warming could melt glaciers and the polar ice caps, which could flood coastlines around the world and many island nations. In effect, the carbon dioxide removed from Earth’s early atmosphere by the oceans and by primitive plant and animal life, and subsequently buried as fossilized remains in sedimentary rock, is being released back into the atmosphere and is threatening the existence of living things.

Even without human intervention, Earth will continue to change because it is geologically active. Many scientists believe that some of these changes can be predicted. For example, based on studies of the rate that the sea-floor is spreading in the Red Sea, some geologists predict that in 200 million years the Red Sea will be the same size as the Atlantic Ocean is today. Other scientists predict that the continent of Asia will break apart millions of years from now, and as it does, Lake Baikal in Siberia will become a vast ocean, separating two landmasses that once made up the Asian continent.

In the far, far distant future, however, scientists believe that Earth will become an uninhabitable planet, scorched by the Sun. Knowing the rate at which nuclear fusion occurs in the Sun and knowing the Sun’s mass, astrophysicists (scientists who study stars) have calculated that the Sun will become brighter and hotter about three billion years from now, when it will be hot enough to boil Earth’s oceans away. Based on studies of how other Sun-like stars have evolved, scientists predict that the Sun will become a red giant, a star with a very large, hot atmosphere, about seven billion years from now. As a red giant the Sun’s outer atmosphere will expand until it engulfs the planet Mercury. The Sun will then be 2,000 times brighter than it is now and so hot it will melt Earth’s rocks. Earth will end its existence as a burnt cinder.

Three billion years is the life span of millions of human generations, however. Perhaps by then, humans will have learned how to journey beyond the solar system to colonize other planets in the Milky Way Galaxy and find other places to call ‘home’.

The dinosaurs were a group of extinct reptiles that lived from about 230 million to about 65 million years ago. British anatomist Sir Richard Owen coined the word dinosaur in 1842, derived from the Greek words deinos, meaning ‘marvellous’ or ‘terrible’, and sauros, meaning ‘lizard’. For more than 140 million years, dinosaurs reigned as the dominant animals on land.

Owen distinguished dinosaurs from other prehistoric reptiles by their upright rather than sprawling legs and by the presence of three or more vertebrae supporting the pelvis, or hipbone. Paleontologists classify dinosaurs into two orders according to differences in pelvic structure: Saurischia, or lizard-hipped dinosaurs, and Ornithischia, or bird-hipped dinosaurs. Dinosaur bones occur in sediments deposited during the Mesozoic Era, the so-called era of middle animals, also known as the age of reptiles. This era is divided into three periods: the Triassic (240 million to 205 million years ago), the Jurassic (205 million to 138 million years ago), and the Cretaceous (138 million to 65 million years ago).

Historical references to dinosaur bones may extend as far back as the 5th century BC. Some scholars think that Greek historian Herodotus was referring to fossilized dinosaur skeletons and eggs when he described griffins, legendary beasts that were part eagle and part lion, guarding nests in central Asia. ‘Dragon bones’ mentioned in a 3rd century AD text from China are thought to refer to bones of dinosaurs.

The first dinosaurs studied by paleontologists (scientists who study prehistoric life) were Megalosaurus and Iguanodon, whose partial bones were discovered early in the 19th century in England. The shape of their bones shows that these animals resembled large, land-dwelling reptiles. The teeth of Megalosaurus, which are pointed and have serrated edges, suggest that this animal was a flesh eater, while the flattened, grinding surfaces of Iguanodon teeth suggest that it was a plant eater. Megalosaurus lived during the Jurassic Period, and Iguanodon lived during the early part of the Cretaceous Period. Later in the 19th century, paleontologists collected and studied more comprehensive skeletons of related dinosaurs found in New Jersey. From these finds they learned that Megalosaurus and Iguanodon walked on two legs, not four, as had been thought.

Some ornithischians quickly became quadrupedal (four-legged) and relied on body armour and other physical defences rather than fleetness for protection. Plated dinosaurs, such as the massive Stegosaurus of the late Jurassic Period, bore a double row of triangular bony plates along their backs. These narrow plates contained tunnels through which blood vessels passed, allowing the animals to radiate excess body heat or to warm themselves in the sun. Many also bore a large spined plate over each shoulder. Stegosaurs resembled gigantic porcupines, and they probably defended themselves by turning their spined tails toward aggressors.

During the Cretaceous Period, stegosaurs were supplanted by armoured dinosaurs such as Ankylosaurus. These animals were similar in size to stegosaurs but otherwise resembled giant horned toads. Some even possessed a bony plate in each eyelid and large tail clubs. Their necks were protected by heavy, bony rings and spines, showing that these areas needed protection from the attacks of carnivorous dinosaurs.

The reptiles were still the dominant form of animal life in the Cretaceous Period (138 million to 65 million years ago). The four types of dinosaurs found in the Jurassic also lived during this period, and a fifth type, the horned dinosaurs, appeared. By the end of the Cretaceous, about 65 million years ago, all these creatures had become extinct. The largest of the pterodactyls lived during this period; pterodactyl fossils discovered in Texas have wingspreads of up to 15.5 m (50 ft). Other reptiles of the period include the first snakes and lizards. Several types of Cretaceous birds have been discovered, including Hesperornis, a diving bird about 1.8 m (about 6 ft) in length, which had only vestigial wings and was unable to fly. Mammals of the period included the first marsupials, which strongly resembled the modern opossum, and the first placental mammals, which belonged to the group of insectivores. The first crabs developed during this period, and several modern varieties of fish also evolved.

The most important evolutionary advance in the plant kingdom during the Cretaceous Period was the development of deciduous plants, the earliest fossils of which appear in early Cretaceous rock formations. By the end of the period, many modern varieties of trees and shrubs had made their appearance, representing more than 90 percent of the known plants of the period. Mid-Cretaceous fossils include remains of beech, holly, laurel, maple, oak, plane tree, and walnut. Some paleontologists believe that these deciduous woody plants first evolved in Jurassic times but grew only in upland areas, where conditions were unfavourable for fossil preservation.

During the Cretaceous, ornithopods became the most abundant plant-eating dinosaurs. They ranged in size from small runners that were 2 m (6 ft) long and weighed 15 kg (33 lb), such as Hypsilophodon, to elephantine animals that were 10 m (32 ft) long and weighed 4 metric tons, such as Edmontosaurus.
These animals had flexible jaws and grinding teeth that eventually rivalled those of modern cattle in their suitability for chewing fibrous plants. The beaks of ornithopods became broader, earning them the name duck-billed dinosaur. Their tooth batteries became larger, their backs became stronger, and their forelimbs lengthened until their arms could serve as walking supports, although ornithopods remained bipedal. The nose supported cartilaginous sacs or bony tubes, suggesting that these dinosaurs may have communicated by trumpeting. Fossil evidence from the late Cretaceous Period includes extensive accumulations of bones from ornithopods drowned in floods, indicating that duck-billed dinosaurs often migrated in herds of thousands. A few superbly preserved Edmontosaurus skeletons encased within impressions of skin have been discovered in southeastern Wyoming.

Pachycephalosaurs were small bipedal ornithischians with thickened skulls, flattened bodies, and tails surrounded by a latticework of bony rods. In many of these dinosaurs, such as Pachycephalosaurus (a large specimen could reach 8 m, or 26 ft, in length), the skull was capped by a rounded dome of solid bone. Some paleontologists suggest that males may have borne the thickest domes and butted heads during mating contests. Eroded pachycephalosaur domes are often found in stream deposits from late in the Cretaceous Period.

The quadrupedal ceratopsians, or horned dinosaurs, typically bore horns over the nose and eyes, and had a saddle-shaped bony frill that extended from the skull over the neck. These bony frills were well developed in the late Cretaceous Triceratops, a dinosaur that could reach lengths of up to 8 m (26 ft) and weigh more than 12 metric tons. The frill served two purposes: It protected the vulnerable neck, and it contained a network of blood vessels on its undersurface to radiate excess heat. Large accumulations of fossil bones suggest that ceratopsians lived in herds.

Controversy surrounds the extinction of the dinosaurs. According to one theory, dinosaurs were slowly driven to extinction by environmental changes linked to the gradual withdrawal of shallow seas from the continents at the end of the dinosaurian era. Proponents of this theory postulate that dinosaurs dwindled in number and variety over several million years.

An opposing theory proposes that the impact of an asteroid or comet caused catastrophic destruction of the environment, leading to the extinction of the dinosaurs. Evidence to support this theory includes the discovery of a buried impact crater (thought to be the result of a large comet striking the earth) that is 200 km (124 mi) in diameter in the Yucatán Peninsula of Mexico. A spray of debris, called an ejecta sheet, which was blown from the edge of the crater, has been found over vast regions of North America. Comet-enriched material from the impact’s fiery explosion was distributed all over the world. With radiometric dating, scientists have used the decay rates of certain elements to date the crater, ejecta sheet, and fireball layer. Using similar techniques to date the dramatic changes in the record of microscopic fossils, they have found that the impact and the dinosaur extinction occurred nearly simultaneously.
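Radiometric dating rests on exponential decay: a radioactive parent isotope decays at a fixed rate set by its half-life, so the fraction of parent isotope remaining in a mineral fixes its age. A minimal sketch of the arithmetic; the potassium-40 figures below are standard textbook values, not taken from this article:

```python
import math

def age_from_fraction(parent_fraction, half_life):
    """Age of a sample given the fraction of parent isotope remaining.

    From N(t) = N0 * 2**(-t / half_life), solving for t gives
    t = half_life * log2(N0 / N).
    """
    return half_life * math.log2(1.0 / parent_fraction)

# Example: potassium-40 has a half-life of ~1.25 billion years.
# A mineral retaining ~96.46% of its original K-40 dates to
# roughly 65 million years -- the end of the Cretaceous.
half_life_k40 = 1.25e9  # years
age = age_from_fraction(0.9646, half_life_k40)
print(round(age / 1e6), "million years")  # -> 65 million years
```

In practice geochronologists measure the ratio of parent to daughter isotopes rather than the raw remaining fraction, but the exponential relationship is the same.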

Although large amounts of ash suggest that most of North and South America was devastated by fire from the impact, the longer-term planetwide environmental effects of the impact were ultimately more lethal to life than the fire. Dust blocked sunlight from the earth’s surface for many months. Scorched sulfur from the impact site, water vapour and chlorine from the oceans, and nitrogen from the air combined to produce a worldwide fallout of intensely acidic rain. Scientists postulate that darkness and acid rain caused plant growth to cease. As a result, both the herbivorous dinosaurs, which were dependent on plants for food, and the carnivorous dinosaurs, which fed on the herbivores, were exterminated. On the other hand, animals such as frogs, lizards, and small insect-eating turtles and mammals, which were dependent on organisms that fed on decaying plant material, were more likely to survive. Their survival indicates that, in most areas, the surface of Earth did not freeze.

Fossilized dinosaur remains are usually buried in sediments deposited on land. These remains are likely to be found in regions where the silt and sands spread by rivers of the Mesozoic Era are exposed. Fossils are easier to find in arid badlands: rugged, rocky areas with little vegetation, where the sediments are not covered by soil. The excavation of large skeletal fossils involves painstaking procedures to protect the fossils from damage. Fewer than 3,000 dinosaur specimens have been collected to date, and complete skeletons are known for only 50 of the 350 known varieties of dinosaurs. Probably less than 10 percent of the varieties of dinosaurs that once lived have been identified.

The shape of dinosaur bones provides clues to how these animals interacted with each other. These bones also reveal information about body form, weight, and posture. Surface ridges and hollows on bones indicate the strength and orientation of muscles, and rings within the bones indicate growth rates. Diseased, broken, and bitten bones bear witness to the hazards of life during the dinosaurian age. Cavities in bones reflect the shape of the brain, spinal cord, and blood vessels. Delicate ossicles, or small bony structures in the skull, reveal the shape of the eyeball and its pupil. The structure of the skull and fossilized contents of the abdominal region provide clues to diet.

Organic molecules are also preserved within bones in trace quantities. By studying isotopes of atoms within these molecules, scientists can gather evidence about body-heat flow and about the food and water consumed by dinosaurs. Impressions in sediment depict skin texture and foot shape, and trackways provide evidence about speed and walking habits.

A 113-million-year-old fossil called Scipionyx samniticus, discovered in southern Italy in the late 1980s, is the first fossil identified that clearly shows the structure and placement of internal organs, including the intestines, colon, liver, and muscles. The fossilized internal organs of Scipionyx samniticus give paleontologists information about how dinosaurs metabolized their food, and other general information about dinosaurs.

Beginning in the late 19th century, the field of paleontology grew as scientific expeditions to find fossil remains became more frequent. American paleontologist Othniel Charles Marsh and his collectors explored the western United States for dinosaurian remains. They identified many genera that have since become household names, including Stegosaurus and Triceratops. In the early part of the 20th century, American paleontologists Barnum Brown and Charles Sternberg demonstrated that the area now known as Dinosaur Provincial Park in Alberta, Canada, is the richest site for dinosaur remains in the world. Philanthropist Andrew Carnegie sponsored excavations in the great Jurassic quarry-pits in Utah, which subsequently became Dinosaur National Monument. Beginning in 1922, explorer Roy Chapman Andrews led expeditions to Mongolia that resulted in the discovery of dinosaur eggs. More recently, Luis Alvarez, a particle physicist and Nobel laureate, and his son, geologist Walter Alvarez, discovered evidence of an asteroid or comet impact that coincided with the extinction of the dinosaurs. Among foreign scholars, German paleontologist Werner Janensch, beginning in 1909, led well-organized dinosaur-collecting expeditions to German East Africa (modern Tanzania), where the complete skeletal anatomy of the gigantic Brachiosaurus was documented.

One of the most important fossil-rich sites is located in China. In a small town about 400 km (about 250 mi) northeast of Beijing, there is a fossil formation called the Yixian formation, which has yielded many fossilized specimens of primitive birds and bird-like dinosaurs, including soft parts such as feathers and fur. Some scientists believe these fossils provide evidence that birds may have evolved from dinosaurs. Among the recent finds in the Yixian formation is an eagle-sized animal with barracuda-like teeth and very long claws named Sinornithosaurus millenii. Although this dinosaur could not fly, it did have a shoulder-blade structure that allowed a wide range of arm motion similar to flapping. Featherlike structures covered most of the animal’s body.

Another important dinosaur discovery, made in 1993, strengthens the evolutionary relationship between dinosaurs and birds. A 14-year-old boy who was hunting for fossils near Glacier National Park in northern Montana found a nearly complete skeleton of a small dinosaur, later named Bambiraptor feinbergi. The fossil is of a juvenile dinosaur only 1 m (3 ft) long, with a body that resembles that of a roadrunner. It has several physical features similar to those of early birds, including long, winglike arms, bird-like shoulders, and a wishbone. Some scientists propose that Bambiraptor feinbergi may be a type of dinosaur similar to those from which birds evolved. Other scientists believe that the animal lived too late in time to be ancestral to birds, while still others hypothesize that birds may have evolved from dinosaurs more than once in evolutionary time.

Argentina is another area rich in fossils. In 1995 a local auto mechanic in Neuquén, a province on the eastern slopes of the Andes in Argentina, found the fossils of Giganotosaurus, a meat-eating dinosaur that may have reached a length of more than 13 m (43 ft). Five years later, in a nearby location, a team of researchers unearthed the bones of what could be the largest meat-eating dinosaur yet found. The newly discovered species is related to Giganotosaurus, but it was larger, reaching a length of 14 m (45 ft). This dinosaur was heavier and had shorter legs than Tyrannosaurus rex. The fossilized bones indicate that the dinosaur’s jaw was shaped like scissors, suggesting it used its teeth to dissect prey.

In early 2000, scientists used X-rays to view the chest cavity of a dinosaur fossil found in South Dakota. Computerized three-dimensional imaging revealed the remains of what is thought to be the first example of a dinosaur heart ever discovered. The heart appears to contain four chambers with a single aorta, a structure that more closely resembles the heart of a bird or mammal than the heart of any living reptile. The structure of the heart suggests that the dinosaur may have had a high metabolic rate that is more like that of an active warm-blooded animal than that of a cold-blooded reptile.

Many unusual dinosaur fossils found in the Sahara in northern Africa might be related to dinosaur fossils discovered in South America, indicating that the two continents were connected through most of the dinosaurian period. These findings, along with other studies of the environments of dinosaurs and the plants and animals in their habitats, help scientists learn how the world of dinosaurs resembled and differed from the modern world.

The ancestors of dinosaurs were crocodile-like creatures called archosaurs. They appeared early in the Triassic Period and diversified into a variety of forms popularly known as the thecodont group of reptiles. Many of these creatures resembled later Cretaceous dinosaurs. Some archosaurs led to true crocodiles. Others produced pterosaurs, flying reptiles that possessed slender wings supported by a single spar-like finger. Still other archosaurs adopted a bipedal (two-legged) posture and developed S-shaped necks, and it was certain species of these reptiles that eventually evolved into dinosaurs.

Fossil evidence of the earliest dinosaurs dates from about 230 million years ago. This evidence, found in Madagascar in 1999, consists of bones of an animal about the size of a kangaroo. This dinosaur was a type of saurischian and was a member of the plant-eating prosauropods, which were related to ancestors of the giant, long-necked sauropods that included the Apatosaurus. Before this discovery, the earliest known dinosaur on record was the Eoraptor, which lived 227 million years ago. Discovered in Argentina in 1992, the Eoraptor was an early saurischian, 1 m (3 ft) long, with a primitive skull.

Scientists have identified the isolated bones and teeth of a few tiny dinosaurs representing ornithischians dating from the beginning of the Jurassic Period, around 205 million years ago. By the middle of the Jurassic Period, around 180 million years ago, most of the basic varieties of saurischian and ornithischian dinosaurs had appeared, including some that far surpassed modern elephants in size. Dinosaurs had evolved into the most abundant large animals on land, and the dinosaurian age had begun.

Earth’s environment during the dinosaurian era was far different from what it is today. The days were somewhat shorter than they are today because the gravitational pull of the Sun and the Moon has had a braking influence on Earth’s rotation over time. Radiation from the Sun was not as strong as it is today because the Sun has been slowly brightening over time.

Other changes in the environment may be linked to the atmosphere. Carbon dioxide, a gas that traps heat from the Sun in Earth’s atmosphere (the so-called greenhouse effect), was several times more abundant in the air during the dinosaurian age. As a result, surface temperatures were warmer and no polar ice caps could form.

The pattern of continents and oceans was also very different during the age of dinosaurs. At the beginning of the dinosaurian era, the continents were united into a gigantic super-continent called Pangaea (all lands), and the oceans formed a vast world ocean called Panthalassa (all seas). About 200 million years ago, movements of Earth’s crust caused the super-continent to begin slowly separating into northern and southern continental blocks, which broke apart further into the modern continents by the end of the dinosaurian era.

Because of these movements of Earth’s crust, there was less land in equatorial regions than there is at present. Deserts, possibly produced by the warm, greenhouse atmosphere, were widespread across equatorial land, and the tropics were not as rich an environment for life forms as they are today. Plants and animals may have flourished instead in the temperate zones north and south of the equator.

The most obvious differences between dinosaurian and modern environments are the types of life forms present. There were fewer than half as many species of plants and animals on land during the Mesozoic Era as there are today. Bushes and trees appear to have provided the most abundant sources of food for dinosaurs, rather than the rich grasslands that feed most animals today. Although flowering plants appeared during the dinosaurian era, few of them bore nuts or fruit.

The animals of the period had slower metabolisms and smaller brains, suggesting that the pace of life was relatively languid and behaviour was simple. The more active animals, such as ants, wasps, and mammals, first made their appearance during the dinosaurian era but were not as abundant as they are now.

The behaviour of dinosaurs was governed by their metabolism and by their central nervous system. The dinosaurs’ metabolism-the internal activities that supply the body’s energy needs-affected their activity level. It is unclear whether dinosaurs were purely endothermic (warm-blooded), like modern mammals, or ectothermic (cold-blooded), like modern reptiles. Endotherms regulate their body temperature internally by means of their metabolism, rather than by using the temperature of their surroundings. As a result, they have higher activity levels and higher energy needs than ectotherms. Ectotherms have a slower metabolism and regulate their body temperature by means of their behaviour, taking advantage of external temperature variations by sunning themselves to stay warm and resting in the shade to cool down. By determining whether dinosaurs were warm or cold-blooded, paleontologists could discover whether dinosaurs behaved more like modern mammals or more like modern reptiles.

Gradual changes in dinosaur anatomy suggest that the metabolic rates and activity levels of dinosaurs increased as they evolved, and some scientists believe this indicates that dinosaurs became progressively more endothermic. Overall, dinosaur body size decreased throughout the latter half of the dinosaurian era, increasing the dinosaurs’ need for activity and a higher metabolism to maintain warmth. Smaller animals have more surface area in proportion to their volume, which causes them to lose more heat as it radiates from their skin. Well-preserved fossils show that many small dinosaurs were probably covered with hair or feather-like fibres. Dinosaurs’ tooth batteries (many small teeth packed together) became larger, enabling them to chew their food more efficiently; their breathing passages became separated from their mouth cavity, allowing them to chew and breathe at the same time; and their nostrils became larger, making their breathing more efficient. These changes may have helped the dinosaurs digest their food and convert it into energy more quickly and efficiently, thereby helping them maintain a higher metabolism.
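The heat-loss claim above follows from simple geometry: surface area grows with the square of an animal's linear size while volume grows with the cube, so the surface-to-volume ratio shrinks as animals get bigger. A sketch of the scaling, using spheres as stand-in body shapes (the radii are illustrative, not measurements from the article):

```python
import math

def surface_to_volume(radius):
    """Surface-area-to-volume ratio of a sphere.

    (4 * pi * r**2) / ((4/3) * pi * r**3) simplifies to 3 / r,
    so the ratio is inversely proportional to linear size.
    """
    area = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return area / volume

small = surface_to_volume(0.25)  # a small dinosaur, ~0.5 m across
large = surface_to_volume(2.5)   # a large dinosaur, ~5 m across
print(small / large)  # the small animal has 10x the relative surface area
```

A tenfold increase in linear size means a tenfold drop in relative surface area, which is why small animals radiate heat away much faster and need a higher metabolism (or insulation such as feather-like fibres) to stay warm.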

The central nervous system of dinosaurs affected their behavioural flexibility, that is, how much they could adapt their behaviours to deal with changing situations. Scientists believe that the ratio of dinosaurs’ brain size to their body weight increased as the animals evolved. As a result, their behavioural flexibility increased from a level comparable to that of modern crocodiles in primitive dinosaurs to a level comparable to that of modern chickens and opossums in some small Cretaceous dinosaurs.

Imprints of the skin of large dinosaurs show that the skin had a textured surface without hair or feathers. The eyes of dinosaurs were about twice the diameter of those of modern mammals. The skeleton of one small dinosaur was found preserved in windblown sand. Its head was tucked next to its forelimbs, resembling the posture of a modern bird, and its tail was wrapped around its body, resembling the posture of a cat.

Many, if not all, dinosaurs laid eggs, and extensive deposits of whole and fragmented shells have been found in China, India, and Argentina, suggesting that large nesting colonies were common. A very few eggs have been identified from the skeletons of embryos contained within them. In proportion to the body weight of the mother, dinosaurs laid smaller eggs in greater numbers than birds do. Scientists have found what they believe is a typical nest dug into Cretaceous streamside clays in Montana. The nest is a craterlike structure about 2 m (6.6 ft) in diameter, thought to be about the diameter of the mother’s body.

The large number of bones of small dinosaurs that have been found in nesting colonies indicates that the mortality rate of juveniles was very high. The growth rings preserved in dinosaur bones suggest that primitive dinosaurs grew more slowly than later dinosaurs. The growth rings in some giant dinosaurs suggest that these dinosaurs may have grown to adulthood rapidly and had shorter life spans than some large modern turtles, such as the giant tortoise, which can live 200 years in captivity.

Saurischian dinosaurs were characterized by a primitive pelvis, with a single bone projecting down and back from each side of the hips. This pelvis construction was similar to that of other ancient reptiles but, unlike other reptiles, saurischians had stronger backbones, no claws on their outer front digits, and forelimbs that were usually much shorter than the hind limbs. There were three basic kinds of saurischians: theropods, prosauropods, and sauropods.

Nearly all theropods were bipedal flesh eaters. Some theropods, such as Tyrannosaurus of the late part of the Cretaceous Period, reached lengths of 12 m (39 ft) and weights of 5 metric tons. In large theropods the huge jaws and teeth were adapted to tearing prey apart. Fossil trackways reveal that these large theropods walked more swiftly than large plant-eating dinosaurs and were more direct and purposeful in their movements. Other theropods, such as Compsognathus, were small and gracefully built, resembling modern running birds such as the roadrunner. Their heads were slender and often beaked, suggesting that these theropods fed on small animals such as lizards and infant dinosaurs. Some of them possessed brains as large as those of modern chickens and opossums.

Other theropods, called raptors, bore powerful claws, like those of an eagle, on their hands and feet and used their flexible tails as balancing devices to increase their agility when turning. These animals appear to have hunted in packs. Many paleontologists believe that birds may have arisen from small, primitive theropods that were also ancestors of the raptors. Evidence for this theory has been augmented by the discovery of an Oviraptor nest in the Gobi Desert. The nest contains the fossil bones of an Oviraptor sitting on its brood of about fifteen eggs, exhibiting behaviour remarkably similar to that of modern birds.

Unlike the primitive theropods, the prosauropods had relatively small skulls and spoon-shaped, rather than blade-shaped, teeth. Their necks were long and slender and, because they were bipedal, the prosauropods could browse easily on the foliage of bushes and trees that was well beyond the reach of other herbivores. A large, hook-like clawed thumb was probably used to grasp limbs while feeding. The feet were broad and heavily clawed. When prosauropods appeared in the fossil record along with the earliest known theropods, they had already reached lengths of 3 m (10 ft). By the end of the Triassic Period, the well-known Plateosaurus had attained a length of 9 m (30 ft) and a weight of 1.8 metric tons. During the late Triassic and early Jurassic periods, prosauropods were the largest plant-eating dinosaurs.

Sauropods, which include giants such as Apatosaurus (formerly known as Brontosaurus) and Diplodocus, descended from prosauropods. By the middle of the Jurassic Period they had far surpassed all other dinosaurs in size and weight. Some sauropods probably reached lengths of more than 25 m (82 ft) and weighed about 90 metric tons. These dinosaurs walked on four pillar-like legs. Their feet usually bore claws on the inner toes, although they otherwise resembled the feet of an elephant. The sauropod backbone was hollow and filled with air sacs similar to those in a bird’s vertebrae, and the skull was small in proportion to the animal’s size. The food they ate was ground by stones in their gizzard, a part of their digestive tract. Indeed, sauropods may be compared with gigantic elephants, with the sauropods’ long necks performing the function of an elephant’s trunk and their gizzard stones acting as the strong teeth of an elephant. Some sauropods, such as the late Jurassic Apatosaurus, used their long, thin tails as whips for defence, while others used their tails as clubs.

In ancestral ornithischians the bony structure projecting down and back from each side of the hips was composed of two bones, so that their hips superficially resembled the hips of birds. Early ornithischians were small bipedal plant eaters, about 1 m (3 ft) in length. These animals led to five kinds of descendants: stegosaurs, ankylosaurs, ornithopods, pachycephalosaurs, and ceratopsians.

Paleontology is the study of prehistoric animal and plant life through the analysis of fossil remains. The study of these remains enables scientists to trace the evolutionary history of extinct and living organisms. Paleontologists also play a major role in unravelling the mysteries of the earth's rock strata (layers). Using detailed information on how fossils are distributed in these layers of rock, paleontologists help prepare accurate geologic maps, which are essential in the search for oil, water, and minerals.

Most people did not understand the true nature of fossils until the beginning of the 19th century, when the basic principles of modern geology were established. Since about 1500, scholars had engaged in a bitter controversy over the origin of fossils. One group held that fossils are the remains of prehistoric plants and animals. This group was opposed by another, which declared that fossils were either freaks of nature or creations of the devil. During the 18th century, many people believed that all fossils were relics of the great flood recorded in the Bible.

Paleontologists gain most of their information by studying deposits of sedimentary rocks that formed in strata over millions of years. Most fossils are found in sedimentary rock. Paleontologists use fossils and other qualities of the rock to compare strata around the world. By comparing, they can determine whether strata developed during the same time or in the same type of environment. This helps them assemble a general picture of how the earth evolved. The study and comparison of different strata are called stratigraphy.

Fossils provide most of the data on which strata are compared. Some fossils, called index fossils, are especially useful because they have a broad geographic range but a narrow temporal one; that is, they represent a species that was widespread but existed for only a brief period. The best index fossils tend to be marine creatures. These animals evolved rapidly and spread over large areas of the world. Paleontologists divide the last 570 million years of the earth's history into eras, periods, and epochs. The part of the earth's history before about 570 million years ago is called Precambrian time, which began with the earth's birth, probably more than four billion years ago.
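Correlating strata by index fossils amounts to intersecting the fossil assemblages found in different layers: two distant strata that share a short-lived, widespread species are likely the same age. A toy sketch of the idea; the species names and strata are hypothetical, not taken from the article:

```python
# Hypothetical fossil assemblages from two distant outcrops.
stratum_a = {"Trilobite sp. X", "Graptolite sp. Y", "Crinoid sp. Z"}
stratum_b = {"Graptolite sp. Y", "Brachiopod sp. Q"}

# Species known to be widespread but short-lived (good index fossils).
index_fossils = {"Graptolite sp. Y"}

# Strata correlate if they share at least one index fossil.
shared = stratum_a & stratum_b & index_fossils
if shared:
    print("Strata likely correlate; shared index fossils:", sorted(shared))
else:
    print("No shared index fossils; correlation uncertain.")
```

Sharing a long-lived species (a poor index fossil) would prove little, which is why only the intersection with known index fossils counts as evidence of correlation.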

The earliest evidence of life consists of microscopic fossils of bacteria that lived as early as 3.6 billion years ago. Most Precambrian fossils are very tiny. Most species of larger animals that lived in later Precambrian time had soft bodies, without shells or other hard body parts that would create lasting fossils. The first abundant fossils of larger animals date from about 600 million years ago.

Next came the Paleozoic era, which lasted for about 330 million years. It includes the Cambrian, Ordovician, Silurian, Devonian, Carboniferous, and Permian periods. Index fossils of the first half of the Paleozoic era are those of invertebrates, such as trilobites, graptolites, and crinoids. Remains of plants and of such vertebrates as fish and reptiles make up the index fossils of the second half of this era.

At the beginning of the Cambrian period (570 million to 500 million years ago) animal life was entirely confined to the seas. By the end of the period, all the phyla of the animal kingdom existed, except vertebrates. The characteristic animals of the Cambrian period were the trilobites, a primitive form of arthropod, which reached their fullest development in this period and became extinct by the end of the Paleozoic era. The earliest snails appeared in this period, as did the cephalopod mollusks. Other groups represented in the Cambrian period were brachiopods, bryozoans, and Foraminifera. Plants of the Cambrian period included seaweeds in the oceans and lichens on land.

The most characteristic animals of the Ordovician period (500 million to 435 million years ago) were the graptolites, which were small, colonial hemichordates (animals possessing an anatomical structure suggesting part of a spinal cord). The first vertebrates (primitive fish) and the earliest corals emerged during the Ordovician period. The largest animal of this period was a cephalopod mollusk that had a shell about 3 m (about 10 ft) in length. Plants of this period resembled those of the Cambrian period.

The most important evolutionary development of the Silurian period (435 million to 410 million years ago) was that of the first air-breathing animal, a scorpion. Fossils of this creature have been found in Scandinavia and Great Britain. The first fossil records of vascular plants (that is, land plants with tissue that carries food) appeared in the Silurian period. They were simple plants that had not developed separate stems and leaves.

The dominant forms of animal life in the Devonian period (410 million to 360 million years ago) were fish of various types, including sharks, lungfish, armoured fish, and primitive forms of ganoid (hard-scaled) fish that were probably the evolutionary ancestors of amphibians. Fossil remains found in Pennsylvania and Greenland indicate that early forms of amphibia may already have existed during the Devonian period. Early animal forms included corals, starfish, sponges, and trilobites. The earliest known insect was found in Devonian rock.

The Devonian is the first period from which any considerable number of fossilized plants have been preserved. During this period, the first woody plants developed, and by the end of the period, land-growing forms included seed ferns, ferns, scouring rushes, and scale trees, the modern relatives of which are the club mosses. Although the present-day equivalents of these groups are mostly small plants, they developed into treelike forms in the Devonian period. Fossil evidence indicates that forests existed in Devonian times, and petrified stumps of some larger plants from the period measure about 60 cm (about 24 in) in diameter.

The Carboniferous period lasted from 360 million to 290 million years ago. During the first part of this period, sometimes called the Mississippian period (360 million to 330 million years ago), the seas contained a variety of echinoderms and foraminifera, along with most of the forms of animal life that had appeared in the Devonian. A group of sharks, the Cestraciontes-or shell-crushers-were dominant among the larger marine animals. The predominant group of land animals was the Stegocephalia, an order of primitive, lizard-like amphibians that developed from the lungfish. The various forms of land plants became diversified and grew larger, particularly those that grew in low-lying swampy areas.

The second part of the Carboniferous, sometimes called the Pennsylvanian period (330 million to 290 million years ago), saw the evolution of the first reptiles, a group that developed from the amphibians and lived entirely on land. Other land animals included spiders, snails, scorpions, more than 800 species of cockroaches, and the largest insect ever evolved, a species resembling the dragonfly, with a wingspread of about 74 cm (about 29 in). The largest plants were the scale trees, which had tapered trunks that measured as much as 1.8 m (6 ft) in diameter at the base and 30 m (100 ft) in height. Primitive gymnosperms known as cordaites, which had pithy stems surrounded by a woody shell, were more slender but even taller. The first true conifers, forms of advanced gymnosperms, also developed during the Pennsylvanian period.

The chief events of the Permian period (290 million to 240 million years ago) were the disappearance of many forms of marine animals and the rapid spread and evolution of the reptiles. Permian reptiles were generally of two types: lizard-like reptiles that lived entirely on land, and sluggish, semiaquatic types. A comparatively small group of reptiles that evolved in this period, the Theriodontia, were the ancestors of mammals. Most vegetation of the Permian period was composed of ferns and conifers.

The Mesozoic era is often called the Age of Reptiles, because the reptile class was dominant on land throughout the age. The Mesozoic era lasted about 175 million years, and included the Triassic, Jurassic, and Cretaceous periods. Index fossils from this era include a group of extinct cephalopods called ammonites, and extinct forms of sand dollars and sea urchins.

The most notable of the Mesozoic reptiles, the dinosaur, first evolved in the Triassic period (240 million to 205 million years ago). The Triassic dinosaurs were not as large as their descendants in later Mesozoic times. They were comparatively slender animals that ran on their hind feet, balancing their bodies with heavy, fleshy tails, and seldom exceeded 4.5 m (15 ft) in length. Other reptiles of the Triassic period included such aquatic creatures as the ichthyosaurs, and a group of flying reptiles, the pterosaurs.

The first mammals also appeared during this period. The fossil remains of these animals are fragmentary, but the animals were apparently small and reptilian in appearance. In the sea, Teleostei, the first ancestors of the modern bony fishes, made their appearance. The plant life of the Triassic seas included a large variety of marine algae. On land, the dominant vegetation included various evergreens, such as ginkgos, conifers, and palms. Small scouring rushes and ferns still existed, but the larger members of these groups had become extinct.

During the Jurassic period (205 million to 138 million years ago), dinosaurs continued to evolve in a wide range of size and diversity. Types included heavy four-footed sauropods, such as Apatosaurus (formerly Brontosaurus); two-footed carnivorous dinosaurs, such as Allosaurus; two-footed vegetarian dinosaurs, such as Camptosaurus; and four-footed armoured dinosaurs, such as Stegosaurus. Winged reptiles included the pterodactyl, which, during this period, ranged in size from extremely small species to those with wingspreads of 1.2 m (4 ft). Marine reptiles included the plesiosaurs, a group that had broad, flat bodies like those of turtles, with long necks and large flippers for swimming; the Ichthyosauria, which resembled dolphins; and primitive crocodiles. The mammals of the Jurassic period consisted of four orders, all of which were smaller than small modern dogs. Many insects of the modern orders, including moths, flies, beetles, grasshoppers, and termites, appeared during the Jurassic period. Shellfish included lobsters, shrimp, ammonites, and the extinct group of belemnites, which resembled squid and had cigar-shaped internal shells. Plant life of the Jurassic period was dominated by the cycads, which resembled thick-stemmed palms. Fossils of most species of Jurassic plants are widely distributed in temperate zones and polar regions, indicating that the climate was uniformly mild.

The reptiles were still the dominant form of animal life in the Cretaceous period (138 million to 65 million years ago). The four types of dinosaurs found in the Jurassic also lived during this period, and a fifth type, the horned dinosaurs, also appeared. By the end of the Cretaceous, about 65 million years ago, all these creatures had become extinct. The largest of the pterodactyls lived during this period. Pterodactyl fossils discovered in Texas have wingspreads of up to 15.5 m (50 ft). Other reptiles of the period include the first snakes and lizards. Several types of Cretaceous birds have been discovered, including Hesperornis, a diving bird about 1.8 m (about 6 ft) in length, which had only vestigial wings and was unable to fly. Mammals of the period included the first marsupials, which strongly resembled the modern opossum, and the first placental mammals, which belonged to the group of insectivores. The first crabs developed during this period, and several modern varieties of fish also evolved.

The most important evolutionary advance in the plant kingdom during the Cretaceous period was the development of deciduous plants, the earliest fossils of which appear in early Cretaceous rock formations. By the end of the period, many modern varieties of trees and shrubs had made their appearance. They represented more than 90 percent of the known plants of the period. Mid-Cretaceous fossils include remains of beech, holly, laurel, maple, oak, plane tree, and walnut. Some paleontologists believe that these deciduous woody plants first evolved in Jurassic times but grew only in upland areas, where conditions were unfavourable for fossil preservation.

The Cenozoic era (sixty-five million years ago to the present time) is divided into the Tertiary period (sixty-five million to 1.6 million years ago) and the Quaternary period (1.6 million years ago to the present). However, because scientists have so much more information about this era, they tend to focus on the epochs that make up each period. During the first part of the Cenozoic era, an abrupt transition from the Age of Reptiles to the Age of Mammals occurred, when the large dinosaurs and other reptiles that had dominated the life of the Mesozoic era disappeared.

Index fossils of the Cenozoic tend to be microscopic, such as the tiny shells of Foraminifera. They are commonly used, along with varieties of pollen fossils, to date the different rock strata of the Cenozoic era.

The Paleocene epoch (sixty-five million to fifty-five million years ago) marks the beginning of the Cenozoic era. Seven groups of Paleocene mammals are known. All of them appear to have developed in northern Asia and to have migrated to other parts of the world. These primitive mammals had many features in common. They were small, with no species exceeding the size of a small modern bear. They were four-footed, with five toes on each foot, and they walked on the soles of their feet. Most of them had slim heads with narrow muzzles and small brain cavities. The predominant mammals of the period were members of three groups that are now extinct. They were the creodonts, which were the ancestors of modern carnivores; the amblypods, which were small, heavy-bodied animals; and the condylarths, which were light-bodied herbivorous animals with small brains. The Paleocene groups that have survived are the marsupials, the insectivores, the primates, and the rodents.

During the Eocene epoch (fifty-five million to thirty-eight million years ago), several direct evolutionary ancestors of modern animals appeared. Among these animals-all of which were small in stature-were the horse, rhinoceros, camel, rodent, and monkey. The creodonts and amblypods continued to develop during the epoch, but the condylarths became extinct before it ended. The first aquatic mammals, ancestors of modern whales, also appeared in Eocene times, as did such modern birds as eagles, pelicans, quail, and vultures. Changes in vegetation during the Eocene epoch were limited chiefly to the migration of types of plants in response to climate changes.

During the Oligocene epoch (thirty-eight million to twenty-four million years ago), most of the archaic mammals from earlier epochs of the Cenozoic era disappeared. In their place appeared representatives of many modern mammalian groups. The creodonts became extinct, and the first true carnivores, resembling dogs and cats, evolved. The first anthropoid apes also lived during this time, but they became extinct in North America by the end of the epoch. Two groups of animals that are now extinct flourished during the Oligocene epoch: the titanotheres, which are related to the rhinoceros and the horse; and the oreodonts, which were small, dog-like, grazing animals.

The development of mammals during the Miocene epoch (twenty-four million to five million years ago) was influenced by an important evolutionary development in the plant kingdom: the first appearance of grasses. These plants, which were ideally suited for forage, encouraged the growth and development of grazing animals such as horses, camels, and rhinoceroses, which were abundant during the epoch. During the Miocene epoch, the mastodon evolved, and in Europe and Asia a gorilla-like ape, Dryopithecus, was common. Various types of carnivores, including cats and wolflike dogs, ranged over many parts of the world.

The paleontology of the Pliocene epoch (five million to 1.6 million years ago) does not differ much from that of the Miocene, although the period is regarded by many zoologists as the climax of the Age of Mammals. The Pleistocene Epoch (1.6 million to 10,000 years ago) in both Europe and North America was marked by an abundance of large mammals, most of which were practically modern in type. Among them were buffalo, elephants, mammoths, and mastodons. Mammoths and mastodons became extinct before the end of the epoch. In Europe, antelope, lions, and hippopotamuses also appeared. Carnivores included badgers, foxes, lynx, otters, pumas, and skunks, and now-extinct species such as the giant saber-toothed tiger. In North America, the first bears made their appearance as migrants from Asia. The armadillo and ground sloth migrated from South America to North America, and the muskox ranged southward from the Arctic regions. Modern human beings also emerged during this epoch.

The cave paintings at Lascaux, France, are expressive works carried out by Palaeolithic artists in or around 13,000 BC, at the end of the Pleistocene Epoch. Cows, small horses, and other animals were painted in red and yellow ochre colours, which were either blown through reeds onto the wall or mixed with animal fat and applied with reeds or thistles. It is believed that prehistoric hunters painted these images to gain magical powers that would ensure a successful hunt.

The remains of simple animals provide additional information about climate and climatic change. Because different beetle species are especially well suited for either warm or cool climates, the presence of fossils of a particular type of beetle can give scientists clues to the climate of the region. Fossil algae reveal much about water acidity or alkalinity, water temperature, and the speed of water movement. The Ocean Drilling Program, which collects samples from the sea-floor, has collected enough data to show that the distribution of marine organisms changed significantly during the Pleistocene Epoch.

Invertebrates-animals without backbones, such as shellfish and insects-and plant communities survived the glacial cycles of the Pleistocene Epoch relatively unscathed. Some animal and plant groups, such as the beetles, moved vast distances but underwent little evolution. Pleistocene mammals, on the other hand, underwent important changes, probably because climate changes affected mammals more than they did invertebrates. Many mammals have evolved significantly since the Pleistocene. Some changes in familiar animals include greater numbers of species of mice and rats and the appearance of modern species of the dog family.

Many mammalian species have become extinct since the Pleistocene. A few of the spectacular mammals that disappeared during the last 20,000 years include the woolly rhinoceros, the giant ground sloth, the saber-toothed tigers, the giant cave bear, the mastodon, and the woolly mammoth. These animals existed at the same time as early humans, and drawings of them exist on cave walls in Europe. Recent theories suggest that these huge mammals could not reproduce quickly enough to replace the number of animals that humans killed for food, and were therefore driven to extinction by human hunting.

Humans continued to evolve during the Pleistocene Epoch. Two genera, Australopithecus and Homo, existed during the early Pleistocene. The last australopithecines disappeared about one million years before present. Several species of the genus Homo existed during the Pleistocene. Modern humans (Homo sapiens sapiens) probably arose from Homo erectus, which is thought to have evolved from Homo habilis. Paleontologists have found fossils that support the transition between Homo erectus and Homo sapiens dating from about 500,000 years before present to about 200,000 years before present. Anatomically modern humans (Homo sapiens) arose from an earlier human species that lived in Africa. A likely ancestor, known as Homo ergaster, evolved around 1.9 million years ago. This ancestor arose from an earlier Pleistocene species in Africa, perhaps one known as Homo rudolfensis. Anatomically modern Homo sapiens appears to have evolved by 130,000 years ago, if not earlier. For a time our species also coexisted in parts of Eurasia with another species of Homo, Homo neanderthalensis, until between 35,000 and 30,000 years ago. Since then, only our species has survived.

Evidence from both land and sea environments shows that, at least before the human-induced global warming of the last two centuries, the worldwide climate had been cooling naturally for several thousand years. Ten thousand years have already passed since the end of the last glaciation, and 18,000 years have passed since the last glacial maximum. This may suggest that Earth has entered the beginning of the next worldwide glaciation.

Several possible causes of ice ages exist, and scientists have proposed many theories to explain their occurrence. In the 1920s the Yugoslav scientist Milutin Milankovitch proposed the Milankovitch Astronomical Theory, which states that variations in Earth's position relative to the Sun cause climatic fluctuations and the onset of glaciation. Milankovitch calculated that the deviation of Earth's orbit from its almost circular path recurs every 93,408 years. Scientists have also linked the movement of Earth's crustal plates, called plate tectonics, to the occurrence of ice ages; the positions of the plates in polar regions may contribute to them. Changes in global sea level may affect the average temperature of the planet and lead to cooling that may cause ice ages. Other theories, such as substantial variations in the heat output of the Sun, interplanetary dust clouds that absorb the Sun's heat before it reaches Earth, or perhaps a meteorite impact, have not yet been supported by any solid evidence.

The Milankovitch Astronomical Theory best explains regular climatic fluctuations. The theory is based on three variations in the position of Earth compared with the Sun: the eccentricity (elongation or circularity of the shape) of Earth's orbit, the tilt of Earth's axis toward or away from the Sun, and the degree of wobble of Earth's axis of rotation. The total effect of these changes causes one region of Earth-latitude 60° to 70° north, near the Arctic Circle-to receive low amounts of summer radiation about once every 100,000 years. These cool summer periods last several hundred to several thousand years and thus provide sufficient time to allow snowfields to expand and merge into glaciers in this area, signalling the beginning of glaciation.
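As a rough illustration of how the three cycles combine, the sketch below superposes three cosine waves with the commonly cited approximate periods (about 100,000, 41,000, and 23,000 years). The amplitudes are arbitrary values chosen only for illustration; this is a toy superposition, not a real insolation model or Milankovitch's actual calculation.

```python
import math

# Approximate periods (in years) of the three Milankovitch cycles.
# The ~100,000-year eccentricity figure is the commonly cited value;
# the 93,408-year figure quoted earlier is Milankovitch's own number.
ECCENTRICITY = 100_000
OBLIQUITY = 41_000
PRECESSION = 23_000

def relative_forcing(t, amplitudes=(1.0, 0.55, 0.3)):
    """Toy superposition of the three cycles (arbitrary amplitudes,
    for illustration only -- not a physical insolation model)."""
    periods = (ECCENTRICITY, OBLIQUITY, PRECESSION)
    return sum(a * math.cos(2 * math.pi * t / p)
               for a, p in zip(amplitudes, periods))

# Sample every 1,000 years over 400,000 years and find the lowest
# point -- a "cool summer" window when the cycles align unfavourably.
samples = [(t, relative_forcing(t)) for t in range(0, 400_001, 1000)]
coolest = min(samples, key=lambda s: s[1])
print(f"coolest sample at t = {coolest[0]:,} years")
```

Minima of the combined curve recur irregularly but on the order of once per eccentricity cycle, which is the sense in which cool-summer windows arrive "about once every 100,000 years."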

When glaciers expand during an ice age, the sea level drops because the water that forms glaciers ultimately comes from the oceans. Global sea level affects the overall temperature of the planet because solar radiation, or heat, is better absorbed by water than by land. When sea levels are low, more land surface becomes exposed. Since land does not absorb as much solar radiation as water, the overall average temperature of the planet decreases, or cools, and may contribute to the onset of an ice age.
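The absorption argument can be put in rough numbers. The reflectivity (albedo) values and ocean-coverage fractions below are assumed round figures for illustration, not measured data; the point is only that exposing more land lowers the fraction of solar radiation the surface absorbs.

```python
# Toy illustration of the land-vs-water absorption argument.
# All numbers here are assumed, rounded values for illustration.
ALBEDO_OCEAN = 0.06   # water reflects little sunlight
ALBEDO_LAND = 0.30    # bare land reflects much more

def absorbed_fraction(ocean_fraction):
    """Fraction of incoming solar radiation absorbed by the surface."""
    land_fraction = 1 - ocean_fraction
    return (ocean_fraction * (1 - ALBEDO_OCEAN)
            + land_fraction * (1 - ALBEDO_LAND))

# Lower sea level exposes more land, so less radiation is absorbed.
high_sea = absorbed_fraction(0.71)  # roughly today's ocean coverage
low_sea = absorbed_fraction(0.66)   # assumed glacial-era coverage
print(high_sea, low_sea)
```

Even this crude model shows the direction of the feedback: shrinking the ocean fraction reduces absorbed radiation, reinforcing the cooling that lowered sea level in the first place.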

A map showing Earth during an ice age would look very different from a map of the world today. During the Wisconsin glaciation of 115,000 to 10,000 years ago, two ice sheets, the Laurentide and the Cordilleran, covered the northern two-thirds of North America, including most of Canada, with ice. Other parts of the world, including Eurasia and parts of the North Atlantic Ocean, were also blanketed in sheets of ice.

The Laurentide continental ice sheet extended from the eastern edge of the Rocky Mountains to Greenland. The separate Cordilleran Ice Sheet was composed of mountain ice caps and valley glaciers that flowed onto the surrounding lowlands; it covered parts of northern Alaska, the Sierra Nevada, the Cascade Range, and the Rocky Mountains, and extended as far south as New Mexico. Where the continental shelf between Alaska and Siberia was uncovered, the Bering land bridge formed. In northern Eurasia, continental ice extended from Great Britain eastward to Scandinavia and Siberia. Separate mountain glacial systems covered the Alps, the Himalayas, and the Andes. The extensive ice sheets on Antarctica and Greenland did not expand very much during this glaciation. Sea ice grew worldwide, particularly in the North Atlantic Ocean.

Years of investigation and research, coupled with resolution and courage to follow wherever truth might lead, have established the certainty of a future world cataclysm during which most of the earth's population will be destroyed in the same manner as the mammoths of prehistoric times were destroyed. Such an event has occurred each time that one or two polar ice caps grew to maturity; a recurrent event in global history is clearly written in the rocks of a very old earth.

The earth is approximately 4.5 billion years old. Human beings have been living on it for at least 500,000 years and perhaps even one million years. To appreciate the immensity of these figures, one might imagine the age of the earth represented by a period of about one week; the duration of our own epoch, 7,000 years, is then but one second! By a similar analogy, men have lived on an earth that is one week old for just two minutes. Evidently, our own epoch is but a very short and insignificant period in the life of our planet and our species.
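The analogy can be checked directly by scaling Earth's roughly 4.5-billion-year age down to one week (604,800 seconds):

```python
# Check the analogy in the text: scale Earth's 4.5-billion-year age
# down to one week and see how long other time spans become.
EARTH_AGE_YEARS = 4.5e9
WEEK_SECONDS = 7 * 24 * 60 * 60  # 604,800 s

def scaled_seconds(years):
    return years / EARTH_AGE_YEARS * WEEK_SECONDS

print(scaled_seconds(7_000))      # our 7,000-year epoch
print(scaled_seconds(1_000_000))  # ~1 million years of human life
```

The 7,000-year epoch scales to just under one second, and a million years of human existence to a little over two minutes, matching the figures in the text.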

In past epochs there have been ice caps at one or both of the geographical poles. The heat of the sun caused these ice caps to grow larger. As the sun heats the air of the hemisphere, the heated air expands, becomes lighter, and rises. The updrafts are greatest in the tropics. As the earth is virtually spherical, the currents of warm air converge at the poles. Meeting head-on from every direction, they create areas of high air pressure, become colder and heavier, turn downward, reversing the direction of their flow, and pour back toward the Equator from the polar centres at high velocities. Thus, there is a continuous circulation: rising humid warm air journeys poleward at high altitudes, and a downdraft of cold, dehumidified air returns from the poles at low or ground altitudes. Air acts like a sponge: when warm, it absorbs water; when cold, it cannot hold much water, and in cooling it releases any surplus moisture to fall as rain or snow.
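The "sponge" behaviour of air can be made quantitative with the Tetens approximation, a standard empirical formula for saturation vapour pressure; the assumption here is only that this rough formula suffices for illustration.

```python
import math

# "Air acts like a sponge": the warmer it is, the more water vapour
# it can hold. The Tetens approximation gives saturation vapour
# pressure in hectopascals for a temperature in degrees Celsius.
def saturation_vapour_pressure(t_celsius):
    return 6.1078 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))

for t in (-20, 0, 20, 30):
    print(t, round(saturation_vapour_pressure(t), 1), "hPa")
```

Saturation vapour pressure roughly doubles for every 10°C of warming, so warm tropical air carries far more moisture than polar air can hold, and air cooling on its poleward journey must shed the surplus as rain or snow.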

Most of the snow that falls in the polar regions does not melt; the air temperature is too low. Instead, the snow is stored, changing to glacial ice. As this process continues through time, the ice masses at the poles constantly grow in volume.

As the prehistoric ice caps grew larger, their off-centre weight threw the rotating planet off balance, and the wobble of the earth caused it to roll sideways relative to its direction of rotation.

Another analogy will make this clear. When you place a weight at the end of a string and then rotate the string in a circle, the weighted end of the string rises to a horizontal plane. Now, imagine yourself and the string as the earth, the weight at the end of the string as the weight of a growing ice cap, and imagine that, instead of intentionally swinging the weighted string, the rotational motion encompasses you, the string, and the weight, as though you were standing on a rotating platform. In this depiction, your body represents both the present Axis of Spin and Axis of Figure of the earth. Your body does not move; the Axis of Spin remains the same. However, your arm and the weighted string, here representing a radius of the earth, rise from the vertical (directed toward the pole) to the horizontal (directed toward the Equator). The sphere of which your arm and the weighted string are a radius is rolled sideways; the weight, representing the imbalance of an ice cap, rotates from a polar position to an equatorial position. The Axis of Figure, previously represented by your vertical arm, has now changed; the old Axis of Figure is now perpendicular to the Axis of Spin.

The rotating equilibrium, thrown off balance by the weight of the growing ice caps, causes the spinning globe to roll over on its side. Yet such an event does not occur lightly. The oceans, like water in a bowl that is suddenly moved, are cast from their basins to flood the land. The winds, previously settled into patterns dependent upon a stable globe, are whipped asunder by the sudden shifting of the globe. The sudden meeting of warm and cold air creates great pressure zones that spawn new rains and hurricanes to sweep across the earth. The forces of nature, loosed from their equilibrium, range wildly in search of a new equilibrium. This brings us to the Stone Age, the period of human technological development characterized by the use of stone as the principal raw material for tools. In a given geographic region, the Stone Age normally predated the invention or spread of metalworking technology. Human groups in different parts of the world began using stone tools at different times and abandoned stone for metal tools at different times. Broadly speaking, however, the Stone Age is estimated to have begun about 2.5 million years ago and to have ended, in most of the world, by about 5,000 years ago, though in some regions the transition came more recently. Today only a few isolated human populations rely largely on stone for their technologies, and that reliance is rapidly vanishing with the introduction of tools from the modern industrialized world.

Human ancestors living before the Stone Age likely used objects as tools, a behaviour that scientists observe today among chimpanzees. Wild chimpanzees in Africa exhibit a range of tool-using behaviours: for example, they use bent twigs to fish for termites, chew wads of leaves to soak up liquid, and use branches and stones as hammers, anvils, missiles, or clubs. However, when prehistoric humans began to make stone tools they became dramatically distinct from the rest of the animal world. Although other animals may use stone objects as simple tools, the intentional modification of stone into tools, and the use of tools to make other tools, is behaviourally unique to humans. Although the Africans of 100,000 years ago had more modern skeletons than did their Neanderthal contemporaries, they made essentially the same crude stone tools as Neanderthals, still lacking standardized shapes. Stone toolmaking and tool use were hallmark behaviours that became indispensable to the way early humans adapted to their environment and that partially shaped human evolution.

Human technology developed from the first stone tools, in use by two and a half million years ago, to the 1996 laser printer that replaced the outdated 1992 laser printer and was used to print out the manuscript of these pages. The rate of development was imperceptibly slow at the beginning, when hundreds of thousands of years passed with no discernible change in our stone tools and with no surviving evidence for artifacts made of other materials. Today, technology advances so rapidly that it is reported in the daily newspaper.

Archaeologists believe the Stone Age began about 2.5 million years ago because that marks the age of the earliest stone tool remnants ever discovered. The earliest recognizable stone artifacts mark the beginnings of the archaeological record-that is, the material remnants of ancient human activities. As recently as 5,000 years ago all human societies on the face of the earth were essentially still living in the Stone Age. Therefore, more than 99.8 percent of humans' time as toolmakers-from 2.5 million years ago to 5,000 years ago-took place during the Stone Age. During the Stone Age our ancestors went through many different stages of biological and cultural evolution. It was long after our lineage became anatomically modern that we began to experiment with innovations such as metallurgy, heralding the end of the Stone Age.
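The 99.8 percent figure is simple arithmetic on the two dates given above:

```python
# Verify the "more than 99.8 percent" figure from the text.
STONE_AGE_START = 2_500_000  # years ago
STONE_AGE_END = 5_000        # years ago

fraction = (STONE_AGE_START - STONE_AGE_END) / STONE_AGE_START
print(f"{fraction:.1%}")
```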

The term Stone Age has been used since the early 1800s as a designation for an earlier, prehistoric stage of human culture, one in which stone rather than metal tools were used. By the early 1800s, archaeological excavations across Europe had uncovered mysterious artifacts apparently belonging to prehistoric times. Christian Thomsen, curator of the National Museum in Copenhagen, Denmark, developed a classification scheme to organize the museum's growing collections into three successive technological stages in the human past: Stone Age, Bronze Age, and Iron Age. The three-stage classification was quickly adopted and spread throughout the museums of Europe. Excavators, in turn, found that their discoveries consistently fell into these three categories. The fact that Stone Age remnants were found in the bottom layers showed that they were the oldest.

The study of the Stone Age falls under the fields of anthropology, which is the study of human life and cultural origins up to the present, and archaeology, which is the study of the material remains of humans and human ancestors. Archaeologists seek out, explore, and study archaeological sites, locations around the world where historic or prehistoric people left behind traces of their activities. Archaeologists use the data collected to make theories about how human ancestors lived.

Archaeologists normally use the term artifact to refer to objects modified by human action, either intentionally or unintentionally. The term tool refers to something used by a human or a human ancestor for some purpose, whether modified or not. For instance, a thrown rock is a tool, even if it was not modified. Demonstrating that a particular stone artifact was actually used as a prehistoric tool is usually difficult, so in practice archaeologists prefer the term artifact. For the earlier stages of the Stone Age, the unused debris or waste from the manufacture of stone tools is also considered artifactual.

Stone artifacts are important to archaeologists who study prehistoric humans, because they can yield a wide range of information about ancient peoples and their activities. Stone artifacts are, in fact, often the principal archaeological remnants that persist after the passage of time, and as such they can give important clues as to the presence or absence of ancient human populations in any given region or environment. Careful analysis of Stone Age sites can yield crucial information regarding the technology of prehistoric toolmakers and can, in turn, give anthropologists insight into the levels of cognitive (thinking) ability at different stages of human evolution.

Cro-Magnon garbage heaps yield not only stone tools but also tools of bone, whose suitability for shaping, for instance into fish hooks, had apparently gone unrecognized by earlier humans. Tools were produced in distinctive shapes so modern in function, as needles, awls, engraving tools, and so on, that they obviously tell their own story. Instead of only single-piece tools such as hand-held scrapers, multi-piece tools made their appearance. Recognizable multi-piece weapons at Cro-Magnon sites include harpoons, spear-throwers, and eventually the bow and arrow, the precursors of rifles and other multi-piece modern weapons. Those efficient means of killing at a safe distance permitted the hunting of such dangerous prey as rhinos and elephants, while the invention of rope for nets, lines, and snares allowed the addition of fish and birds to our diet. Remains of houses and sewn clothing testify to a greatly improved ability to survive in cold climates.

During the Stone Age, Earth experienced the most recent in a succession of ice ages, in which glaciers and sea ice covered a large portion of Earth's surface. The most recent ice age lasted from 1.6 million to 10,000 years ago, a period of glacial and warmer interglacial stages known as the Pleistocene Epoch. The Holocene Epoch began at the end of the ice age 10,000 years ago and continues to the present time.

Early hominids made stone artifacts either by smashing rocks between a hammer and anvil (known as the bipolar technique) to produce usable pieces or by the more controlled process termed flaking, in which stone chips were fractured away from a larger rock by striking it with a hammer of stone or other hard material. Subsequently, throughout the last 10,000 years, additional techniques of producing stone artifacts came into use, including pecking, grinding, sawing, and boring. The best rocks for flaking are hard, fine-grained, or amorphous (having no crystal structure) rocks, including lava, obsidian, ignimbrites, flint, chert, quartz, silicified limestone, quartzite, and indurated shale. Ground stone tools could be made from a wider range of raw material types, including coarser-grained rock such as granite.

Flaking produces several different types of stone artifacts, which archaeologists look for at prehistoric sites. The parent pieces of rock from which chips have been detached are called cores, and the chips removed from cores are called flakes. A flake that has had still smaller flakes removed from one or more edges, to sharpen it or shape its contours, is known as a retouched piece. The stone used to knock flakes from cores is called a hammerstone or percussor. Other flaking artifacts include fragments and chunks, most of which are broken cores and flakes.

The terms culture and industry both refer to a system of technology (a toolmaking technique, for example) shared by different Stone Age sites of the same broad period. Experts now prefer the term industry over culture to refer to these shared Stone Age systems.

Archaeologists have divided the Stone Age into different stages, each characterized by different types of tools or tool-manufacturing techniques. The stages also imply broad time frames and are perceived as stages of human cultural development. The most widely used designations for the successive stages are Palaeolithic (Old Stone Age), Mesolithic (Middle Stone Age), and Neolithic (New Stone Age). British naturalist Sir John Lubbock in 1865 defined the Palaeolithic stage as the period in which stone tools were chipped or flaked. He defined the Neolithic as the stage in which ground and polished stone axes became prevalent. These two stages also were associated with different economic and subsistence strategies: Palaeolithic peoples were hunter-gatherers, while Neolithic peoples were farmers. Archaeologists subsequently identified a separate stage of stone tool working in Eurasia and Africa between the Palaeolithic and the Neolithic, called the Mesolithic. This period is characterized by the creation of microliths, small, geometric-shaped stone artifacts attached to wood, antler, or bone to form tools such as arrows, spears, or scythes. Microliths began appearing between 15,000 and 10,000 years ago, at the end of the Pleistocene Ice Age.

The Palaeolithic/Mesolithic/Neolithic division system was first applied only to sites in Europe, but is now widely used (with some modification) to refer to prehistoric human development in much of Asia, Africa, and Australasia. Different terminology is often used to describe the cultural-historical chronology of the Americas, which humans did not reach until some point between 20,000 and 12,000 years ago. However, there is a general similarity: a transition from flaked stone tools associated with prehistoric hunter-gatherers to both flaked and ground stone tools associated with the rise of early farming communities. The period in the Americas up to the end of the Pleistocene Ice Age about 10,000 years ago, when most humans were hunter-gatherers, is known as the Paleo-Indian period, and the subsequent, post-glacial period is known as the Archaic.

Archaeologists subdivide the Palaeolithic into the Lower Palaeolithic (the earliest phase), Middle Palaeolithic, and Upper Palaeolithic (the later phase), based upon the presence or absence of certain classes of stone artifacts.

The Lower Palaeolithic dates from approximately 2.5 million years ago until about 200,000 years ago. It includes the earliest record of human toolmaking and documents much of the evolutionary history of the genus Homo, from its origins in Africa to its spread into Eurasia. Two successive toolmaking industries characterize the Lower Palaeolithic: the Oldowan and the Acheulean.

The Oldowan industry was named by British-Kenyan anthropologists Louis Leakey and Mary Leakey for early archaeological sites found at Olduvai Gorge in northern Tanzania. It is also sometimes called the chopper-core or pebble-tool industry. Simple stone artifacts made from small stones or blocks of stone characterize the Oldowan industry. Mary Leakey classified Oldowan artifacts as either heavy-duty tools or light-duty tools. Heavy-duty tools include core forms such as choppers, discoids, polyhedrons, and heavy-duty scrapers. Many of these cores may have been produced to generate sharp-edged flakes, but some could have been used for chopping or scraping activities as well. Light-duty tools include retouched forms such as smaller scrapers, awls (sharp, pointed tools for punching holes in animal hides or wood), and burins (chisel-like flint tools used for engraving and cutting). Oldowan techniques of manufacture included hard-hammer percussion, or detaching flakes from cores with a stone hammer; the anvil technique, striking a core on a stationary anvil to detach flakes; and the bipolar technique, detaching flakes by placing the core between an anvil and the hammerstone.

Early humans probably also made tools from a wide range of materials other than stone. For example, they probably used wood for simple digging sticks, spears, clubs, or probes, and they probably used shell, hide, bark, or horn to fashion containers. Unfortunately, organic materials such as these do not normally survive from earlier Stone Age times, so archaeologists can only speculate about whether such tools were used.

Two of the oldest Oldowan sites are in Ethiopia, dated to nearly 2.5 million and 2.3 million years ago. Other well-studied Oldowan localities include Lokalalei (2.3 million years ago) and Koobi Fora (1.9 million to 1.4 million years ago) in Kenya, Olduvai Gorge (1.9 million to 1.2 million years ago) in Tanzania, and Ain Hanech (possibly about 1.7 million years ago) in Algeria, as well as the cave deposits at Sterkfontein and Swartkrans (estimated to be from 2.0 million to 1.5 million years ago) in South Africa.

Theories about the intelligence and culture of prehistoric man are beginning to be drastically revised. Accumulated evidence now depicts European men living between 100,000 and 10,000 years ago as communal people who were skilled hunters and toolmakers; who had developed formal burial rites for members of their tribes and who were perhaps the first to express religious beliefs, through ritual burials of animals and fellow tribesmen alike; who had some belief in an afterlife; who took excellent care of their sick and elderly; and who, in their heyday, carried around pocket-sized calendars of their own making.

A ten-member international expedition, led by Ralph S. Solecki of Columbia University, found the bones of a dismembered deer ritually buried by Neanderthal men about 50,000 years ago. The bones of the deer's foot, jaw, and back, its shoulder blades, and the top of its skull were found buried 5 feet deep in the Nahr Ibrahim Cave, north of Beirut, Lebanon. The presence of the skull, the bed of stones on which the bones were placed, and the red earth colouring of the bones, which was not native to the cave, indicated that a ritual known as hunters' magic was involved in the burial. Solecki interpreted the burial as an attempt “to ensure a successful hunt by the ceremonial treatment of an animal.” Although evidence existed showing that bears were ritually treated by Neanderthal men, this was the first discovery of a lone deer buried in this manner.

An American expedition, also led by Solecki, excavated a mountain cave near Shanidar in Iraqi Kurdistan and discovered evidence that Neanderthals practiced a form of religious burial suggesting a belief in an afterlife: at least one of the nine skeletons uncovered in the cave had been buried with flowers. Also found in the cave was the skeleton of a man of about forty, comparable to a modern age of eighty, who had been born with a deformed right arm. A Neanderthal doctor had skilfully amputated the arm above the elbow, and judging by his death at a ripe old age, the man was carefully cared for from his boyhood until he died in a rock fall inside the cave, a common peril then.

Recent paleontological examinations of skeletons suggest that the Neanderthals' stooped posture was the result of a vitamin D deficiency. Lack of sunlight during the Ice Age might have caused their upright posture to become deformed by rickets.

In January it was revealed that a sophisticated system of notation charting the phases of the moon was used throughout most of Europe during the last Ice Age, beginning around 34,000 years ago. Markings such as scribbles and gouges on pieces of bone, antler, and stone had previously been regarded as decorations, but proved to be representations of a lunar calendar. Alexander Marshack, a research associate at the Peabody Museum of Archaeology and Ethnology at Harvard University, began investigating the markings in 1964 and published the results of his study this year in France. The inscribed objects he studied represented all cultural levels from 34,000 to 10,000 years ago. All were pocket-sized, and as many as twenty-four tools were used to cut a single sequence, some covering a year or more. This system of notation seems to anticipate the development of a calendar, the idea of number, and the use of abstract symbols. It had been thought that such cognitive abilities developed only after the start of agricultural society, less than 10,000 years ago.

A tribe of about twenty-four people living a Stone Age way of life was found in the Tasaday Forest on the southern Philippines' Mindanao Island in July. Anthropologists speculate that the tribe has been cut off from the rest of the world for at least 400 years and maybe as much as 2,000 years.

The tribe was first discovered five years ago by an official conducting a census survey. He described finding a tribe of "jungle people so mysterious that they were known only as the birds who walk the forest like the wind." A long search led to the Tasaday. Interpreters at first had trouble understanding the tribe's language, which is related to Manubo, a native Filipino tongue in the Malayo-Polynesian family.

As communication became easier, it was found that the tribe calls itself the Tasaday because "the man who owns the forest in which they live told their ancestors in a dream to call themselves Tasadays, after a mountain." When asked whether they had ever been off the island, the Tasadays replied that they did not know leaving was possible; in fact, it was found that they had never even seen the ocean. The Tasadays are monogamous in mating but communal in all other ways, have no leader, know no other tribe, have known no unfriendly people, and have never heard of fighting.

The Tasadays need not venture far from their clearing for food, which is easily found in the flourishing vegetation of the forest in which they dwell. The staple of their diet is the pith of the wild palm. To supplement this, they catch tadpoles and small fish with their hands from the nearby streams. Monkey meat is considered a delicacy. After the monkey's hair is singed in a fire and cut away with bamboo blades sharpened by small stones, the meat is roasted.

The group includes six families with thirteen children, nine of whom are boys. All matters of mutual concern, such as food gathering, are decided in an open meeting.

New information about the Mayan civilization, the most highly developed civilization in the New World before the arrival of the white man, was gained from the discovery of an 11-page codex fragment of a Mayan calendar book. (A codex is a manuscript copy of an ancient text.) The fragment is said to be part of a larger book about twenty pages long. The three other known codices were brought to Europe during the Spanish conquest but did not emerge as important historical material until the 1900s. The newly discovered codex is the first to be found in over a century.

Composed of bark cloth, like the other three, the 11-page codex is expected to reveal "pictorial information on the Venus calendar and its influence on Mayan religion and astrology," according to Michael D. Coe, professor of anthropology at Yale University. The fragment dates to the late Mayan period, between AD 1400 and 1500. The new fragment reveals that the Mayans viewed all four phases of the Venus cycle as threatening. Previously, only the first phase was thought to have been considered sinister.

All four phases of the cycle of Venus as seen from the earth were measured by Mayan priests, who calculated that each cycle took 584 days to complete. Modern astronomers calculate 583.92 days for each complete cycle. The complete 20-page codex would have covered sixty-five Venus cycles.
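The arithmetic behind these figures can be checked directly. The sketch below (in Python, purely illustrative) compares the priests' 584-day count with the modern value over the sixty-five cycles a complete codex would have covered:

```python
# Mayan count vs. modern astronomical value for one synodic cycle of Venus
MAYAN_CYCLE = 584      # days per cycle, as tabulated by Mayan priests
MODERN_CYCLE = 583.92  # days per cycle, modern measurement

cycles = 65  # the complete 20-page codex would have covered 65 cycles
mayan_total = cycles * MAYAN_CYCLE           # total days in the Mayan tabulation
drift = mayan_total - cycles * MODERN_CYCLE  # accumulated discrepancy

print(mayan_total)      # 37960 days, roughly 104 years
print(round(drift, 2))  # 5.2 days of drift over the whole codex
```

In other words, the priests' tables would slip against the actual sky by only about one day every twelve and a half cycles, a striking level of observational precision.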

Coe believes the fragment to be authentic "because it is on bark cloth, [because of] the condition of the fragment, [and because] none of the material duplicates or imitates anything we know of, apart from the Venus calendar itself. Lastly, because no forger could be knowledgeable enough to invent material displaying so much knowledge of Mayan life."

Early Slavic tribes formed an organized state in the fourth to sixth centuries, about 500 years earlier than was previously believed, according to evidence reported by Tass, the Soviet press agency. Arkady Bugai, the Ukrainian archaeologist credited with the discovery, based his conclusion on radiocarbon dating of charred wood found in the remains of the so-called Serpentine Wall, a 500-mile network of defensive earthworks that once encircled the present site of Kiev, the Ukrainian capital. The charred wood used in the radiocarbon tests was from what is believed to be the remains of trees burned to clear ground for the wall. Bugai reasoned that a highly organized state was required to move the seven billion cubic feet of earth that made up the wall, which rises to a height of 30 to 35 feet and is 50 feet wide at its base.
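The radiocarbon method Bugai relied on rests on one well-known relationship: the age of organic material follows from the fraction of its original carbon-14 that remains, given the isotope's 5,730-year half-life. A minimal sketch of that calculation follows; the numbers are illustrative, not the expedition's actual measurements:

```python
import math

C14_HALF_LIFE = 5730.0  # years; standard half-life of carbon-14

def radiocarbon_age(remaining_fraction: float) -> float:
    """Age in years implied by the fraction of original C-14 still present."""
    return -C14_HALF_LIFE / math.log(2) * math.log(remaining_fraction)

# A sample that has lost half its C-14 is exactly one half-life old:
print(round(radiocarbon_age(0.5)))  # 5730

# Charred wood from roughly the fifth century AD (about 1,500 years old)
# would still retain about 83-84 percent of its original C-14:
print(round(0.5 ** (1500 / C14_HALF_LIFE), 3))  # 0.834
```

Dating the wall to the fourth to sixth centuries thus comes down to measuring how far the wood's C-14 content has decayed from its original level.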

The Serpentine Wall, which enclosed a roughly triangular area, is assumed to have been built to defend the Kiev area from hostile tribes. Ukrainian scholars now believe that the area must have had a population of approximately one million people at the time the wall was constructed. It was formerly believed that the first consolidation of Russian tribes occurred around the tenth century, during the rise of Kievan Russia.

An expedition bent on disproving the theory that American man came to North America by crossing a land bridge over what is now the Bering Strait began in September. Gene Savoy, the American explorer known for his 1964 discovery of the ruined Inca city of Vilcabamba in Peru, believes that American man originated in the jungles east of the Andes Mountains in South America, where he thinks advanced civilizations flourished as long ago as 1500 BC.

The discovery of a new species of human ancestor and of fossils of the oldest human beings yet found in Europe dominated the news in anthropology in 1995.

The discovery of fossils of a new species of human ancestor, Australopithecus anamensis, at sites near Lake Turkana in Kenya was announced in August. Anamensis, a small-brained upright walker resembling the famous Lucy skeleton (of the species Australopithecus afarensis), weighed about 110 pounds. The complete upper and lower jaws, a set of lower teeth, a skull fragment, the teeth of several individuals, and a shinbone were dated to between 4.1 million and 3.9 million years ago, according to Meave Leakey (wife of Richard Leakey), one of the researchers.

Anamensis may be directly ancestral to afarensis (dated at 3.6 million years old). The shinbone is the oldest evidence yet discovered for bipedal locomotion, the ability to walk on two legs, a defining trait of humans. The earliest known evidence before this was the track (3.7 million years old) of three humanlike individuals, probably australopithecines, who strolled across a bed of fresh volcanic ash in what is now Laetoli, Tanzania.

The relationship between Anamensis (from anam, a native Kenyan term for ‘lake’) and an even older species whose discovery was announced in 1994 was unclear. The older species, found in the Middle Awash region of Ethiopia, was first named Australopithecus ramidus. The genus name was later changed to Ardipithecus (‘ground apes’). The teeth and scanty bone fragments of Ardipithecus ramidus were dated at 4.4 million years old.

Fragmentary fossil remains of at least four humans thought to be intermediate between Homo erectus and archaic forms of Homo sapiens, the species to which all modern humans belong, were found in caves at Atapuerca in northern Spain, according to a report published in August. Dated as at least 780,000 years old by means of a paleomagnetic dating technique, the stone tools and skeletal fragments (including some from an adolescent and some from a child) of skulls, hands, and feet represent the oldest humans yet discovered in Europe. The researchers who found the fossils said they could possibly be distant ancestors of the Neanderthals, who appeared in Europe hundreds of thousands of years later.

The Spanish fossils partly fill a gap in the history of human evolution and expansion around the world. Previously, the oldest human fossils found in Europe, dating back 500,000 years, belonged to Heidelberg man, a likely ancestor of the Neanderthals, found at the Mauer site in Germany near the French border. It is known, however, that descendants of the earliest humans had spread from Africa to Asia well more than a million years ago. Among reasons given by anthropologists for the late occupation of Europe by Homo is the harshness of Europe's Ice Age climate.

Finds of Neanderthaloid skulls and skeletons continue to be reported from widely separated areas. Digging in a cave at Mount Circeo on the Tyrrhenian Sea, 50 miles south of Rome, Italy, Alberto Carlo Blanc uncovered an almost perfectly preserved Neanderthal skull, perfect except for a fracture in the right temporal region. It is the third of this type found in Italy. The two skulls previously reported were found in 1929 and 1935 in the Saccopastore region, near Rome, but in not nearly so well preserved a condition as the present find. No other human bones were found here, but the skull was accompanied by fossilized bones of elephants, rhinoceroses, and giant horses, all fractured, thus giving some evidence of the mode of life of Neanderthal man. Professor Sergio Sergi, of the Institute of Anthropology at the Royal University of Rome, who has studied this skull in detail, believes it to be 70,000 to 80,000 years old. He concludes also that Neanderthal man walked much as modern man does and not with head thrust forward, as had previously been assumed.

Another Neanderthal skeleton is reported to have been found in a cave in Middle Asia by A. P. Okladnikoff of the Anthropological Institute of Moscow University and the Leningrad Institute of Anthropology. The bones of the skeleton were badly shattered, but the jaw and teeth of the skull, which was crushed at the back, were almost complete.

Hominids that were contemporary with Oldowan sites included two major lineages. The first comprises the robust australopithecines (so called because their cheek teeth were larger than those of other australopithecines). These robust australopithecines, such as Australopithecus aethiopicus and Australopithecus boisei in East Africa and Australopithecus robustus in South Africa, were bipedal and had small brains, large jaws, and large molars. The other lineage is made up of the bipedal, larger-brained, smaller-toothed early members of the genus Homo, such as Homo habilis, Homo rudolfensis, and early Homo erectus. The oldest fossils of Homo erectus (sometimes called Homo ergaster) found in Africa date back to about 1.85 million years ago. This species is characterized by an even larger brain and smaller teeth than earlier hominids and by a larger body size. (In 1984 anthropologists in Kenya found a nearly complete skeleton of an adolescent Homo erectus who would have been 1.8 m (6 ft) tall as an adult.)

Experts are not certain which of these species were responsible for individual Oldowan sites; any of them may have made and used Oldowan-style stone tools to varying degrees. However, anthropologists have long suspected that the larger-brained and smaller-toothed Homo was probably the more habitual toolmaker, and it is likely that Homo erectus was responsible for many Oldowan sites more recent than 1.85 million years ago. In any case, by one million years ago all of these species except Homo erectus had become extinct, so researchers can be certain that at least the Homo lineage was involved in making and using stone tools. Homo erectus appears to have moved out of Africa and into Eurasia sometime before one million years ago, although some anthropologists think this geographic spread of hominids may have occurred nearly two million years ago.

The everyday life of Oldowan hominids is largely a matter of archaeological conjecture. Most sites in East Africa are found near lakes or along streams, suggesting that they preferred to live near water sources. Studies of rock sources indicate that Oldowan hominids sometimes transported stone several kilometres to the sites where stone artifacts are found. Well-preserved sites often have collections of stone artifacts and fragmented fossil animal bones associated together, often in dense concentrations of several thousand specimens. Scholars disagree regarding the nature of these sites. Some archaeologists interpret them as camps, home bases, or central foraging places, similar to those formed by modern hunter-gatherers during their daily activities. Others think that such sites represent scavenging stations where hominids were primarily involved in processing and consuming animal carcasses. Still others view these accumulations as stone caches where hominids collected stone in areas where such raw materials did not occur naturally.

Fossil remains from some Oldowan sites suggest that Oldowan hominids used stone tools to process meat and marrow from animal carcasses, some weighing several hundred pounds. Although some archaeologists have argued that large game hunting may have occurred in the Oldowan, many Oldowan specialists believe these early Stone Age hominids likely obtained most of their meat from large animals through scavenging. The early hominids may have hunted smaller animals opportunistically, however. Modern experiments have shown that sharp Oldowan flakes are especially useful for processing animal carcasses - for example, skinning, dismembering, and defleshing. The bulk of the early hominid diet likely consisted of a variety of plant foods, such as berries, fruits, nuts, leaves, flowers, roots, and tubers, but there is little archaeological record of such perishable foodstuffs.

The term Acheulean was first used by 19th-century French archaeologist Gabriel de Mortillet to refer to remnants of a prehistoric industry found near the town of Saint Acheul in northern France. The distinguishing feature of this site is an abundance of stone hand axes, tools more sophisticated than those found at Oldowan sites. The term Acheulean is now used to refer to hand axe industries in Africa, the Near East, Europe, and Asia dating from about 1.5 million years ago to 200,000 years ago and spanning human evolution from Homo erectus to early archaic Homo sapiens.

The characteristic Acheulean hand axe is a large, pointed or oval-shaped form. These hand axes were often made by striking a blank (a rough chunk of rock) from a larger stone and then shaping the blank by carefully removing flakes around its perimeter. Usually both sides, or faces, of the blank were flaked, a process called bifacial flaking. Later Acheulean hand axes may have been produced by the soft-hammer technique, in which a softer hammer of stone, bone, or antler produced thinner, more carefully shaped forms. Other associated forms include cleavers, bifacial artifacts with a sharp, guillotine-like bit at one end, and thicker, pointed artifacts known as picks. Simpler, typical Oldowan artifacts are usually also found at Acheulean sites, along with a range of retouched flake tools such as scrapers. Experiments have shown that Acheulean hand axes and cleavers are excellent tools for heavy-duty butchering activities, such as severing animal limbs. Some archaeologists, however, believe they may have served other functions, or perhaps were general, all-purpose tools.

Acheulean tools did not entirely replace Oldowan tools. Archaeologists have discovered many sites where Oldowan tools continued in use throughout Acheulean times, sometimes in the same geographic region as Acheulean industries. Interestingly, the Acheulean appears to be largely restricted to Africa, Europe, and western Asia; few Lower Palaeolithic sites in East Asia contain stone industries with typical Acheulean hand axes and cleavers. Most of the industries found in East Asia tend to be simpler, Oldowan-like technologies, as seen at sites at Nihewan and the cave of Zhoukoudian in northern China.

Well-studied Acheulean sites include those at Olduvai Gorge and Isimila in Tanzania; Olorgesailie in Kenya; Konso Gardula and Melka Kunture in Ethiopia; Kalambo Falls in Zambia; Montagu Cave in South Africa; Tabun and Gesher Benot Ya'aqov in Israel; Abbeville and Saint Acheul in France; Swanscombe and Boxgrove in England; and Torralba and Ambrona in Spain. Most anthropologists think that the Acheulean populations of Homo erectus and early Homo sapiens were probably more efficient hunters than Oldowan hominids. Recently discovered wooden spears from about 400,000 years ago at Schöningen, Germany, and a 300,000-year-old wooden spear tip from Clacton, England, suggest that the hominids who made them may have hunted game extensively.

Experts disagree about whether Acheulean hominids and their contemporaries harnessed the use of fire. Archaeologists have found possible evidence, such as apparently burnt bone and stone, discoloured sediment, and the presence of charcoal or ash, at some sites, including Cave of Hearths in South Africa, Zhoukoudian in China, and Terra Amata in France. Discrete fireplaces (hearths), however, appear to be quite rare. Similarly, there is only questionable evidence of huts or other architectural features.

The Middle Palaeolithic extends from around 200,000 years ago until about 30,000 years ago. Its industry is called the Mousterian in Europe, the Near East, and North Africa, and the period is called the Middle Stone Age in sub-Saharan Africa.

Toolmakers in the Middle Palaeolithic used a range of retouched flake tools, especially side scrapers, serrated scrapers, backed knives (blade tools with one edge deliberately blunted to fit comfortably in the hand), and points. Experts believe these tools were used to work animal hides, to shape wood tools, and as projectile points. This period is also characterized by specially prepared cores. Using the disc-core method, a circular core could produce many flakes to serve as blanks for retouched tools. With the Levallois method (named after a suburb of Paris, France, where the first such artifacts were discovered), flakes of a predetermined shape were removed from specially prepared cores. This process resulted in oval-shaped flakes or large, triangular points, depending on the type of Levallois core. Levallois cores and flakes are first seen at some late Acheulean sites but become much more common in the Middle Palaeolithic/Middle Stone Age.

Some regional variation can be seen among Middle Palaeolithic industries. A North African variant known as the Aterian produced tools and points characterized by tangs (stems projecting from the base of the tool or point, allowing the tool to be attached to a handle or shaft). In Eastern Europe, a variant called the Szeletian produced two-sided, leaf-shaped points, a style not usually seen elsewhere until the Upper Palaeolithic. In Central Africa, a variant called the Sangoan produced a range of heavy-duty picks and axes.

Middle Palaeolithic/Middle Stone Age archaeological sites are often found in the deposits of caves and rock shelters. Well-studied caves include Pech de l'Aze, Combe Grenal, La Ferrassie, La Quina, and Combe Capelle in France; Tabun, Kebara, Qafzeh, and Skhul in Israel; Shanidar in Iraq; Haua Fteah in Libya; and Klasies River Mouth in South Africa. Sites in East Asia contemporary with the Middle Palaeolithic often exhibit a simpler toolmaking technology, without as much standardization of flake tool forms as in much of the rest of Eurasia and Africa.

Hominids associated with the Middle Palaeolithic include Neanderthals and other archaic Homo sapiens (Homo sapiens predating anatomically modern humans). In Europe, the Middle Palaeolithic is associated with Homo sapiens neanderthalensis, or Neanderthals, who lived from about 200,000 to 35,000 years ago. Neanderthals were short, robust humans with fully modern cranial capacity. They had more jutting faces, more prominent brow ridges, thicker cranial bones, and larger nose cavities than modern humans. Skeletal remains show that Neanderthals were very robust and muscular. Healed injuries to some skeletons suggest that Neanderthals led stressful, rigorous lives. Famous Neanderthal discoveries include those at Neander Valley in Germany; La Chapelle-aux-Saints and La Ferrassie in France; Krapina in Croatia; Monte Circeo and Saccopastore in Italy; Shanidar in Iraq; and Tabun and Amud in Israel. Fossils of archaic Homo sapiens from this time have been found at sites such as Dali and Maba in China, Florisbad in South Africa, and Ngaloba in Tanzania. In addition, fossils interpreted as early anatomically modern humans have been found at some Middle Palaeolithic/Middle Stone Age sites in parts of Africa and the Near East, such as Qafzeh and Skhul in Israel and Klasies River Mouth in South Africa.

Middle Palaeolithic hominids appear to have been more successful hunters than their predecessors. Abundant animal remains suggest that these hominids ate many kinds of large mammals. It is unknown, however, how much of the meat consumed was obtained through hunting, as opposed to scavenging. Accumulations of remains at some sites show that some animals were of a common species and were adults in their prime, which some researchers suggest is an indication of efficient hunting behaviour. Several sites in Europe that contain the carcasses of one or more large animals are believed to be butchery sites, where early humans processed the spoils of kills. Some archaeologists have also argued that some Middle Palaeolithic stone points were probably attached to spears, a development in hunting technology. At Klasies River Mouth Cave in South Africa, archaeologists discovered a buffalo vertebra with a broken tip of what was probably a spearhead embedded in it, which could be evidence that the large mammal was hunted or trapped by hominids.

Middle Palaeolithic hominids show evidence of more behavioural complexity than their predecessors. For example, although most of the stone found at most Middle Palaeolithic sites is local (from sources within a few kilometres of a site), an increasing percentage is exotic stone, transported from sources tens of kilometres away. Simple hearths at many Middle Palaeolithic sites suggest habitual fire use and possibly firemaking as well. Evidence of housing is still quite uncommon, but is present at some sites. For example, at Molodovo, Ukraine, a circle of mammoth bones has been interpreted as a hut structure. Microscopic studies of residues on Middle Palaeolithic scraper tools suggest that they may have been used for woodworking and to work animal hides for use as clothing or in shelters.

Over the Middle Palaeolithic Epoch, hominids spread across much of Eurasia. The use of fire and clothing and the ability to build more substantial shelters may have helped them survive in cold regions, such as the central Asian steppe. By 40,000 years ago, near the end of the Middle Palaeolithic Age, humans had entered Australia, which apparently would have required traversing some distance of open ocean, probably in some form of craft. Some Middle Palaeolithic sites have skeletal remains interpreted as simple burials. No representational art is known from this period, although occasional ornaments such as beads have been found at late Middle Palaeolithic/Middle Stone Age sites.

Opinion is divided among anthropologists about whether Neanderthals and other archaic Homo sapiens had fully modern cognitive abilities, particularly the ability to recognize and communicate through symbols, a skill required to form modern languages. On one hand, the large cranial capacities of these populations might suggest modern human cognitive and behavioural capabilities. On the other hand, their technological development was very slow, and they left behind no trace of the use of symbols, such as representational cave paintings. Archaeologists have found much greater evidence of symbolism and cultural complexity in the Upper Palaeolithic.

The Upper Palaeolithic Age extends from approximately 40,000 years ago until the end of the last ice age, about 10,000 years ago. This era is known as the Paleo-Indian period in the Americas and as the Later Stone Age in sub-Saharan Africa, where it extended much longer, even into historical times in parts of the continent. In the Upper Palaeolithic, standardized blade industries appear and become much more widespread than in previous times. The first of these industries to appear in the Near East and Europe is known as the Aurignacian. Later Upper Palaeolithic industries include the Perigordian, Solutrean, and Magdalenian. The Upper Palaeolithic Epoch is usually characterized by specially prepared cores from which blades (flakes at least twice as long as they are wide) were struck off with a bone or antler punch. Upper Palaeolithic humans also developed new forms of scrapers, backed knives, burins, and points. Beautifully made, two-sided, leaf-shaped points are also common in some Upper Palaeolithic industries. Toward the end of the Upper Palaeolithic, microliths (small, geometrically shaped blade segments) became increasingly common in many areas.

By the end of the Upper Palaeolithic period and the end of the last ice age about 10,000 years ago, human populations had spread to every continent except Antarctica. Humans had effectively adapted to the northern latitudes of Eurasia and had dispersed into the American continents. The earliest documented occupation of the Americas appears to have been during the late ice age, about 12,000 to 10,000 years ago. The first recognized Paleo-Indian industry is known as Clovis, which was followed by Folsom. These industries produced delicately crafted, fluted bifacial points; fluting means that the base of the point is thinned by removing a large flake from one or both sides. Fluted Clovis points have been found at mammoth kill sites, while Folsom points are associated with bison kills, mammoths being extinct by that time.

Famous Upper Palaeolithic occupation sites include Laugerie Haute, La Madeleine, Abri Pataud, and Pincevent, in France; Castillo, Altamira, and El Juyo, in Spain; Dolní Vestonice, in the Czech Republic; Mezhirich, in Ukraine; Sungir and Kostenki, in Russia; Ksar Akil, in Lebanon; Kebara, in Israel; Zhoukoudian Upper Cave, in China; Haua Fteah, in Libya; and Taforalt, in Morocco. Well-known Later Stone Age sites in sub-Saharan Africa include Lukenya Hill, in Kenya; Kalemba, in Zambia; and Rose Cottage Cave, Wilton Cave, Nelson Bay Cave, and Boomplaas, in South Africa. The most famous Paleo-Indian sites are those in the United States near the eastern New Mexico towns of Clovis and Folsom, which gave the industries their names.

Human fossils associated with the Upper Palaeolithic, Paleo-Indian, and Later Stone Ages are usually those of anatomically modern humans, or Homo sapiens sapiens. In the 19th century, Homo sapiens skeletal remains were found in association with early Upper Palaeolithic artifacts at the rock shelter of Cro-Magnon in southern France. The term Cro-Magnon has occasionally been used to refer to the anatomically modern humans of the Upper Palaeolithic Era. Not all humans were anatomically modern in this period, however. In the early part of the Upper Palaeolithic Epoch, the sites that make up the Chatelperronian industry are associated with late Neanderthals, conceivably influenced by modern humans bearing Aurignacian technology.

During the Upper Palaeolithic Age, tools of bone, antler, and ivory become common for the first time. These tools include points, barbed harpoons, spear throwers, awls, needles, and tools interpreted as spear shaft straighteners. The presence of eyed needles suggests the use of sewn clothing (presumably of hide and possibly early textiles) or of hide coverings for tents or shelters. In some carvings from this period, human figures are depicted wearing hooded parkas or other vestments. Other technological innovations include lamps (hollowed-out stones filled with flammable substances such as oil or animal fat) and probably the bow and arrow (small projectile points have been interpreted as arrowheads). Many Upper Palaeolithic artifacts are evidence of composite technology, in which multiple components were combined to form one tool or process. For example, spear tips were attached with binding material to spear shafts, which were flung using spear throwers (sometimes called atlatls). A spear thrower usually took the form of a length of wood or bone with a handle on one end and a peg or socket at the other to hold the butt of a spear or dart. When swung overhand, the spear thrower provided greater thrust on the spear.

Upper Palaeolithic populations appear to have been competent hunter-gatherers. The use of mechanical devices such as spear throwers and, probably, bows and arrows allowed them to increase the velocity, penetrating force, and range of projectiles. Many Upper Palaeolithic sites contain large quantities of mammal bones, often with one species predominating, such as red deer, reindeer, or horse. It is believed that many of these Upper Palaeolithic hunter-gatherers could effectively predict the timing and location of seasonal resources, such as reindeer migrations or salmon runs.

Many Upper Palaeolithic sites feature elements interpreted as evidence of housing. These are commonly patterns of bone or stone concentrations that seem to delineate hut or tent structures. At the sites of Étiolles and Pincevent, in France, the distribution of stone artifacts, animal bones, hearths, and postholes has been interpreted as evidence of clearly defined huts. At Mezhirich, in Ukraine, and Kostenki, in Russia, hut structures were found made of stacked or aligned mammoth bones. Distinctive hearths, often lined or ringed with rocks, are much more common in the Upper Palaeolithic Period than in earlier times.

Stone for tools was often obtained from more distant sources, sometimes in larger quantities than seen previously in the Stone Age. Occasionally, stone was traded or carried over several hundred kilometres. It seems likely, therefore, that trade and transport routes were more formalized than they had been in earlier times. The Upper Palaeolithic Period also provides the first well-documented evidence of trade in exotic materials, such as marine shells and semiprecious stones, which were worked into personal ornaments such as beads and necklaces or sewn onto clothing.

The Upper Palaeolithic Period provides clear evidence of ritual, most notably in burials, and there is a strong possibility that religious ceremonies of some sort, though less elaborate than today's, had their beginnings during this time. For example, at Sungir, in Russia, three individuals were buried with ivory spears, pendants and necklaces of shells and animal teeth, and thousands of ivory beads that had apparently been sewn into their clothing.

Upper Palaeolithic peoples also produced art, including painting, sculpture, and engraving, which offers a glimpse of their interests and concerns. Sites in Europe are famous for their artwork, but prehistoric Stone Age art has also been richly documented in Africa, Australia, and other parts of the world. Animals are common subjects of Upper Palaeolithic art, and human figures and abstract elements such as lines, dots, chevrons, and other geometric designs are also found.

Early humans around the world used natural materials such as red and yellow ochre, manganese, and charcoal to create cave art. Among the hundreds of European sites with Upper Palaeolithic cave paintings, the best known are Altamira, in Spain, and Lascaux and the more recently discovered (and archaeologically oldest) Chauvet, in France. Animals such as bison, wild cattle, horses, deer, mammoths, and woolly rhinoceroses are represented in European Upper Palaeolithic cave art, along with human figures, which are uncommon. Later Stone Age paintings of animals have been found at sites such as Apollo 11 Cave, in Namibia, and stylized engravings and paintings of circles, animal tracks, and meandering patterns have been found in Australia's Koonalda Cave and Early Man Shelter.

Many small sculptures of human female forms (often called Venus figurines) have been found at sites in Europe and Asia. Small, stylized ivory animal figures made more than 30,000 years ago were discovered at Vogelherd, Germany, and clay sculptures of bison were found at Le Tuc d'Audoubert, in the French Pyrenees. In addition, many utilitarian objects, such as spear throwers and batons, were superbly decorated with engravings, sculptures of animals, and other motifs.

The earliest known musical instruments also come from the Upper Palaeolithic. Flutes made from long bones and whistles made from deer foot bones have been found at a number of sites. Some experts believe that Upper Palaeolithic people may have used large bones or drums with skin heads as percussion instruments.

The archaeological record of the Upper Palaeolithic shows a creative explosion of new technological, artistic, and symbolic innovations. There is little doubt that these populations were essentially modern in their biology and cognitive abilities and had fully developed language capabilities. There is a much greater degree of stylistic variation geographically (some archaeologists have suggested that this is evidence of the emergence of ethnicity) and a more rapid pace of development during the Upper Palaeolithic Period than in any previous archaeological period. Anthropologists debate whether these new Upper Palaeolithic patterns are due to biological change or whether they are simply the products of cultural knowledge and complexity accumulated through time.

The Mesolithic (also known as the Epipaleolithic) extends from the end of the Pleistocene Ice Age, about 10,000 years ago, until the period when farming became central to peoples' livelihood, which occurred at different times around the world. The term Mesolithic is generally applied to the period of post-Pleistocene hunting and gathering in Europe and, sometimes, parts of Africa and Asia. In the Americas the post-glacial hunter-gatherer stage that predates the dominance of agriculture is usually called the Archaic. In the rest of the world, Mesolithic sites are usually characterized by microliths. Microlithic blade segments were commonly retouched into a range of shapes, including crescents, triangles, rectangles, trapezoids, and rhomboids, and thus the tools are often called geometric microliths. These forms often have multiple sharp edges. Many of these microliths probably served as elements of composite tools, such as barbed or tipped spears and arrows or wooden-handled knives. The microliths were likely inserted into shafts or handles of wood or antler and reinforced with some type of adhesive.

The end of the ice age brought rapid environmental change in much of the world. With the warmer, post-glacial conditions of the Holocene Epoch, ice sheets retreated and sea levels rose, inundating coastal areas worldwide. Temperate forests spread in many parts of Europe and Asia. As these climatic and vegetative changes occurred, large herds of mammals, such as reindeer, were replaced by more solitary animals, such as red deer, roe deer, and wild pig. Cold-adapted animals, such as the reindeer, elk, and bison, retreated to the north, while others, such as the mammoth, giant deer, and woolly rhinoceros, went extinct. The rich artistic traditions of Upper Palaeolithic western Europe declined markedly after the end of the ice age. This may in part be because the changing environment made the availability of food and other resources less predictable, requiring populations to spend more time searching for resources and leaving less time to maintain the artistic traditions.

Well-studied Mesolithic/Archaic sites include Star Carr, in England; Mount Sandel, in Ireland; Skara Brae, in Britain's Orkney Islands; Vedbæk, in Denmark; Lepenski Vir, in Serbia; Jericho, in the West Bank; Nittano, in Japan; Carrier Mills, in Illinois; and Gatecliff Rockshelter, in Nevada. In sub-Saharan Africa, many Later Stone Age sites of the Holocene Epoch could broadly be termed Mesolithic, owing to their geometric microliths and bow-and-arrow technology.

During the Mesolithic, human populations in many areas began to exploit a much wider range of foodstuffs, a pattern of exploitation known as a broad-spectrum economy. Intensively exploited foods included wild cereals, seeds and nuts, fruits, small game, fish, shellfish, aquatic mammals, tortoises, and invertebrates such as snails. Domesticated dogs also appear to have been important to the societies that possessed them: wolves were domesticated in Eurasia and North America to become dogs used as hunting companions, sentinels, and, in some societies, food.

One of the most puzzling features of animal domestication is the seeming arbitrariness with which some species have been domesticated while their close relatives have not. It turns out that all but a few candidates for domestication have been eliminated by the Anna Karenina principle: humans and most animal species make for an unhappy marriage, for one or more of many possible reasons, including the animal's diet, growth rate, mating habits, disposition, tendency to panic, and several distinct features of social organization. Only a few mammal species have ended in happy marriages with humans, by virtue of compatibility on all those separate counts.

To appreciate the changes that have developed under domestication, compare wolves, the wild ancestors of domestic dogs, with the many breeds of dogs. Some dogs are much bigger than wolves (Great Danes), while others are much smaller (Pekingese). Some are slimmer and built for racing (greyhounds), while others are short-legged and useless for racing (dachshunds). They vary enormously in hair form and colour, and some are even hairless. Polynesians and Aztecs developed dog breeds specifically raised for food. Comparing a dachshund with a wolf, you would not even suspect that the former had been derived from the latter if you did not already know it. Wolves were independently domesticated to become dogs in the Americas and probably in several different parts of Eurasia, including China and Southwest Asia. Modern pigs are derived from independent sequences of domestication in China, western Eurasia, and possibly other areas as well. These examples reemphasize that the same few suitable wild species attracted the attention of many different human societies.

That is, domestication involves wild animals being transformed into something more useful to humans. Truly domesticated animals differ in many ways from their wild ancestors. These differences result from two processes: human selection of those individual animals more useful to humans than other individuals of the same species, and automatic evolutionary responses of animals to the altered forces of natural selection operating in human environments as compared with wild environments.

The wild ancestors of the big domestic herbivorous mammals were spread unevenly over the globe. South America had only one such ancestor, which gave rise to the llama and alpaca. North America, Australia, and sub-Saharan Africa had none at all. The lack of domestic mammals indigenous to sub-Saharan Africa is especially astonishing, since a main reason why tourists visit Africa today is to see its abundant and diverse wild mammals. In contrast, the wild ancestors of most of the big domestic herbivorous mammals were confined to Eurasia.

We are reminded of the many ways in which big domestic mammals were crucial to those human societies that possessed them. Most notably, they provided meat, milk products, fertilizer, land transport, leather, military assault vehicles, plow traction, and wool, as well as germs that killed previously unexposed peoples.

In addition, of course, small domestic mammals and domestic birds and insects have been useful to humans. Many birds were domesticated for meat, eggs, and feathers: the chicken in China, various duck and goose species in parts of Eurasia, the turkey in Mesoamerica, the guinea fowl in Africa, and the Muscovy duck in South America. Wolves were domesticated in Eurasia and North America to become dogs used as hunting companions, sentinels, pets, and, in some societies, food. Rodents and other small mammals domesticated for food include the rabbit in Europe, the guinea pig in the Andes, a giant rat in West Africa, and possibly a rodent called the hutia on Caribbean islands. Ferrets were domesticated in Europe to hunt rabbits, and cats were domesticated in North Africa and Southwest Asia to hunt rodent pests. Small mammals domesticated as recently as the 19th and 20th centuries include foxes, mink, and chinchillas grown for fur, and hamsters kept as pets. Even some insects have been domesticated, notably Eurasia's honeybee and China's silkworm moth, kept for honey and silk, respectively.

Many of these small animals thus yielded food, clothing, or warmth. But none of them pulled plows or wagons, none bore riders, none except dogs pulled sleds or became war machines, and none of them have been as important a source of food as the big domestic mammals.

The ways in which domesticated animals have diverged from their wild ancestors include changes in size: cows, pigs, and sheep became smaller under domestication, while guinea pigs became larger. Sheep and alpacas were selected for retention of wool and reduction of hair loss, while cows have been selected for high milk yields. Several species of domestic animals have smaller brains and less developed sense organs than their wild ancestors, because they no longer need the bigger brains and more developed sense organs on which their ancestors depended to escape from wild predators.

Dates of domestication provide a line of evidence confirming Galton's view that early herding peoples quickly domesticated all big mammal species suitable for being domesticated. All species for whose dates of domestication we have archaeological evidence were domesticated between about 8000 and 2500 BC, that is, within the first few thousand years of the sedentary farming-herding societies that arose after the end of the last Ice Age. The era of big mammal domestication began with the sheep, goat, and pig and ended with camels. Since 2500 BC there have been no significant additions.

It's true, of course, that some small mammals were first domesticated long after 2500 BC. For example, rabbits were not domesticated for food until the Middle Ages, mice and rats for laboratory research not until the 20th century, and hamsters for pets not until the 1930s. The continuing development of domesticated small mammals isn't surprising, because there are literally thousands of wild species as candidates, and because they were of too little value to traditional societies to warrant the effort of raising them. But big mammal domestication virtually ended 4,500 years ago. By then, all of the world's 148 candidate big species must have been tested innumerable times, with the result that only a few passed the test and no other suitable ones remained.

Almost all domesticated large mammals derive from wild species that live in herds, maintain a well-developed dominance hierarchy among herd members, and occupy overlapping home ranges rather than mutually exclusive territories. For example, when a herd of wild horses is on the move, its members maintain a stereotyped order: in the rear, the stallion; in the front, the top-ranking female, followed by her foals in order of age, with the youngest first; and behind her, the other mares in order of rank, each followed by her foals in order of age. In this way, many adults can coexist in the herd without constant fighting and with each knowing its rank.

That social structure is ideal for domestication, because humans can in effect take over the dominance hierarchy: horses in a pack line follow the human leader as they would normally follow the top-ranking female. Herds or packs of sheep, goats, cattle, and ancestral dogs (wolves) have a similar hierarchy. As young animals grow up in such a herd, they imprint on the animals that they regularly see nearby. Under wild conditions those are members of their own species, but captive young animals also see humans nearby and imprint on humans as well.

Such social animals lend themselves to herding. Since they are tolerant of each other, they can be bunched up. Since they instinctively follow a dominant leader and will imprint on humans as that leader, they can readily be driven by a shepherd or sheepdog. Herd animals do well when penned in crowded conditions, because they are accustomed to living in densely packed groups in the wild.

In contrast, members of most solitary territorial animal species cannot be herded. They do not tolerate each other, they do not imprint on humans, and they are not instinctively submissive. Whoever saw a line of cats (solitary and territorial in the wild) following a human or allowing themselves to be herded by a human? Every cat lover knows that cats are not submissive to humans in the way dogs instinctively are. Cats and ferrets are the only territorial mammal species that were domesticated, because our motive for doing so was not to herd them in large groups raised for food but to keep them as solitary hunters or pets.

While most solitary territorial species haven't been domesticated, it's not conversely the case that most herd species can be domesticated. Most can't, for one of several additional reasons.

Herds of many species don't have overlapping home ranges but instead maintain exclusive territories against other herds. It's no more possible to pen two such herds together than to pen two males of a solitary species.

Again, many species that live in herds for part of the year are territorial in the breeding season, when they fight and do not tolerate each other's presence. That's true of most deer and antelope species (again with the exception of reindeer), and it's one of the main factors that has disqualified the social antelope species for which Africa is famous from being domesticated. One's first association with African antelope is of vast dense herds spreading across the horizon, but in fact the males of those herds space themselves into territories and fight fiercely with each other when breeding. Hence those antelope cannot be maintained in crowded enclosures in captivity, as can sheep or goats or cattle. Territorial behaviour similarly combines with a fierce disposition and a slow growth rate to banish rhinos from the farmyard.

Finally, many herd species, including again most deer and antelope, do not have a well-defined dominance hierarchy and are not instinctively prepared to become imprinted on a dominant leader (hence to become imprinted on humans). As a result, though many deer and antelope species have been tamed (think of all those true Bambi stories), one never sees such tame deer and antelope driven in herds like sheep. That problem also derailed domestication of North American bighorn sheep, which belong to the same genus as the Asiatic mouflon sheep, ancestor of our domestic sheep. Bighorn sheep are similar to mouflons in most respects except a crucial one: they lack the mouflon's stereotypical behaviour whereby some individuals behave submissively toward other individuals whose dominance they acknowledge.

The Fertile Crescent's biological diversity over small distances contributed to another advantage: its wealth not only in ancestors of valuable crops but also in ancestors of domesticated big mammals. There were few or no wild mammal species suitable for domestication in the other Mediterranean zones of California, Chile, southwestern Australia, and South Africa. In contrast, four species of big mammals, the goat, sheep, pig, and cow, were domesticated very early in the Fertile Crescent, possibly earlier than any other animal except the dog anywhere else in the world. Those species remain today four of the world's five most important domesticated mammals. Yet their wild ancestors were commonest in different parts of the Fertile Crescent, so the four species were domesticated in different places: sheep possibly in the central part, goats either in the eastern part at higher elevations (the Zagros Mountains of Iran) or in the southwestern part (the Levant), pigs in the north-central part, and cows in the western part, including Anatolia. Nevertheless, although the areas of abundance of these four wild progenitors thus differed, all four lived in sufficiently close proximity that they were readily transferred after domestication from one part of the Fertile Crescent to another, and the whole region ended up with all four species.

Agriculture was launched in the Fertile Crescent by the early domestication of eight crops, termed “founder crops” (because they founded agriculture in the region and possibly in the world). Those eight founders were the cereals emmer wheat, einkorn wheat, and barley; the pulses lentil, pea, chickpea, and bitter vetch; and the fibre crop flax. Of these eight, only two, flax and barley, range in the wild at all widely outside the Fertile Crescent and Anatolia. Two of the founders had very small ranges in the wild, chickpea being confined to southeastern Turkey and emmer wheat to the Fertile Crescent itself. Thus, agriculture could arise in the Fertile Crescent from the domestication of locally available wild plants, without having to wait for the arrival of crops derived from plants domesticated elsewhere. Conversely, two of the eight founder crops could not have been domesticated anywhere in the world except in the Fertile Crescent, since they did not occur in the wild elsewhere.

Another advantage of early food production in the Fertile Crescent is that it may have faced less competition from the hunter-gatherer lifestyle than in other areas, including the western Mediterranean. Southwest Asia has few large rivers and only a short coastline, providing meager aquatic resources (in the form of river and coastal fish and shellfish). One of the important mammal species hunted for meat, the gazelle, originally lived in huge herds but was overexploited by the growing human population and reduced to low numbers. Thus, the food-production package quickly became superior to the hunter-gatherer package. Sedentary villages based on cereals were already in existence before the rise of food production and predisposed those hunter-gatherers to agriculture and herding. In the Fertile Crescent the transition from hunting and gathering to food production took place fast: as late as 9000 BC people still had no crops or domestic animals and were entirely dependent on wild foods, but by 6000 BC some societies were almost completely dependent on crops and domestic animals.

The situation in Mesoamerica contrasts strongly: that area provided only two domesticable animals (the turkey and the dog), whose meat yield was far lower than that of cows, sheep, goats, and pigs, and corn, Mesoamerica's staple grain, was difficult to domesticate and perhaps slow to develop. As a result, domestication may not have begun in Mesoamerica until around 3500 BC (the date remains very uncertain); those first developments were undertaken by people who were still nomadic hunter-gatherers, and settled villages did not arise there until around 1500 BC.

Our comparison begins with food production, a major determinant of local population size and societal complexity, and hence an ultimate factor behind the conquest. The most glaring difference between American and Eurasian food production involved big domestic mammal species. The Anna Karenina principle can be extended to understanding much else about life besides marriage. We tend to seek easy, single-factor explanations of success. For most important things, though, success actually requires avoiding many separate possible causes of failure. The Anna Karenina principle explains a feature of animal domestication that had heavy consequences for human history: namely, that so many seemingly suitable big wild mammal species, such as zebras and peccaries, have never been domesticated, and that the successful domestications were almost exclusively Eurasian. It also bears on a related puzzle: many wild plant species that seem suitable for domestication were likewise never domesticated.

The enormous set of differences between Eurasian and Native American societies was due largely to the Late Pleistocene extinction (extermination?) of most of North and South America's former big wild mammal species. Had it not been for those extinctions, modern history might have taken a different course. When Cortés and his bedraggled adventurers landed on the Mexican coast in 1519, they might have been driven into the sea by thousands of Aztec cavalry mounted on domesticated native American horses. Instead of the Aztecs dying of smallpox, the Spaniards might have been wiped out by American germs transmitted by disease-resistant Aztecs. American civilizations resting on animal power might have been sending their own conquistadores to ravage Europe. But these hypothetical outcomes were foreclosed by mammal extinctions thousands of years earlier.

Part of the explanation for Eurasia’s having been the main site of big mammal domestication is that it was the continent with the most candidate species of wild mammals to start with, and the one that lost the fewest candidates to extinction in the last 40,000 years. It is also true that the percentage of candidates actually domesticated is highest in Eurasia (18 percent), and is especially low in sub-Saharan Africa (no species domesticated out of fifty-one candidates). Particularly surprising is the large number of species of African mammals that were never domesticated, despite their having Eurasian counterparts that were. Why Eurasia’s horses, but not Africa’s zebras? Why Eurasia’s pigs, but not American peccaries or Africa’s three species of true wild pigs? Why Eurasia’s five species of wild cattle (aurochs, water buffalo, yak, gaur, and banteng), but not the African buffalo or American bison? Why the Asian mouflon sheep (ancestor of our domestic sheep), but not the North American bighorn sheep?

Nevertheless, no one would seriously describe this evolutionary process as domestication, because birds, bats, and other animal consumers do not fulfill the other part of the definition: they do not consciously grow plants. In the same way, the early unconscious stages of crop evolution from wild plants consisted of plants evolving in ways that attracted humans to eat and disperse their fruit without yet intentionally growing them. Human latrines, like those of aardvarks, may have been a testing ground of the first unconscious crop breeders.

Latrines are merely one of many places where we accidentally sow the seeds of wild plants that we eat. When we gather edible wild plants and carry home our bounty, some seeds spill en route or at our houses. Some fruit rots while still containing perfectly good seeds and gets thrown out, uneaten, into the garbage. Of the parts of the fruit that we take into our mouths, strawberry seeds are tiny and inevitably swallowed and defecated, but other seeds are large enough to be spat out. Thus, our spittoons and garbage dumps joined our latrines to form the first agricultural research laboratories.

From your berry-picking days, you know that you select particular berries or berry bushes. Eventually, when the first farmers began to sow seeds deliberately, they would inevitably sow those from the plants they had chosen to gather, though they didn’t understand the genetic principle that big berries have seeds likely to grow into bushes yielding more big berries. So, when you wade in amid the mosquitoes on a hot, humid day, you don’t do it for just any strawberry bush. Even if unconsciously, you decide which bush looks most promising, and whether it’s worth it at all. What are your unconscious criteria?

One criterion, of course, is size. You prefer large berries, because it is not worth your while getting sunburned and mosquito-bitten for some lousy little berries. That provides part of the explanation why many crop plants have much bigger fruits than their wild ancestors do. It is especially familiar that modern supermarket strawberries and blueberries are gigantic compared with wild ones; those differences arose only in recent centuries.

Still another obvious difference between seeds that we grow and many of their wild ancestors is in bitterness. Many wild seeds evolved to be bitter, bad-tasting, or even poisonous, to deter animals from eating them. Thus, natural selection acts oppositely on seeds and on fruits: plants whose fruits are tasty get their seeds dispersed by animals, but the seed itself within the fruit has to remain bad-tasting. Otherwise, the animal would also chew up the seed, and it could not sprout.

While size and tastiness are the most obvious criteria by which human hunter-gatherers select wild plants, other criteria include fleshy or seedless fruits, oily seeds, and long fibres. Wild squashes and pumpkins have almost no fruit around their seeds, but the preferences of early farmers selected for squashes and pumpkins consisting of far more flesh than seeds. Cultivated bananas were selected long ago to be all flesh and no seed, thereby inspiring modern agricultural scientists to develop seedless oranges, grapes, and watermelons as well. Seedlessness provides a good example of how human selection can completely reverse the original evolved function of a wild fruit, which in nature serves as a vehicle for dispersing seeds.

In ancient times many plants were similarly selected for oily fruits or seeds. Among the earliest fruit trees domesticated in the Mediterranean world were olives, cultivated since around 4000 BC and used for their oil. Crop olives are not only bigger but also oilier than wild ones. Ancient farmers selected sesame, mustard, poppies, and flax as well for oily seeds, while modern plant scientists have done the same for sunflower, safflower, and cotton.

That is why Darwin, in his On the Origin of Species, didn’t start with an account of natural selection, but instead opened with a lengthy account of how our domesticated plants and animals arose through artificial selection by humans. Rather than discussing the Galápagos Islands usually associated with him, Darwin began by discussing how farmers develop varieties of gooseberries. He wrote, “I have seen great surprise expressed in horticultural works at the wonderful skill of gardeners, in having produced such splendid results from such poor materials, but the art has been simple, and, as regards the final result, has been followed almost unconsciously. It has consisted in always cultivating the best-known variety, sowing its seeds, and, when a better variety chanced to appear, selecting it, and so onward.” Those principles of crop development by artificial selection still serve as our most understandable model of the origin of species by natural selection.

Sometimes the same wild species was domesticated independently at several different sites. Such cases can often be detected by analysing the resulting morphological, genetic, or chromosomal differences between specimens of the same crop or domestic animal in different areas. For instance, India’s zebu breeds of domestic cattle possess humps lacking in western Eurasian cattle breeds, and genetic analyses show that the ancestors of modern Indian and western Eurasian cattle breeds diverged from each other hundreds of thousands of years ago, long before any animals were domesticated anywhere. That is, cattle were domesticated independently in India and in western Eurasia within the last 10,000 years, starting with wild Indian and western Eurasian cattle subspecies that had diverged hundreds of thousands of years earlier.

Did all those peoples of Africa, the Americas, and Australia, despite their enormous diversity, nonetheless share some cultural obstacles to domestication not shared with Eurasian peoples? For example, did Africa’s abundance of big mammals, available to kill by hunting, make it superfluous for Africans to go to the trouble of tending domestic stock?

The answer to that question is unequivocal: no. That interpretation is refuted by five types of evidence: the rapid acceptance of Eurasian domesticates by non-Eurasian peoples, the universal human penchant for keeping pets, the rapid domestication of the Ancient Species of Big Domestic Mammals, the repeated independent domestications of some of them, and the limited successes of modern efforts at further domestications.

When Eurasia’s domestic mammals reached sub-Saharan Africa, they were adopted by the most diverse African peoples wherever conditions permitted. Those African herders thereby achieved a huge advantage over African hunter-gatherers and quickly displaced them. In particular, Bantu farmers who acquired cows and sheep spread out of their homeland in West Africa and within a short time overran the former hunter-gatherers in most of the rest of sub-Saharan Africa. Even without acquiring crops, Khoisan peoples who acquired cows and sheep around 2,000 years ago displaced Khoisan hunter-gatherers over much of southern Africa. The arrival of the domestic horse in West Africa transformed warfare there and turned the area into a set of kingdoms dependent on cavalry. The only factor that prevented horses from spreading beyond West Africa was the trypanosome diseases borne by tsetse flies.

The same pattern repeated itself elsewhere in the world whenever peoples lacking native wild mammal species suitable for domestication finally had the opportunity to acquire Eurasian domestic animals. European horses were eagerly adopted by Native Americans in both North and South America, within a generation of the escape of horses from European settlements. By the 19th century, for example, North America’s Great Plains Indians were famous as expert horse-mounted warriors and bison hunters, but they did not even obtain horses until the late 17th century. Sheep acquired from Spaniards similarly transformed Navajo Indian society and led to, among other things, the weaving of the beautiful woollen blankets for which the Navajo have become renowned. Within a decade of Tasmania’s settlement by Europeans with dogs, Aboriginal Tasmanians, who had never before seen dogs, began keeping them for use in hunting. Thus, among the thousands of culturally diverse native peoples of Australia, the Americas, and Africa, no universal cultural taboo stood in the way of animal domestication.

Surely, if some local wild mammal species of those continents had been domesticable, some Australian, American, and African peoples would have domesticated them and gained advantage from them, just as they benefited from the Eurasian domestic animals that they immediately adopted when those became available. For instance, consider all the peoples of sub-Saharan Africa living within the range of wild zebras and buffalo. Why wasn’t there at least one African hunter-gatherer tribe that domesticated those zebras and buffalo and thereby gained sway over other Africans, without having to await the arrival of Eurasian horses and cattle? All these facts indicate that the explanation for the lack of native mammal domestication outside Eurasia lay with the locally available wild mammals themselves, not with the local peoples.

Further strong evidence for the same interpretation comes from pets. Keeping wild animals as pets, and taming them, constitutes an initial stage in domestication. But pets have been reported from virtually all traditional human societies on all continents. The variety of wild animals thus tamed is far greater than the variety eventually domesticated, and includes some species that we would scarcely have imagined as pets.

For example, in New Guinea villages one often sees people with pet kangaroos, possums, and birds ranging from flycatchers to ospreys. Most of these captives are eventually eaten, though some are kept just as pets. New Guineans even regularly capture chicks of wild cassowaries (an ostrich-like large, flightless bird) and raise them to eat as a delicacy, even though captive adult cassowaries are extremely dangerous and now and then disembowel village people. Some Asian peoples tame eagles for use in hunting, although those powerful pets have also been known on occasion to kill their human handlers. Ancient Egyptians and Assyrians, and modern Indians, tamed cheetahs for use in hunting. Paintings made by ancient Egyptians show that they further tamed hoofed mammals such as gazelles and hartebeests, birds such as cranes, surprisingly even giraffes (which can be dangerous), and also hyenas. African elephants were tamed in Roman times despite the obvious danger, and Asian elephants are still being tamed today. Perhaps the most unlikely pet is the European brown bear, which the Ainu people of Japan regularly captured as young animals, tamed, and reared to kill and eat in a ritual ceremony.

Thus, many wild animal species entered the first stages of the animal-human relationship leading to domestication, but only a few emerged at the other end of that sequence as domestic animals. Over a century ago, the British scientist Francis Galton summarized this discrepancy succinctly: “It would appear that every wild animal has had its chance of being domesticated, that [a] few . . . were domesticated long ago, but that the large remainder, who failed sometimes in only one small particular, are destined to perpetual wildness.”

Still another line of evidence that some mammal species are much more suitable than others is provided by the repeated independent domestications of the same species. Genetic evidence based on the portions of our genetic material known as mitochondrial DNA recently confirmed, as had long been suspected, that the humped cattle of India and the humpless European cattle were derived from two separate populations of wild ancestral cattle that had diverged hundreds of thousands of years ago. That is, Indian peoples domesticated the local Indian subspecies of wild aurochs, Southwest Asians independently domesticated their own Southwest Asian subspecies of aurochs, and North Africans may have independently domesticated the North African aurochs.

Similarly, wolves were independently domesticated to become dogs in the Americas and probably in several different parts of Eurasia, including China and Southwest Asia. Modern pigs are derived from independent sequences of domestication in China, western Eurasia, and possibly other areas as well. These examples reemphasize that the same few suitable wild species attracted the attention of many different human societies.

The failure of modern efforts provides a final type of evidence that past failures to domesticate the large residue of wild candidate species arose from shortcomings of those species, rather than from shortcomings of ancient humans. Europeans today are heirs to one of the longest traditions of animal domestication on Earth, the one that began in Southwest Asia around 10,000 years ago. Since the fifteenth century, Europeans have spread around the globe and encountered wild mammal species not found in Europe. European settlers, such as those who encountered New Guineans with pet kangaroos and possums, have tamed or made pets of many local mammals, just as indigenous peoples have. European herders and farmers emigrating to other continents have also made serious efforts to domesticate some local species.

In the 19th and 20th centuries at least six large mammals (the eland, elk, moose, musk ox, zebra, and American bison) have been the subjects of especially well-organized projects aimed at domestication, carried out by modern scientific animal breeders and geneticists. For example, eland, the largest African antelope, have been undergoing selection for meat quality and milk quantity at the Askaniya-Nova Zoological Park in Ukraine, as well as in England, Kenya, Zimbabwe, and South Africa; an experimental farm for elk (red deer, in British terminology) has been operated by the Rowett Research Institute at Aberdeen, Scotland; and an experimental farm for moose has operated in the Pechora-Ilych National Park in Russia. Yet these modern efforts have achieved only very limited successes. While bison meat occasionally appears in some U.S. supermarkets, and while moose have been ridden, milked, and used to pull sleds in Sweden and Russia, none of these efforts has yielded a result of sufficient economic value to attract many ranchers. It is especially striking that recent attempts to domesticate eland within Africa itself, where its disease resistance and climate tolerance would give it a big advantage over introduced Eurasian livestock susceptible to African diseases, have not caught on.

Thus, neither indigenous herders with access to candidate species over thousands of years, nor modern geneticists, have succeeded in making useful domesticates of large mammals beyond the Ancient Species of Big Herbivorous Domestic Mammals, which were all domesticated by at least 4,500 years ago. Yet scientists today could undoubtedly, if they wished, fulfill for many species that part of the definition of domestication that specifies control of breeding and food supply. For example, the San Diego and Los Angeles zoos are now subjecting the last surviving California condors to a more draconian control of breeding than that imposed on any domesticated species. All individual condors have been genetically identified, and a computer program determines which male will mate with which female in order to achieve human goals. Zoos are conducting similar breeding programs for many other threatened species, including gorillas and rhinos. But the zoos’ rigorous selection of California condors shows no prospect of yielding an economically useful product. Nor do zoos’ efforts with rhinos, although rhinos offer up to more than three tons of meat on the hoof. As we will now see, rhinos, like most other big mammals, present insuperable obstacles to domestication.

Meanwhile, there are areas in which food production arose altogether independently, with the domestication of many indigenous crops (and, sometimes, animals) before the arrival of any crops or animals from other areas. There are only five such areas for which the evidence is now detailed and compelling: Southwest Asia, also known as the Near East or Fertile Crescent; China; Mesoamerica (the term applied to central and southern Mexico and adjacent areas of Central America); the Andes of South America, and possibly the adjacent Amazon Basin as well; and the eastern United States. Some of these centres may actually comprise several nearby centres where food production arose at least partly independently, such as North China’s Yellow River valley and South China’s Yangtze River valley. Besides these five areas where food production definitely arose de novo, four others (Africa’s Sahel zone, tropical West Africa, Ethiopia, and New Guinea) are candidates for that distinction. However, there is some uncertainty in each case. Although indigenous wild plants were undoubtedly domesticated in Africa’s Sahel zone just south of the Sahara, cattle herding may have preceded agriculture there, and it is not yet certain whether those were independently domesticated Sahel cattle or, instead, domestic cattle of Fertile Crescent origin whose arrival triggered local plant domestication. It remains similarly uncertain whether the arrival of those Sahel crops then triggered the undoubted local domestication of indigenous wild plants in tropical West Africa, and whether the arrival of Southwest Asian crops is what triggered the local domestication of indigenous wild plants in Ethiopia. As for New Guinea, archaeological studies there have provided evidence of early agriculture well before food production in any adjacent areas, but the crops grown have not been definitely identified.

Food production in the Fertile Crescent may also have faced less competition from the hunter-gatherer lifestyle than in other areas, including the western Mediterranean. Southwest Asia has few large rivers and only a short coastline, providing relatively meagre aquatic resources (in the form of river and coastal fish and shellfish). One of the important mammal species hunted for meat, the gazelle, originally lived in huge herds but was overexploited by the growing human population and reduced to low numbers. Thus, the food production package quickly became superior to the hunter-gatherer package. Sedentary villages based on wild cereals were already in existence before the rise of food production and predisposed those hunter-gatherers to agriculture and herding. In the Fertile Crescent the transition from hunting and gathering to food production took place relatively fast: as late as perhaps 9000 BC, people still had no crops and domestic animals and were entirely dependent on wild foods, yet by 6000 BC some societies were almost completely dependent on crops and domestic animals.

Evidently, most of the Fertile Crescent’s founder crops were never domesticated again elsewhere after their initial domestication in the Fertile Crescent. Had they been repeatedly domesticated independently, they would exhibit legacies of those multiple origins in the form of varied chromosomal arrangements or varied mutations. These crops are therefore typical examples of preemptive domestication: having spread quickly from the Fertile Crescent, they preempted any need for people elsewhere to domesticate the same wild ancestors. Once a crop had become available, there was no further need to gather it from the wild and thereby set it on the path to domestication again.

In addition, of course, small domestic mammals and domestic birds and insects have also been useful to humans. Many birds were domesticated for meat, eggs, and feathers: the chicken in China, various duck and goose species in parts of Eurasia, turkeys in Mesoamerica, guinea fowl in Africa, and the Muscovy duck in South America. Wolves were domesticated in Eurasia and North America to become our dogs, used as hunting companions, sentinels, pets, and, in some societies, food. Rodents and other small mammals domesticated for food included the rabbit in Europe, the guinea pig in the Andes, a giant rat in West Africa, and possibly a rodent called the hutia on Caribbean islands. Ferrets were domesticated in Europe to hunt rabbits, and cats were domesticated in North Africa and Southwest Asia to hunt rodent pests. Small mammals domesticated as recently as the 19th and 20th centuries include foxes, mink, and chinchillas raised for fur, and hamsters kept as pets. Even some insects have been domesticated, notably Eurasia’s honeybee and China’s silkworm moth, kept for honey and silk, respectively.

Many of these small animals thus yielded food, clothing, or warmth. However, none of them pulled plows or wagons, none bore riders, none except dogs pulled sleds or became war machines, and none of them has been as important for food as big domestic mammals have been.

It is true, of course, that some small mammals were first domesticated long after 2500 BC. For example, rabbits weren’t domesticated for food until the Middle Ages, mice and rats for laboratory research not until the 20th century, and hamsters for pets not until the 1930s. The continuing development of domesticated small mammals is not surprising, because there are literally thousands of wild species as candidates, and because they were of too little value to traditional societies to warrant the effort of raising them. By contrast, the domestication of big mammals virtually ended 4,500 years ago. By then, all of the world’s 148 candidate big species had been tested innumerable times, with the result that only a few passed the test and no other suitable ones remained.

Africa’s domesticated animal species can be summarized much more quickly than its plants, because there are so few of them. The sole animal known for certain to have been domesticated in Africa, because its wild ancestor is confined there, is a turkey-like bird, the guinea fowl. Wild ancestors of domestic cattle, donkeys, pigs, dogs, and house cats were native to North Africa but also to Southwest Asia, so we cannot yet be certain where they were first domesticated, although the earliest dates currently known for domestic donkeys and house cats favour Egypt. Recent evidence suggests that cattle may have been domesticated independently in North Africa, Southwest Asia, and India, and that all three of these stocks have contributed to modern African cattle breeds. Otherwise, all the remainder of Africa’s domestic mammals must have been domesticated elsewhere and introduced as domesticates to Africa, because their wild ancestors occurred only in Eurasia. Africa’s sheep and goats were domesticated in Southwest Asia, its chickens in Southeast Asia, its horses in southern Russia, and its camels probably in Arabia.

Like New Guinea, Australia had no domesticable native mammals; the sole foreign domesticated mammal it adopted was the dog, which arrived from Asia (presumably in Austronesian canoes) around 1500 BC and established itself in the Australian wild to become the dingo. Native Australians kept captive dingos as companions, watchdogs, and even as living blankets, giving rise to the expression ‘five-dog night’ to mean a very cold night. However, they did not use dingos/dogs for food, as did Polynesians, or for cooperative hunting of wild animals, as did New Guineans.

Some Mesolithic hunter-gatherers, such as the Natufians of the Near East, appear to have lived in small settlements based on an economy involving gazelle hunting and the harvesting of wild cereals using sickles with flint blade segments inset in bone handles. In the Near East and North Africa, Mesolithic populations processed wild plant foods using grinding stones.

Other Mesolithic technological innovations include the adz and axe (woodworking tools consisting of flaked stone blades set in bored antler sleeves and fastened to wooden handles), fishing weirs and traps, fish hooks, the first preserved bows and arrows, baskets, textiles, sickles, dugout canoes and paddles, sledges, and early skis. The Jomon culture of Japan produced pottery by 10,000 years ago, as did the Ertebølle culture of Scandinavia somewhat later.

The development of broad-spectrum economies in the post-glacial Mesolithic/Archaic period laid the foundations for the domestication of plants and animals, which in turn led to the rise of farming communities in some parts of the world. This development marked the beginning of the Neolithic.

Farming originated at different times in different places, as early as about 9,000 years ago in some parts of the world. In some regions farming arose through indigenous development, and in others it spread from other areas. Most archaeologists believe that the development of farming in the Neolithic was one of the most important and revolutionary innovations in the history of the human species. It allowed more permanent settlements, much larger and denser populations, the accumulation of surpluses and wealth, the development of more profound status and rank differences within populations, and the rise of specialized crafts.

Neolithic toolmaking generally shows a large measure of technological continuity with the Mesolithic. Neolithic industries often include blade and bladelet (small blade) technologies, sometimes accompanied by microliths. They also feature a widened range of retouched tools, including endscrapers (scrapers for working hides), backed blades or bladelets (some of which were set into handles and used as sickles), and a broader range of projectile points. In addition, ground and polished axes and adzes, which would have been used for forest clearance to plant crops and for woodworking activities, are characteristic of the Neolithic. Such tools, although labour-intensive to manufacture, could be used for a long time without resharpening and consequently were highly prized by these early farmers. Large-scale trade networks in axes and other stone artifacts are documented in the Neolithic, with artifacts sometimes found hundreds of miles from their sources. Other technological developments in the Neolithic include grinding stones, such as mortars and pestles, for the processing of cereal foods; the widespread use of pottery for surplus food storage and cooking; the construction of granaries for the storage of grain; the use of domesticated plant fibres for textiles; and weaving technology.

Archaeologists have several theories to explain why humans began farming, and the reasons probably differed from one region to another. Some theories maintain that population pressure or environmental change forced humans to find new economic strategies, which led to farming. Another theory maintains that some human populations happened to live in regions where wild plants and animals were easily domesticated, making the development of agriculture something of a historical accident. Still another theory proposes that the rise of farming may have been driven by social change, as individuals began to use food surpluses as a means to accumulate wealth.

Different plant crops were cultivated in different places, depending on what wild plants grew naturally and how well they responded to cultivation. In the Near East, important crops included wheat, barley, rye, legumes, walnuts, pistachios, grapes, and olives. In China, millet and rice predominated. In Africa, millet, sorghum, African rice, and yams were commonly grown. Rice, plantains, bananas, coconuts, and yams were important in Southeast Asia. Finally, in the Americas, corn, squash, beans, potatoes, peppers, sunflowers, amaranths, and goosefoots were commonly grown.

Domesticated animals also varied from one region to another according, again, to availability and their potential to be domesticated. In Eurasia, Neolithic people domesticated dogs, sheep, goats, cattle, pigs, chickens, ducks, and water buffalo. In the Americas, domesticated animals included dogs, turkeys, llamas, alpacas, and guinea pigs. In Africa, the primary domesticated animals (cattle, sheep, and goats) probably spread from the Near East.

Well-studied early farming sites in Eurasia include Jericho, in the West Bank; Ain Ghazal, in Jordan; Ali Kosh, in Iran; Mehrgarh, in Pakistan; Banpocun (Pan-p’o-ts’un), in China; and Spirit Cave, in Thailand. Important African sites include Adrar Bous in Niger, Iwo Eleru in Nigeria, and Hyrax Hill and Lukenya Hill in Kenya. In the Americas, sites showing early plant domestication include Guila Naquitz, in Mexico, and Guitarrero Cave, in Peru.

Larger Neolithic settlements show a variety of new architectural developments. For instance, in the Near East, conical beehive-shaped houses or rambling, connected apartment-style housing was often constructed with mud bricks. In Eastern Europe, houses were made with wattle-and-daub (interwoven twigs plastered with clay) walls, and, in later times, longhouses were constructed with massive timbers. In China, some settlements contain semisubterranean houses dug into clay, with evidence of walls and roofs made of thatch or other materials and supported by poles.

The domestication of plants and animals led to profound social change during the Neolithic. Surpluses of food, such as stored grain or herds of livestock, could become commodities of wealth for some individuals, leading to social differentiation within farming communities. Trade in raw materials and manufactured products between different areas increased markedly during the Neolithic, and many foreign or exotic goods appear to have acquired special symbolic value or status. Some Neolithic graves contain rich stores of goods or exotic materials, revealing differentiation in terms of wealth, rank, or power.

In certain areas, notably parts of the Near East and Western Europe, Neolithic peoples built massive ceremonial complexes, efforts that would have required extensive, dedicated work forces. Large earthworks and megalithic (‘giant stone’) monuments from the Neolithic (including the Avebury stone circle and the earliest stages of Stonehenge, in England, and the monuments of Carnac, in France) suggest more highly organized political structures and more complex social organization than among most hunter-gatherer populations. In the Americas, sites such as the mounds of Cahokia, in Illinois, also reflect a more complex, organized political and social order. The technological innovations and economic basis established and spread by Neolithic communities ultimately set the stage for the development of complex societies and civilizations around the world.

Humans produced metal tools and ornaments from beaten copper as early as 12,000 years ago in some parts of the world. By 6,000 years ago, early experiments in metallurgy, particularly extracting metals from copper ore (smelting), were being conducted in some parts of Eurasia, notably in Eastern Europe and the Near East. By 5,000 years ago, copper and tin ores were being smelted and alloyed in some regions, marking the dawn of the Bronze Age. Casting of bronze tools, such as axes, knives, swords, spearheads, and arrowheads, became increasingly common over time. At first, copper and bronze tools were rare and stone tools were still far more common, but as time went on, metal tools gradually replaced stone as the principal raw material for edged tools and weapons.

In Eurasia and parts of Africa, the rise of metallurgical societies appears to coincide with the rise of the earliest state societies and civilizations, such as ancient Egypt, Sumer, Minoan culture, Mycenae, and China. In the Americas, parts of sub-Saharan Africa, Australia, and the Pacific Islands, societies continued to use stone and other nonmetal materials as the principal raw materials for tools up to the time of European contact, starting in the 15th century AD. Although populations in these areas could technically be described as Stone Age groups, many had become agricultural societies and had formed flourishing civilizations.

Stone technology enjoyed a brief resurgence within iron-using societies with the coming of flintlock firearms, beginning in the 17th century. Carefully shaped flints-reminiscent of the geometric microliths of the Mesolithic and early Neolithic-were struck against steel to create a spark to ignite the firearm. By the end of the 20th century few human groups had a traditional stone technology, although a few groups on the island of New Guinea still relied on the use of stone adzes. Tools of metal, plastic, and other materials had replaced stone technologies virtually everywhere.

Cave Dwellers is the term used to designate ancient peoples who occupied caves in various parts of the world. Cave dwellers date generally from the Stone Age period known as the Palaeolithic, which began as early as 2.5 million years ago. Caves are natural shelters, offering shade and protection from wind, rain, and snow. As archaeological sites, caves are easy to locate and often provide conditions that encourage the preservation of normally perishable materials, such as bone. As a result, the archaeological exploration of caves has contributed significantly to the reconstruction of the human past.

Cave paintings, such as those at Lascaux, France, were made by Palaeolithic artists more than 15,000 years ago. At Lascaux, a leaping cow and a group of small horses were painted with red and yellow ochre that was either blown through reeds onto the wall or mixed with animal fat and applied with reeds or thistles. It is believed that prehistoric hunters made these paintings to gain magical powers that would ensure a successful hunt.

Wherever caves were available, prehistoric nomadic hunters and gatherers incorporated them into the yearly cycle of seasonal camps. Most of their activities took place around campfires at the cave mouth, and some caves contain stone walls and pavements providing additional protection from winds and dampness. Hunting, particularly of reindeer, horse, red deer, and bison, was important; many caves are situated on valley slopes providing views of animal migration routes.

Stone toolmaking began when humans first made tools of stone at least 2.5 million years ago, initiating the so-called Stone Age. The Stone Age advanced through three stages over time: the Palaeolithic (which is subdivided into Lower, Middle, and Upper periods), the Mesolithic, and the Neolithic. Blade toolmaking was a development of the Upper Palaeolithic, which began about 40,000 years ago. This technique produced a far greater variety and higher quality of tools than did earlier methods of toolmaking.

Artifacts have been found in caves in France, Spain, Belgium, Germany, Italy, and Great Britain. The association of these remains with the bones of extinct animals, such as the cave bear and saber-toothed tiger, makes evident the great antiquity of many cave deposits. A variety of stone and bone spearheads discovered in excavated caves documents the importance of spears until the bow and arrow appeared in the late Palaeolithic era. Other common tools included stone scrapers for working hides and wood, burins for engraving, and knives for butchering and cutting. Throughout the Palaeolithic period such tools became increasingly diverse and well made. Bone needles, barbed harpoons, and spear-throwers were made and decorated with carved designs. Evidence of bone pendants and shell necklaces also exists. Among the caves that have yielded relics of early humans are the Cro-Magnon and Vallonnet in France.

Wall paintings and engravings have been found in more than 200 caves, largely in Spain and France, dating from 25,000 to 10,000 years ago. Frequently found deep inside the caves, the paintings depict animals, geometric signs, and occasional human figures. In the cave of La Colombière in France, a remarkable series of sketches engraved on bone and smoothed stones was unearthed in 1913. In caves such as Altamira in Spain and Lascaux in France, multicolored animal figures were drawn using mineral pigments mixed with animal fats. Some paintings adorn walls of large chambers suitable for ritual gatherings; others are found in narrow passages accessible only to individuals. Hunting and fertility seem to have been important artistic themes. The ritual gatherings themselves promoted communication and intermarriage among the normally scattered small groups.

On every continent, prehistoric foragers used caves. In the Zhoukoudian (Chou-k'ou-tien) Cave near Beijing, China, remains of bones and tools of Homo erectus (Peking Man) have been discovered. Chinese caves contain some of the earliest evidence of human use of fire, from approximately 400,000 years ago. In the Shânîdâr Cave in Iraq, 50,000-year-old Neanderthal skeletons were unearthed in 1957. Ancient pollen buried with them has been interpreted as evidence that these cave dwellers had developed funeral rituals. In the western deserts of North America, caves have been found that contain plant foods, woven sandals, and baskets, representing a desert culture of about 9,000 years ago. Early inhabitants of Australia, the Middle East, and the Peruvian Andes have also left remains in caves.

Gradually people learned to grow food, rather than forage for it. This was the beginning of the Neolithic age, which, although ending in western Europe some 4500 years ago, continued elsewhere in the world until modern times. Once agriculture became important, people established villages of permanent houses and found new uses for caves, mainly as hunting and herding campsites and for ceremonial activities. In Europe, Asia, and Africa caves continued to be used as shelters by nomadic groups.

Cave dwellings are found in the Cappadocia region of central Anatolia, near Göreme, Turkey. Known as ‘fairy chimneys’, they were carved into soft volcanic rock by anchorite (hermitic) Christian monks in the 4th century AD. Many of these dwellings are still occupied by local Turks, who consider them healthy and inexpensive places to live.

In dry caves, preservation is often excellent, due to moistureless air and limited bacterial activity. Organic remains such as charred wood, nutshells, plant fibres, and bones are sometimes found intact. In wet caves, artifacts and other remains are often found encrusted with, or buried beneath, calcareous deposits of dripstone. The accumulated evidence of human habitation on the cave floor was often buried under rock falls from the ceilings of caverns. Intentional burials have also been found in several cave sites.

Because of the unusual preservative nature of caves and the great age of many remains found in them, the fallacious belief has arisen that a race of cave people existed. In fact, most cave sites represent small, seasonal camps. Because prehistoric people spent much of the year in open-air camps, the caves contain the remains of only part of a group’s total activities. Also, the cultural remains outside caves were subject to greater decay. Thus, the archaeological record of remote times is better seen in cave deposits.

Caves have been systematically excavated during the past one hundred years. Since they often contain the remains of repeated occupations, caves can document changing cultures. For example, the economic transition from food collecting to agriculture is manifested in finds in highland Mexico and in Southeast Asia. Some caves in the Old World continued to be inhabited even after the close of the Stone Age; relics from the Bronze and Iron ages have been found in cave deposits. On occasion, material dating from the time of the Roman Empire has been recovered. The famous Dead Sea Scrolls, discovered in 1947, were preserved in caves.

In 1935 Doctor F. Kohl-Larsen discovered fragments of two skulls in the gravel at the northeast end of Lake Eyassi, Tanganyika Territory, Africa, in association with fossilized bones of antelopes, pigs, and hyenas resembling types of animals now living in that area. The two hundred fragments of the skulls have been painstakingly assembled by Doctor Hans Weinert of Kiel, Germany, so that there are now available for study the skull cap of one individual and part of the face of another. Though critical study of these East African finds is still far from completion, their closest resemblance may be to Pithecanthropus erectus, the famous Java ape man. These remains have been tentatively dated at 100,000 years ago.

Doctor Robert Broom of the Transvaal Museum, Pretoria, has continued his study of the human-like ape remains found in South Africa. He believes the Australopithecus skulls to be most definitely ape-like, except for their teeth, which show a closer similarity to those of man than to those of the gorilla or chimpanzee, and therefore concludes that these creatures are not actual ancestors of man, but only survivors of a possible apelike ancestral stock that existed before Ice Age times.

The distal end of a humerus, the proximal end of an ulna, and the distal phalanx of a toe of Paranthropus robustus, and the distal end of a femur of Plesianthropus, were excavated in the Pleistocene bone breccia of Kromdrai, near Krugersdorp, South Africa, under the direction of Doctor Broom. These finds suggest that this early type of ape-man was bipedal, able to walk with an erect posture, a distinct departure from previous assumptions as to the posture of this species.

Professor Raymond Dart of Witwatersrand University (South Africa), the discoverer of the controversial Taungs skull (Australopithecus africanus), states that a high culture existed in the present habitat of the Bantu-speaking peoples of South Africa in the Late Stone Age, before their arrival in that part of Africa. Skeletons associated with the Mapungobwa finds appear to indicate that the civilization centred on this place was associated with a race said to be intermediate between, and possibly a hybrid of, Cro-Magnon and Neanderthal types, which, as known in Europe, are distinct races.

Finds of Neanderthaloid skulls and skeletons continue to be reported from widely separated areas. Digging in a cave at Mount Circeo on the Tyrrhenian Sea, 50 miles south of Rome, Italy, Alberto Carlo Blanc uncovered an almost perfectly preserved Neanderthal skull, perfect except for a fracture in the right temporal region. It is the third of this type found in Italy. The two skulls previously reported were found in 1929 and 1935 in the Sacopastore region, near Rome, but not in nearly so well preserved a condition as the present find. No other human bones were found here, but the skull was accompanied by fossilized bones of elephants, rhinoceroses, and giant horses, all fractured, thus giving some evidence of the mode of life of Neanderthal man. Professor Sergio Sergi, of the Institute of Anthropology at the Royal University of Rome, who has studied this skull in detail, believes it to be 70,000 to 80,000 years old. He concludes also that Neanderthal man walked with as erect a stance as modern man, and not with his head thrust forward as had previously been supposed.

Another Neanderthal skeleton is reported to have been found in a cave in Middle Asia by A. P. Okladnikoff of the Anthropological Institute of Moscow University and the Leningrad Institute of Anthropology. The bones of the skeleton were badly shattered, but the jaw and teeth of the skull, which was crushed at the back, were almost complete.

The famous Chokoutien site near Peking, China, home of ancient Peking man (Sinanthropus) previously reported, now proves also to have yielded skeletons of a more modern type, studied by Doctor Franz Weidenreich and Doctor W. C. Pei, the leaders in research at this site. In the portion of the site known as the upper cave were found the remains of an advanced culture suggesting a resemblance to the Late or Upper Palaeolithic in Europe, thus implying an age of 100,000 to 200,000 years. These cultural remains were accompanied by skeletons of bear, hyena, and ostrich, long extinct forms, and of tiger and leopard, which have long since disappeared from this part of Asia. Detailed study of the three human skulls indicates that they probably belong to three different racial groups. Of the two female skulls studied, one bears a close resemblance to the skulls of modern Melanesians; the second, which shows frontal deformation, resembles Eskimo skulls. The braincase of the male skull is in some features much more primitive, almost at the Neanderthaloid stage, but in other features is reminiscent of Upper Palaeolithic man. The face is similar to, though not identical with, that of recent Mongolians. From this evidence it seems that racial mixture is no product of modern times, but has its roots in extreme antiquity. It should be noted also that though Mongolian types resembling the modern population of North China were not found in the upper cave, this does not necessarily mean that they were nonexistent during that period. It has been suggested that the population represented in the upper cave may have been a migrating group.
Historic and prehistoric American Indian skulls resembling Melanesian, Eskimo, or more primitive types have been reported from time to time in America, so it would appear from the present finds at Chokoutien that long before migrations from Asia to America are assumed to have taken place, types similar to those composing the native American populations were living permanently, or at least moving around, in Eastern Asia.

In the more recent past, the movement and counter-movement of peoples have led to accelerated mixing of stocks and mutual infusion of physical characteristics. Perhaps more important than the transmission of physical characteristics has been the transmission of cultural characteristics. The diffusion of cultures, including tools, habits, ideas, and forms of social organization, was a prerequisite for the development of modern civilization, which would probably have taken place much more slowly if people had not moved from place to place. For instance, use of the horse was introduced into the Middle East by Asian invaders of ancient Sumer and later spread to Europe and the Americas. Even important historical events can be linked to distant migrations; the downfall of the Roman Empire in the 3rd to the 6th century AD, for example, was probably hastened by migrations following the building of the Great Wall of China, which prevented the eastward expansion of Central Asian tribes, thus turning them in the direction of Europe.

A group of people may migrate in response to the lure of a more favourable region or because of some adverse condition or combination of conditions in the home environment. Most historians believe that non-nomadic peoples are disinclined to leave the places to which they are accustomed, and that most historic and prehistoric migrations were stimulated by a deterioration of home conditions. This belief is supported by records of the events preceding most major migrations.

The specific stimuli for migrations may be either natural or social causes. Among the natural causes are changes in climate, stimulating a search for warmer or colder lands; volcanic eruptions or floods that render sizable areas uninhabitable; and periodic fluctuations in rainfall. Social causes, however, are generally considered to have prompted many more migrations than natural causes. Examples of such social causes are an inadequate food supply caused by population increase; defeat in war, as in the forced migration of Germans from those parts of Germany absorbed by Poland after the end of World War II in 1945; a desire for material gain, as in the 13th-century invasion of the wealthy cities of western Asia by Turkish tribes; and the search for religious or political freedom, as in the migrations of Huguenots, Jews, Puritans, Quakers and other groups to North America.

Throughout history, the choice of migratory routes has been influenced by the tendency of groups to seek a type of environment similar to the one they left, and by the existence of natural barriers, such as large rivers, seas, deserts, and mountain ranges. The belts of steppe, forest, and arctic tundra that stretch from central Europe to the Pacific Ocean have been a constant encouragement to east-west migration of groups situated along their length. On the other hand, migrations from tropical to temperate areas, or from temperate to tropical areas, have been rare. The desert regions of the Sahara in northern Africa separated the African from the Mediterranean peoples and prevented the diffusion southward of Egyptian and other cultures, and the Himalayan mountain system of South Asia cut off approach to the great subcontinent of India except from its eastern and western borders. As a consequence of these and similar barriers, certain mountain passes and land bridges became traditional migratory routes. The Sinai Peninsula in northeastern Egypt, bounded on the east by the Arabian Peninsula, linked Africa and Asia; the Bosporus region of northwestern Turkey connected Europe and the Middle East; the Daryal Gorge in the Caucasus Mountains of Georgia, Armenia, Azerbaijan, and southwestern Russia was used by the successive tribes that poured out of the European steppes into the Middle East; and the broad valley between the Altay Mountains and the Tian Shan mountain system of Central Asia provided the route by which Central Asian peoples swept westward.

Among the distinct effects of migration are the stimulation of further migration through the displacement of other peoples; a reduction in the numbers of the migrating group because of hardship and warfare; changes in physical characteristics through intermarriage with the groups encountered; changes in cultural characteristics by adoption of the cultural patterns of peoples encountered; and linguistic changes, also brought about by adoption. Anthropologists and archaeologists have traced the routes of many prehistoric migrations by the persistence of such effects. Blond physical characteristics among some of the Berbers of North Africa are thought to be evidence of an early Nordic invasion, and the Navajo and Apache of the southwestern United States are believed to be descended from peoples of northwestern Canada, with whom they have a linguistic bond. The effects of migration are particularly evident in North, Central, and South America, where peoples of diverse origins live with common cultures.

Among the most far-reaching series of ancient migrations were those of the peoples who spread the Indo-European family of languages. According to a prevalent hypothesis, a large group of Indo-Europeans migrated from east-central Europe eastward toward the region of the Caspian Sea before 3000 BC. Beginning shortly after 2000 BC, the Indo-European people known as the Hittites crossed into Asia Minor from Europe through the Bosporus region, while the bulk of the Indo-Europeans in the Caspian Sea area turned southward. The ancestors of the Hindus went southeastward into Punjab, in northwestern India, and along the banks of the Indus and Ganges rivers; the Kassites went south into Babylonia; and the Mitanni of northern Mesopotamia went southwestward into the valleys of the Tigris and Euphrates rivers and other parts of the region between the Persian Gulf and the Mediterranean Sea known as the Fertile Crescent.

A migration of great importance to Western civilization was the invasion of Canaan (later known as Palestine) by the tribes of the Hebrew confederacy, which developed the ideas on which the Jewish, Christian, and Islamic religions are founded. These nomadic Semitic tribes, from the Arabian Peninsula and the deserts southeast of the Jordan River, moved (15th-10th century BC) into a settled region that was alternately under the control of Egypt and Babylonia.

The civilizations of the ancient world arose in cities and countries situated along the edges of the great European and Asian landmass, around the Mediterranean Sea, in the Middle East, in India, and in China. The huge interior area was crossed and recrossed by nomadic tribes, which periodically overran the coastal settlements. Central Asia was the main reservoir of these nomadic hordes, and from it successive waves of migrations penetrated eastward into China, southward into India, and westward into Europe, driving before them subsidiary waves of displaced tribes and peoples. In the 3rd century BC, the Xiongnu (Hsiung-nu), who were possibly related to the Huns, advanced eastward from Central Asia toward China and westward toward the Ural Mountains, driving other groups before them.

In another movement the Cimbri, thought to have been a Germanic people, drove southward from the eastern Baltic Sea region and twice entered the Roman Empire in the 2nd century BC. In the 1st century BC, Germanic groups from the southwestern Baltic area, possibly as a consequence of Cimbri pressure, also drove down into central Europe, occupying the territory between the Rhine and the Danube rivers. By the 3rd century AD, a newly expanding group, the Mongols, had arisen in Central Asia. Because of their pressure, the Huns invaded China and crossed over the Ural Mountains into the Volga River region. This migration displaced the Goths, who travelled from southwestern Russia toward the European domains of the Roman Empire, and in turn forced the Germanic Vandals into Gaul and Spain at the beginning of the 5th century AD. The Visigoths (western Goths) continued their westward advance through Italy, Gaul, and Spain, driving the Vandals before them into northern Africa and eastward to present-day Tunis. The Ostrogoths (eastern Goths) followed the Visigoths into Italy and settled there. The Huns, who had begun their movement in Central Asia eight centuries earlier, followed the Goths into Europe, after being displaced by the Mongols, and settled in what is now Hungary about the middle of the 5th century. The Mongols also forced many Slavs into eastern Europe. Thus, one of the most momentous and far-reaching events of history, the disintegration of the Roman Empire in the 3rd to the 6th century of the Christian era, was largely caused by migrations.

After the Hun invasions in the 3rd and 5th centuries, a period of equilibrium began. In the East, the Chinese maintained their strength against the nomads. In the West, Europe consolidated its own strength.

The weakness and decay of the Persian and the Byzantine empires encouraged the spread of a new migration out of Semitic Arabia that was far more extensive than that of the Hebrews into Canaan. United under the banner of Islam in the 7th and early 8th centuries, Arab tribes swept eastward through Persia to Eastern Turkistan and into northwest India; westward through Egypt and across northern Africa into Spain and southern France; and northwestward through Syria into Asia Minor. The Arab penetration into Central Asia stimulated nomadic raids on the frontiers of the Chinese Empire and forced the western Asian Magyar tribes to move in the direction of Europe, crossing the Ural Mountains and southern Russia and finally reaching Hungary, where they settled in the 9th century.

Expansion of Chinese frontiers under the Song (Sung) dynasty in the 11th century forced the Seljuk Turkish tribes out of Central Asia. These tribes moved westward across the Ural Mountains into the Volga River region and thence south into Persia, Armenia, Asia Minor, and Syria, settling among the peoples there. In the 13th century, Mongol tribes under famed conqueror Genghis Khan, in one of the most astounding military migrations of recorded history, swept out of Mongolia and captured China, Turkistan, Afghanistan, Iran, Mesopotamia, Syria, Asia Minor, southern Russia, and even parts of eastern Europe. The Ottoman Turks, forced from their pasturelands in western Asia during the brief period of Mongol supremacy, migrated westward and entered Asia Minor in the 14th century, taking Constantinople (then the capital of the Byzantine Empire, in what is now northwestern Turkey) and advancing as far as Vienna, Austria, in the 15th century.

The maritime region consisting of Scandinavia and other lands bordering the North and the Baltic seas was a subsidiary reservoir of migratory groups. In the 5th and 6th centuries, Angles, Saxons, and Jutes, displaced by the Visigoths, sailed from northwest Germany and overran southern Britain. Norwegian mariners captured the Shetland, Orkney, Faroe, and Hebrides islands in the 7th and 8th centuries. In the 9th century, Swedish fighters poured out of the Baltic region through southern Finland, sweeping down into Russia and through the Ukraine along the Dnieper River. During the 9th century, Norwegians settled in Iceland and in Normandy (Normandie) in France. Icelanders reached Greenland in the late 10th century and established a colony there. Subsequently, they sailed even as far as North America but left no permanent settlers. The growth of the system of nation-states in Europe during the 2nd millennium AD again restored the equilibrium in the West, and no important ethnic invasions occurred thereafter.

More people have moved and resettled during the past 450 years than in any similar period of human history. The migrations preceding this period were collective acts, mostly voluntarily undertaken by the members of a group, but many of the more recent migrations have differed in at least two significant ways: They have been either voluntary individual acts or they have been enforced group movements, entirely against the will of the people who are being moved. The two types of migration began almost simultaneously after Europeans arrived in America in the late 1400s, and they have continued in one form or another up to the present day.

The era of modern migrations that began with the opening of the western hemisphere was continued under the impetus of the Industrial Revolution. Millions of western, and then eastern, Europeans, seeking political or religious freedom or economic opportunity, settled in North and South America, Africa, Australia, New Zealand, and other parts of the globe. As many as 20 million Africans were forcibly carried to the Americas by slave traders and sold into bondage. Millions of Chinese settled in Southeast Asia and moved overseas to work in the Philippine Islands, Hawaii, and the Americas. A large colony of Hindus was established in southern Africa, and many people from Arab lands migrated to North and South America.

The migrations from Europe were principally voluntary, in the sense that the emigrants could have stayed in their respective original homelands if they had accepted certain religions, creeds, political allegiances, or economic privations. The involuntary migrations were primarily those of the Africans captured for slave labour, but slave shipments were halted during the first half of the 19th century. However, a large-scale, essentially forced migration took place from southern Africa to the central and eastern parts of the continent, spurred by the expansionist force of the Zulu. Finally, many of the Chinese, Indian, and other Asian migrations, as well as some of the migrations of eastern and southern Europeans, were not strictly definable as either free or unfree. The individual migrants signed agreements to travel in consignments of contract labour. Although ultimately many of these labourers settled permanently and with equal rights in the lands to which they went, the terms of their original contracts often severely limited their freedom and, in effect, left them little better than slaves for long periods of time.

Migration into the Americas refers to the early movement or movements of humans to the Americas. The first people to come to the Americas arrived in the Western Hemisphere during the late Pleistocene Epoch (1.6 million to 10,000 years before present). Most scholars believe that these ancient ancestors of modern Native Americans were hunter-gatherers who migrated to the Americas from northeastern Asia.

For much of the 20th century it was widely believed that the first Americans were the Clovis people, known by their distinctive spearpoints and other tools found across North America. The earliest Clovis sites date to 11,500 years ago. However, recent excavations in South America show that people have lived in the Americas for at least 12,500 years. A growing body of evidence, from other archaeological sites to studies of the languages and genetic heritage of Native Americans, suggests the first Americans may have arrived even earlier.

Many details concerning the first settlement of the Americas remain shrouded in mystery. Today the search for answers involves researchers from diverse fields, including archaeology, linguistics, skeletal anatomy, and molecular biology. The challenge for researchers is to find evidence that can help determine when the first settlers arrived, how these people made their way into the Americas, and whether migrating groups travelled by different routes and in multiple waves. Some archaeologists and physical anthropologists have suggested that one or more of these migrations originated from places outside Asia, although this view is not widely accepted.

Whoever they were and whenever they arrived, the first Americans faced extraordinary challenges. These hardy settlers encountered a vast, trackless new world, one rich in animals and plants and yet entirely without other peoples. As they entered new territories, they had to locate essential resources, such as water, food, and materials to make or repair their tools. They had to learn which of the unfamiliar animals and plants would feed or cure them and which might hurt or kill them. Their efforts ultimately proved successful. By the time European exploration of the Americas began in the late 15th century, the descendants of these ancient colonizers numbered in the millions.

From their evolutionary origins in Africa, anatomically modern humans, Homo sapiens, steadily spread out across Earth’s landmasses. By 25,000 to 35,000 years ago, humans had reached the far eastern reaches of modern Siberia in northeastern Asia, the region believed to be the most likely point of departure for any early migration to North America. Humans arrived in this remote corner of the world during the last major period of the Pleistocene Epoch, or Ice Age. Great glaciers covered much of the Northern Hemisphere at this time. In North America two immense ice sheets, the Laurentide in the east and the Cordilleran in the west, buried much of modern Canada and Alaska, as well as northern portions of the continental United States.

Pleistocene climates and environments were different from those of today, and so too was Earth’s surface. Glaciers had captured a significant amount of the world’s water on land. Because that water no longer drained back to the oceans, worldwide sea levels dropped. Average sea levels were as much as 135 m (440 ft) lower than they are today.

As sea levels fell, large expanses of previously submerged continental shelf became dry land, including the area beneath what is now the Bering Sea. This area formed a 1,600-km- (1,000-mi-) wide land bridge that connected the northeastern tip of Asia and the western tip of modern Alaska. Known as Beringia, this natural land bridge existed from about 25,000 to nearly 10,000 years ago. It was a flat, cold, and dry landscape, covered primarily in grassland, with occasional shrubs and small trees. People and animals could use Beringia to walk from Siberia to Alaska and back.

Migrants from northeastern Asia could have trekked to Alaska with relative ease when Beringia was above sea level. But travelling south from Alaska to what is now the continental United States posed significant challenges for any would-be colonizers. There were two possible routes south for migrating people: down the Pacific coast, or by way of an interior passage along the eastern flank of the Rocky Mountains. When the Laurentide and Cordilleran ice sheets were at their maximum extent, both routes were likely impassable. The Cordilleran reached the Pacific shore in the west, and its eastern edge abutted the Laurentide near the present border between British Columbia and Alberta.

Geological evidence suggests the Pacific coast route was open for overland travel before 23,000 years ago and after 14,000 years ago. During the coldest millennia of the last ice age, roughly 23,000 to 19,000 years ago, lobes of glaciers hundreds of kilometres wide flowed down to the sea. Deep crevasses scarred their surfaces, making travel across them dangerous. Even if people travelled by boat (a claim for which there is currently no direct archaeological evidence), the journey would have been difficult. There were almost certainly fleets of icebergs to outmanoeuvre. Rivers of sediment draining Cordilleran glacial fields severely restricted the availability of near-shore marine life, which early colonizers would have relied on for nourishment. By 14,000 to 13,000 years ago, however, the coast was ice-free. By then, too, the climate had warmed, and coastal lands were covered in grass and trees. Hunter-gatherer groups could have readily replenished their food supplies, repaired clothing and tents, and replaced broken or lost tools.

The warming climate gradually opened a second possible migration route through the massive frozen wilderness in the continental interior. Geologic evidence indicates that by 11,500 years ago the Cordilleran and Laurentide ice sheets had retreated far enough to open a habitable ice-free corridor between them. By then, much of the exposed land was probably restored enough to support plants and animals on which migrating hunter-gatherer peoples depended.

Scientific inquiry into the peopling of the Americas began in the 1870s. At that time, many scholars wondered whether modern humans had lived in the Americas for as long as they had in Europe, where numerous Stone Age sites indicated a Pleistocene-era occupation. Excavations at these sites revealed hand axes and other relatively simple stone tools, human bones, and the remains of several now-extinct animals, including the woolly mammoth. The discovery of Pleistocene-age animals alongside human bones and artifacts helped 19th-century archaeologists establish the age of ancient human encampments in Europe.

Yet, search as they might, American archaeologists found no comparable evidence of a Pleistocene-era human presence. Nonetheless, several sites revealed stone artifacts that some scholars believed looked similar to the ancient stone tools found in Europe. On the basis of this similarity, these experts claimed the American artifacts must be as old. By the 1890s, however, other scholars had challenged this claim. They argued the American and European artifacts did not really look alike, and they noted the American artifacts were of uncertain antiquity because none were found securely embedded in Pleistocene-age geological deposits. A lengthy debate ensued between those who saw evidence for ancient human settlement in the Americas and those who did not. This debate, often loud and sometimes bitter, remained unresolved for more than three decades.

In 1927 archaeologists finally demonstrated that humans had occupied the Americas during the Pleistocene. This breakthrough occurred at a site discovered by ranch foreman George McJunkin near Folsom in northeastern New Mexico. Excavations at the site uncovered a stone projectile point embedded in the rib bones of a now-extinct bison, an ancestor of the modern North American buffalo. Clearly, a human hunter had killed this Pleistocene-era animal. The Folsom discovery proved beyond doubt that humans had lived in the Americas since the last ice age.

The spearpoints used to bring down the Folsom bison were distinctive: finely made points possessing a flute, or channel, on each face. These Folsom points were quite unlike those of the European Stone Ages. American archaeologists coined the term Paleo-Indian to identify the ancient Pleistocene Americans who had produced these well-crafted artifacts.

In the decade after Folsom, more Paleo-Indian sites were discovered. Some held Folsom spearpoints, but others revealed larger, less finely made fluted points. These large points occasionally appeared with the bones of mammoths. The first such find came to light in 1933 at a site near Clovis in eastern New Mexico, where archaeologists found spearpoints and fossils in sediments below those that had produced Folsom artifacts. This meant that the Clovis people, as they came to be known, represented an even older Paleo-Indian culture. Just how much older was determined soon after the development of radiocarbon dating in the late 1940s. This dating technique showed that the people who made Clovis artifacts had inhabited North America by about 11,500 years ago, some 600 years before the Folsom culture appeared.
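The arithmetic behind radiocarbon dating is simple exponential decay: living things take up carbon-14 from the atmosphere, and after death the isotope decays at a known rate, so the fraction remaining fixes the sample's age. The sketch below, a simplification, uses the conventional Libby half-life of 5,568 years and yields a raw "radiocarbon age" (laboratories additionally calibrate such ages against tree rings and other records, a step omitted here; the 23.9% figure is an illustrative input, not a measurement from any Clovis site).

```python
import math

LIBBY_HALF_LIFE = 5568.0  # conventional half-life of carbon-14, in years
MEAN_LIFE = LIBBY_HALF_LIFE / math.log(2)  # mean lifetime, about 8,033 years


def radiocarbon_age(fraction_remaining: float) -> float:
    """Conventional radiocarbon age (years) from the fraction of C-14 left.

    Solves N/N0 = exp(-t / MEAN_LIFE) for t.
    """
    return -MEAN_LIFE * math.log(fraction_remaining)


# A sample retaining half its carbon-14 is one half-life old (5,568 years);
# one retaining about 23.9% dates to roughly 11,500 radiocarbon years,
# the age of the earliest Clovis sites mentioned above.
print(round(radiocarbon_age(0.5)))
print(round(radiocarbon_age(0.239)))
```

Note that raw radiocarbon years diverge from calendar years for samples this old, which is why calibrated dates for Clovis are somewhat older than the 11,500-year figure quoted in the literature of the time.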

The age of the earliest Clovis sites coincided neatly with geological evidence that by 11,500 years ago the Laurentide and Cordilleran ice sheets had retreated far enough to open a habitable ice-free corridor, a fact first recognized by University of Arizona archaeologist C. Vance Haynes. It appeared that Clovis groups had moved south from Alaska through the continental interior right after it became possible to do so. That no excavated sites older than Clovis were found, at least initially, seemed to confirm that Clovis people were the first colonizers of the Americas.

Once they had travelled south of the ice sheets, Clovis groups spread rapidly. Soon after 11,500 years ago, Clovis and Clovis-like materials appear throughout North America. The oldest sites are in the Great Plains and the southwestern United States; younger sites are found in eastern North America. No subsequent group would achieve such a wide distribution, but Clovis groups did not stop in North America. According to the Clovis-first theory, they must have continued to South America. As these groups pushed south, the traditional thinking went, they developed different tools and other artifacts that were no longer readily recognizable as Clovis. They arrived at Tierra del Fuego on the southern tip of South America within 1,000 years of leaving Alaska.

The rapid dispersal of Clovis peoples throughout the hemisphere was remarkable given the landscape they traversed. Not only did they travel through desert, plains, and forest, they did so during the environmental upheaval that marked the end of the last Ice Age. Climates were growing warmer-drier in some areas and wetter in others-and the distributions of plants and animals were shifting in complex ways in response to the changing climates. As they entered each new habitat, they must have quickly learned to find suitable plant and animal foods. They would need stone to repair their toolkits, freshwater to drink, and the ability to overcome environmental challenges encountered along the way.

A long-favoured explanation for the rapid spread of Clovis people was that they preyed on large animals, such as mammoth and mastodon. These animals were themselves wide-ranging in their distribution. Archaeologists believed a reliance on big-game hunting meant that Clovis groups would have less need to learn about available local resources.

Archaeologists initially found some support for the big-game hunting hypothesis in archaeological excavations, as well as in the Clovis toolkit itself. Along the San Pedro River in Arizona, for example, are four Clovis sites separated by less than 20 km (12 mi). Each site yielded Clovis points embedded in the skeletons of mammoths. So similar are the points at these sites that they may be the handiwork of a single group, which obviously found good hunting in the area. The artifacts at San Pedro and other Clovis sites include a variety of tools handy for hunting, killing, and butchering game animals. There are the distinctive fluted spearpoints, shown experimentally by University of Wyoming archaeologist George Frison to be capable of bringing down elephant-sized animals. In addition, there are stone knives, scrapers, gravers (tools for scoring bone), drills, and a few preserved artifacts of ivory and bone. These tools, which occur in Clovis sites across North America, support the view that Clovis peoples were practicing the same way of life.

Clovis tools were typically made of superior quality fine-grained stone, including chert, jasper, and chalcedony. Such stone is durable and readily flaked by skilled toolmakers into a desired, sharp-edged form. More important, it is easily resharpened and reused. That would be important to hunters pursuing wide-ranging big game. They could continue to use their stone tools as they tracked game far from the quarries where they acquired their stone. Analysis of these tools suggests that Clovis groups commonly travelled distances of 300 km (185 mi). In one instance, a dozen Clovis points quarried from the Texas Panhandle were left as a cache in northeastern Colorado, 485 km (300 mi) away. These distances indicate a range of movement across the landscape far greater than is observed in later periods of American prehistory.

The idea that Clovis people were big-game hunters could help explain an unsolved puzzle of the Americas in the late Pleistocene: the catastrophic extinction of dozens of species of large animals. Across the Americas millions of large animals known as megafauna disappeared. These animals included the mammoth, mastodon, and the giant ground sloth, as well as the horse, the camel, and many other herbivores. Some very large and formidable carnivores also died out, including the American lion, the saber-toothed tiger, and the giant short-faced bear. These extinctions were thought to coincide with the arrival of Clovis groups, a chronological coincidence that led University of Arizona ecologist Paul Martin to propose the hypothesis of Pleistocene overkill. This hypothesis, first put forward in 1967, contends that Clovis big-game hunters caused the extinctions. Martin suggested that overkill was especially likely, even inevitable, if Clovis groups were the first Americans. For if the megafauna had never before faced human hunters, they would have been especially vulnerable prey to this new, dangerous, two-legged predator.

For decades the Clovis-first theory seemed to fit well with the available geologic and archaeological evidence. However, some archaeologists always harbored doubts about the Clovis-first scenario. These doubts intensified toward the end of the 20th century. A reassessment of Clovis subsistence led many to challenge the traditional view of Clovis people as big-game hunting specialists. In addition, the discovery of a pre-Clovis human presence in the Americas has undermined the claim that Clovis people were the first Americans.

Since the 1980s there has been increasing skepticism about the traditional view that Clovis groups were dependent on big-game hunting. Despite many years of searching, few Clovis archaeological sites have yielded evidence to support this view. The San Pedro Valley sites have proved to be the exception, not the rule. There are scarcely a dozen Clovis big-game kill sites known, mostly in western North America, with two possible kill sites in eastern North America. These contain the skeletal remains of just two of the Pleistocene megafauna: mammoth and mastodon. Clovis people did kill big game, but apparently not as often as once supposed.

A broader view of Clovis subsistence now suggests that they often targeted slower, smaller, less dangerous prey. The roasted remains of turtles, for example, have been found at many sites, including Aubrey and Lewisville in Texas, Little Salt Spring in Florida, and even at the original Clovis site in New Mexico. Other sites indicate that the diet in Clovis times included small and medium-sized mammals, such as beaver, snowshoe hare, and caribou, as well as fish and a variety of gathered plants.

Over time it became clear that the Pleistocene overkill hypothesis was not strongly supported by the archaeological record. Archaeologists have yet to document a single Clovis sloth kill, horse kill, camel kill, or a kill of any of the other several dozen megafaunal species. Whatever caused the extinction of these animals, it was not human hunting. Scientists are currently pursuing alternative hypotheses to explain megafaunal extinctions, such as the possibility they were caused by late Pleistocene climatic and environmental change, or perhaps disease. The puzzle remains unsolved.

A revised view of Clovis subsistence coincides with a reevaluation of the Clovis toolkit. Analysis of Clovis spearpoints shows they were adequate weapons for bringing down big game, but they were not always used that way. Few spearpoints show the kinds of damage that routinely occur when stone projectiles meet animal bone. Clovis points, like many items in the Clovis toolkit, were most likely used as multipurpose tools; many spearpoints show wear patterns indicating they were used as knives. There is also more variety in the Clovis toolkit than traditionally supposed. Clovis groups in different areas occasionally fashioned tools needed for particular tasks in the environments in which they found themselves. In addition, they probably made tools, perhaps wooden digging sticks or woven plant-fibre nets with which to catch fish or small game, that have not been preserved from that remote time. A varied, multipurpose toolkit is to be expected of groups that hunt and gather a range of foods.

If they were not pursuing wide-ranging big game, why were Clovis groups moving such great distances across the landscape? The answer may be exploration. Hunter-gatherer peoples need to know where to go when resources in one location begin to diminish, as animals are hunted out or flee and as available plants are gathered up. For colonizers in an unfamiliar landscape, that means ranging widely across newly discovered lands to see what resources occur where, when, and in what abundance. Not knowing where they might encounter stone to refurbish their tools on their journeys, it is not surprising that Clovis explorers selected only the highest quality stone for their toolkits, or that they left caches of tools along their way, as the cache in Colorado demonstrates. They could return to the caches to replace diminished supplies without having to walk all the way back to a distant stone quarry.

Claims of a pre-Clovis human occupation in the Americas have been around for decades. By the 1980s, dozens of such sites had been reported, some estimated to be as much as 200,000 years old.

Archaeologists have carefully scrutinized each site to determine if three basic criteria are present. Only sites meeting all three criteria can be accepted as valid. First, the site must have genuine artifacts produced by humans or human skeletal remains. Second, these artifacts or remains must be found in unmixed geological deposits to ensure that younger objects are not accidentally buried in older layers of sediment. Third, these artifacts or remains must be accompanied by reliable radiocarbon dates that indicate a pre-Clovis occupation. For decades all sites reputed to be of pre-Clovis age failed to meet these criteria. All, that is, except one.

In the mid-1970s University of Kentucky archaeologist Tom Dillehay began excavating at Monte Verde, a site on the banks of Chinchihuapi Creek in southern Chile. Monte Verde is an extraordinary site. Unusual geological conditions quickly buried the remains of an ancient camp beneath wet, swampy sediments. Since the remains left on the surface by the site’s inhabitants were not exposed to the air, many organic remains, which normally decay and disappear, were preserved.

Dillehay’s team found an astonishing array of organic materials. These included wooden foundation timbers of roughly rectangular huts, finely woven string, and chewed leaves, seeds, and other plant parts from nearby species, many with food or medicinal value. In addition, excavations revealed burned bones of mastodon along with pieces of its meat and hide. Some bits of hide still clung to pieces of wooden timbers, the apparent remnants of hide coverings that once draped over the huts. Also found were the footprint of a child in the once-sticky mud, an assortment of hearths, and hundreds of stone, bone, and wood artifacts. Dillehay’s team firmly radiocarbon dated these organic remains to 12,500 years ago, 1,000 years before Clovis times.

The excavations at Monte Verde lasted nearly a decade, and the laboratory research, analysis, and writing about what Dillehay’s team had found took another dozen years. Dillehay’s findings had to be carefully studied and presented in order to overcome the skepticism of archaeologists who had grown accustomed to seeing pre-Clovis claims fail. When Dillehay’s second book on the results of his investigations appeared in 1997, most archaeologists were convinced; the Clovis barrier had fallen at last.

Since Monte Verde, several new candidates for a pre-Clovis settlement in North America have appeared. The Cactus Hill site in Virginia has yielded artifacts below layers in which Clovis-like fluted points were found. Precisely how old those more deeply buried artifacts might be is uncertain, however: the layer in which they were found has produced widely varying radiocarbon ages, from 16,000 years ago to modern times. Archaeologists have also refocused attention on the Meadowcroft Rockshelter in Pennsylvania. Excavations at Meadowcroft in the 1970s and 1980s produced unmistakable artifacts in deposits perhaps as much as 14,250 years old. Questions remain, however, about whether the artifacts and organic remains are as old as the radiocarbon-dated charcoal. For the time being, neither site, nor any of a sprinkling of other recent pre-Clovis claims, is fully accepted by a still-cautious archaeological community.

The excavations at Monte Verde conclusively demonstrated that people inhabited the Americas in pre-Clovis times. Yet Monte Verde also raised many new questions about the first Americans. Several new theories have been advanced to explain the identity, antiquity, and entryway of the first Americans.

Most archaeologists believe the first Americans-whether travelling in a single migration or multiple migrations-originated in northeastern Asia. This view is based mainly on geological evidence that a land bridge once connected Asia and North America and on genetic similarities between northeastern Asian peoples and Native Americans. It is not, however, founded on any direct archaeological evidence. The kinds of tools typical of the Monte Verde site or Clovis culture are not found in either northeastern Asia or Beringia. Then again, Monte Verde is far from that region and in a very different environmental setting, so it is not surprising that its artifacts are different.

Although archaeologists have yet to find a single Clovis spearpoint in northeastern Asia, one artifact comes close: a stone point from the site of Uptar in Siberia that has a flute on one face. Even so, the age of the spearpoint is unknown, and it is not otherwise similar to Clovis fluted points. There are archaeological sites in Alaska, such as those of the Nenana Complex, that slightly predate Clovis. However, these sites lack the hallmark of Clovis technology: fluted stone projectile points. A few Clovis-like fluted points have been found in Alaska, but these are younger, not older, than those to the south.

The absence of similar artifacts in Siberia or Alaska is not surprising. Finding archaeological traces of a small group, or several groups, that briefly passed through this vast area is a difficult task. In addition, the first settlement of the Americas evidently occurred some 2,500 years before a habitable ice-free corridor opened in the North American interior. A coastal migration could explain how people arrived in Monte Verde 12,500 years ago. By the time the interior route opened, the ancient Monte Verdeans had long departed from the banks of Chinchihuapi Creek.

Finding sites occupied by coastal migrants, however, is no easy task. Much of the late Pleistocene-age shoreline along which migrating groups would have travelled was later submerged when the continental ice sheets melted and their waters returned to the sea. To meet this challenge, researchers are using sonar and taking core samples from the sea floor to explore and probe underwater landscapes and coastlines.

Archaeological excavations have occurred at sites on several islands off the coasts of Alaska and British Columbia. The effort has had some initial success. A cave on Prince of Wales Island in southeastern Alaska has yielded artifacts and human remains radiocarbon-dated to about 10,000 years ago. Bear remains from another part of the same cave are dated to 41,000 years ago. These findings provide tantalizing hope that still older traces of a human presence can be found in this area. Further south, on one of the Channel Islands off the coast of California, and at several coastal Peruvian sites, material as much as 11,000 years old has been found. Still, none of these sites have produced remains old enough to be those of the first Americans.

Some archaeologists believe the first Americans did not come from northeastern Asia, but from Europe, crossing the North Atlantic Ocean by boat. No ancient boats have been found, but proponents note that modern humans travelled by boat to Australia perhaps 30,000 to 40,000 years ago. Archaeological support for this theory is based mainly on similarities observed between Clovis artifacts and those of the Solutrean Period of prehistoric Europe. Some researchers also find support for a North Atlantic route in several ancient human skeletons found in the Americas. These skeletons, proponents argue, appear to have more anatomical similarities with modern Europeans than with modern Native Americans.

Despite the claimed similarities, Solutrean and Clovis artifacts have important differences-in form, method of manufacture, and materials. Most obviously, Solutrean points lack fluting, and Solutrean sites include many stone artifacts and bone tools never found in the Americas. Most archaeologists believe the similarities in artifacts that do exist can be explained as the result of cultural convergence. The concept of cultural convergence suggests that different groups at different times and places might create or use similar materials or tools in similar ways. Solutrean and Clovis cultures are also separated by many thousands of kilometres, most of which is ocean, and by 5,000 years. The Solutrean period ended more than 16,500 years ago, while the earliest Clovis site is only 11,500 years old.

The ancient American skeletons considered by some archaeologists to be anatomically distinct from modern Native Americans also fail to support a North Atlantic route. After more detailed anatomical study, those remains, such as the 8,500-year-old skeleton found in Washington State known as Kennewick Man, proved to be far less similar to Europeans than initially believed. Kennewick Man does differ from modern Native Americans. However, many physical anthropologists believe this individual, like all other ancient skeletal remains found in the Americas, is an ancestral Native American. The fact that ancient and modern Native Americans do not precisely resemble each other is not surprising: many thousands of years of anatomical and evolutionary change separate them. In addition, for several thousand years after the Americas were first settled, the human population was small, widely scattered, and groups were relatively isolated for long periods of time. Under these circumstances, variability in anatomical features can emerge. Groups of ancient Americans would not necessarily look alike, let alone resemble their descendants many thousands of years later.

If the first Americans migrated from northeast Asia, then the study of modern Native American people-descendants of the first Americans-may hold vital clues about the number and timing of the ancestral migrations to the Americas. Linguists and geneticists have searched for these clues in the languages and genetic heritage of modern Native Americans.

Linguistic studies are based on the assumption that ancient elements, or ‘echoes’, of an ancestral language can still be heard in the shared words, grammar, sounds, and meanings of the diverse languages spoken by modern Native Americans. By searching for these elements, researchers hope to learn if all Native American languages evolved from a single ancestral tongue. This common ancestral tongue, if present, may be the language spoken by the earliest Americans. If these elements are not present, however, their absence could indicate the Americas were peopled at different times by groups speaking distinct or unrelated languages.

Linguists are still searching for answers. Most linguists, however, believe the sheer number and variety of Native American languages-of which hundreds are known-bespeaks a long period of language diversification. University of California linguist Johanna Nichols estimates that language diversification in the Americas began as early as 35,000 years ago.

Historical studies of the genetic material of modern Native Americans appear to offer additional clues about the earliest Americans. These studies are based on the knowledge that some types of deoxyribonucleic acid (DNA, the chemical that encodes genetic information) are inherited strictly from one parent or the other, but not both. Mitochondrial DNA (mtDNA) is passed from mothers to their offspring, and Y-chromosome DNA is passed from fathers to sons. Genetic change in these types of DNA is a result of mutation, not recombination of the parents’ DNA. By looking at the genetic differences in mtDNA or Y-chromosome DNA over time, researchers can determine how closely related certain populations are and how much time has elapsed since they were members of the same population.
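The reasoning above is the "molecular clock": if mutations accumulate at a roughly constant rate, the number of differences between two lineages is proportional to the time since they separated. The sketch below illustrates the simplest form of that calculation; the mutation rate and difference figures are illustrative assumptions, not values from any published study of Native American genetics.

```python
def divergence_time(diffs_per_site: float, rate_per_site_per_year: float) -> float:
    """Estimate years since two lineages belonged to one population.

    Both lineages accumulate mutations independently after separating,
    so the observed difference grows at twice the per-lineage rate,
    hence the factor of 2 in the denominator.
    """
    return diffs_per_site / (2.0 * rate_per_site_per_year)


# Illustrative inputs only: a pairwise difference of 0.02 substitutions
# per site and an assumed rate of 4e-7 substitutions per site per year
# give a separation of 25,000 years, within the 21,000-42,000-year range
# geneticists have estimated for Asian and American populations.
print(round(divergence_time(0.02, 4e-7)))
```

Real studies are far more involved, correcting for multiple substitutions at the same site, rate variation, and uncertainty in calibrating the clock, which is one reason the published estimates span such a wide range.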

Genetic studies have shown that virtually all Native Americans share a set of four major mtDNA lineages, and at least two such lineages on their Y chromosome. This indicates these groups are all closely related to one another. The nearest relatives of Native Americans beyond the Americas are the native peoples of northeastern Asia. Native Americans are unrelated genetically to Europeans. Geneticists have variously estimated that peoples of Asia and the Americas were part of the same population from 21,000 to 42,000 years ago.

Geneticists, like linguists, still debate when and how many migratory bands may have trekked from Asia to the Americas. Some scholars believe the evidence indicates a single migration. Others see support for multiple movements of people across Beringia and back. How this is resolved, and how the genetic heritage and languages of modern Native Americans are linked to ancient archaeological data, such as Clovis artifacts, remain important unsolved challenges.

One of the most obvious ways of directly linking ancient and modern Native Americans is by examining the DNA found in prehistoric human skeletal remains. Such remains are extremely rare, however, and recovering DNA from ancient remains can be difficult, if it is even preserved. In the United States the difficulty of linking ancient remains with modern Native Americans would be a strictly scientific concern were it not for legislation that has influenced the progress and conduct of such research.

The Native American Graves Protection and Repatriation Act (NAGPRA), signed into law in 1990, was aimed at righting the wrongs of earlier generations of scientists. In the past, researchers sometimes indiscriminately collected the bones of Native Americans for study and display in museums and universities. Native American peoples were not the only groups to receive such treatment, but their remains and artifacts were gathered in lopsided numbers. To many Native Americans, this was one more instance of mistreatment at the hands of Euro-Americans. In response, NAGPRA required institutions in possession of Native American skeletal remains and artifacts to return them at the request of known lineal descendants.

In the wake of NAGPRA, thousands of skeletons and associated artifacts were returned to Native American peoples. Many of these objects are only a few hundred years old. In such cases, debates over the identity of the descendants have been rare. Other cases, particularly those involving older remains, are more difficult to resolve. Proving lineal descent in cases of greater antiquity is no easy task. This is because descendants of early Americans formed new groups as populations grew, and these groups moved away to settle new lands. A group living 11,000 years ago would almost certainly be ancestral to many modern Native American tribes, not just one. In the future, geneticists may identify sufficiently precise genetic markers to link DNA extracted from ancient human skeletal remains with a group of modern tribes. Nonetheless in most cases, making the link to only one tribe will be difficult.

In one prominent case involving the 8,500-year-old remains of Kennewick Man, the debate over lineal descent ended in a court of law. These remains were found in Washington State in 1996 on property belonging to the federal government. Five Native American tribes living in the area submitted a joint claim under NAGPRA for the return of the remains. A group of archaeologists and physical anthropologists then filed a lawsuit to block the return until detailed scientific studies, including analysis of Kennewick Man’s DNA, could be conducted.

The lawsuit sparked several years of legal and scientific wrangling. The Native American groups felt scientific studies were an unnecessary desecration of the remains. They believed they had lived in the area since the beginning of human prehistory in the Americas; therefore, Kennewick Man must be one of their ancestors. The scientists bringing the lawsuit, however, argued that ancestry could not be ascertained without detailed study. This research, they noted, would also add vital information to the meager knowledge about ancient American peoples. Both sides were well intentioned, and under the ambiguous terms of NAGPRA, both were right. NAGPRA allows lineal descendants to be identified not just by DNA, but also by tribal traditions and geographic proximity. The dispute remains unresolved.

Fortunately, few NAGPRA cases have been as contentious as that surrounding Kennewick Man. The human remains from Prince of Wales Island, found about the same time, were excavated and analysed without pitting science against tribal tradition, or archaeologists against Native Americans. Ensuring there is room for both perspectives remains an important challenge under the framework established by NAGPRA.

Studies of the first Americans entered the 21st century on the cusp of change. The traditional view that the first Americans were fast-moving Clovis big-game hunters who migrated into the North American interior on the heels of retreating ice sheets has been undermined. Evidence from Monte Verde demonstrates that humans arrived in the Western Hemisphere in pre-Clovis times, and a reassessment of Clovis subsistence suggests Clovis people were not the big-game hunting specialists imagined in the past. As yet, no widely accepted theory has arisen to replace the older Clovis-first theory. Researchers are proposing many new ideas. Which of these ideas will succeed or fail remains to be seen.

The instruments of archaeological study continue to improve at a rapid pace. Shovels and trowels, the traditional tools of excavation, are now being used alongside ground-penetrating radar, seismic studies of surface features, and other techniques to find now-buried sites. A variety of new studies are providing information about where the materials to make ancient stone artifacts were acquired, how the artifacts were made, and how they were used. These include studies of the geological sources of stone artifacts, experimental work in stone fracture mechanics to better understand how stone tools were made, and analyses of microscopic wear patterns visible on such artifacts. A battery of techniques is now available to study the chemical composition of bone, plant, shell, and other organic and inorganic remains, providing archaeologists with a clearer picture of the environments to which the first Americans adapted. New dating techniques under development should allow archaeologists to reliably date sites more than 50,000 years old, the current limit of radiocarbon dating. These techniques could prove useful in the event sites of greater antiquity are eventually found in the Americas.

The time-honoured process of acquiring archaeological evidence through careful and meticulous site excavation continues. Where the oldest preserved sites might be is not yet known. There are obvious places to look, however, including eastern Siberia, which is still relatively unknown to archaeologists. Other promising locations for future research include the remnants of Beringia, coastal islands of the Pacific, the Isthmus of Panama, through which any group headed into South America must have passed, and perhaps places not yet imagined. Some of the most interesting discoveries in years to come may even be made in museums, when new techniques for analysis are applied to old collections of artifacts and human remains, ideally with the interest and cooperation of Native American groups.

Archaeologists may never find evidence of the very first humans to arrive in the Western Hemisphere. It is, after all, a very big place. However, ongoing research is sure to reveal much about how the first Americans colonized a new world.

In America, the search for additional evidence of Folsom Man continues. Near Fort Collins, Colorado, Doctor Frank H. Roberts, Jr., continued excavation at a camp site, uncovering a variety of tools and weapons, and the first known decorated objects from any Folsom site, two decorated beads. This earliest American, Folsom Man, may have lived contemporaneously with Old World Cro-Magnon Man, or some 25,000 years ago. This tentative date was assigned recently by Doctors Kirk Bryan and Louis L. Ray of Harvard University based on studies made at the Folsom camp site, known as the Lindenmeier site, in northeastern Colorado. Many stone points, identified as typically Folsom, were found in an earth stratum above the floor of an ancient valley that is traceable to a terrace on a local stream. The terrace has been dated to the late Ice Age. The dating is based on the assumed correlation of this late Ice Age stage with the Mankato of the Middle West and the Pomeranian of Europe. From this it appears that the culture-bearing layer of the Lindenmeier site was developed at the end of the glacial advance, or 25,000 years ago.

An attempt has been made to adapt the method of dating ruins by analysis of tree rings, so successfully carried out in America, specifically in the Southwestern United States, to Viking ruins in Southern Norway. E. de Geer, who has been carrying on this work, reports that, from a study of the remaining timbers in a wooden burial chamber in a Viking mound, the chamber was constructed in AD 931. A Swedish fort in Gotland was found by the same method to have been built in AD 5.

Homo habilis is an extinct primate classified in the subfamily Homininae, the group that includes humans. Scientists believe this species lived in Africa between two million and 1.5 million years ago. H. habilis is the earliest known member of the genus Homo, the branch of hominines believed to have evolved into modern humans. The name Homo habilis means 'handy man', chosen because deposits of primitive tools were found near the fossils of H. habilis.

Scientists distinguish H. habilis from australopithecines, the more primitive hominines from which it evolved, by analysing key physical characteristics. H. habilis had a larger brain than australopithecines. The braincase of H. habilis measured at least 600 cubic centimetres (thirty-seven cu in), compared with the 500 cu cm (thirty-one cu in) typical of australopithecines. Australopithecines had long arms and short legs, similar to the limbs of apes, and an overall body form that was also apelike, with a large body bulk relative to height. In its limb proportions and its small body bulk relative to height, H. habilis resembled modern humans. H. habilis had smaller cheek teeth (molars) and a less protruding face than earlier hominines. H. habilis was taller than australopithecines, but shorter than Homo erectus, a later, more humanlike species.

The use of primitive tools implies that H. habilis had developed a different way of gathering food than earlier hominines, which fed only on vegetation. H. habilis probably ate meat as well as fruits and vegetables. Anthropologists disagree on whether H. habilis obtained this meat through hunting, scavenging, or a combination of both techniques.

British-Kenyan anthropologist Louis Leakey discovered the first fossil evidence of H. habilis at Olduvai Gorge in northern Tanzania in 1960. Other anthropologists have since discovered specimens in northern Kenya, South Africa, and Malawi. Although all these specimens had a larger brain than australopithecines, some had especially large brains (almost 800 cu cm, or forty-nine cu in) and more modern skeletons. However, their large and slightly protruding faces seem more primitive than those of other H. habilis specimens. Most scientists now believe that these fossils represent a distinct species named Homo rudolfensis. Scientists debate which of these two species evolved into the later, even larger-brained H. erectus. Many consider H. rudolfensis the more likely candidate because of its large brain and more modern skeleton. For anthropology, the science of man, 1964 was an eventful and exciting year. Perhaps the most important development of 1964 was the discovery in Africa of a new humanlike, tool-using species, possibly a direct ancestor of man. This was not the only remarkable thing. The new species, named Homo habilis, was very old, probably 1.75 million years old, which makes him nearly twice as old as any previously known tool-using animal. The appearance of Homo habilis on the scene caused great excitement among paleontologists and physical anthropologists and has led many of them to a major reconsideration of much of man's biological history.

The new discovery, like so many other important finds of recent years, was made by the top fossil finder of the 20th century, Louis S. B. Leakey, curator of the Coryndon Museum Centre for Prehistory and Palaeontology, Nairobi, Kenya. Professor Leakey's work is invariably done with his wife, Mary, a geologist, and their three sons, who have also recovered important fossil materials. The finds were made in the incredibly fossil-rich Olduvai Gorge, an arid chasm in the Serengeti Plain of mainland Tanzania (formerly Tanganyika). The section of Olduvai Gorge excavated by the Leakeys is the most spectacular single prehistoric site in the world. The gorge cuts directly through four main stratigraphic levels, or beds, and in these four beds there are undisturbed paleontological and archaeological deposits covering a time span of nearly two million years. The gorge contains the stratified record of the development of stone tools from the most simple beginnings to elaborately fashioned hand axes; it contains fossil evidence of four major types of men or near-men; and it is rich in fossil remains of ancient fauna, including insects, fish, reptiles, and mammals of the lower and middle Pleistocene periods.

The Homo habilis discoveries were announced by Dr. Leakey at the National Geographic Society in Washington, D.C., and in the April 4, 1964, issue of Nature. The Olduvai fossil remains are being studied by Professor Phillip Tobias, University of Witwatersrand, Johannesburg, and Dr. John R. Napier, Royal Free Hospital School of Medicine, London.

The Leakeys found bones and teeth representing sixteen hominid individuals in Beds I and II (the two lowest beds) of Olduvai Gorge. One of these was the well-known Zinjanthropus, which is placed roughly in the genus Australopithecus. The australopithecines were a genus of near-men living about one million years ago, perhaps a little earlier. They were originally considered close to the direct line of man's ancestry, but this is now in doubt. All of the other remains were considered by Leakey, Tobias, and Napier to represent Homo habilis, a more advanced hominid intermediate in size and shape between Australopithecus and Homo, the genus that includes modern man and his immediate ancestors, a discovery that pushed the known age of the genus back by more than 500,000 years. The specific name habilis is from the Latin and means ‘able, handy, mentally skilful, vigorous’.

Not all of the 206 bones making up a complete skeleton of Homo habilis have yet been discovered. The recovered parts, however, are numerous enough to give a good picture of his anatomy and, by inference, of his behaviour. The recovered parts include the remains of two or three skulls, three mandibles (jawbones), about forty teeth, parts of a hand and foot, the bones of a lower leg, a collarbone, and some rib fragments.

Some features distinguishing modern man from his ancestors of earlier epochs include legs and feet adapted for an upright posture and bipedal gait; hands adapted for tool use rather than locomotion; teeth and jaws adapted for a meat-eating rather than a purely herbivorous diet; a brain adapted for good hand-eye coordination in tool manufacture and use; and the ability to use language of the human sort. Except for language, which leaves no direct fossil trace, the foot, hands, jaws, teeth, and braincase of Homo habilis suggest he stood close to the boundary between the prehuman and human grades.

The fossil foot is nearly complete, lacking only the back part of the heel and the toes. The foot bones are within the range of variation of Homo sapiens. The large toe is stout and carried parallel to the other toes; the longitudinal and transverse arch system is like ours. The bones of the foot and leg show that the adult Homo habilis had an upright posture and bipedal locomotion, a slender body build, and a stature of about four feet.

The hands are not entirely apelike, nor are they typically human. The hand bones are heavier than ours, and the finger bones are curved inward. The tips of the fingers and thumb are broad, stout, and covered by flat nails, like modern man's. Homo habilis probably could not oppose his thumb and fingertips in the precision, pen-holding grip of modern man, but his hands could make stone tools.

The jaws are smaller than those of Australopithecus; the front of the lower jaw is retreating, with no development of an external bony chin. The incisor teeth are large, the canines are large compared with the premolars, and the premolars and molars are narrow in the tongue-to-cheek dimension. Both the manlike proportions of the teeth and the remains of fish, reptiles, and small mammals found in his living sites show that Homo habilis had an omnivorous diet.

The skull is intermediate in shape between Australopithecus and modern man. The mass of the facial skeleton relative to the cranial part of the skull is reduced, making it more like that of the advanced forms. The greatest breadth of the skull is high on the vault. The curvature of the parietal bones is intermediate; that of the occipital bone resembles Homo sapiens.

The braincase of the Olduvai specimen known as No. 7 has an estimated endocranial volume of 680 cc. The endocranial volume of australopithecines ranges from 435 to 600 cc, that of pithecanthropines from 775 to 1,225 cc, and that of modern man from about 1,000 to 2,000 cc, with an average of about 1,350 cc. Thus, the brain of Homo habilis, although both absolutely and proportionally larger than that of any of the australopithecines, was not large, either absolutely or proportionally, compared with that of modern humans. A typical adult Homo habilis had a body weight of about 75 pounds and a brain weight of a little more than one pound, whereas a modern man of 150 pounds has a brain weight of about 3 pounds. In the period following Homo habilis, hominid body weight doubled, but the weight of the brain tripled.
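The proportions described above can be checked directly. A minimal sketch, using only the round figures quoted in the text (75 lb body and roughly 1 lb brain for Homo habilis; 150 lb and roughly 3 lb for modern man):

```python
# Brain-to-body comparison using the round figures quoted in the text.
habilis_body, habilis_brain = 75.0, 1.0   # pounds
modern_body, modern_brain = 150.0, 3.0    # pounds

body_growth = modern_body / habilis_body      # body weight doubled (2.0)
brain_growth = modern_brain / habilis_brain   # brain weight tripled (3.0)

# Brain as a fraction of body weight rose from about 1.3% to 2%.
habilis_ratio = habilis_brain / habilis_body
modern_ratio = modern_brain / modern_body
```

The point of the arithmetic is that the brain grew faster than the body: even though both weights increased, the brain's share of total body weight rose by about half.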

The stone tools found in association with Homo habilis are typical of the Oldowan industry first recognized by Leakey 30 years ago. Similar tools are found elsewhere in East Africa, and in South Africa, Angola, and North Africa. These tools are commonly called pebble tools because most of them are made from waterworn pebbles. Most of the Oldowan choppers are worked on both faces to produce a sharp but irregular cutting edge.

These rough choppers made from potato-sized pieces of stone are the earliest known stone tools; they date from the very beginning of the Pleistocene. There is abundant evidence from Olduvai Gorge showing that the great hand ax or Chelles-Acheul culture evolved directly from the Oldowan stone industry.

Oldowan pebble tools and the skeletal remains of Homo habilis are associated in six sites. At some East and South African sites, pebble tools are also found in association with Australopithecus, but Homo habilis is, according to Professor Tobias, always associated with Oldowan tools, whereas Australopithecus is not. The evidence from the six sites unquestionably shows that early hominids regularly manufactured tools of a set design before they developed hands or brains like those of modern man.

The age of Homo habilis is as startling as the fossils themselves. Before these new finds, most anthropologists thought the earliest toolmaker lived less than one million years ago. The potassium-argon process of dating has more than doubled the known age of tool manufacture.

The principle of the potassium-argon technique is simple. The radioactive isotope potassium 40 (K40), found in volcanic rock, disintegrates into calcium 40 and argon 40 (A40), an inert gas. The rate of transmutation is constant and very slow; half of the K40 atoms change to A40 atoms in about 1.3 billion years. The potassium-containing mineral anorthoclase is found in the volcanic deposits of Olduvai Gorge. While the lava was in a molten state beneath the earth, no A40 accumulated in the mineral because the gas boiled away. After the lava erupted and cooled, however, nearly all newly formed A40 atoms were imprisoned in the crystalline structure of the anorthoclase. By heating the mineral, scientists have succeeded in releasing the trapped A40 atoms, which are counted in a mass spectrometer. Because no A40 was initially present and because the rate of accumulation is known, this count gives an estimate of the age of the rock. Several samples give age estimates ranging from 1.57 to 1.89 million years, or an average of 1.75 million, for Bed I at Olduvai, where Homo habilis was found and where the first tools of hominid manufacture appear.
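The decay arithmetic behind the technique can be sketched in a few lines. This is a simplified model matching the text's description, in which every decayed K40 atom is assumed to yield a trapped A40 atom (real laboratories correct for the fact that only a fraction of K40 decays produce argon); the ratio value used below is an illustrative assumption, not a measured figure:

```python
import math

HALF_LIFE_YEARS = 1.3e9                         # K40 half-life cited in the text
DECAY_CONSTANT = math.log(2) / HALF_LIFE_YEARS  # decay rate per year

def k_ar_age(ar40_per_k40: float) -> float:
    """Rock age in years implied by a measured A40/K40 ratio.

    Assumes no argon was present at eruption and none escaped later,
    so A40/K40 = exp(decay_constant * t) - 1, inverted below.
    """
    return math.log(1.0 + ar40_per_k40) / DECAY_CONSTANT

# An illustrative ratio near 9.3e-4 corresponds to roughly 1.75 million
# years, the average age reported for Bed I at Olduvai.
age = k_ar_age(9.33e-4)
```

Because the half-life is so long compared with the ages being measured, the ratio is tiny and the age is nearly proportional to it, which is why very precise argon counts are needed.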

Homo erectus is an extinct primate classified in the subfamily Homininae and the genus Homo, which includes humans. Scientists learn about extinct species such as Homo erectus by studying fossils, petrified bones buried in sedimentary rock. Based on their analysis of these fossils, scientists believe that Homo erectus lived from about 1.8 million to 30,000 years ago. Until recently, Homo erectus was considered an evolutionary ancestor of modern humans, or Homo sapiens.

The anatomical features of Homo erectus are more humanlike than those of earlier hominines, such as australopithecines and Homo habilis. Homo erectus had a larger brain, measuring up to 1,150 cc, and a rounder cranium (the portion of the skull that covers the brain) than earlier hominines. Homo erectus was also taller, with a flatter face and smaller teeth. Large differences in body size between males and females, characteristic of earlier hominine species, are less evident in Homo erectus specimens.

This larger brain and more modern body enabled Homo erectus to do many things its hominine ancestors had never done. Homo erectus appears to have been the first hominine to venture beyond Africa. It was the first hominine to engage in systematic hunting, the first to make anything resembling home bases (campsites), and the first to use fire. Evidence suggests that the childhood of Homo erectus lasted longer than that of earlier hominines, providing an extended period in which to learn complex skills. These skills are reflected in the comparatively sophisticated stone tools found among Homo erectus archaeological remains. Although still primitive compared with the tools made by early Homo sapiens, the tools made by Homo erectus are much more complex than the simple, small pebble tools of earlier hominines. The most characteristic of these tools was a teardrop-shaped hand ax, known to archaeologists as an Acheulean ax.

Scientific study of Homo erectus began in the late 19th century. Excited by Charles Darwin's theory of evolution and fossil discoveries in Europe, scientists began to search for the fossilized remains of the ‘missing link’, the evolutionary ancestor of both human beings and modern apes. In 1891 Dutch anthropologist Eugene Dubois travelled to Java, Indonesia, where he unearthed the top of a skull and a leg bone of an extinct hominine. Measurements of the skull suggested that the creature had possessed a large brain, measuring 850 cc, while the leg-bone anatomy suggested that it had walked upright. In recognition of these characteristics, Dubois named the species Pithecanthropus erectus, or ‘erect ape-man’.

Canadian anthropologist Davidson Black found similar fossils in China in the late 1920s. Black named his discovery Sinanthropus pekinensis, or ‘Peking Man’. Later studies by Dutch scientist G. H. von Koenigswald and German scientist Franz Weidenreich showed that the fossils discovered by Dubois and Black came from the same species, which was eventually named Homo erectus.

Since these earliest discoveries, Homo erectus fossils have been found in East Africa, South Africa, Ethiopia, and various parts of Asia. Kenyan fossil hunter Kamoya Kimeu discovered an almost complete Homo erectus skeleton, known as the Turkana boy, near Lake Turkana in northern Kenya in 1984. The oldest known specimen, dated at almost two million years old, also comes from northern Kenya. Recently developed dating methods have shown that Homo erectus also lived in Java almost two million years ago. Scientific assumptions about Homo erectus have nevertheless changed dramatically since the early 1990s. Anthropologists long assumed that the species spread from Africa to parts of Asia and Europe and that these dispersed populations gradually evolved into Homo sapiens, or modern humans. Most anthropologists now think it more likely that Homo sapiens originated from a small population in Africa within the past 200,000 years. According to this theory, descendants of this African population of Homo sapiens spread throughout the eastern hemisphere, replacing populations of more ancient hominines, perhaps with limited interbreeding.

Many anthropologists now believe that some Homo erectus specimens should be classified as a separate species named Homo ergaster. According to this view, Homo ergaster appeared first in East Africa and quickly spread into Asia, where it evolved into Homo erectus. Homo sapiens arose in Africa from a population descended from Homo ergaster. Until recently, Homo erectus was thought to have died out about 300,000 years ago. Recent studies of Homo erectus populations in Java suggest that they may have lived until as recently as 30,000 years ago, long after the evolution of modern humans.

Anthropologists also debate whether Homo erectus used language. Some scientists argue that the brain size of Homo erectus, the shape of its vocal structures, and the complexity of its behaviour suggest that it had a capacity for spoken language far beyond the rudimentary vocalizations of apes. Other anthropologists reject this conclusion. They point out that the first evidence of artistic expression, a trait closely linked with language, appears only about 40,000 years ago. These skeptics also point to the primitive quality of the tools associated with Homo erectus. Some anatomical evidence likewise suggests that Homo erectus lacked full language abilities. The spinal column of early Homo erectus was much narrower than that of modern humans. This anatomical characteristic implies that Homo erectus had fewer nerves to control the subtle movements of the rib cage required for the production of spoken language. This question may remain unanswered, because, unlike stone tools, spoken words never become part of the archaeological record.

The skulls and teeth of early African populations of middle Homo differed subtly from those of later H. erectus populations from China and the island of Java in Indonesia. H. ergaster makes a better candidate for an ancestor of the modern human line because Asian H. erectus has some specialized features not seen in later humans, including our own species. H. heidelbergensis has similarities to both H. erectus and the later species H. neanderthalensis, and it may have been a transitional species evolving between middle Homo and the line to which modern humans belong.

Homo ergaster probably first evolved in Africa around two million years ago. This species had a rounded cranium with a brain size of between 700 and 850 cu cm (forty-three to fifty-two cu in), a prominent brow ridge, small teeth, and many other features that it shared with the later H. erectus. Many paleoanthropologists consider H. ergaster a good candidate for an ancestor of modern humans because it had several modern skull features, including proportionally thin cranial bones. Most H. ergaster fossils come from the time range of 1.8 million to 1.5 million years ago.

The most important fossil of this species yet found is a nearly complete skeleton of a young male from West Turkana, Kenya, which dates to about 1.55 million years ago. Scientists determined the sex of the skeleton from the shape of its pelvis. They also found from patterns of tooth eruption and bone growth that the boy had died when he was between nine and twelve years old. The Turkana boy, as the skeleton is known, had elongated leg bones and arm, leg, and trunk proportions that essentially match those of modern humans, in sharp contrast with the apelike proportions of H. habilis and Australopithecus afarensis. He appears to have been quite tall and slender. Scientists estimate that, had he grown into adulthood, the boy would have reached a height of 1.8 m (6 ft) and a weight of 68 kg (150 lb). The anatomy of the Turkana boy shows that H. ergaster was particularly well adapted for walking and perhaps for running long distances in a hot environment (a tall and slender body dissipates heat well) but not for any significant amount of tree climbing. The oldest humanlike fossils outside Africa have also been classified as H. ergaster, dated at nearly 1.75 million years old. These finds, from the Dmanisi site in the southern Caucasus Mountains of Georgia, consist of several crania, jaws, and other fossilized bones. Some of these are strikingly like East African H. ergaster, but others are smaller or larger than H. ergaster, suggesting a high degree of variation within a single population.

H. ergaster, H. rudolfensis, and H. habilis, along with possibly two robust australopith species, all might have coexisted in Africa around 1.9 million years ago. This finding goes against a traditional paleoanthropological view that human evolution consisted of a single line that evolved progressively over time: an australopith species followed by early Homo, then middle Homo, and finally H. sapiens. It appears that periods of species diversity and extinction have been common during human evolution, and that modern H. sapiens has the rare distinction of being the only living human species today.

Although H. ergaster appears to have coexisted with several other human species, they probably did not interbreed. Mating rarely succeeds between two species with significant skeletal differences, such as H. ergaster and H. habilis. Many paleoanthropologists now believe that H. ergaster descended from an earlier population of Homo-perhaps one of the two known species of early Homo-and that the modern human line descended from H. ergaster.

Sophisticated dating techniques combined with new fossil discoveries suggest that skeletal remains unearthed in Africa in 1995 come from the earliest known human ancestors to walk upright, according to a report published in the journal Nature on May 7, 1998.

Researchers said the new findings suggested that bipedalism (walking on two legs) emerged 4.07 million to 4.17 million years ago, about 500,000 years earlier than was previously believed. Experts said the new research had important implications for the study of human origins because bipedalism is widely considered a key evolutionary adaptation that set the human lineage apart from that of other primates.

The new findings are based on fossils found three years ago in northern Kenya near Lake Turkana. Scientists identified the fossils as belonging to a newly discovered primordial human species, Australopithecus anamensis, a creature with apelike teeth and jaws, long arms, and a small brain.

Initial efforts to set the age of the sediments in which the fossils were discovered failed, raising doubts about the fossils' antiquity. In addition, a lower-leg bone providing critical evidence of bipedalism was found in a different sedimentary layer, suggesting the bone could be younger or from a different species.

Nevertheless, a new dating effort, led by anthropologist Meave G. Leakey of the National Museums of Kenya, used an argon-dating technique that examined crystals in sedimentary volcanic ash. Researchers said the technique showed the lower-leg bone to be only a little younger than the other fossils, which were dated at 4.07 million to 4.17 million years old. This finding indicated that the remains belonged to the same species. The dating analysis was further supported by the subsequent discovery of dozens of new fossils in the area, the researchers said.

Before the discovery of Australopithecus anamensis, the earliest known bipedal human ancestor was Australopithecus afarensis, the species of the famous “Lucy” skeleton discovered in Ethiopia in 1974 and estimated to be three million to 3.7 million years old. Based on the new findings, some scientists believe that A. anamensis may be the most ancient species of australopithecine.

One of the earliest defining human traits, bipedalism (walking on two legs as the primary form of locomotion) evolved more than four million years ago. Fossils show that the evolutionary line leading to us had achieved a substantially upright posture by around four million years ago, then began to increase in body size and in relative brain size around 2.5 million years ago. However, other important human characteristics, such as a large and complex brain, the ability to make and use tools, and the capacity for language, developed more recently. Many advanced traits, including complex symbolic expression such as art, and elaborate cultural diversity, emerged mainly during the past 100,000 years.

Few books have rocked the world the way On the Origin of Species did. Influenced in part by British geologist Sir Charles Lyell's theory of a gradually changing earth, British naturalist Charles Darwin spent decades developing his theory of gradual evolution through natural selection before he published his book in 1859. The logical, and intensely controversial, extension of Darwin's theory was that humans, too, evolved through the ages. For people who accepted the biblical view of creation, the idea that human beings shared common roots with lower animals was shocking. In this excerpt from On the Origin of Species, Darwin carefully sidesteps the issue of human evolution (as he did throughout the book), focusing instead on competition and adaptation in lower animals and plants. The Darwinian process of evolution by natural selection is fundamentally very simple: natural selection occurs whenever genetically influenced variation among individuals affects their survival and reproduction. If a gene codes for characteristics that result in fewer viable offspring in future generations, that gene is gradually eliminated. For instance, genetic mutations that increase vulnerability to infection, or cause foolish risk taking or lack of interest in sex, will never become common. On the other hand, genes that cause resistance to infection, appropriate risk taking, and success in choosing fertile mates are likely to spread in the gene pool, even if they have substantial costs.

A classic example is the spread of a gene for dark wing colour in a British moth population living downwind from major sources of air pollution. Pale moths were conspicuous on smoke-darkened trees and easily caught by predators, while rare mutant dark forms of the moth, whose colour more closely matched that of the bark, escaped the predators' beaks. As the tree trunks became darker, the mutant gene spread rapidly and largely displaced the gene for pale wing colour. That is all there is to it. Natural selection involves no plan, no goal, and no direction, just genes increasing and decreasing in frequency depending on whether individuals with those genes have, relative to other individuals, greater or lesser reproductive success.
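Since the passage describes selection as nothing more than gene frequencies shifting with relative reproductive success, the moth case can be sketched numerically. The survival values and starting frequency below are purely illustrative assumptions, not real peppered-moth data.

```python
# Illustrative sketch of viability selection: a gene's frequency rises
# or falls depending only on the relative survival of its carriers.
# The numbers here are made up for illustration, not measured values.

def next_frequency(p, w_dark, w_pale):
    """One generation of selection on the dark-wing gene.
    p: current frequency of the dark-wing gene; w_*: relative survival."""
    mean_w = p * w_dark + (1 - p) * w_pale
    return p * w_dark / mean_w

p = 0.01  # the dark form starts as a rare mutant
for generation in range(50):
    # pale moths on darkened trunks are eaten more often (assumed rates)
    p = next_frequency(p, w_dark=1.0, w_pale=0.7)
print(round(p, 3))  # the dark-wing gene has largely displaced the pale one
```

With equal survival the frequency stays put; any consistent survival difference, however small, eventually drives one form toward fixation, which is all the moth story requires.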

The simplicity of natural selection has been obscured by many misconceptions. For instance, Herbert Spencer's nineteenth-century catchphrase 'survival of the fittest' is widely thought to summarize the process, but it actually promotes several misunderstandings. First, survival is of no consequence by itself. This is why natural selection has created some organisms, such as salmon and annual plants, that reproduce only once, then die. Survival increases fitness only because it increases later reproduction. Genes that increase lifetime reproduction will be selected for even if they result in reduced longevity. Conversely, a gene that decreases total lifetime reproduction will be eliminated by selection even if it increases an individual's survival.

Further confusion arises from the ambiguous meaning of 'fittest'. The fittest individual, in the biological sense, is not necessarily the healthiest, strongest, or fastest. In today's world, as in many of those of the past, individuals of outstanding athletic accomplishment need not be the ones who produce the most grandchildren, a measure that should be roughly correlated with fitness. To someone who understands natural selection, it is no surprise that parents are so concerned about their children's reproduction.

A gene or an individual cannot be called 'fit' in isolation but only for a particular species in a particular environment. Even in a single environment, every gene involves compromises. Consider a gene that makes rabbits more fearful and thereby helps to keep them from the jaws of foxes. Imagine that half the rabbits in a field have this gene. Because they do more hiding and less eating, these timid rabbits might, on average, be a bit less well fed than their bolder companions. If, hunkered down in the March snow waiting for spring, two thirds of them starve to death while this is the fate of only one third of the rabbits who lack the gene for fearfulness, then, come spring, only a third of the rabbits will have the gene for fearfulness. It has been selected against. It might be nearly eliminated by a few harsh winters. Milder winters or an increased number of foxes could have the opposite effect. It all depends on the current environment.
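The rabbit arithmetic can be checked directly; the figures below are the hypothetical ones from the passage (a 50/50 split, with two-thirds versus one-third winter mortality).

```python
# Checking the hypothetical rabbit numbers from the passage: half the
# field carries the fearfulness gene, and winter starvation hits the
# timid rabbits harder than the bold ones.

timid_survivors = 0.5 * (1 - 2 / 3)  # two thirds of the timid rabbits starve
bold_survivors = 0.5 * (1 - 1 / 3)   # only one third of the bold rabbits starve

fraction_with_gene = timid_survivors / (timid_survivors + bold_survivors)
print(fraction_with_gene)  # about 0.333: a third of the spring population
```

The gene went from half the population to a third in a single winter, which is why a run of harsh winters could nearly eliminate it.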

While natural selection has been changing us in many small ways in the last ten thousand years, this is but a moment on the scale of evolutionary time. Our ancestors of ten thousand, or perhaps even fifty thousand, years ago looked and acted fully human. If we could magically transport babies from that time and rear them in modern families, we could expect them to grow up into perfectly modern lawyers or farmers or athletes or cocaine addicts.

The point of all this is that we are specifically adapted to Stone Age conditions. These conditions ended a few thousand years ago, but evolution has not had time since then to adapt us to a world of dense populations, modern socioeconomic conditions, low levels of physical activity, and the many other novel aspects of the modern environment. We are not referring merely to the world of offices, classrooms, and fast-food restaurants. Life on any primitive farm or in any third-world village may also be thoroughly abnormal for people whose bodies were designed for the world of the Stone Age hunter-gatherer.

Even more specifically, we seem to be adapted to the ecological and socioeconomic conditions experienced by tribal societies living in the semiarid habitat characteristic of sub-Saharan Africa. This is most likely where our species originated and lived for tens of thousands of years, and where we spent perhaps 90 percent of our history after becoming fully human and recognizable as the species we are today. Prior to that was a far longer period of evolution in Africa in which our ancestors' skeletal features led scientists to give them names such as Homo erectus and Homo habilis. Yet even these more remote ancestors walked erect and used their hands for making and using tools. We can only guess at many aspects of their biology; speech capabilities and social organization are not apparent in stone artifacts and fossil remains, but there is no reason to doubt that their ways of life were rather similar to those of more recent hunter-gatherers.

Technological advances later allowed our ancestors to invade other habitats and regions, such as deserts, jungles, and forests. Beginning about one hundred thousand years ago, our ancestors began to disperse from Africa to parts of Eurasia, including seasonally frigid regions made habitable by advances in clothing, habitation, and food acquisition and storage. Yet despite the geographical and climatic diversity, people still lived in small tribal groups with hunter-gatherer economies. Grainfield agriculture, with its revolutionary alteration of human diet and socioeconomic systems, was practiced first in southwestern Asia about eight thousand years ago, and shortly thereafter in India and China. It took another thousand years or more to spread to central and western Europe and tropical Africa, and to begin independently in Latin America. Most of our ancestors of a few thousand years ago still lived in bands of hunter-gatherers. We are, in the words of some distinguished anthropologists, "Stone Agers in the fast lane."

Nevertheless, all humans are primates. Physical and genetic similarities show that the modern human species, Homo sapiens, has a very close relationship to another group of primate species, the apes. Humans and the so-called great apes (large apes) of Africa, chimpanzees (including bonobos, or so-called pygmy chimpanzees) and gorillas, share ancestors that lived sometime between eight million and six million years ago. The earliest humans evolved in Africa, and much of human evolution occurred on that continent. The fossils of early humans who lived between six million and two million years ago come entirely from Africa.

Most scientists distinguish among twelve to nineteen different species of early humans. Scientists do not all agree, however, about how the species are related or which ones simply died out. Many early human species-probably most of them-left no descendants. Scientists also debate over how to identify and classify particular species of early humans, and about what factors influenced the evolution and extinction of each species.

The tree of human evolution: fossil evidence suggests that the first humans evolved from ape ancestors at least six million years ago. Many species of humans followed, but only some left descendants on the branch leading to Homo sapiens. In the original slide show, white skulls represented species that lived around the time indicated at each point; gray skulls represented extinct human species.

Early humans first migrated out of Africa into Asia probably between two million and 1.7 million years ago. They entered Europe somewhat later, generally within the past one million years. Species of modern humans populated many parts of the world much later. For instance, people first came to Australia probably within the past 60,000 years, and to the Americas within the past 35,000 years. The beginnings of agriculture and the rise of the first civilizations occurred within the past 10,000 years.

The scientific study of human evolution is called paleoanthropology, a subfield of anthropology, the study of human culture, society, and biology. Paleoanthropologists search for the roots of human physical traits and behaviour. They seek to discover how evolution has shaped the potentials, tendencies, and limitations of all people. For many people, paleoanthropology is an exciting scientific field because it illuminates the origins of the defining traits of the human species, and the fundamental connections between humans and other living organisms on Earth. Scientists have abundant evidence of human evolution from fossils, artifacts, and genetic studies. However, some people find the idea of human evolution troubling because it can seem to conflict with religious and other traditional beliefs about how people, other living things, and the world developed. Yet many people have come to reconcile such beliefs with the scientific evidence.

Modern and early humans have undergone major anatomical changes over the course of evolution. This illustration depicts Australopithecus afarensis (centre), the earliest of the three species; Homo erectus, an intermediate species; and Homo sapiens, a modern human. H. erectus and modern humans are much taller than A. afarensis and have flatter faces and much larger brains. Modern humans have a larger brain than H. erectus and an almost flat face beneath the front of the braincase.

All species of organisms originate through the process of biological evolution. In this process, new species arise from a series of natural changes. In animals that reproduce sexually, including humans, the term species refers to a group whose adult members regularly interbreed, resulting in fertile offspring, that is, offspring themselves capable of reproducing. Scientists classify each species with a unique, two-part scientific name. In this system, modern humans are classified as Homo sapiens.

The mechanism for evolutionary change resides in genes, the basic units of heredity. Genes affect how the body and behaviour of an organism develop during its life. The information contained in genes can change, a process known as mutation. The way particular genes are expressed, that is, how they affect the body or behaviour of an organism, can also change. Over time, genetic change can alter a species' overall way of life, such as what it eats, how it grows, and where it can live.

Genetic changes can improve the ability of organisms to survive, reproduce, and, in animals, raise offspring. This process is called adaptation. Parents pass adaptive genetic changes to their offspring, and ultimately these changes become common throughout a population-a group of organisms of the same species that share a particular local habitat. Many factors can favour new adaptations, but changes in the environment often play a role. Ancestral human species adapted to new environments as their genes changed, altering their anatomy (physical body structure), physiology (bodily functions, such as digestion), and behaviour. Over long periods, evolution dramatically transformed humans and their ways of life.

Geneticists estimate that the human line began to diverge from that of the African apes between eight million and five million years ago (paleontologists have dated the earliest human fossils to at least six million years ago). This figure comes from comparing differences in the genetic makeup of humans and apes, and then calculating how long it probably took for those differences to develop. Using similar techniques and comparing the genetic variations among human populations around the world, scientists have calculated that all people may share common genetic ancestors that lived sometime between 290,000 and 130,000 years ago.
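The dating logic sketched here, comparing genetic differences and converting them to years with an assumed mutation rate, can be written out in a few lines. The distance and rate values below are purely illustrative assumptions, not the figures actually used in the studies mentioned.

```python
# Sketch of the molecular-clock reasoning described in the text:
# a divergence time is estimated from the fraction of differing
# genetic sites and an assumed substitution rate. Both input values
# are illustrative placeholders, not real measurements.

def divergence_time(genetic_distance, rate_per_year):
    """Differences accumulate along both diverging lineages,
    hence the factor of 2 in the denominator."""
    return genetic_distance / (2 * rate_per_year)

# e.g. a 1.2% sequence difference under a 1e-9 substitutions/site/year clock
years = divergence_time(0.012, 1e-9)
print(years)  # about six million years, the order of magnitude quoted above
```

The same arithmetic, applied to variation among living human populations, underlies the much more recent estimate for our shared genetic ancestors.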

Humans belong to the scientific order named Primates, a group of more than 230 species of mammals that also includes lemurs, lorises, tarsiers, monkeys, and apes. Modern humans, early humans, and other species of primates all have many similarities and some important differences. Knowledge of these similarities and differences helps scientists to understand the roots of many human traits, and the significance of each step in human evolution.

The origin of our own species, Homo sapiens, is one of the most hotly debated topics in paleoanthropology. This debate centres on whether or not modern humans have a direct relationship to H. erectus or to the Neanderthals, a well-known group of early humans who evolved within the past 250,000 years. Paleoanthropologists commonly use the term anatomically modern Homo sapiens to distinguish people of today from these similar predecessors.

Traditionally, paleoanthropologists classified as Homo sapiens any fossil human younger than 500,000 years old with a braincase larger than that of H. erectus. Thus, many scientists who believe that modern humans descend from a single line dating back to H. erectus use the name archaic Homo sapiens to refer to a variety of fossil humans that predate anatomically modern H. sapiens. The term 'archaic' denotes a set of physical features typical of Neanderthals and other species of late Homo before modern Homo sapiens. These features include a combination of a robust skeleton, a large but low braincase (positioned more behind the face than over it), and a lower jaw lacking a prominent chin. In this sense, Neanderthals are sometimes classified as a subspecies of archaic H. sapiens, H. sapiens neanderthalensis. Other scientists think that the variation in archaic fossils falls into clearly identifiable sets of traits, and that any type of human fossil exhibiting a unique set of traits should receive a new species name. According to this view, the Neanderthals belong to their own species, H. neanderthalensis.
