August 2, 2010

Until very recently it could have been said that most approaches to the philosophy of science were ‘cognitive’. This includes ‘logical positivism’, since nearly all of those who wrote about the nature of science were in agreement that science ought to be ‘value-free’. This had been a particular emphasis on the part of the first positivists, as it would be for their twentieth-century successors. Science, so it is said, deals with ‘facts’, and facts and values are irreducibly distinct. Facts are objective: they are what we seek in our knowledge of the world. Values are subjective: they bear the mark of human interest; they are the radically individual products of feeling and desire. Value cannot, therefore, be inferred from fact, and fact cannot be influenced by value. There were philosophers, notably some in the Kantian tradition, who viewed the relation between fact and value rather differently. But their views were overshadowed by the legacy of three centuries of largely empiricist reflection on the ‘new’ sciences ushered in by Galileo Galilei (1564-1642), the Italian scientist whose distinction belongs to the history of physics and astronomy rather than to natural philosophy.


The philosophical importance of Galileo’s science rests largely upon the following closely related achievements: (1) his stunningly successful arguments against Aristotelian science; (2) his demonstration that mathematics is applicable to the real world; (3) his conceptually powerful use of experiments, both actual and employed regulatively; (4) his treatment of causality, replacing appeal to hypothesized natural ends with a quest for efficient causes; and (5) his unwavering confidence in the new style of theorizing that would come to be known as ‘mechanical explanation’.

A century later, the maxim that scientific knowledge is ‘value-laden’ seems almost as entrenched as its opposite was earlier. It is supposed that the gap between fact and value has been breached, and philosophers of science seem quite at home with the thought that science and value may be closely intertwined after all. What has happened to bring about such an apparently radical change? What are its implications for the objectivity of science, the prized characteristic that, from Plato’s time onwards, has been assumed to set off real knowledge (epistēmē) from mere opinion (doxa)? To answer these questions adequately, one would first have to know something of the reasons behind the decline of logical positivism, as well as of the diversity of the philosophies of science that have succeeded it.

More generally, the interdisciplinary field of cognitive science is burgeoning on several fronts. Contemporary philosophical reflection about the mind, which has been quite intensive, has been influenced by this empirical inquiry, to the extent that the boundary lines between them are blurred in places.

Nonetheless, the philosophy of mind at its core remains a branch of metaphysics, traditionally conceived. Philosophers continue to debate foundational issues in terms not radically different from those in vogue in previous eras. Many issues in the metaphysics of science hinge on the notion of ‘causation’. This notion is as important in science as it is in everyday thinking, and much scientific theorizing is concerned specifically to identify the ‘causes’ of various phenomena. However, there is little philosophical agreement on what it is to say that one event is the cause of another.

Modern discussion of causation starts with the Scottish philosopher, historian, and essayist David Hume (1711-76), who argued that causation is simply a matter of ‘constant conjunction’. Hume denied that we have innate ideas; denied that the causal relation is observably anything other than constant conjunction; denied that there are observable necessary connections anywhere; and denied that there is either an empirical or a demonstrative proof for the assumptions that the future will resemble the past and that every event has a cause. He likewise denied that there is an irresolvable dispute between advocates of free will and determinism, that extreme scepticism is coherent, and that we can find the experiential source of our ideas of self, substance, or God.

According to Hume (1978), one event causes another if and only if events of the type to which the first event belongs regularly occur in conjunction with events of the type to which the second event belongs. This formulation, however, leaves a number of questions open. First, there is the problem of distinguishing genuine ‘causal laws’ from ‘accidental regularities’. Not all regularities are sufficiently law-like to underpin causal relationships. Being a screw in my desk could well be constantly conjoined with being made of copper, without its being true that those screws are made of copper because they are in my desk. Secondly, the idea of constant conjunction does not give a ‘direction’ to causation. Causes need to be distinguished from effects. But knowing that A-type events are constantly conjoined with B-type events does not tell us which of ‘A’ and ‘B’ is the cause and which the effect, since constant conjunction is itself a symmetric relation. Thirdly, there is a problem about ‘probabilistic causation’. When we say that causes and effects are constantly conjoined, do we mean that the effects are always found with the causes, or is it enough that the causes make the effects probable?
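
As a rough regimentation (the notation here is mine, not Hume’s), the regularity account can be displayed as follows, where C and E are the types of the cause-event and the effect-event:

\[ c \text{ causes } e \iff Cc \,\wedge\, Ee \,\wedge\, \forall x\,\big(Cx \rightarrow \exists y\,(Ey \wedge \mathrm{Conj}(x,y))\big) \]

Here \(\mathrm{Conj}(x,y)\) is shorthand for the spatiotemporal conjunction of x and y. Displayed this way, the second difficulty above is visible at a glance: constant conjunction holds of the pair (C, E) just as well as of the pair (E, C), so nothing in the formula marks which type is the cause.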

Many philosophers of science during the past century have preferred to talk about ‘explanation’ rather than causation. According to the covering-law model of explanation, something is explained if it can be deduced from premises which include one or more laws. As applied to the explanation of particular events, this implies that one particular event can be explained if it is linked by a law to another particular event. However, while they are often treated as separate theories, the covering-law account of explanation is at bottom little more than a variant of Hume’s constant-conjunction account of causation. This affinity shows up in the fact that the covering-law account faces essentially the same difficulties as Hume’s: (1) in appealing to deduction from ‘laws’, it needs to explain the difference between genuine laws and accidentally true regularities; (2) it permits the explanation of causes by effects, as well as of effects by causes; after all, it is as easy to deduce the height of the flag-pole from the length of its shadow and the laws of optics as the other way round; (3) are the laws invoked in explanation required to be exceptionless and deterministic, or is it acceptable, say, to appeal to the merely probabilistic fact that smoking makes cancer more likely in explaining why some particular person develops cancer?
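
The covering-law model is standardly displayed as Hempel’s deductive-nomological schema (a textbook presentation, not a quotation from the argument above): the explanandum E is deduced from laws together with particular conditions.

\[ \underbrace{L_1,\ldots,L_k}_{\text{laws}},\ \underbrace{C_1,\ldots,C_m}_{\text{particular conditions}} \ \vdash\ E \]

Difficulty (2) is then simply the observation that the deduction runs equally well in reverse: the laws of optics plus the shadow’s length entail the flag-pole’s height just as smoothly as the laws plus the height entail the shadow’s length.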

Nevertheless, one of the central aims of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies exploited in the sciences. Another common goal is to construct philosophically illuminating analyses or explications of central theoretical concepts invoked in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on such crucial concepts as fitness and biological function. By introducing ‘teleological considerations’, one such account views beliefs as states with a biological purpose and analyses their truth conditions specifically as those conditions with which they are biologically supposed to covary.

A teleological theory of representation needs to be supplemented with a philosophical account of biological purpose, generally a selectionist account, according to which item ‘F’ has purpose ‘G’ if and only if it is now present as a result of past selection by some process which favoured items with ‘G’. So a given belief type will have the purpose of covarying with ‘P’, say, if and only if some mechanism has selected it because it has covaried with ‘P’ in the past.
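
Gathering the two clauses into one place (a sketch that merely compresses the prose above; the symbols are not meant to add precision the account does not claim):

\[ r \text{ represents } x \iff \text{it is } r\text{’s function to indicate } x \]
\[ F \text{ has purpose } G \iff F \text{ is now present as a result of past selection favouring items with } G \]

On this selectionist reading, a belief type has the content P just in case some mechanism selected it because its tokens covaried with P in the past.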

Similarly, a teleological theory holds that ‘r’ represents ‘x’ if it is r’s function to indicate (i.e., covary with) ‘x’. Teleological theories differ depending on the theory of functions they import. Perhaps the most important distinction is that between historical theories of functions and a-historical theories. Historical theories individuate functional states (hence, contents) in a way that is sensitive to the historical development of the state, i.e., to factors such as the way the state was ‘learned’, or the way it evolved. A historical theory might hold that the function of ‘r’ is to indicate ‘x’ only if the capacity to token ‘r’ was developed (selected, learned) because it indicates ‘x’. Thus, a state physically indistinguishable from ‘r’ (physical states being a-historical) but lacking r’s historical origins would not represent ‘x’, according to historical theories.

The American philosopher of mind Jerry Alan Fodor (1935- ) is known for his resolute ‘realism’ about the nature of mental functioning, taking the analogy between thought and computation seriously. Fodor believes that mental representations should be conceived as individual states with their own identities and structures, like formulae transformed by processes of computation or thought. His views are frequently contrasted with those of ‘holists’ such as the American philosopher Donald Herbert Davidson (1917-2003), or of ‘instrumentalists’ about mental ascription, such as the British philosopher of logic and language Michael Anthony Eardley Dummett (1925- ). In recent years Fodor has become a vocal critic of some of the aspirations of cognitive science.

Nonetheless, the teleological proposal runs into a familiar difficulty about ‘causation’ and ‘content’, which can be put as follows. Suppose that there is a causal path from A’s to ‘A’s’ and a causal path from B’s to ‘A’s’, and our problem is to find some difference between B-caused ‘A’s’ and A-caused ‘A’s’ in virtue of which the former but not the latter misrepresent. Perhaps the two paths differ in their counterfactual properties. In particular, although both A’s and B’s do in fact cause ‘A’s’, perhaps only A’s would cause ‘A’s’ in, as one might say, ‘optimal circumstances’. We could then hold that a symbol expresses its ‘optimal property’, viz., the property that would causally control its tokening in optimal circumstances. Correspondingly, when the tokening of a symbol is causally controlled by properties other than its optimal property, the tokens that eventuate are ipso facto wild.

Suppose, then, that this story about ‘optimal circumstances’ is proposed as part of a naturalized semantics for mental representations. In that case it is, of course, essential that it be possible to specify the optimal circumstances for tokening a mental representation in terms that are not themselves either semantical or intentional. (It would not do, for example, to identify the optimal circumstances for tokening a symbol as those in which the tokens are true; that would be to assume precisely the sort of semantical notion that the theory is supposed to naturalize.) Accordingly, the suggestion, to put it briefly, is that appeals to ‘optimality’ should be buttressed by appeals to ‘teleology’: optimal circumstances are the ones in which the mechanisms that mediate symbol tokening are functioning ‘as they are supposed to’. In the case of mental representations, these would paradigmatically be circumstances where the mechanisms of belief fixation are functioning as they are supposed to.

So, then: the teleology of the cognitive mechanisms determines the optimal conditions for belief fixation, and the optimal conditions for belief fixation determine the content of beliefs. So the story goes.

An objection arises here. The teleology story perhaps strikes one as plausible in that it understands one normative notion, truth, in terms of another normative notion, optimality. But the appearance is spurious: there is no guarantee that the kind of optimality that teleology reconstructs has much to do with the kind of optimality that the explication of ‘truth’ requires. When mechanisms of repression are working ‘optimally’, when they are working ‘as they are supposed to’, what they deliver are likely to be ‘falsehoods’.

Again, there is no obvious reason why conditions that are optimal for the tokening of one sort of mental symbol need be optimal for the tokening of other sorts. Perhaps the optimal conditions for fixing beliefs about very large objects are different from the optimal conditions for fixing beliefs about very small ones, and different again from the optimal conditions for fixing beliefs about sights and sounds. But this raises the possibility that, if we are to say which conditions are optimal for the fixation of a belief, we will have to know what the content of the belief is, what it is a belief about. Our explication of content would then require a notion of optimality, whose explication in turn requires a notion of content, and the resulting pile would clearly be unstable.

Functional role theories, by contrast, hold that r’s representing ‘x’ is grounded in the functional role ‘r’ has in the representing system, i.e., in the relations imposed by specified cognitive processes between ‘r’ and other representations in the system’s repertoire. Functional role theories take their cue from such common-sense ideas as that people cannot believe that cats are furry if they do not know that cats are animals or that fur is like hair.

That being said, nowhere is the new period of collaboration between philosophy and other disciplines more evident than in the new subject of cognitive science. Cognitive science has from its very beginning been ‘interdisciplinary’ in character, and is in effect the joint property of psychology, linguistics, philosophy, computer science and anthropology. There is, therefore, a great variety of different research projects within cognitive science, but its central area, its theoretical core, rests on the assumption that the mind is best viewed as analogous to a digital computer. The basic idea behind cognitive science is that recent developments in computer science and artificial intelligence have enormous importance for our conception of human beings. The basic inspiration for cognitive science went something like this: human beings do information processing; computers are designed precisely to do information processing; therefore, one way to study human cognition, perhaps the best way to study it, is to study it as a matter of computational information processing. Some cognitive scientists think that the computer is just a metaphor for the human mind; others think that the mind is literally a computer program. But it is fair to say that without the computational model there would not have been a cognitive science as we now understand it.

An Essay Concerning Human Understanding is the first modern systematic presentation of empiricist epistemology, and as such had important implications for the natural sciences and for philosophy of science generally. Like his predecessor Descartes, its author, the English philosopher John Locke (1632-1704), began his account of knowledge from the conscious mind aware of ideas. Unlike Descartes, however, he was concerned not to build a system based on certainty, but to identify the mind’s scope and limits. The premise upon which Locke built his account, including his account of the natural sciences, is that the ideas which furnish the mind are all derived from experience. He thus totally rejected any kind of innate knowledge, in this consciously opposing Descartes, who had argued that it is possible to come to knowledge of fundamental truths about the natural world through reason alone. Descartes (1596-1650) had argued that we can come to know the essential nature of both ‘mind’ and ‘matter’ by pure reason. Locke accepted Descartes’s criterion of clear and distinct ideas as the basis for knowledge, but denied any source for them other than experience. Information that came in via the five senses (ideas of sensation), together with ideas engendered from inner experience (ideas of reflection), provided the building blocks of the understanding.

Locke combined his commitment to ‘the new way of ideas’ with a wholehearted espousal of the ‘corpuscular philosophy’ of the Irish scientist Robert Boyle (1627-92). This was, in essence, an acceptance of a revised, more sophisticated version of the account of matter and its properties that had been advocated by the ancient atomists and recently supported by Galileo (1564-1642) and Pierre Gassendi (1592-1655). Boyle argued from theory and experiment that there were powerful reasons to accept some kind of corpuscular account of matter and its properties. He called the latter qualities, which he distinguished as primary and secondary. The distinction between primary and secondary qualities may be reached by two rather different routes: either from the nature or essence of matter, or from the nature and essence of experience, though in practice these have tended to run together. The former considerations make the distinction seem like an a priori, or necessary, truth about the nature of matter, while the latter make it appear to be an empirical hypothesis. Locke, too, accepted this account, arguing that the ideas we have of the primary qualities of bodies resemble those qualities as they are in the object, whereas the ideas of the secondary qualities, such as colour, taste, and smell, do not resemble their causes in the object.

There is no strong connection between acceptance of the primary-secondary quality distinction and Locke’s empiricism: Descartes had also argued strongly for the distinction, which gained near-universal acceptance among natural philosophers, and Locke embraced it within his more comprehensive empirical philosophy. But Locke’s empiricism did have major implications for the natural sciences, as he well realized. His account begins with an analysis of experience. All ideas, he argues, are either simple or complex. Simple ideas are those like the red of a particular rose or the roundness of a snowball. Complex ideas, such as our ideas of the rose or the snowball, are combinations of simple ideas. We may create new complex ideas in our imagination, a parallelogram, for example. But simple ideas can never be created by us: we just have them or not, and characteristically they are caused, for example by the impact on our senses of rays of light or vibrations of sound in the air coming from a particular physical object. Since we cannot create simple ideas, and they are determined by our experience, our knowledge is limited in a very strict and uncompromising way. Besides, our experiences are always of the particular, never of the general. It is this particular simple idea or that particular complex idea that we apprehend. We never, in that sense, apprehend a universal truth about the natural world, but only particular instances. It follows that all claims to generality about that world, for example all claims to identify what were then beginning to be called the laws of nature, must to that extent go beyond our experience and thus be less than certain.

The Scottish philosopher, historian, and essayist David Hume (1711-76), whose famous discussion appears in both of his major philosophical works, the Treatise (1739) and the Enquiry (1777), couches the issue in terms of the concept of causality. The idea of causation, Hume contends, involves three components:

1. That there should be a regular concomitance between events of the type of the cause and those of the type of the effect.

2. That the cause event should be contiguous with the effect event.

3. That the cause event should necessitate the effect event.

Tenets (1) and (2) occasion no difficulty for Hume, since he believes that there are patterns of sensory impressions non-problematically related to the ideas of regular concomitance and of contiguity. But the third requirement is deeply problematic, in that the idea of necessity that figures in it seems to have no sensory impression correlated with it. However carefully and attentively we scrutinize a causal process, we do not seem to observe anything that might be the observed correlate of the idea of necessity. We do not observe any kind of activity, power, or necessitation. All we ever observe is one event following another which is logically independent of it. Nor is this a logical necessity, since, as Hume observes, one can jointly assert the existence of the cause and deny the existence of the effect, as specified in the causal statement or the law of nature, without contradiction. What, then, are we to make of the seemingly central notion of necessity that is deeply embedded in the very idea of causation, or lawfulness? To this query Hume gives an ingenious and telling answer. There is an impression corresponding to the idea of causal necessity, but it is a psychological phenomenon: our expectation that events similar to those we have already observed to be correlated with the cause-type of event will come about in this case too. Where does that impression come from? It is created, as a kind of mental habit, by the repeated experience of regular concomitance between events of the type of the cause and the occurrence of events of the type of the effect. The idea of necessity thus reduces to the impression that corresponds to the idea of regular concomitance, and the law of nature asserts nothing but the existence of the regular concomitance.

At this point in our narrative, the question at once arises whether this factor of life in nature, thus interpreted, corresponds to anything that we observe in nature. All philosophy is an endeavour to obtain a self-consistent understanding of things observed. Thus its development is guided in two ways: by the demand for coherent self-consistency, and by the elucidation of things observed. How, then, are we to conduct such comparisons with our direct observations? Should we turn to science? No. There is no way in which the scientific endeavour can detect the aliveness of things: its methodology rules out the possibility of such a finding. On this point the English mathematician and philosopher Alfred North Whitehead (1861-1947) comments that science can find no individual enjoyment in nature and no creativity in nature; it finds mere rules of succession. These negations are true of natural science; they are inherent in its methodology. The reason for this blindness of physical science lies in the fact that such science deals with only half the evidence provided by human experience. It divides the seamless coat, or, to change the metaphor into a happier form, it examines the coat, which is superficial, and neglects the body, which is fundamental.

Whitehead claims that the methodology of science makes it blind to a fundamental aspect of reality, namely the primacy of experience: it neglects half of the evidence. Working within Descartes’ dualistic frame of reference, with matter and mind as separate and incommensurate, science limits itself to the study of objectivised phenomena, neglecting the subject and the mental events that constitute his or her experience.

Both the adoption of the Cartesian paradigm and the neglect of mental events are reason enough to suspect ‘blindness’, but there is no need to rely on suspicions: the blindness is evident. Scientific discoveries, impressive as they are, are fundamentally superficial. Science can express regularities observed in nature, but it cannot explain the reasons for their occurrence. Consider, for example, Newton’s law of gravity. It shows that such apparently disparate phenomena as the falling of an apple and the revolution of the earth around the sun are aspects of the same regularity: gravity. According to this law, the gravitational attraction between two objects decreases in proportion to the square of the distance between them. Why is that so? Newton could not provide an answer. Simpler still, why does space have three dimensions? Why is time one-dimensional? Whitehead notes, ‘None of these laws of nature gives the slightest evidence of necessity. They are [merely] the modes of procedure which within the scale of observation do in fact prevail’.
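
For reference, the regularity in question is Newton’s law of universal gravitation, which in modern notation reads:

\[ F \;=\; G\,\frac{m_1 m_2}{r^2} \]

where \(F\) is the attractive force between two bodies of masses \(m_1\) and \(m_2\), \(r\) is the distance between them, and \(G\) is the gravitational constant. The law states the inverse-square dependence exactly, but, as Whitehead observes, it is silent on why the exponent should be 2.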

This analysis reveals that the capacity of science to fathom the depths of reality is limited. For example, if reality is in fact made up of discrete units, and these units have the fundamental character of being ‘pulsing throbs of experience’, then science may be in a position to discover the discreteness, but it has no access to the subjective side of nature since, as the Austrian physicist Erwin Schrödinger (1887-1961) points out, we ‘exclude the subject of cognizance from the domain of nature that we endeavour to understand’. It follows that in order to find ‘the elucidation of things observed’ in relation to their experiential or aliveness aspect, we cannot rely on science; we need to look elsewhere.

If, instead of relying on science, we rely on our immediate observation of nature and of ourselves, we find, first, that this [i.e., Descartes’] stark division between mentality and nature has no ground in our fundamental observation: we find ourselves living within nature. Secondly, we should conceive mental operations as among the factors which make up the constitution of nature. And thirdly, we should reject the notion of idle wheels in the process of nature: every factor makes a difference, and that difference can only be expressed in terms of the individual character of that factor.

Whitehead proceeds to analyse our experiences in general, and our observations of nature in particular, and ends up with ‘mutual immanence’ as a central theme. This mutual immanence is obvious in the case of experience: I am a part of the universe, and, since I experience the universe, the experienced universe is part of me. Whitehead gives an example: ‘I am in the room, and the room is an item in my present experience. But my present experience is what I am now.’ A generalization of this relationship to the case of any actual occasion yields the conclusion that ‘the world is included within the occasion in one sense, and the occasion is included in the world in another sense’. The idea that each actual occasion appropriates its universe follows naturally from such considerations.

The description of an actual entity as a distinct unit is, therefore, only one part of the story. The other, complementary part is this: the very nature of each and every actual entity is one of interdependence with all the other actual entities in the universe. Each and every actual entity is a process of prehending, or appropriating, all the other actual entities and creating one new entity out of them all, namely itself.

There are two general strategies for distinguishing laws from accidentally true generalizations. The first stands by Hume’s idea that causal connections are mere constant conjunctions, and then seeks to explain why some constant conjunctions are better than others. That is, this first strategy accepts the principle that causation involves nothing more than certain events always happening together with certain others, and then seeks to explain why some such patterns, the ‘laws’, matter more than others, the ‘accidents’. The second strategy, by contrast, rejects the Humean presupposition that causation involves nothing more than co-occurrence, and instead postulates a relationship of ‘necessitation’, a kind of ‘cement’ which links events that are connected by law, but not those events (like having a screw in my desk and being made of copper) that are only accidentally conjoined.

There are a number of versions of the first, Humean strategy. The most successful, originally proposed by the Cambridge mathematician and philosopher F.P. Ramsey (1903-30) and later revived by the American philosopher David Lewis (1941-2001), holds that laws are those true generalizations that can be fitted into an ideal system of knowledge. The thought is that the laws are those patterns that are explicable in terms of basic science, either as fundamental principles themselves or as consequences of those principles, while accidents, although true, have no such explanation. Thus ‘All water at standard pressure boils at 100°C’ is a consequence of the laws governing molecular bonding, but the fact that all the screws in my desk are copper is not part of the deductive structure of any satisfactory science. Ramsey neatly encapsulated this idea by saying that laws are ‘consequences of those propositions which we should take as axioms if we knew everything and organized it as simply as possible in a deductive system’.

Advocates of the alternative, non-Humean strategy object that the difference between laws and accidents is not a ‘linguistic’ matter of deductive systematization, but rather a ‘metaphysical’ contrast between the kinds of links they report. They argue that there is a link in nature between being at 100°C and boiling, but not between being ‘in my desk’ and being ‘made of copper’, and that this has nothing to do with how descriptions of these links may fit into theories. According to the Australian philosopher D.M. Armstrong (1983), the most prominent defender of this view, the real difference between laws and accidents is simply that laws report relationships of natural ‘necessitation’, while accidents only report that two types of events happen to occur together.

Armstrong’s view may seem intuitively plausible, but it is arguable that the notion of necessitation simply restates the problem rather than solving it. Armstrong says that necessitation involves something more than constant conjunction: if two events are related by necessitation, then it follows that they are constantly conjoined; but two events can be constantly conjoined without being related by necessitation, as when the constant conjunction is just a matter of accident. So necessitation is a stronger relationship than constant conjunction. However, Armstrong and other defenders of this view say very little about what this extra strength amounts to, except that it distinguishes laws from accidents. Armstrong’s critics argue that a satisfactory account of laws ought to cast more light than this on the nature of laws.
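
Armstrong’s necessitation relation is standardly abbreviated N(F, G), a second-order relation holding between the universals F and G. The asymmetry just described can then be put in one line (a common schematic rendering of Armstrong 1983, not a quotation):

\[ N(F,G) \;\Rightarrow\; \forall x\,(Fx \rightarrow Gx), \qquad \forall x\,(Fx \rightarrow Gx) \;\not\Rightarrow\; N(F,G) \]

The critics’ complaint is that, beyond licensing the left-hand entailment, the theory says almost nothing about what N itself is.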

Hume said that the earlier of two causally related events is always the cause, and the later the effect. However, there are a number of objections to using the earlier-later ‘arrow of time’ to analyse the directional ‘arrow of causation’. For a start, it seems in principle possible that some causes and effects could be simultaneous. What is more, the idea that time is directed from ‘earlier’ to ‘later’ itself stands in need of philosophical explanation, and one of the most popular explanations makes the direction of time depend on the fact that cause-effect pairs have a characteristic orientation: it explains ‘earlier’ as the direction in which causes lie, and ‘later’ as the direction of effects. If we take this view, we will clearly need to find some account of the direction of causation which does not itself assume the direction of time.

A number of such accounts have been proposed. David Lewis (1979) has argued that the asymmetry of causation derives from an ‘asymmetry of over-determination’. The over-determination of present events by past events, as when a person dies after simultaneously being shot and struck by lightning, is a very rare occurrence; by contrast, the multiple ‘over-determination’ of present events by future events is absolutely normal. This is because the future, unlike the past, will always contain multiple traces of any present event. To use Lewis’s example, when the president presses the red button in the White House, the future effects include not only the dispatch of nuclear missiles, but also the fingerprint on the button, his trembling, the further depletion of his gin bottle, the recording of the button’s click on tape, the emission of light waves bearing the image of his action through the window, the passage of the signal current along the wire, and so on, and so on.

Lewis relates this asymmetry of over-determination to the asymmetry of causation as follows. If we suppose the cause of a given effect to have been absent, then this implies the effect would have been absent too, since (apart from freak cases like the lightning-shooting one) there will not be any other causes left to ‘fix’ the effect. By contrast, if we suppose a given effect of some cause to have been absent, this does not imply the cause would have been absent, for there are still all the other traces left to ‘fix’ the cause. Lewis argues that these counterfactual considerations suffice to show why causes are different from effects.

Other philosophers appeal to a probabilistic variant of Lewis’s asymmetry. Following the philosopher of science and probability theorist Hans Reichenbach (1891-1953), they note that the different causes of any given type of effect are normally probabilistically independent of each other; by contrast, the different effects of any given type of cause are normally probabilistically correlated. For example, both obesity and high excitement can cause heart attacks, but this does not imply that fat people are more likely to get excited than thin ones. By contrast, the fact that both lung cancer and nicotine-stained fingers can result from smoking does imply that lung cancer is more likely among people with nicotine-stained fingers. So this account distinguishes effects from causes by the fact that the former, but not the latter, are probabilistically dependent on each other.
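
In probabilistic notation (a standard rendering of Reichenbach’s point; the ‘normally’ qualifications of the prose still apply), where \(C_1\) and \(C_2\) are distinct causes of one type of effect and \(E_1\) and \(E_2\) are distinct effects of one common cause:

\[ P(C_1 \wedge C_2) = P(C_1)\,P(C_2), \qquad P(E_1 \wedge E_2) > P(E_1)\,P(E_2) \]

The equality says the causes are independent (the obese are no more likely to be excitable), while the strict inequality says the effects are correlated (nicotine-stained fingers raise the probability of lung cancer).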

However, there is another course of thought in the philosophy of science: the tradition of ‘negative’ or ‘eliminative’ induction. From the English statesman and philosopher Francis Bacon (1561-1626), and in modern times from the philosopher of science Karl Raimund Popper (1902-1994), we have the idea of using logic to bring falsifying evidence to bear on hypotheses about what must universally be the case. Many thinkers accept in essence Popper’s solution to the problem of demarcating proper science from its imitators, namely that the former results in genuinely falsifiable theories whereas the latter do not. Falsifiability, notoriously, underlay many people’s objections to such ideologies as psychoanalysis and Marxism.
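
The logical core of eliminative induction is modus tollens: if a hypothesis H entails an observation O, and O fails, then H is refuted.

\[ (H \rightarrow O),\ \neg O \ \vdash\ \neg H \]

No finite run of confirming observations can prove a universal hypothesis, but a single counter-instance can disprove one; it is this asymmetry that Popper’s demarcation criterion exploits.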

Hume was interested in the processes by which we acquire knowledge: the processes of perceiving and thinking, of feeling and reasoning. He recognized that much of what we claim to know derives from other people at secondhand, thirdhand, or worse; moreover, our perceptions and judgements can be distorted by many factors, by what we are studying, as well as by the very act of study itself. The main reason, however, behind his emphasis on ‘probabilities and those other measures of evidence on which life and action entirely depend’ is this: all reasonings concerning ‘matter of fact’ are founded on the relation of cause and effect, and we can never infer the existence of one object from another unless they are connected together, mediately or immediately.

When we apparently observe a whole sequence, say of one ball hitting another, what exactly do we observe? And in the much commoner cases when we wonder about the unobserved causes or effects of the events we observe, what precisely are we doing?

Hume recognized that a notion of ‘must’ or necessity is a peculiar feature of causal relations, inferences and principles, and he challenges us to explain and justify the notion. He argued that there is no observable feature of events, nothing like a physical bond, which can be properly labelled the ‘necessary connection’ between a given cause and its effect: events simply occur, and there is no ‘must’ or ‘ought’ about them. However, repeated experience of pairs of events sets up a habit of expectation in us, such that when one of the pair occurs we inescapably expect the other. This expectation makes us infer the unobserved cause or unobserved effect of the observed event, and we mistakenly project this mental inference onto the events themselves. There is no necessity observable in causal relations; all that can be observed is regular sequence. There is necessity in causal inference, but it lies only in the mind. Once we realize that causation is a relation between pairs of events, we also realize that often we are not present for the whole sequence which we want to divide into ‘cause’ and ‘effect’. Our understanding of the causal relation is thus intimately linked with the role of causal inference, for only causal inferences entitle us to ‘go beyond what is immediately present to the senses’. But now two very important assumptions emerge behind the causal inference: the assumption that ‘like causes, in like circumstances, will always produce like effects’, and the assumption that ‘the course of nature will continue uniformly the same’, or, briefly, that the future will resemble the past. Unfortunately, this last assumption lacks either empirical or a priori proof; that is, it can be conclusively established neither by experience nor by thought alone.

Hume endorsed a standard seventeenth-century view that all our ideas are ultimately traceable, by analysis, to sensory impressions of an internal or external kind. Accordingly, he claimed that all his theses are based on ‘experience’, understood as sensory awareness together with memory, since only experience establishes matters of fact. But is our belief that the future will resemble the past properly construed as a belief concerning only a matter of fact? As the English philosopher Bertrand Russell (1872-1970) remarked early in the twentieth century, the real problem that Hume raises is whether future futures will resemble future pasts in the way that past futures really did resemble past pasts. Hume declares that ‘if . . . the past may be no rule for the future, all experience becomes useless, and can give rise to no inference or conclusion’. And yet, he held, the supposition cannot stem from innate ideas, since there are no innate ideas on his view; nor can it stem from any abstract formal reasoning. For one thing, the future can surprise us, and no formal reasoning seems able to embrace such contingencies; for another, even animals and unthinking people conduct their lives as if they assume the future resembles the past: dogs return for buried bones, children avoid a painful fire, and so forth. Hume is not deploring the fact that we have to conduct our lives on the basis of probabilities, and he is not saying that inductive reasoning could or should be avoided or rejected. Rather, he accepted inductive reasoning but tried to show that whereas formal reasoning of the kind associated with mathematics cannot establish or prove matters of fact, factual or inductive reasoning lacks the ‘necessity’ and ‘certainty’ associated with mathematics. His position is therefore clear: because ‘every effect is a distinct event from its cause’, only investigation can settle whether any two particular events are causally related. Causal inferences cannot be drawn with the force of logical necessity familiar to us from deduction, but, although they lack such force, they should not be discarded. In the context of causation, inductive inferences are inescapable and invaluable. What, then, makes ‘experience’ the standard of our future judgement? The answer is ‘custom’: it is a brute psychological fact, without which even animal life of a simple kind would be more or less impossible. ‘We are determined by custom to suppose the future conformable to the past’ (Hume, 1978). Nevertheless, whenever we need to calculate likely events we must supplement and correct such custom by self-conscious reasoning.

Nonetheless, the causal theory of reference will fail once it is recognized that all representation must occur under some aspect, and that the extensionality of causal relations is inadequate to capture the aspectual character of reference. The only kind of causation that could be adequate to the task of reference is intentional causation, or mental causation; but the causal theory of reference cannot concede that reference is ultimately achieved by some mental device, since the whole approach behind the causal theory was to try to eliminate the traditional mentalism of theories of reference and meaning in favour of objective causal relations in the world. The causal theory, though at present by far the most influential theory of reference, will for these reasons prove to be a failure.

If mental states are identical with physical states, presumably the relevant physical states are various sorts of neural states. Our concepts of mental states such as thinking, sensing, and feeling are, of course, different from our concepts of neural states, of whatever sort. But that is no problem for the identity theory. As J.J.C. Smart (1962), one of the first to argue for the identity theory, emphasized, the requisite identities do not depend on our understanding of the concepts of mental states or the meanings of mental terms. For ‘a’ to be identical with ‘b’, ‘a’ and ‘b’ must have the same properties, but the terms ‘a’ and ‘b’ need not mean the same. The governing principle is the indiscernibility of identicals: if ‘A’ is identical with ‘B’, then every property that ‘A’ has, ‘B’ has, and vice versa. This is sometimes known as Leibniz’s Law.
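
In the second-order notation in which it is usually displayed, Leibniz’s Law (the indiscernibility of identicals) reads:

\[ a = b \;\rightarrow\; \forall F\,(Fa \leftrightarrow Fb) \]

Note that this principle should not be confused with its converse, the identity of indiscernibles; it is the former, not the latter, that the identity theory invokes.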

But a problem does seem to arise about the properties of mental states. Suppose pain is identical with a certain firing of c-fibres. Although a particular pain is the very same state as a neural firing, we identify that state in two different ways: as a pain and as a neural firing. The state will therefore have certain properties in virtue of which we identify it as a pain and others in virtue of which we identify it as a neural firing. The properties in virtue of which we identify it as a pain will be mental properties, whereas those in virtue of which we identify it as a neural firing will be physical properties. This has seemed to many to lead to a kind of dualism at the level of the properties of mental states. Even if we reject dualism of substances and take people simply to be physical organisms, those organisms still have both mental and physical states. Similarly, even if we identify those mental states with certain physical states, those states will nonetheless have both mental and physical properties. So disallowing dualism with respect to substances and their states simply invites its reappearance at the level of the properties of those states.

There are two broad categories of mental property. Mental states such as thoughts and desires, often called ‘propositional attitudes’, have ‘content’ that can be described by ‘that’ clauses. For example, one can have a thought, or desire, that it will rain. These states are said to have intentional properties, or ‘intentionality’. Sensations, such as pains and sense impressions, lack intentional content, and have instead qualitative properties of various sorts.

The problem about mental properties is widely thought to be most pressing for sensations, since the painful quality of pains and the red quality of visual sensations seem to be irretrievably non-physical. And if mental states do actually have non-physical properties, the identity of mental states with physical states would not sustain a thoroughgoing mind-body materialism.

The Cartesian doctrine that the mental is in some way non-physical is so pervasive that even advocates of the identity theory have sometimes accepted it. The idea that the mental is non-physical underlies, for example, the insistence by some identity theorists that mental properties are really neutral as between being mental and physical. To be neutral in this way, a property would have to be neutral as to whether it is mental at all. Only if one thought that being mental meant being non-physical would one hold that defending materialism required showing that the ostensibly mental properties are neutral as regards whether or not they are mental.

But holding that mental properties are non-physical has a cost that is usually not noticed. A phenomenon is mental only if it has some distinctively mental property. So, strictly speaking, a materialist who claims that mental properties are non-physical must deny that any mental phenomena actually exist. This is the eliminative-materialist position advanced by the American philosopher and critic Richard Rorty (1979).

According to Rorty (1931-2007), ‘mental’ and ‘physical’ are incompatible terms: nothing can be both mental and physical, so mental states cannot be identical with bodily states. Rorty traces this incompatibility to our views about incorrigibility: we take reports of one’s own mental states to be incorrigible, but not reports of physical occurrences. But he also argues that we can imagine a people who describe themselves and each other using terms just like our mental vocabulary, except that those people do not take the reports made with that vocabulary to be incorrigible. Since Rorty takes a state to be a mental state only if one’s reports about it are taken to be incorrigible, his imaginary people do not ascribe mental states to themselves or each other. Nonetheless, the only difference between their language and ours is that we take as incorrigible certain reports which they do not. So their language has no less descriptive or explanatory power than ours. Rorty concludes that our mental vocabulary is idle, and that there are no distinctively mental phenomena.

This argument rests on building incorrigibility into the meaning of the term ‘mental’. If we do not, the way is open to interpret Rorty’s imaginary people as simply having a different theory of mind from ours, on which reports of one’s own mental states are corrigible. Their reports would thus be about mental states, as construed by their theory. Rorty’s thought experiment would then provide reason to conclude not that our mental terminology is idle, but only that this alternative theory of mental phenomena is correct. His thought experiment would thus sustain the non-eliminativist view that mental states are bodily states. Whether Rorty’s argument supports his eliminativist conclusion or the standard identity theory, therefore, depends solely on whether or not one holds that the mental is in some way non-physical.

Paul M. Churchland (1981) advances a different argument for eliminative materialism. According to Churchland, the common-sense concepts of mental states contained in our present folk psychology are, from a scientific point of view, radically defective. But we can expect that eventually a more sophisticated theoretical account will replace those folk-psychological concepts, showing that mental phenomena, as described by current folk psychology, do not exist. Since that account would be integrated into the rest of science, we would have a thoroughgoing materialist treatment of all phenomena. This argument, unlike Rorty’s, does not rely on assuming that the mental is non-physical.

But even if current folk psychology is mistaken, that does not show that mental phenomena do not exist, but only that they are not the way folk psychology describes them as being. We could conclude that they do not exist only if the folk-psychological claims that turn out to be mistaken actually define what it is for a phenomenon to be mental. Otherwise, the new theory would be about mental phenomena, and would help show that they are identical with physical phenomena. Churchland’s argument, like Rorty’s, depends on a special way of defining the mental, which we need not adopt. Indeed, it is likely that any argument for eliminative materialism will require some such definition, without which the argument would instead support the identity theory.

Despite initial appearances, it has been held that the distinctive properties of sensations are neutral as between being mental and physical: in a term borrowed from the English philosopher and classicist Gilbert Ryle (1900-76), they are ‘topic neutral’. My having a sensation of red consists in my being in a state that is similar, in respects that we need not specify, to something that occurs in me when I am in the presence of certain stimuli. Because the respect of similarity is not specified, the property is neither distinctively mental nor distinctively physical. But everything is similar to everything else in some respect or other. So leaving the respect of similarity unspecified makes this account too weak to capture the distinguishing properties of sensations.

A more sophisticated reply to the difficulty about mental properties is due independently to the Australian philosopher David Malet Armstrong (1926- ) and the American philosopher David Lewis (1941-2001), who argued that for a state to be a particular sort of intentional state or sensation is for that state to bear characteristic causal relations to other particular occurrences. The properties in virtue of which we identify states as thoughts or sensations will still be neutral as between being mental and physical, since anything can bear a causal relation to anything else. But causal connections have a better chance than similarity in some unspecified respect of capturing the distinguishing properties of sensations and thoughts.

This causal theory is appealing, but it is misguided in its attempt to construe the distinctive properties of mental states as neutral between being mental and physical. To be neutral as regards being mental or physical is to be neither distinctively mental nor distinctively physical. But since thoughts and sensations are distinctively mental states, for a state to be a thought or a sensation is perforce for it to have some characteristically mental property. We inevitably lose the distinctively mental if we construe these properties as being neither mental nor physical.

Not only is the topic-neutral construal misguided: the problem it was designed to solve is equally so, for that problem stemmed from the idea that the mental must have some non-physical aspect, if not at the level of people or their mental states, then at the level of the distinctively mental properties of those states.

It should be mentioned, however, that properties can be more complicated than they first appear. In the sentence ‘Walter is married to Julie’, we are attributing to Walter the relational property of being married to Julie, unlike the simple property attributed by ‘Walter is bald’. Consider the sentence ‘Walter is bearded’. The word ‘Walter’ in this sentence is a bit of language, a name of some individual human being, and some would be tempted to confuse the word with what it names. Consider the expression ‘is bearded’: this too is a bit of language, philosophers call it a ‘predicate’, and it brings to our attention some property or feature which, if the sentence is true, is possessed by Walter. Understood in this way, a property is not itself linguistic, though it is expressed or conveyed by something that is, namely a predicate. We might say that a property is a real feature of the world, and that it should be contrasted just as sharply with any predicates we use to express it as the name ‘Walter’ is contrasted with the person himself.

Just what ontological status should be accorded to properties is controversial, and the controversy can be better understood through the ‘anomalous monism’ of the American philosopher Donald Herbert Davidson (1917-2003), who adopts a position that explicitly repudiates reductive physicalism yet purports to be a version of materialism. Davidson holds that although token mental events and states are identical to physical events and states, mental ‘types’, i.e., kinds and/or properties, are neither identical to, nor nomically coextensive with, physical types. His argument for this position relies largely on the contention that the correct assignment of mental and actional properties to a person is always a holistic matter, involving a global, temporally diachronic, ‘intentional interpretation’ of the person. But as many philosophers have in effect pointed out, accommodating the claims of materialism evidently requires more than just token mental/physical identities. Mentalistic explanation presupposes not merely that mental events are causes, but also that they have causal/explanatory relevance as mental, i.e., relevance insofar as they fall under mental kinds or types. The question is whether Davidson’s position, which denies that there are strict psychophysical or psychological laws, can accommodate the causal/explanatory relevance of the mental qua mental, or whether it collapses into ‘epiphenomenalism’ with respect to mental properties.

But the idea that the mental is in some respect non-physical cannot be assumed without argument. Plainly, the distinctively mental properties of mental states are unlike any other properties we know about: only mental states have properties at all like the qualitative properties of sensations or the intentional properties of thoughts and desires. However, this does not show that mental properties are not physical properties, for not all physical properties are like the standard ones: mental properties might still be special kinds of physical properties. It begs the question to assume otherwise. The doctrine that mental properties are non-physical is simply an expression of the Cartesian doctrine that the mental is automatically non-physical.

It is sometimes held that properties should count as physical properties only if they can be defined using the terms of physics. This is far too restrictive. Nobody would hold that to reduce biology to physics, for example, we must define all biological properties using only terms that occur in physics. And even putting ‘reduction’ aside, if certain biological properties could not be so defined, that would not mean that those properties were in any way non-physical. The sense of ‘physical’ that is relevant here must be broad enough to include not only biological properties, but also most common-sense, macroscopic properties. Bodily states are uncontroversially physical in the relevant way. So we can recast the identity theory as asserting that mental states are identical with bodily states.

In the course of reaching conclusions about the origin and limits of knowledge, Locke had occasion to concern himself with topics which are of philosophical interest in themselves. One of these is the question of identity, which includes, more specifically, the question of personal identity: what are the criteria by which a person at one time is numerically the same person as a person encountered at another time? Locke points out that when we ask whether ‘this is what was here before’, it matters what kind of thing ‘this’ is meant to be. If ‘this’ is meant as a mass of matter, then it is what was there before so long as it consists of the same material particles; but if it is meant as a living body, then its consisting of the same particles does not matter and the case is different: ‘A colt grown up to a horse, sometimes fat, sometimes lean, is all the while the same horse, though . . . there may be a manifest change of the parts.’ So, when we think about personal identity, we need to be clear about a distinction between two things which ‘the ordinary way of speaking runs together’: the idea of ‘man’ and the idea of ‘person’. As with any other animal, the identity of a man consists ‘in nothing but a participation of the same continued life, by constantly fleeting particles of matter, in succession vitally united to the same organized body’. The idea of a person, however, is not that of a living body of a certain kind. A person is a thinking, intelligent being, with reason and reflection, and such a being ‘will be the same self as far as the same consciousness can extend to actions past or to come’. Locke is at pains to argue that this continuity of self-consciousness does not necessarily involve the continuity of some immaterial substance, in the way that Descartes had held. For all we know, says Locke, consciousness and thought may be powers which can be possessed by ‘systems of matter fitly disposed’; and even if this is not so, the question of the identity of a person is not the same as the question of the identity of an ‘immaterial substance’. For just as the identity of a horse can be preserved through changes of matter, depending not on the identity of a continued material substance but on the unity of one continued life, so the identity of a person does not depend on the continuity of an immaterial substance. The unity of one continued consciousness does not depend on its being ‘annexed only to one individual substance, [and not] . . . continued in a succession of several substances’. For Locke, then, personal identity consists in an identity of consciousness, and not in the identity of some substance whose essence it is to be conscious.

In approaching the question of causal mechanisms versus connections of meaning, it will help to take a historical route, and to focus on the terms in which analytical philosophers of mind began to discuss psychoanalytic explanation seriously. These terms were provided by the long-standing and still unconcluded debate over cause and meaning in psychoanalysis.

It is not hard to see why psychoanalysis should be viewed in terms of cause and meaning. On the one hand, Freud's theories introduce a panoply of concepts which appear to characterize mental processes as mechanical and non-meaningful. These include Freud's neurological model of the mind, as outlined in his 'Project for a Scientific Psychology'; more broadly, his 'economic' description of the mental as having properties of force or energy, e.g., as 'cathecting' objects; and his account of the mechanism of repression. It would seem, then, that psychoanalytic explanation employs terms logically at variance with those of ordinary, common-sense psychology, where mechanisms do not play a central role. But on the other hand, and equally striking, there is the fact that psychoanalysis proceeds through interpretation and engages in a relentless search for meaningful connections in mental life ~ something that even a superficial examination of The Interpretation of Dreams, or The Psychopathology of Everyday Life, cannot fail to impress upon one. Psychoanalytic interpretation adduces meaningful connections between disparate and often apparently dissociated mental and behavioural phenomena, directed by the goal of 'thematic coherence': that of giving mental life the sort of unity that we find in a work of art or a cogent narrative. In this respect, psychoanalysis would seem to adopt as its central plank the most salient feature of ordinary psychology, its insistence on relating actions to the reasons for them through contentful characterizations of each that make their connection seem rational, or intelligible: a goal that seems remote from anything found in the physical sciences.

The application to psychoanalysis of the perspective afforded by the cause-meaning debate can also be seen as a natural consequence of another factor, namely the semi-paradoxical nature of psychoanalysis' explananda. With respect to all irrational phenomena, something like a paradox arises. Irrationality involves a failure of rational connectedness and hence of meaningfulness; and so, if it is to have an explanation of any kind, relations that are non-meaningful because merely causal appear to be needed. And yet, as observed above, it would seem that, in offering explanations for irrationality ~ plugging the 'gaps' in consciousness ~ what psychoanalytic explanation hinges on is precisely the postulation of further, though non-apparent, connections of meaning.

For these two reasons, then ~ the logical heterogeneity of its explanations and the ambiguous status of its explananda ~ it may seem that an examination in terms of the concepts of cause and meaning will provide the key to a philosophical elucidation of psychoanalysis. The possible views of psychoanalytic explanation that may result from such an examination can be arranged along two dimensions. (1) Psychoanalytic explanation may be viewed, after reconstruction, as either causal and non-meaningful, or meaningful and non-causal, or as comprising both meaningful and causal elements in various combinations. (2) Psychoanalytic explanation may then be viewed, on each of these reconstructions, as either licensed or invalidated, depending on one's view of the logical nature of psychology.

So, for instance, some philosophical discussions infer that psychoanalytic explanation is void, simply on the grounds that it is committed to causality in psychology. On another, opposed view, it is the virtue of psychoanalytic explanation that it imputes causal relations, since only causal relations can be relevant to explaining the failures of meaningful psychological connections. On yet another view, it is psychoanalysis' commitment to meaning which is its great fault: it is held that the stories that psychoanalysis tries to tell do not really, on examination, explain successfully. And so on.

It is fair to say that the debates between these various positions fail to establish anything definite about psychoanalytic explanation. There are two reasons for this. First, there are several different strands in Freud's writings, each of which may be drawn on, apparently conclusively, in support of each alternative reconstruction. Secondly, preoccupation with a wholly general problem in the philosophy of mind, that of cause and meaning, distracts attention from the distinguishing features of psychoanalytic explanation. At this point, in order to prepare the way for a plausible reconstruction of psychoanalytic explanation, it is appropriate to take a step back and take a fresh look at the cause-meaning issue in the philosophy of psychoanalysis.

Suppose, first, that some sort of cause-meaning compatibilism ~ such as that of the American philosopher Donald Davidson (1917-2003) ~ holds for ordinary psychology. On this view, psychological explanation requires some sort of parallelism of causal and meaningful connections, grounded in the idea that psychological properties play causal roles determined by their content. Nothing in psychoanalytic explanation is inconsistent with this picture: after his abandonment of the early 'Project', Freud consistently viewed psychology as autonomous relative to neurophysiology, and at the same time as congruent with a broadly naturalistic world-view. 'Naturalism' is often used interchangeably with 'physicalism' and 'materialism', though each of these hints at a more specific doctrine. Thus 'physicalism' suggests that, among the natural sciences, there is something especially fundamental about physics, while 'materialism' has connotations going back to eighteenth- and nineteenth-century views of the world as essentially made of material particles whose behaviour is fundamental for explaining everything else. 'Naturalism' with respect to some realm, by contrast, is the view that everything that exists in that realm, and all the events that take place in it, are empirically accessible features of the world. Sometimes naturalism is taken to mean that the realm in question can in principle be understood by appeal to the laws and theories of the natural sciences; but one must be careful here, since naturalism does not by itself imply anything about reduction. Historically, 'natural' contrasts with 'supernatural'; but in the context of contemporary philosophy of mind, where debate centres on the possibility of explaining mental phenomena as part of the natural order, it is the non-natural rather than the supernatural that is the contrasting notion. The naturalist holds that mental phenomena can be so explained, while the opponent of naturalism thinks otherwise ~ though opposition to naturalism is not intended to commit one to anything supernatural. Nor should one take naturalism about a realm as committing one to any sort of reductive explanation of that realm, whereas there are such commitments in the use of 'physicalism' and 'materialism'.

If psychoanalytic explanation gives the impression that it imputes bare, meaning-free causality, this results from attending to only half the story, and from misunderstanding what psychoanalysis means when it talks of psychological mechanisms. The economic descriptions of mental processes that psychoanalysis provides are never replacements for, but themselves always presuppose, characterizations of mental processes in terms of meaning. Mechanisms in the psychoanalytic context are simply processes whose operation cannot be reconstructed as instances of rational functioning (they are what we might by preference call mental activities, by contrast with actions). Psychoanalytic explanation's postulation of mechanisms should not therefore be regarded as a regrettable and expungeable incursion of scientism into Freud's thought, as is often claimed.

Suppose, alternatively, that hermeneuticists such as Habermas ~ who follow Dilthey in viewing psychology as an interpretative practice to which the concepts of the physical sciences are alien ~ are correct in thinking that connections of meaning are misrepresented through being described as causal. Again, this does not impact negatively on psychoanalytic explanation since, as just argued, psychoanalytic explanations nowhere impute meaning-free causation. Nothing is lost for psychoanalytic explanation if causation is excised from the psychological picture.

The conclusion must be that psychoanalytic explanation is at bottom indifferent to the general meaning-cause issue. The core of psychoanalysis consists in its tracing of meaningful connections, with no greater or lesser commitment to causality than is involved in ordinary psychology. (This helps to set the stage ~ pending appropriate clinical validation ~ for psychoanalysis to claim as much truth for its explanations as ordinary psychology.) The true key to psychoanalytic explanation is rather its attribution of special kinds of mental states, not recognized in ordinary psychology, whose relations to one another do not have the form of patterns of inference or practical reasoning.

In the light of this, it is easy to understand why both compatibilists and hermeneuticists assert that their own view of psychology is uniquely consistent with psychoanalytic explanation. Compatibilists are right to think that, in order to provide for psychoanalytic explanation, it is necessary to allow mental connections that are unlike the connections of reasons to the actions that they rationalize, or to the beliefs that they support; and that, in outlining such connections, psychoanalytic explanation must outstrip the resources of ordinary psychology, which does attempt to force as much as possible into the mould of practical reasoning. Hermeneuticists, for their part, are right to think that it would be futile to postulate connections which were nominally psychological but not characterized in terms of meaning, and that psychoanalytic explanation does not respond to the 'paradox' of irrationality by abandoning the search for meaningful connections.

Compatibilists are, however, wrong to think that non-rational but meaningful connections require the psychological order to be conceived as a causal order. The hermeneuticist is free to postulate psychological connections that are determined by meaning but not by rationality: it is coherent to suppose that there are connections of meaning that are not bona fide rational connections, without these being causal. Meaningfulness is a broader concept than rationality. (Sometimes this thought has been expressed, though not helpfully, by saying that Freud discovered the existence of 'neurotic rationality'.) Although an assumption of rationality is doubtless necessary to make sense of behaviour in general, it does not need to be brought into play in making sense of each instance of behaviour. Hermeneuticists, in turn, are wrong to think that the compatibilist view of psychology as causal signals a confusion of meaning with causality, or that it must lead the compatibilist to deny that there is any qualitative difference between rational and irrational psychological connections.

All the same, the last two decades have seen extraordinary changes in psychology. 'Cognitive psychology', which focuses on higher mental processes like reasoning, decision making, problem solving and language processing, has become perhaps the dominant paradigm among experimental psychologists, while behaviouristically oriented approaches have gradually fallen into disfavour.

The relationship between physical behaviour and agential behaviour is controversial. On some views, all 'actions' are identical to physical changes in the subject's body; however, some kinds of physical behaviour, such as 'reflexes', are uncontroversially not kinds of agential behaviour. On other views, a subject's action must involve some physical change, but is not identical to it.

Both physical and agential behaviours could be understood in the widest sense. Anything a person can do ~ even calculating in his head, for instance ~ could be regarded as agential behaviour. Likewise, any physical change in a person’s body ~ even the firing of a certain neuron, for instance ~ could be regarded as physical behaviour.

Of course, to claim that the mind is ‘nothing over and above’ such-and-such kinds of behaviour, construed as either physical or agential behaviour in the widest sense, is not necessarily to be a behaviourist. The theory that the mind is a series of volitional acts ~ a view close to the idealist position of George Berkeley (1685-1753) ~ and the theory that the mind is a certain configuration of neuronal events, while both controversial, are not forms of behaviourism.

Before turning to anomalous monism, it will help to have the notion of 'monism' itself in view: the view that there is only one kind of substance underlying all objects, changes and processes. The term is generally used in contrast to 'dualism', though one can also think of monism as denying what might be called 'pluralism' ~ a view, often associated with Aristotle, which claims that there are a number of substances. Against the background of modern science, monism is usually understood to be a form of 'materialism' or 'physicalism': that is, the fundamental properties of matter and energy as described by physics are counted the only properties there are.

The position in the philosophy of mind known as 'anomalous monism' has its historical origins in the German philosopher and founder of critical philosophy Immanuel Kant (1724-1804), but is universally identified with the American philosopher Donald Davidson (1917-2003), and it was he who coined the term. Davidson has maintained that one can be a monist ~ indeed, a physicalist ~ about the fundamental nature of things and events, while also asserting that there can be no full 'reduction' of the mental to the physical. (This is sometimes expressed by saying that there can be an ontological, though not a conceptual, reduction.) Davidson thinks that complete knowledge of the brain and of any related neurophysiological systems that support the mind's activities would not in itself be knowledge of such things as belief, desire, experience and the rest of our mentalistic notions. This is not because he thinks that the mind is somehow a separate kind of existence: anomalous monism is, after all, monism. Rather, it is because the nature of mental phenomena rules out a priori that there will be law-like regularities connecting mental phenomena and physical events in the brain; and, without such laws, there is no real hope of explaining the mental via the physical structure of the brain.

All in all, one central goal of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies explored in the sciences. Another common goal is to construct philosophically illuminating analyses or explications of central theoretical concepts involved in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on its crucial concepts.

If concepts of the simple (observational) sort were internal physical structures that had, in this sense, an information-carrying function ~ a function they acquired during learning ~ then instances of these structure types would have a content that (like a belief) could be either true or false. Note that an information-carrying structure carries all kinds of information: if, for example, it carries the information 'A', it must also carry the information 'A or B'. Conceivably, the process of learning is a process in which a single piece of this information is selected for special treatment, thereby becoming the semantic content ~ the meaning ~ of subsequent tokens of that structure type. Just as we conventionally give artefacts and instruments information-providing functions, thereby making their pointer readings, flashing lights, and so forth representations of the conditions in the world in which we are interested, so learning converts neural states that carry information ~ 'pointer readings' in the head, so to speak ~ into structures that have the function of providing some vital piece of the information they carry. When this process occurs in the ordinary course of learning, the functions in question develop naturally. They do not, as do the functions of instruments and artefacts, depend on the intentions, beliefs, and attitudes of users. We do not give brain structures these functions; they get them by themselves, in some natural way, either (in the case of the senses) from their selectional history or (in the case of thought) from individual learning. The result is a network of internal representations that have (in different ways) the power to represent, in experience and belief.

It is important to understand that this approach to 'thought' and 'belief', the approach that conceives of them as forms of internal representation, is not a version of 'functionalism' ~ at least, not if this widely held theory is understood, as it often is, as a theory that identifies mental properties with functional properties. For functional properties have to do with the way something does, in fact, behave, with its syndrome of typical causes and effects. An informational model of belief, in order to account for misrepresentation, needs something more than a structure that provides information. It needs something having that as its function: something that is supposed to provide information. As Sober (1985) comments, for an account of the mind we need functionalism with the function ~ the 'teleological' ~ put back into it.

Philosophers need not (and typically do not) assume that there is anything wrong with the science they are studying. Their goal is simply to provide accounts of the theories, concepts and explanatory strategies that scientists are using ~ accounts that are more explicit, systematic and philosophically sophisticated than the often rather rough-and-ready accounts offered by the scientists themselves.

Cognitive psychology is in many ways a curious and puzzling science. Many of the theories put forward by cognitive psychologists make use of a family of 'intentional' concepts ~ like believing that 'p', desiring that 'q', and representing 'r' ~ which do not appear in the physical or biological sciences, and these intentional concepts play a crucial role in many of the explanations offered by these theories.

In discussions of intentionality, the paradigm cases discussed are usually beliefs, or sometimes beliefs and desires; however, the biologically most basic forms of intentionality are in perception and in intentional action. These also have certain formal features which are not common to beliefs and desires. Consider a case of perceptual experience. Suppose that I see my hand in front of my face. What are the conditions of satisfaction? First, the perceptual experience of the hand in front of my face has as its condition of satisfaction that there be a hand in front of my face. Thus far, the condition of satisfaction is the same as that of the belief that there is a hand in front of my face. But with perceptual experience there is this difference: in order that the intentional content be satisfied, the fact that there is a hand in front of my face must cause the very experience whose intentional content is that there is a hand in front of my face. This has the consequence that perception has a special kind of condition of satisfaction that we might describe as 'causally self-referential'. The full conditions of satisfaction of the perceptual experience are, first, that there be a hand in front of my face, and second, that the fact that there is a hand in front of my face causes the very experience of whose conditions of satisfaction it forms a part. We can represent this in the form S(p), thus:

Visual experience (that there is a hand in front of my face, and the fact that there is a hand in front of my face is causing this very experience.)

Furthermore, visual experiences have a kind of conscious immediacy not characteristic of beliefs and desires. A person can literally be said to have beliefs and desires while sound asleep. But one can only have visual experiences of a non-pathological kind when one is fully awake and conscious, because the visual experiences are themselves forms of consciousness.

People's decisions and actions are explained by appeal to their beliefs and desires. Perceptual processes, similarly, are said to result in mental states which represent (or sometimes misrepresent) one or another aspect of the cognitive agent's environment. Other theorists have offered analogous accounts, differing in detail; but perhaps the most crucial idea in all of this is the one about representations. There is perhaps a sense in which what happens at, say, the level of the retina constitutes, as a result of the processes occurring in stimulation, some kind of representation of what produces that stimulation, and thus some kind of representation of the objects of perception. Or so it may seem, if one attempts to describe the relation between the structure and characteristics of the object of perception and the structure and nature of the retinal processes. One might say that the nature of that relation is such as to provide information about the part of the world perceived, in the sense of 'information' presupposed when one says that the rings in the cross-section of a tree's trunk provide information about its age: there is an appropriate causal relation between the two things which makes it impossible for the correlation to be a matter of chance. Subsequent processing can then be thought of as carried out on what is provided in the representations in question.

However, if there are such representations, they are not representations for the perceiver. It is the thought that perception involves representations of that kind which produced the old, and now largely discredited, philosophical theories of perception which suggested that perception is a matter, primarily, of an apprehension of mental states of some kind, e.g., sense-data, which are representatives of perceptual objects, either by being caused by them or by being in some way constitutive of them. And if it is said that the idea of information so invoked indicates that there is a sense in which the processes of stimulation can be said to have content ~ a non-conceptual content, distinct from the content provided by the subsumption of what is perceived under concepts ~ it must be emphasised that that content is not the perceiver's. What the information-processing story provides is, at best, a more adequate categorization than was previously available of the causal processes involved. That may be important, but more should not be claimed for it than there is. If in a given case of perception one can be said to have an experience as of an object of a certain shape and kind related to another object, it is because there is presupposed in that perception the possession of concepts of objects and, more particularly, a concept of space and of how objects occupy space.

Nonetheless, although cognitive psychologists occasionally say a bit about the nature of intentional concepts and the explanations that exploit them, their comments are rarely systematic or philosophically illuminating. Thus it is hardly surprising that many philosophers have seen cognitive psychology as fertile ground for the sort of careful descriptive work that is done in the philosophy of biology and the philosophy of physics. The American philosopher of mind Jerry Alan Fodor's (1935- ) The Language of Thought (1975) was a pioneering study in the genre. Philosophers have also done important and widely discussed work in what might be called the 'descriptive philosophy of cognitive psychology'.

These philosophical accounts of cognitive theories and the concepts they invoke are generally much more explicit than the accounts provided by psychologists, and they inevitably smooth over some of the rough edges of scientists' actual practice. But if the account they give of cognitive theories diverges significantly from the theories that psychologists actually produce, then the philosophers have simply got it wrong. There is, however, a very different way in which philosophers have approached cognitive psychology. Rather than merely trying to characterize what cognitive psychology is actually doing, some philosophers try to say what it should and should not be doing. Their goal is not to explicate scientific practice, but to criticize and improve it. The most common target of this critical approach is the use of intentional concepts in cognitive psychology. Intentional notions have been criticized on various grounds, the two most prominent being that they fail to supervene on the physiology of the cognitive agent, and that they cannot be 'naturalized'.

Perhaps the easiest way to make the point about supervenience is to use a thought experiment of the sort originally proposed by the American philosopher Hilary Putnam (1926- ). Suppose that in some distant corner of the universe there is a planet, Twin Earth, which is very similar to our own planet. On Twin Earth there is a person who is an atom-for-atom replica of J.F. Kennedy. Now President J.F. Kennedy, who lives on Earth, believes that the Rev. Martin Luther King Jr. was born in Tennessee. If you asked him, 'Was the Rev. Martin Luther King Jr. born in Tennessee?', in all probability he would answer yes or no. Twin-Kennedy would respond in the same way, but not because he has a belief about our Rev. Martin Luther King Jr. His beliefs are about Twin-Luther; and if Twin-Luther was certainly not born in Tennessee, then Kennedy's belief is true while Twin-Kennedy's is false. What all this is supposed to show is that two people can share all their physiological properties without sharing all their intentional properties. To turn this into a problem for cognitive psychology, two additional premises are needed. The first is that cognitive psychology attempts to explain behaviour by appeal to people's intentional properties. The second is that psychological explanations should not appeal to properties that fail to supervene on an organism's physiology. (Variations on this theme can be found in Fodor (1987).)

The thesis that the mental supervenes on the physical ~ roughly, the claim that the mental character of a thing is wholly determined by its physical nature ~ has played a key role in the formulation of some influential positions on the 'mind-body' problem, in particular versions of non-reductive 'physicalism'. It has figured in arguments about the mental, and it has been used to devise solutions to some central problems about the mind ~ for example, the problem of mental causation.

The idea of supervenience first appeared in moral philosophy: there could be no difference in a moral respect without a difference in some descriptive, or non-moral, respect. Evidently, the idea generalizes so as to apply to any two sets of properties (to secure greater generality it is more convenient to speak of properties than of predicates). The American philosopher Donald Davidson (1970) was perhaps the first to introduce supervenience into discussions of the mind-body problem, when he wrote: ' . . . mental characteristics are in some sense dependent, or supervenient, on physical characteristics. Such supervenience might be taken to mean that there cannot be two events alike in all physical respects but differing in some mental respect, or that an object cannot alter in some mental respect without altering in some physical respect.' Following the British philosopher George Edward Moore (1873-1958) and the English moral philosopher Richard Mervyn Hare (1919-2003), from whom he avowedly borrowed the idea of supervenience, Davidson went on to assert that supervenience in this sense is consistent with the irreducibility of supervenient properties to their 'subvenient', or 'base', properties: 'Dependence or supervenience of this kind does not entail reducibility through law or definition . . . '

Thus, three ideas have come to be closely associated with supervenience: (1) property covariation (if two things are indiscernible in base properties, they must be indiscernible in supervenient properties); (2) dependence (supervenient properties are dependent on, or determined by, their subvenient bases); and (3) non-reducibility (the property covariation and dependence involved in supervenience can obtain even if supervenient properties are not reducible to their base properties).
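For concreteness, the covariation idea in (1) admits of a standard formal statement. The following is a sketch of the familiar 'strong supervenience' schema associated with Jaegwon Kim, offered here only as an illustration, with the notation introduced for that purpose. For a family A of supervenient properties and a family B of base properties:

Necessarily, (x)(F∈A) (If Fx, then there is a G∈B such that Gx and, necessarily, (y)(If Gy, then Fy).)

Read: anything that has a supervenient property F has some base property G such that, as a matter of necessity, whatever has G has F. 'Global' supervenience, by contrast, compares whole worlds: any two worlds indiscernible in their B-properties are indiscernible in their A-properties.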

Nonetheless, supervenience of the mental ~ in the form of strong supervenience or, at least, global supervenience ~ is arguably a minimal commitment for physicalism. But can we think of the thesis of mind-body supervenience itself as a theory of the mind-body relation ~ that is, as a solution to the mind-body problem?

It would seem that any serious theory addressing the mind-body problem must say something illuminating about the nature of psychophysical dependence, or about why, contrary to common belief, there is no such dependence. Consider the moral case: the ethical intuitionist will say that the supervenience, and the dependence, of the moral on the descriptive is a brute fact discerned through moral intuition, while the prescriptivist will attribute the supervenience to some form of consistency requirement on the language of evaluation and prescription. And distinct from both of these is mereological supervenience, namely the supervenience of the properties of a whole on the properties and relations of its parts. What all this shows is that there is no single type of dependence relation common to all cases of supervenience: supervenience holds in different cases for different reasons, and does not represent a type of dependence that can be put alongside causal dependence, meaning dependence, mereological dependence, and so forth.

There seems to be a promising strategy for turning the supervenience thesis into a more substantive theory of mind, and it is to explicate mind-body supervenience as a special case of mereological supervenience ~ that is, the dependence of the properties of a whole on the properties and relations characterizing its proper parts. Mereological dependence does seem to be a special form of dependence that is metaphysically sui generis and highly important. If one takes this approach, one would have to explain psychological properties as macroproperties of a whole organism that covary, in appropriate ways, with its microproperties, i.e., with the way its constituent organs, tissues, and so forth are organized and function. This more specific supervenience thesis may be a serious theory of the mind-body relation that can compete with the classic options in the field.

On this topic, as with many topics in philosophy, there is a distinction to be made between (1) certain vague, partially inchoate, pre-theoretic ideas and beliefs about the matter at hand, and (2) certain more precise, more explicit, doctrines or theses that are taken to articulate or explicate those pre-theoretic ideas and beliefs. There are various potential ways of precisifying our pre-theoretic conception of a physicalist or materialist account of mentality, and the question of how best to do so is itself a matter for ongoing, dialectic, philosophical inquiry.

The view concerns, in the first instance at least, the question of how we, as ordinary human beings, in fact go about ascribing beliefs to one another. The idea is that we do this on the basis of our knowledge of a common-sense theory of psychology. The theory is not held to consist in a collection of grandmotherly sayings, such as 'once bitten, twice shy'. Rather, it consists in a body of generalizations relating psychological states to each other, to input from the environment, and to actions. Examples include the following:

(1) (x)(p) (If x fears that p, then x desires that not-p.)

(2) (x)(p) (If x hopes that p and x discovers that p, then x is pleased that p.)

(3) (x)(p)(q) (If x believes that p and x believes that if p then q, then, barring confusion, distraction and so forth, x believes that q.)

(4) (x)(p)(q) (If x desires that p and x believes that if q then p, and x is able to bring it about that q, then, barring conflicting desires or preferred strategies, x brings it about that q.)

All of these generalizations should be understood as containing ceteris paribus clauses. (1), for example, applies most of the time, but not invariably. Adventurous types often enjoy the adrenal thrill produced by fear, and this leads them, on occasion, to desire the very state of affairs that frightens them. Analogously with (3): a subject who believes that 'p' and believes that if 'p' then 'q' would typically infer that 'q'. But certain atypical circumstances may intervene: subjects may become confused or distracted, or they may find the prospect of 'q' so awful that they dare not allow themselves to believe it. The ceteris paribus nature of these generalizations is not usually considered to be problematic, since atypical circumstances are, of course, atypical, and the generalizations are applicable most of the time.

We apply this psychological theory to make inferences about people's beliefs, desires and so forth. If, for example, we know that Julie believes that if she is to be at the airport at four, then she should get a taxi at half past two, and she believes that she is to be at the airport at four, then we will predict, using (3), that Julie will infer that she should get a taxi at half past two.
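Spelled out, the prediction is just an instantiation of generalization (3), substituting Julie's two beliefs for the schematic letters (the labels 'p' and 'q' are used here purely for illustration):

(If Julie believes that p, and Julie believes that if p then q, then, barring confusion, distraction and so forth, Julie believes that q.)

where p = 'Julie is to be at the airport at four' and q = 'Julie should get a taxi at half past two'.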

The Theory-Theory, as it is called, is an empirical theory addressing the question of our actual knowledge of beliefs. Taken in its purest form, it addresses both first- and third-person knowledge: we know about our own beliefs and those of others in the same way, by application of common-sense psychological theory in both cases. However, it is not very plausible to hold that we always ~ or, indeed, usually ~ know our own beliefs by way of theoretical inference. Since it is an empirical theory concerning one of our cognitive abilities, the Theory-Theory is open to psychological scrutiny. Various issues arise concerning the hypothesized common-sense psychological theory: for example, we need to know whether it is known consciously or unconsciously. Research has revealed that three-year-old children are reasonably good at inferring the beliefs of others on the basis of actions, and at predicting actions on the basis of beliefs that others are known to possess. However, there is one area in which three-year-olds' psychological reasoning differs markedly from adults'. Tests of one sort, known as 'False Belief Tests', reveal largely consistent results. Three-year-old subjects are witness to the following sort of scenario: a child, Billy, sees his mother place some biscuits in a biscuit tin. Billy then goes out to play, and, unseen by him, his mother removes the biscuits from the tin and places them in a jar, which is then hidden in a cupboard. When asked, 'Where will Billy look for the biscuits?', the majority of three-year-olds answer that Billy will look in the jar in the cupboard ~ where the biscuits actually are, rather than where Billy saw them being placed. On being asked, 'Where does Billy think the biscuits are?', they again tend to answer 'in the jar in the cupboard', rather than 'in the tin'. Three-year-olds thus appear to have some difficulty attributing false beliefs to others in cases in which it would be natural for adults to do so. However, it does not appear that three-year-olds lack the idea of false belief in general, nor do they struggle with attributing false beliefs in every other kind of situation. For example, they have little trouble distinguishing between dreams and play, on the one hand, and true beliefs or claims on the other. By the age of four and a half years, most children pass the False Belief Tests fairly consistently. There is as yet no generally accepted theory of why three-year-olds fare so badly with the False Belief Tests, nor of what their performance reveals about their conception of belief.

Recently some philosophers and psychologists have put forward what they take to be an alternative to the Theory-Theory: the Simulation Theory, discussed further below, on which we work out what others believe by simulating their situation in our own imagination. However, the challenge does not end there: we need also to consider the vital element of making appropriate adjustments for differences between one's own psychological states and those of the other. And it is implausible to think that simulation alone will achieve this in every case.

The behavioural manifestations of beliefs, desires, and intentions are enormously varied. When we move away from perceptual beliefs, the links with behaviour become intricate and indirect: the expectations I form on the basis of a particular belief reflect the influence of numerous other opinions; my actions are shaped by the totality of my preferences and all those opinions which have a bearing upon them. The causal processes that produce my beliefs reflect my opinions about those processes, about their reliability and the interference to which they are subject. Thus behaviour justifies the ascription of a particular belief only by helping to warrant a more inclusive interpretation of the overall cognitive position of the individual in question. Psychological description, like translation, is a 'holistic' business. And once this is taken into account, it is all the less likely that a common physical trait will be found which grounds all instances of the same belief. The ways in which all of our propositional attitudes interact in the production of behaviour reinforce the anomalous character of the mental, and render any sort of reduction of the mental to the physical impossible. Interpretation, so understood, is not meant as a practical procedure; but the generalization from translation to interpretation has made this notion central to accounts of the mind.

The Simulation Theory and the Theory-Theory are two, as many think competing, views of the nature of our common-sense, propositional-attitude explanations of action. For example, when we say that our neighbour cut down his apple tree because he believed that it was ruining his patio and did not want it ruined, we are offering a typically common-sense explanation of his action in terms of his beliefs and desires. But, even though wholly familiar, it is not clear what kind of explanation is at issue. On one view, the attribution of beliefs and desires is the application to actions of a theory which, in its informal way, functions very much like theoretical explanation in science. This is known as the 'theory-theory' of everyday psychological explanation. In contrast, it has been argued that our propositional-attitude attributions are not theoretical claims so much as reports of a kind of 'simulation'. On such a 'simulation theory' of the matter, we decide what our neighbour will do (and thereby why he did what he did) by imagining ourselves in his position and deciding what we would do.

The Simulation Theorist should probably concede that simulations need to be backed up by independent means of discovering the psychological states of others. But they need not concede that these independent means take the form of a theory. Rather, they might suggest that we can get by with some rules of thumb, or with straightforward inductive reasoning of a general kind.

A second and related difficulty with the Simulation Theory concerns our capacity to attribute beliefs that are too alien to be easily simulated: beliefs of small children, or of psychotics, or bizarre beliefs deeply suppressed in the unconscious. The small child refuses to sleep in the dark: he is afraid that the Wicked Witch will steal him away. No matter how many adjustments we make, it may be hard for mature adults to get their own psychological processes, even in pretend play, to mimic the production of such beliefs. For the Theory-Theory, alien beliefs are not particularly problematic: so long as they fit the basic generalizations of the theory, they will be inferable from the evidence. Thus the Theory-Theory can account better than the Simulation Theory for our ability to discover bizarre and alien beliefs.

The Theory-Theory and the Simulation Theory are not the only proposals about knowledge of belief. A third view has its origins in the Austrian philosopher Ludwig Wittgenstein (1889-1951). On this view, both the Theory-Theory and the Simulation Theory attribute too much psychologizing to our common-sense psychology. Knowledge of other minds is, according to this alternative picture, more observational in nature. Beliefs, desires, and feelings are made manifest to us in the speech and other actions of those with whom we share a language and a way of life. When someone says 'It's going to rain' and takes his umbrella from his bag, it is immediately clear to us that he believes it is going to rain. In order to know this we neither theorize nor simulate: we just perceive. Of course, this is not straightforward visual perception of the sort that we use to see the umbrella; but it is like visual perception in that it provides immediate and non-inferential awareness of its objects. We might call this the 'Observational Theory'.

The Observational Theory does not seem to accord very well with the fact that we frequently do have to indulge in a fair amount of psychologizing to find out what others believe. It is clear that any given action might be the upshot of any number of different psychological attitudes, and this applies even in the simplest cases. A man might say 'It's going to rain' and take out his umbrella, for example, not because he believes that it is going to rain, but because his friend is suspended from a dark balloon near a beehive, with the intention of stealing honey: the idea is to make the bees believe that it is going to rain, and therefore take the balloon to be a dark cloud, pay no attention to it, and so fail to notice the dangling friend. Given this sort of possibility, the observer would surely be rash to judge immediately that the agent believes that it is going to rain. Rather, one would need to determine ~ perhaps by theory, perhaps by simulation ~ which of the various clusters of mental states that might have led to the action actually did so. This would involve bringing in further knowledge of the agent, the background circumstances, and so forth. It is hard to see how the sort of complex mental process involved in this sort of psychological reflection could be assimilated to any kind of observation.

Attributions of intentionality that depend on optimality or rationality are interpretations of the phenomena ~ a 'heuristic overlay' (1969), describing an inescapably idealized 'real pattern'. Like such abstractions as centres of gravity and parallelograms of force, the beliefs and desires posited by the intentional stance have no independent and concrete existence; and since this is the case, there would be no deeper facts that could settle the issue if ~ most importantly ~ rival intentional interpretations arose that did equally well at rationalizing the history of behaviour of an entity. Here the thesis of Willard Van Orman Quine (1908-2000), the most influential American philosopher of the latter half of the twentieth century, on the indeterminacy of radical translation carries all the way through to a thesis of the indeterminacy of radical interpretation of mental states and processes.

The fact that cases of radical indeterminacy, though possible in principle, are vanishingly unlikely ever to confront us is apparently of little comfort: the idea is deeply counter-intuitive to many philosophers, who have hankered for more 'realistic' doctrines. There are two different strands of such 'realism' that this view attempts to undermine:

(1) Realism about the entities purportedly described by our everyday mentalistic discourse ~ what I have dubbed 'folk-psychology' ~ such as beliefs, desires, pains, the self.

(2) Realism about content itself ~ the idea that there have to be events or entities that really have intentionality (as opposed to events and entities that only behave as if they had intentionality).

The tenet indicated by (1) invites questions such as: what is a fatigue, and which bodily states or events are fatigues identical with? This is a confusion that calls for diplomacy, not philosophical discovery: the choice between an 'eliminative materialism' and an 'identity theory' of fatigues is not a matter of which 'ism' is right, but of which way of speaking is most apt to wean us off these misbegotten features of our conceptual scheme.

As regards tenet (2), the attack has been more indirect. One may view the demand for content realism as an instance of a common philosophical mistake: philosophers oftentimes manoeuvre themselves into a position from which they can see only two alternatives ~ infinite regress versus some sort of 'intrinsic' foundation, a prime mover of one sort or another. For instance, it has seemed obvious that for some things to be valuable as means, other things must be intrinsically valuable ~ ends in themselves ~ since otherwise we would be stuck with a vicious regress of things valuable only as means. Likewise, it has seemed obvious that although some intentionality is 'derived' (the 'aboutness' of the pencil marks composing a shopping list is derived from the intentions of the person whose list it is), unless some intentionality is 'original' and underived, there could be no derived intentionality.

There is always another alternative, namely, a finite regress that peters out without marked foundations or thresholds or essences. Here is an apparent paradox: every mammal has a mammal for a mother; but this implies an infinite genealogy of mammals, which cannot be the case. The solution is not to search for an essence of mammalhood that would permit us in principle to identify the Prime Mammal, but rather to tolerate a finite regress that connects mammals to their non-mammalian ancestors by a sequence that can only be partitioned arbitrarily. The reality of today's mammals is secure without foundations.

The best instance of this theme is the idea that the way to explain the miraculous-seeming powers of an intelligent intentional system is to decompose it into hierarchically structured teams of ever more stupid intentional systems, ultimately discharging all intelligence-debts in a fabric of stupid mechanisms. Lycan (1981) has called this view 'homuncular functionalism'. One may be tempted to ask: are the subpersonal components 'real' intentional systems? At what point in the diminution of prowess, as we descend to simple neurons, does 'real' intentionality disappear? Don't ask. The reasons for regarding an individual neuron (or a thermostat) as an intentional system are unimpressive, but not zero, and the security of our intentional attributions at the highest levels does not depend on identifying a lowest level of 'real' intentionality. Another exploitation of the same idea is found in Elbow Room (1984): at what point in evolutionary history did real reason-appreciators, real selves, make their appearance? Don't ask ~ for the same reason. Here is yet another, more fundamental version: at what point in the early days of evolution can we speak of genuine function, genuine selection-for, and not the mere fortuitous preservation of entities that happen to have some self-replicative capacity? Don't ask. Many of the more interesting and important features of our world have emerged, gradually, from a world that initially lacked them ~ function, intentionality, consciousness, morality, value ~ and it is a fool's errand to try to identify a first or most-simple instance of the 'real' thing. It is for the same reason a mistake to suppose that there must exist answers to all the questions our system of cognitive content attribution permits us to ask. Tom says he has an older brother in Toronto and that he is an only child. What does he really believe? Could he really believe that he had a brother if he also believed he was an only child? What is the 'real' content of his mental state? There is no reason to suppose there is a principled answer.

The most sweeping conclusion drawn from this theory of content is that the large and well-regarded literature on 'propositional attitudes' (especially the debates over wide versus narrow content) is largely a disciplinary artefact of no long-term importance whatever, except perhaps as history's most slowly unwinding unintended reductio ad absurdum. For the most part, the disagreements explored in that literature cannot even be given an initial expression unless one assumes the unsound fundamentality of strong realism about content, and its constant companion, the idea of a 'language of thought': a system of mental representation that is decomposable into elements rather like terms, and larger elements rather like sentences. The illusion that this is plausible, or even inevitable, is particularly fostered by philosophers' normal tactic of working from examples of 'believing-that-p' that focus attention on mental states that are directly or indirectly language-infected, such as believing that the shortest spy is a spy, or believing that snow is white. (Do polar bears believe that snow is white? In the way we do?) There are such states ~ in language-using human beings ~ but they are not exemplary or foundational states of belief, and we need a term for them; we might call them 'opinions'. Opinions play a large, perhaps even decisive, role in our concept of a person, but they are not paradigms of the sort of cognitive element to which one can assign content in the first instance. If one starts, as one should, with the cognitive states and events occurring in non-human animals, and uses these as the foundation on which to build theories of human cognition, the language-infected states are more readily seen to be derived, less directly implicated in the explanation of behaviour, and the chief but illicit source of plausibility of the doctrine of a language of thought. Postulating a language of thought is in any event a postponement of the central problems of content ascription, not a necessary first step.

We turn now to causal theories in epistemology, and to the questions of what makes a belief justified and what makes a true belief knowledge. It is natural to think that whether a belief deserves one of these appraisals depends on what causes the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some proposed causal criteria for knowledge and justification are worth taking under consideration.

Some causal theories of knowledge have it that a true belief that 'p' is knowledge just in case it has the right sort of causal connection to the fact that 'p'. Such a criterion can be applied only to cases where the fact that 'p' is of a sort that can enter into causal relations: this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization. Proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject's environment.

For example, the forthright Australian materialist David Malet Armstrong (1973) proposed that a belief of the form 'this (perceived) object is F' is (non-inferential) knowledge if and only if the belief is a completely reliable sign that the perceived object is 'F'; that is, the fact that the object is 'F' contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject 'x' and perceived object 'y', if 'x' has those properties and believes that 'y' is 'F', then 'y' is 'F'. Dretske (1981) offers a rather similar account, in terms of the belief's being caused by a signal received by the perceiver that carries the information that the object is 'F'.
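Armstrong's law-like requirement can be put schematically. The following rendering is a sketch, with 'H' introduced here merely as a placeholder for the relevant properties of the believer, and 'Bx(Fy)' read as 'x believes that y is F':

(x)(y) (If x has H and Bx(Fy), then Fy.)

That is, given the believer's properties, the laws of nature guarantee that the belief cannot occur unless it is true ~ which is what makes the belief a 'completely reliable sign'.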

This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, because it is compatible with the belief's being unjustified, and an unjustified belief cannot be knowledge. For example, suppose that your mechanisms for colour perception are working well, but you have been given good reason to think otherwise ~ to think, say, that things of another colour look brownish-tinted to you, and that brownish-tinted things look to you to be of another colour. If you fail to heed this reason you have for thinking that your colour perception is awry, and believe of a thing that looks brownish-tinted to you that it is brownish-tinted, your belief will fail to be justified, and will therefore fail to be knowledge, even though it is caused by the thing's being brownish-tinted in such a way as to be a completely reliable sign (or to carry the information) that the thing is brownish-tinted.

One could fend off this sort of counter-example by simply adding to the causal condition the requirement that the belief be justified. But this enriched condition would still be insufficient. Suppose, for example, that in an experiment you are given a drug that in nearly all people (but not in you, as it happens) causes the aforementioned aberration in colour perception. The experimenter tells you that you have taken such a drug, but then says, 'No, wait a minute, the pill you took was just a placebo'. Suppose further that this last thing the experimenter tells you is false. Her telling you it gives you justification for believing of a thing that looks brownish-tinted to you that it is brownish-tinted; but the fact about this justification that is unknown to you (that the experimenter's last statement was false) makes it the case that your true belief is not knowledge, even though it satisfies Armstrong's causal condition.

Goldman (1986) has proposed an importantly different sort of causal criterion, namely, that a true belief is knowledge if it is produced by a type of process that is both 'globally' and 'locally' reliable. It is globally reliable if its propensity to cause true beliefs is sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.

Goldman requires the global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge, because justification is required for knowledge. What he requires for knowledge, but not for justification, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false.
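As a rough schema (the formulation is introduced here for illustration, and is not Goldman's own wording), the two conditions can be summarized as follows:

Global reliability: the proportion of true beliefs among the beliefs produced by process type P is sufficiently high.

Local reliability: in every relevant counterfactual situation in which 'p' is false, P would not produce the belief that 'p'.

Global reliability thus concerns the process's overall track record, while local reliability concerns its discrimination in the specific circumstances at hand.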

The theory of relevant alternatives is best understood as an attempt to accommodate two opposing strands in our thinking about knowledge. The first is that knowledge is an absolute concept. On one interpretation, this means that the justification or evidence one must have in order to know a proposition 'p' must be sufficient to eliminate all the alternatives to 'p' (where an alternative to a proposition 'p' is a proposition incompatible with 'p').

Knowledge, on the relevant alternatives view, requires only the elimination of the relevant alternatives. So the view preserves both strands in our thinking about knowledge: Knowledge is an absolute concept, but because the absoluteness is relative to a standard, we can know many things.

The relevant alternatives account of knowledge can be motivated by noting that other concepts exhibit the same logical structure. Two examples are the concept ‘flat’ and the concept ‘empty’. Both appear to be absolute concepts ~ a space is empty only if it does not contain anything, and a surface is flat only if it does not have any bumps. However, the absolute character of these concepts is relative to a standard. In the case of ‘flat’, there is a standard for what counts as a bump, and in the case of ‘empty’, there is a standard for what counts as a thing. We would not deny that a table is flat because a microscope reveals irregularities in its surface. Nor would we deny that a warehouse is empty because it contains particles of dust. To be flat is to be free of any relevant bumps. To be empty is to be devoid of all relevant things. Analogously, the relevant alternatives theory says that to know a proposition is to have evidence that eliminates all relevant alternatives.

Some philosophers have argued that the relevant alternatives theory of knowledge entails the falsity of the principle that the set of propositions known by ‘S’ is closed under known (by ‘S’) entailment, although others have disputed this. The principle in question affirms the following conditional, the closure principle:

If ‘S’ knows ‘p’ and ‘S’ knows that ‘p’ entails ‘q’, then ‘S’ knows ‘q’.

According to the theory of relevant alternatives, we can know a proposition ‘p’ without knowing that some (non-relevant) alternative to ‘p’ is false. But since an alternative ‘h’ to ‘p’ is incompatible with ‘p’, ‘p’ will trivially entail not-h. So it will be possible to know some proposition without knowing another proposition trivially entailed by it. For example, we can know that we see a zebra without knowing that it is not the case that we see a cleverly disguised mule (on the assumption that ‘we see a cleverly disguised mule’ is not a relevant alternative). This will involve a violation of the closure principle. This is an interesting consequence of the theory, because the closure principle seems to many to be quite intuitive. In fact, we can view sceptical arguments as employing the closure principle as a premise, along with the premise that we do not know that the alternatives raised by the sceptic are false. From these two premises it follows (on the assumption that we see that the propositions we believe entail the falsity of sceptical alternatives) that we do not know the propositions we believe. For example, it follows from the closure principle and the fact that we do not know that we do not see a cleverly disguised mule, that we do not know that we see a zebra. We can view the relevant alternatives theory as replying to the sceptical arguments by denying the closure principle.
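Schematically ~ in notation the text itself does not use ~ with ‘K’ for ‘we know that’, ‘z’ for ‘we see a zebra’ and ‘m’ for ‘we see a cleverly disguised mule’, the sceptic’s argument runs:

\[
\text{Closure:}\quad \big(K p \wedge K(p \rightarrow q)\big) \rightarrow K q
\]
\[
1.\; K(z \rightarrow \neg m) \qquad 2.\; \neg K \neg m \qquad \therefore\; \neg K z \quad \text{(by closure and modus tollens)}
\]

The relevant alternatives theorist blocks the conclusion by rejecting closure; the sceptic retains closure and concludes that we do not know that we see a zebra.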

What makes an alternative relevant? What standard do the alternatives raised by the sceptic fail to meet? These questions are notoriously difficult to answer with any degree of precision or generality. This difficulty has led critics to dismiss the theory as hopelessly obscure. The problem can be illustrated through an example. Suppose Smith sees a barn and believes that he does, on the basis of very good perceptual evidence. When is the alternative that Smith sees a papier-mâché replica relevant? If there are many such replicas in the immediate area, then this alternative is relevant. In these circumstances, Smith fails to know that he sees a barn unless he knows that it is not the case that he sees a barn replica. Where no such replicas exist, this alternative will not be relevant. Smith can know that he sees a barn without knowing that he does not see a barn replica.

This suggests that a criterion of relevance will be something like probability conditional on Smith’s evidence and certain features of the circumstances. But which circumstances in particular do we count? Consider a case where we want the result that the barn replica alternative is clearly relevant, e.g., a case where there are numerous barn replicas in the area. Does the suggested criterion give us the result we wanted? The probability that Smith sees a barn replica, given his evidence and his location in an area where there are many barn replicas, is high. However, that same probability conditional on his evidence and his particular visual orientation toward a real barn is quite low. We want the probability to be conditional on features of the circumstances like the former but not on features of the circumstances like the latter. But how do we capture the difference in a general formulation?
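The difficulty can be made concrete with a toy calculation; the numbers and the set-up below are illustrative assumptions of mine, not anything given in the text.

```python
# Toy model of the relevance criterion: the probability of the replica
# alternative depends on which 'features of the circumstances' we
# conditionalize on. All numbers are hypothetical.

# Possible situations in a replica-rich area, with illustrative weights:
situations = {"real_barn": 0.2, "replica": 0.8}

# Conditionalizing on Smith's evidence plus 'he is in a replica-rich area'
# leaves both situations open, so the replica alternative is probable:
p_replica_given_area = situations["replica"] / sum(situations.values())
print(p_replica_given_area)         # 0.8 ~ the alternative looks relevant

# Conditionalizing instead on 'his visual orientation is toward a real barn'
# excludes the replica situation outright, so the same alternative is improbable:
p_replica_given_orientation = 0.0
print(p_replica_given_orientation)  # 0.0 ~ the alternative looks irrelevant
```

The formal problem is that nothing in the criterion itself tells us which of these two conditionings is the right one to use.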

How significant a problem is this for the theory of relevant alternatives? This depends on how we construe the theory. If the theory is supposed to provide us with an analysis of knowledge, then the lack of precise criteria of relevance surely constitutes a serious problem. However, if the theory is viewed instead as providing a response to sceptical arguments, it can be argued that the difficulty has little significance for the overall success of the theory.

What justifies the acceptance of a theory? Although particular versions of empiricism have met many criticisms, it remains attractive to look for an answer in some sort of empiricist terms: In terms, that is, of support by the available evidence. How else could the objectivity of science be defended except by showing that its conclusions (and in particular its theoretical conclusions ~ the theories it presently accepts) are somehow legitimately based on agreed observational and experimental evidence? But, as is well known, theories in general pose a problem for empiricism.

Allow the empiricist the assumption that there are observational statements whose truth-values can be inter-subjectively agreed, and set aside the exploratory, non-demonstrative use of experiment in contemporary science. Philosophers have tended to identify experiments with their observed results, and these with the testing of theory. They assume that observation provides an open window for the mind onto a world of natural facts and regularities, and that the main problem for the scientist is to establish the uniqueness or independence of a theoretical interpretation. On this picture, experiments merely enable the production of (true) observation statements, and shared, replicable observations are the basis for scientific consensus about an objective reality. Even so, it is clear that most scientific claims are genuinely theoretical: Neither themselves observational nor derivable deductively from observation statements (nor from inductive generalizations thereof). Accepting that there are phenomena to which we have more or less direct access, theories seem, at least when taken literally, to tell us about what is going on ‘underneath’ the observable, directly accessible phenomena in order to produce those phenomena. The accounts given by such theories of this trans-empirical reality, simply because it is trans-empirical, can never be established by data, nor even by the ‘natural’ inductive generalizations of our data. No amount of evidence about tracks in cloud chambers and the like can deductively establish that those tracks are produced by ‘trans-observational’ electrons.

One response would, of course, be to invoke some strict empiricist account of meaning, insisting that talk of electrons and the like is in fact just shorthand for talk of tracks in cloud chambers and the like. This account, however, has few, if any, current defenders. But, if so, the empiricist must acknowledge that, if we take any presently accepted theory, then there must be alternatives ~ different theories (indefinitely many of them) ~ which treat the evidence equally well, assuming that the only evidential criterion is the entailment of the correct observational results.

All the same, there is an easy general result as well: Assuming that a theory is any deductively closed set of sentences; assuming, with the empiricist, that the language in which these sentences are expressed has two sorts of predicates (observational and theoretical); and, finally, assuming that the entailment of the evidence is the only constraint on empirical adequacy ~ then there are always indefinitely many different theories which are equally empirically adequate. Consider the restriction of ‘T’ to quantifier-free sentences expressed purely in the observational vocabulary: Any conservative extension of that restricted set of T’s consequences back into the full vocabulary is a ‘theory’ co-empirically adequate with ~ entailing the same singular observational statements as ~ ‘T’. Unless very special conditions apply (conditions which do not apply to any real scientific theory), some of the empirically equivalent theories will formally contradict ‘T’. (A similar straightforward demonstration works for the currently more fashionable account of theories as sets of models.)
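The construction can be put schematically, in notation of my own rather than the text’s. Let L_O be the quantifier-free observational sublanguage, and let

\[
T_O \;=\; \mathrm{Cn}(T)\cap L_O
\]

be the restriction of T’s consequences to L_O. Then any theory T′ in the full vocabulary with

\[
\mathrm{Cn}(T')\cap L_O \;=\; T_O
\]

is co-empirically adequate with T, since the two entail exactly the same singular observational statements, however much they disagree theoretically.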

How can an empiricist, who rejects the claim that two empirically equivalent theories are thereby fully equivalent, explain why the particular theory ‘T’ that is, as a matter of fact, accepted in science is preferred to these other possible theories with the same observational content? Obviously the answer must be ‘by bringing in further criteria beyond that of simply having the right observational consequences’. Simplicity, coherence with other accepted theories, and unity are favourite contenders. There are notorious problems in formulating these criteria at all precisely: But suppose, for present purposes, that we have a strong enough intuitive grasp to operate usefully with them. What is the status of such further criteria?

The empiricist-instrumentalist position, most recently adopted and sharply argued by van Fraassen, is that these further criteria are ‘pragmatic’ ~ that is, they involve essential reference to us as ‘theory-users’. We happen to prefer, for our own purposes, simple, coherent, unified theories ~ but this is only a reflection of our preferences. It would be a mistake to think of those features as supplying extra reasons to believe in the truth (or approximate truth) of the theory that has them. Van Fraassen’s account differs from some standard instrumentalist-empiricist accounts in recognizing the extra content of a theory (beyond its directly observational content) as genuinely declarative, as consisting of true-or-false assertions about the hidden structure of the world. His account accepts that the extra content can neither be eliminated by defining theoretical notions in observational terms, nor be properly regarded as only apparently declarative, a mere codification scheme. For van Fraassen, if a theory says that there are electrons, then the theory should be taken as meaning what it says ~ and this without any positivist reinterpretation of the meaning that might make ‘There are electrons’ mere shorthand for some complicated set of statements about tracks in cloud chambers or the like.

Consider contradictory but empirically equivalent theories, such as the theory T1 that ‘there are electrons’ and the theory T2 that ‘all the observable phenomena are as if there are electrons, but there are not’. Van Fraassen’s account entails that each has a truth-value, at most one of which is true. Science may accept T1 rather than T2, but this need not mean that it is rational to believe that T1 is more likely to be true (or otherwise appropriately connected with nature): So far as belief goes, acceptance of T1 involves no more than acceptance of T2 would. The only belief involved in the acceptance of a theory is belief in the theory’s empirical adequacy. To accept the quantum theory, for example, entails believing that it ‘saves the phenomena’ ~ all the (relevant) phenomena, but only the phenomena. Theorists do ‘say more’ than can be checked empirically even in principle. What more they say may indeed be true, but acceptance of the theory does not involve belief in the truth of the ‘more’ that theorists say.

Preferences between theories that are empirically equivalent are thus accounted for, because acceptance involves more than belief: As well as this epistemic dimension, acceptance also has a pragmatic dimension. Simplicity, (relative) freedom from ad hoc assumptions, ‘unity’, and the like are genuine virtues that can supply good reasons to accept one theory rather than another; but they are pragmatic virtues, reflecting the way we happen to like to do science, rather than anything about the world. It would be a mistake to think that they do more than this: On van Fraassen’s view the rationality of science and of scientific practice can be defended without belief in the truth (or approximate truth) of accepted theories. Van Fraassen’s account conflicts with what many others see as very strong intuitions.

The most generally accepted account of the internalist/externalist distinction is that a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective; and externalist if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer’s cognitive perspective, beyond his ken. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication.

The externalism/internalism distinction has been mainly applied to theories of epistemic justification. It has also been applied in a closely related way to accounts of knowledge, and in a rather different way to accounts of belief and thought content. The internalist requirement of cognitive accessibility can be interpreted in at least two ways: A strong version of internalism would require that the believer actually be aware of the justifying factors in order to be justified, while a weaker version would require only that he be capable of becoming aware of them by focussing his attention appropriately, without the need for any change of position, new information, and so forth. Though the phrase ‘cognitively accessible’ suggests the weak interpretation, the main intuitive motivation for internalism, viz. the idea that epistemic justification requires that the believer actually have in his cognitive possession a reason for thinking that the belief is true, would require the strong interpretation.

Perhaps the clearest example of an internalist position would be a ‘foundationalist’ view according to which foundational beliefs pertain to immediately experienced states of mind and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view could count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements or only the capacity to become aware of them is required. Similarly, a ‘coherentist’ view could also be internalist, if both the beliefs or other states with which a justified belief is required to cohere and the coherence relations themselves are reflectively accessible.

It should be carefully noticed that when internalism is construed in this way, it is neither necessary nor sufficient by itself for internalism that the justifying factors literally be internal mental states of the person in question. Not necessary, because on at least some views, e.g., a direct realist view of perception, something other than a mental state of the believer can be cognitively accessible: Not sufficient, because there are views according to which at least some mental states need not be actual (strong version) or even possible (weak version) objects of cognitive awareness. Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist view. Obviously too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them).

The most prominent recent externalist views have been versions of ‘reliabilism’, whose main requirement for justification is roughly that the belief be produced in a way, or via a process, that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus, such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

Two general lines of argument are commonly advanced in favour of justificatory externalism. The first starts from the allegedly common-sensical premise that knowledge can be unproblematically ascribed to relatively unsophisticated adults, to young children, and even to higher animals. It is then argued that such ascriptions would be untenable on the standard internalist accounts of epistemic justification (assuming that epistemic justification is a necessary condition for knowledge), since the beliefs and inferences involved in such accounts are too complicated and sophisticated to be plausibly ascribed to such subjects. Thus, only an externalist view can make sense of such common-sense ascriptions, and this, on the presumption that common-sense is correct, constitutes a strong argument in favour of externalism. An internalist may respond by challenging the initial premise, arguing that such ascriptions of knowledge are exaggerated, while perhaps at the same time claiming that the cognitive situation of at least some of the subjects in question is less restricted than the argument claims. A quite different response would be to reject the assumption that epistemic justification is a necessary condition for knowledge, perhaps by adopting an externalist account of knowledge, rather than of justification, of the sort discussed below.

The second general line of argument for externalism points out that internalist views have conspicuously failed to provide defensible, non-sceptical solutions to the classical problems of epistemology. In striking contrast, such problems are in general easily solvable on an externalist view. Thus, if we assume both that the various relevant forms of scepticism are false and that the failure of internalist views so far is unlikely to be remedied in the future, we have good reason to think that some externalist view is true. Obviously the cogency of this argument depends on the plausibility of the two assumptions just noted. An internalist can reply, first, that it is not obvious that internalist epistemology is doomed to failure, since the explanation for the present lack of success may be the extreme difficulty of the problems in question. Secondly, it can be argued that most or even all of the appeal of the assumption that the various forms of scepticism are false depends essentially on the intuitive conviction that we do have within our grasp reasons for thinking that the various beliefs questioned by the sceptic are true ~ a conviction that the proponent of this argument must of course reject.

The main objection to externalism rests on the intuition that the basic requirement for epistemic justification is that the acceptance of the belief in question be rational or responsible in relation to the cognitive goal of truth, which seems to require in turn that the believer actually be aware of a reason for thinking that the belief is true or, at the very least, that such a reason be available to him. Since the satisfaction of an externalist condition is neither necessary nor sufficient for the existence of such a cognitively accessible reason, it is argued, externalism is mistaken as an account of epistemic justification. This general point has been elaborated by appeal to two sorts of putative intuitive counter-example to externalism. The first of these challenges the necessity of the externalist conditions for justification by appealing to examples of beliefs which seem intuitively to be justified, but for which the externalist conditions are not satisfied. The standard examples of this sort are cases where beliefs are produced in some very non-standard way, e.g., by a Cartesian demon, but nonetheless in such a way that the subjective experience of the believer is indistinguishable from that of someone whose beliefs are produced more normally. Cases of this general sort can be constructed in which any of the standard externalist conditions, e.g., that the belief be a result of a reliable process, fail to be satisfied. The intuitive claim is that the believer in such a case is nonetheless epistemically justified, as much as one whose belief is produced in a more normal way, and hence that externalist accounts of justification must be mistaken.

Perhaps the most interesting reply to this sort of counter-example, on behalf of reliabilism specifically, holds that the reliability of a cognitive process is to be assessed in ‘normal’ possible worlds, i.e., in possible worlds that are the way our world is common-sensically believed to be, rather than in the world which actually contains the belief being judged. Since the cognitive processes employed in the Cartesian demon case are, we may assume, reliable when assessed in this way, the reliabilist can agree that such beliefs are justified. The obvious further issue is whether there is an adequate rationale for this construal of reliabilism, so that the reply is not merely ad hoc.

The second, correlative way of elaborating the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. Here the most widely discussed examples have to do with possible occult cognitive capacities like clairvoyance. Applying the point once again to reliabilism specifically, the claim is that a reliable clairvoyant who has no reason to think that he has such a cognitive power, and perhaps even good reasons to the contrary, is not rational or responsible, and hence not epistemically justified, in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.

One sort of response to this latter sort of objection is to ‘bite the bullet’ and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly internalist sort, which will rule out the offending examples while still stopping far short of a full internalism. But while there is little doubt that such modified versions of externalism can indeed handle particular cases well enough to avoid clear intuitive implausibility, the issue is whether there will always be equally problematic cases that they cannot handle, and whether there is any clear motivation for the additional requirements other than the general internalist view of justification that externalists are committed to rejecting.

A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, though it must be objectively true that beliefs for which such a factor is available are likely to be true, this further fact need not be in any way grasped or cognitively accessible to the believer. In effect, of the two premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the second can be (and will normally be) purely external. Here the internalist will respond that this hybrid view is of no help at all in meeting the objection that the belief is not held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true.

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view would obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., is a result of a reliable process (and, perhaps, further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept in epistemology would obviously be seriously diminished.

Such an externalist account of knowledge can accommodate the common-sense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction even exists) that such individuals are epistemically justified in their beliefs. It is also less vulnerable to internalist counter-examples of the sort discussed above, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than knowledge?

A rather different use of the terms ‘internalism’ and ‘externalism’ has to do with the issue of how the content of beliefs and thoughts is determined: According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual’s mind or brain, and not at all on his physical and social environment; while according to an externalist view, content is significantly affected by such external factors. Here too, a view that appeals to both internal and external elements is standardly classified as an externalist view.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural-kind terms, indexicals, and so forth, that motivate the views that have come to be known as ‘direct reference’ theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment ~ e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, and so forth ~ not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the contents of our beliefs or thoughts ‘from the inside’, simply by reflection. If content is dependent on external factors pertaining to the environment, then knowledge of content should depend on knowledge of these factors ~ which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: If part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can either be justified or justify anything else: But such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

To have a word or a picture, or any other object in one’s mind, seems to be one thing; to understand it is quite another. A major target of the later Ludwig Wittgenstein (1889-1951) is the suggestion that this understanding is achieved by a further presence, so that words might be understood if they are accompanied by ideas, for example. Wittgenstein insists that the extra presence merely raises the same kind of problem again. The better suggestion is that understanding is to be thought of as possession of a technique, or skill; this is the point of the slogan that ‘meaning is use’. The idea is congenial to ‘pragmatism’ and hostile to ineffable and incommunicable understandings.

Whatever it is that makes what would otherwise be mere sounds and inscriptions into instruments of communication and understanding, the philosophical problem is to demystify this power, and to relate it to what we know of ourselves and the world. Contributions to this study include the theory of speech acts and the investigation of communication and the relationships between words and ideas, and words and the world.

The most influential idea in the theory of meaning in the past hundred years is the thesis that the meaning of an indicative sentence is given by its truth-conditions. On this conception, to understand a sentence is to know its truth-conditions. The conception was first clearly formulated by the German mathematician and philosopher of mathematics Gottlob Frege (1848-1925), was developed in a distinctive way by the early Wittgenstein, and is a leading idea of the American philosopher Donald Herbert Davidson (1917-2003). The conception has remained so central that those who offer opposing theories characteristically define their position by reference to it.

The conception of meaning as truth-conditions need not and should not be advanced as a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentences in the language, and must have some idea of the significance of these speech acts. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: If two indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions. It is this claim, and its attendant problems, which will be our concern in what follows.

The meaning of a complex expression is a function of the meanings of its constituents. This is indeed just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms ~ proper names, indexicals, and certain pronouns ~ this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates. For an extremely simple, but nevertheless structured, language, we can state the contributions various expressions make to truth-conditions as follows:

A1: The referent of ‘London’ is London.

A2: The referent of ‘Paris’ is Paris.

A3: Any sentence of the form ‘a is beautiful’ is true if and only if the referent of ‘a’ is beautiful.

A4: Any sentence of the form ‘a is larger than b’ is true if and only if the referent of ‘a’ is larger than the referent of ‘b’.

A5: Any sentence of the form ‘It is not the case that A’ is true if and only if it is not the case that ‘A’ is true.

A6: Any sentence of the form ‘A and B’ is true if and only if ‘A’ is true and ‘B’ is true.

The principles A1-A6 form a simple theory of truth for a fragment of English. In this theory it is possible to derive these consequences: That ‘Paris is beautiful’ is true if and only if Paris is beautiful (from A2 and A3); that ‘London is larger than Paris and it is not the case that London is beautiful’ is true if and only if London is larger than Paris and it is not the case that London is beautiful (from A1-A6); and in general, for any sentence ‘A’ of this simple language, we can derive something of the form ‘‘A’ is true if and only if A’.
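These derivations can be mimicked mechanically. The following sketch is my own illustration ~ the model of facts and the tuple encoding of sentences are assumptions for display, not anything in the text ~ rendering A1-A6 as a recursive truth-evaluator for the fragment:

```python
# A minimal sketch of the A1-A6 fragment as a recursive truth-evaluator.
# The 'world' model and the encoding of sentences are illustrative
# assumptions; only the axiom structure comes from the text.

referents = {"London": "London", "Paris": "Paris"}   # A1, A2

world = {                                            # stand-in facts
    "beautiful": {"Paris"},
    "larger": {("London", "Paris")},
}

def true_in(sentence, world):
    """Truth-value of an encoded sentence, mirroring axioms A3-A6."""
    op = sentence[0]
    if op == "beautiful":                            # A3
        return referents[sentence[1]] in world["beautiful"]
    if op == "larger":                               # A4
        return (referents[sentence[1]], referents[sentence[2]]) in world["larger"]
    if op == "not":                                  # A5
        return not true_in(sentence[1], world)
    if op == "and":                                  # A6
        return true_in(sentence[1], world) and true_in(sentence[2], world)
    raise ValueError("sentence outside the fragment")

# 'London is larger than Paris and it is not the case that London is beautiful'
s = ("and", ("larger", "London", "Paris"), ("not", ("beautiful", "London")))
print(true_in(s, world))                             # True in this model
```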

Yet theorists of truth-conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. Consider the axiom: ‘London’ refers to the city in which there was a huge fire in 1666.

This is a true statement about the reference of ‘London’. It is a consequence of a theory which substitutes this axiom for A1 in our simple truth theory that ‘London is beautiful’ is true if and only if the city in which there was a huge fire in 1666 is beautiful. A subject can understand the name ‘London’ without knowing that last-mentioned truth-condition, so this axiom is not fit to serve in a meaning-specifying truth theory. It is, of course, incumbent on a theorist of meaning as truth-conditions to state the constraints on the acceptability of axioms in a way which does not presuppose any prior conception of meaning.

Among the many challenges facing the theorist of truth-conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity. Second, the theorist must offer an account of what it is for a person’s language to be truly describable by a semantic theory containing a given semantic axiom.

Take the charge of triviality first. In more detail, it would run thus: Since the content of a claim that the sentence ‘Paris is beautiful’ is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than grasp of truth-conditions must provide the substantive account. The charge rests upon what has been called the ‘redundancy theory of truth’, a theory also known as ‘minimalism’ or the ‘deflationary’ view of truth, which begins with Gottlob Frege and the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30). The essential claim is that the predicate ‘. . . is true’ does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points that ‘it is true that p’ says no more nor less than ‘p’ (hence ‘redundancy’), and that in less direct contexts, such as ‘everything he said was true’ or ‘all logical consequences of truths are true’, the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said or the kinds of propositions that follow from true propositions. For example, ‘all logical consequences of truths are true’ becomes ‘(∀p)(∀q)((p & (p ➞ q)) ➞ q)’, where there is no use of a notion of truth.
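The generalizing role of the predicate can be displayed in the same spirit, in notation of my own. ‘Everything he said was true’ becomes a quantification into sentence position,

\[
\forall p\,(\text{he said that } p \;\rightarrow\; p),
\]

and ‘all logical consequences of truths are true’ becomes

\[
\forall p\,\forall q\,\big((p \wedge (p \rightarrow q)) \rightarrow q\big),
\]

with no truth-predicate appearing in either formula.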

There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as ‘science aims at the truth’ or ‘truth is a norm governing discourse’. Postmodernist writing frequently advocates that we must abandon such norms, along with a discredited ‘objective’ conception of truth. But, perhaps, we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: Science wants it to be so that whenever science holds that ‘p’, then ‘p’; and discourse is to be regulated by the principle that it is wrong to assert ‘p’ when not-p.

The minimal theory states that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition ‘p’, it is true that ‘p’ if and only if ‘p’. Many different philosophical theories of truth accept the equivalence principle; the distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is widely accepted, both by opponents and supporters of truth-conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth-conditional account of meaning. If the claim that the sentence ‘Paris is beautiful’ is true amounts to no more than the claim that Paris is beautiful, it is circular to try to explain the sentence’s meaning in terms of its truth-conditions. The minimal theory of truth has been endorsed by Ramsey, Ayer, the later Wittgenstein, Quine, Strawson, and Horwich, and ~ confusingly and inconsistently ~ by Frege himself.

The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence. But in fact it seems that each instance of the equivalence principle can itself be explained. The truths from which such an instance as:

‘London is beautiful’ is true if and only if

London is beautiful

can be explained are precisely A1 and A3. This would be a pseudo-explanation if the fact that ‘London’ refers to London consisted in part in the fact that ‘London is beautiful’ has the truth-condition it does; but that is very implausible: It is, after all, possible to understand the name ‘London’ without understanding the predicate ‘is beautiful’. The idea that facts about the reference of particular words can be explanatory of facts about the truth-conditions of sentences containing them in no way requires any naturalistic or any other kind of reduction of the notion of reference. Nor is the idea incompatible with the plausible point that singular reference can be attributed at all only to something which is capable of combining with other expressions to form complete sentences. That still leaves room for facts about an expression’s having the particular reference it does to be partially explanatory of the particular truth-condition possessed by a given sentence containing it. The minimal theory thus treats as definitional or stipulative something which is in fact open to explanation. What makes this explanation possible is that there is a general notion of truth which has, among the many links which hold it in place, systematic connections with the semantic values of subsentential expressions.

A second problem with the minimal theory is that it seems impossible to formulate it without at some point relying implicitly on features and principles involving truth which go beyond anything countenanced by the minimal theory. If the minimal theory treats truth as a predicate of anything linguistic, be it utterances, types-in-a-language, or whatever, then the equivalence schema will not cover all cases, but only those in the theorist’s own language. Some account has to be given of truth for sentences of other languages. Speaking of the truth of language-independent propositions or thoughts will only postpone, not avoid, this issue, since at some point principles have to be stated associating these language-independent entities with sentences of particular languages. The defender of the minimal theory is likely to say that if a sentence ‘S’ of a foreign language is best translated by our sentence ‘p’, then the foreign sentence ‘S’ is true if and only if ‘p’. Now the best translation of a sentence must preserve the concepts expressed in the sentence, and constraints involving a general notion of truth are pervasive in any plausible philosophical theory of concepts. It is, for example, a condition of adequacy on an individuating account of any concept that there exist what may be called a ‘Determination Theory’ for that account ~ that is, a specification of how the account contributes to fixing the semantic value of that concept. The notion of a concept’s semantic value is the notion of something which makes a certain contribution to the truth-conditions of thoughts in which the concept occurs. But this is to presuppose, rather than to elucidate, a general notion of truth.

It is also plausible that there are general constraints on the form of such Determination Theories, constraints which involve truth and which are not derivable from the minimalist’s conception. Suppose, as will be developed below, that concepts are individuated by their possession conditions: A Determination Theory must then specify how a concept’s possession condition, together with the world, fixes its semantic value, and this specification again deploys a general notion of truth.

This approach starts from the idea that a concept is individuated by the condition which must be satisfied if a thinker is to possess that concept and to be capable of having beliefs and other attitudes whose contents contain it as a constituent. So, to take a simple case, one could propose that the logical concept ‘and’ is individuated by this condition: It is the unique concept ‘C’ to possess which a thinker has to find these forms of inference compelling, without basing them on any further inference or information: From any two premises ‘A’ and ‘B’, ‘ACB’ can be inferred; and from any premise ‘ACB’, each of ‘A’ and ‘B’ can be inferred. A relatively observational concept such as ‘round’ can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept which are not based on perception to those which are. A statement which individuates a concept by saying what is required for a thinker to possess it can be described as giving the possession condition for the concept.

A possession condition for a particular concept may actually make use of that concept; the possession condition for ‘and’ does not. We can also expect to use relatively observational concepts in specifying the kinds of experiences which have to be mentioned in the possession conditions for relatively observational concepts. What we must avoid is mention of the concept in question, as such, within the content of the attitudes attributed to the thinker in the possession condition. Otherwise we would be presupposing possession of the concept in an account which was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession conditions can also respect an insight of the later Wittgenstein: That a thinker’s mastery of a concept is inextricably tied to how he finds it natural to go on in new cases in applying the concept.

Sometimes a family of concepts has this property: It is not possible to master any one of the members of the family without mastering the others. Two of the families which plausibly have this status are these: The family consisting of the simple concepts 0, 1, 2, . . . of the natural numbers and the corresponding concepts of the numerical quantifiers, ‘there are 0 so-and-so’s’, ‘there is 1 so-and-so’, . . .; and the family consisting of the concepts ‘belief’ and ‘desire’. Such families have come to be known as ‘local holisms’. A local holism does not prevent the individuation of a concept by its possession condition. Rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: Belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is to meet such-and-such conditions involving the thinker, C1 and C2. For such possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated: The possession conditions for concepts higher in the ranking must presuppose only possession of concepts at the same or lower levels in the ranking.

A possession condition may in various ways make a thinker’s possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker’s perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject’s environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. Burge (1979) has also argued from intuitions about particular examples that, even though the thinker’s non-environmental properties and relations remain constant, the conceptual content of his mental states can vary if the thinker’s social environment is varied. A possession condition which properly individuates such a concept must take into account the thinker’s social relations, in particular his linguistic relations.

Once again, some general principles involving truth can, as Horwich has emphasized, be derived from the equivalence schema using minimal logical apparatus. Consider, for instance, the principle that ‘Paris is beautiful and London is beautiful’ is true if and only if ‘Paris is beautiful’ is true and ‘London is beautiful’ is true: This is derivable from the schema’s instances for the two conjuncts and for the conjunction itself. But no logical manipulations of the equivalence schema will allow the derivation of the general constraints governing possession conditions, truth and the assignment of semantic values. Those constraints can of course be regarded as a further elaboration of the idea that truth is one of the aims of judgement.
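In miniature ~ and in notation of my own ~ the derivation runs as follows. The equivalence schema supplies the three instances

\[
\text{True}(\ulcorner A \text{ and } B\urcorner) \leftrightarrow (A \wedge B),\qquad \text{True}(\ulcorner A\urcorner) \leftrightarrow A,\qquad \text{True}(\ulcorner B\urcorner) \leftrightarrow B,
\]

and propositional logic alone then yields

\[
\text{True}(\ulcorner A \text{ and } B\urcorner) \;\leftrightarrow\; \big(\text{True}(\ulcorner A\urcorner) \wedge \text{True}(\ulcorner B\urcorner)\big).
\]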

We can now turn to the other question: What is it for a person’s language to be correctly describable by a semantic theory containing a particular axiom, such as the axiom A6 above for conjunction? This question may be addressed at two depths of generality. At the shallower level, the question may take for granted the person’s possession of the concept of conjunction, and ask what has to be true for the axiom to describe his language correctly. At a deeper level, an answer should not sidestep the issue of what it is to possess the concept. The answers to both questions are of great interest.

When a person means conjunction by ‘and’, he is not necessarily capable of formulating the axiom A6 explicitly. Even if he can formulate it, his ability to formulate it is not the causal basis of his capacity to hear sentences containing the word ‘and’ as meaning something involving conjunction. Nor is it the causal basis of his capacity to mean something involving conjunction by sentences he utters containing the word ‘and’. Is it then right to regard a truth theory as part of an unconscious psychological computation, and to regard understanding a sentence as involving a particular way of deriving a theorem from a truth theory at some level of unconscious processing? One problem with this is that it is quite implausible that everyone who speaks the same language has to use the same algorithms for computing the meaning of a sentence. In the past thirteen years, in particular in the work of Davies and Evans, a conception has evolved according to which an axiom like A6 is true of a person’s language only if there is a common component in the explanation of his understanding of each sentence containing the word ‘and’, a common component which explains why each such sentence is understood as meaning something involving conjunction. This conception can also be elaborated in computational terms: For the axiom A6 to be true of a person’s language is for the unconscious mechanisms which produce understanding to draw on the information that a sentence of the form ‘A and B’ is true if and only if ‘A’ is true and ‘B’ is true. Many different algorithms may equally draw on this information. The psychological reality of a semantic theory is thus, in Marr’s (1982) classification, something intermediate between his level one, the function computed, and his level two, the algorithm by which it is computed. This conception of the psychological reality of a semantic theory can also be applied to syntactic and phonological theories. Theories in semantics, syntax and phonology are not themselves required to specify the particular algorithms which the language user employs; the identification of the particular computational methods employed is a task for psychology. But semantic, syntactic and phonological theories are answerable to psychological data, and are potentially refutable by them ~ for these linguistic theories do make commitments about the information drawn upon by mechanisms in the language user.
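The point that many different algorithms may draw on the same information can be put in a deliberately trivial sketch of my own: Both procedures below compute the same level-one function ~ the truth-function of conjunction ~ by different level-two algorithms.

```python
# Two different algorithms drawing on the same semantic information about
# 'and': Marr's level-one function is identical, the level-two procedures differ.

def conj_short_circuit(a: bool, b: bool) -> bool:
    # Evaluates the second conjunct only when the first is true.
    return a and b

def conj_exhaustive(a: bool, b: bool) -> bool:
    # Evaluates both conjuncts, then consults an explicit truth table.
    table = {(True, True): True, (True, False): False,
             (False, True): False, (False, False): False}
    return table[(a, b)]

# Same function computed, different algorithms employed:
assert all(conj_short_circuit(a, b) == conj_exhaustive(a, b)
           for a in (True, False) for b in (True, False))
```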

This answer to the question of what it is for an axiom to be true of a person’s language clearly takes for granted the person’s possession of the concept expressed by the word treated by the axiom. In the example of the axiom A6, the information drawn upon is that sentences of the form ‘A and B’ are true if and only if ‘A’ is true and ‘B’ is true. This informational content employs, as it has to if it is to be adequate, the concept of conjunction used in stating the meaning of sentences containing ‘and’. So the computational answer just returned needs further elaboration if we are not to take for granted possession of the concepts expressed in the language. It is at this point that the theory of linguistic understanding has to draw upon a theory of the conditions for possessing a given concept. It is plausible that the concept of conjunction is individuated by the following condition for a thinker to possess it:

The concept ‘and’ is that concept ‘C’ to possess which a thinker must meet the following conditions: He finds inferences of the following forms compelling, does not find them compelling as a result of any reasoning, and finds them compelling because they are of these forms:



pCq          pCq          p     q
-----        -----        -------
 p            q             pCq



Here ‘p’ and ‘q’ range over complete propositional thoughts, not sentences. When axiom A6 is true of a person’s language, there is a global dovetailing between this possession condition for the concept of conjunction and certain of his practices involving the word ‘and’. For the case of conjunction, the dovetailing involves at least this:

If the possession condition for conjunction entails that a thinker who possesses the concept of conjunction must be willing to make certain transitions involving the thought p&q, and if the thinker’s sentence ‘A’ means that p and his sentence ‘B’ means that q, then the thinker must be willing to make the corresponding linguistic transitions involving the sentence ‘A and B’.

This is only part of what is involved in the required dovetailing. Given what we have already said about the uniform explanation of the understanding of the various occurrences of a given word, we should also add that there is a uniform (unconscious, computational) explanation of the language user’s willingness to make the corresponding transitions involving the sentence ‘A and B’.

This dovetailing account answers the deeper question because neither the possession condition for conjunction, nor the dovetailing condition which builds upon that possession condition, takes for granted the thinker’s possession of the concept expressed by ‘and’. The dovetailing account for conjunction is an instance of a more general schema, applicable to any concept. The case of conjunction is, of course, exceptionally simple in several respects. Possession conditions for other concepts will speak not just of inferential transitions, but of certain conditions in which beliefs involving the concept in question are accepted or rejected, and the corresponding dovetailing conditions will inherit these features. The dovetailing account has also to be underpinned by a general rationale linking contributions to truth-conditions with the particular possession conditions proposed for concepts. It is part of the task of the theory of concepts to supply this in developing Determination Theories for particular concepts.

Finally, this response to the deeper question allows us to answer two challenges to the conception of meaning as truth-conditions. First, there was the question left hanging earlier, of how the theorist of truth-conditions is to say what makes one axiom of a semantic theory correct rather than another, when the two axioms assign the same semantic values but do so by means of different concepts. Since the different concepts will have different possession conditions, the dovetailing accounts, at the deeper level, of what it is for each axiom to be correct for a person’s language will be different accounts. Second, there is the challenge repeatedly made by minimalist theories of truth, to the effect that the theorist of meaning as truth-conditions should give some non-circular account of what it is to understand a sentence, or to be capable of understanding all sentences containing a given constituent. For each expression in a sentence, the corresponding dovetailing account, together with the possession condition, supplies a non-circular account of what it is to understand that expression. The combined accounts for each of the expressions which comprise a given sentence together constitute a non-circular account of what it is to understand the complete sentence. Taken together, they allow the theorist of meaning as truth-conditions fully to meet the challenge.

Neurophysiology is the study of how nerve cells, or neurons, receive and transmit information. Two types of phenomena are involved in processing nerve signals: Electrical and chemical. Electrical events propagate a signal within a neuron, and chemical processes transmit the signal from one neuron to another neuron or to a muscle cell.

The signals conveying everything that human beings sense and think, and every motion they make, follow nerve pathways in the human body as waves of ‘ions’ (atoms or groups of atoms that carry electric charges). Of special interest are the electrochemical signalling processes, particularly the pivotal step in which a signal is conveyed from one nerve cell to another.

A neuron is a long cell that has a thick central area containing the ‘nucleus’: It also has one long process called an ‘axon’ and one or more short, branching processes called ‘dendrites’. Nerve impulses normally arise in the dendrites or cell body of a neuron. (The exceptions are sensory neurons, such as those that transmit information about temperature or touch, in which the signal is generated by specialized receptors in the skin.) These impulses are propagated electrically along the cell membrane to the end of the ‘axon’. At the tip of the ‘axon’ the signal is chemically transmitted to an adjacent neuron or muscle cell.

Like all other cells, neurons contain charged ‘ions’: potassium and sodium (positively charged) and chloride (negatively charged). Neurons differ from other cells in that they are able to produce a nerve impulse. A neuron is ‘polarized’ - that is, it has an overall negative charge inside the cell membrane because of a high concentration of ‘potassium ions’ and low concentrations of ‘sodium’ and ‘chloride ions’ within the cell. The concentrations of these same ‘ions’ are exactly reversed outside the cell. This charge differential represents stored electrical energy, sometimes referred to as ‘membrane potential’ or ‘resting potential’. The negative charge inside the cell is maintained by two features. The first is the selective permeability of the cell membrane, which is more permeable to potassium than sodium. The second feature is sodium pumps within the cell membrane that actively pump sodium out of the cell. When depolarization occurs, this charge differential across the membrane is reversed, and a nerve impulse is produced.

Depolarization is a rapid change in the permeability of the cell membrane. When sensory input or any other kind of stimulating current is received by the neuron, the membrane permeability is changed, allowing a sudden influx of sodium ions into the cell. This influx of sodium, producing the action potential, changes the overall charge within the cell from negative to positive. The local change in ion concentration triggers similar reactions along the membrane, propagating the nerve impulse. After a brief period called the ‘refractory period’, during which the ionic concentration returns to resting potential, the neuron can repeat this process.
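
The cycle just described - depolarization past a threshold, spread to the neighbouring patch of membrane, then a refractory pause - can be caricatured in a toy simulation, assuming arbitrary numbers rather than physiological ones:

REFRACTORY_STEPS = 2                    # arbitrary recovery time per patch
segments = 10                           # patches of membrane along an axon
refractory = [0] * segments             # recovery steps left for each patch
fired = [False] * segments
fired[0] = True                         # a stimulus depolarizes the first patch

for step in range(segments + REFRACTORY_STEPS):
    # '|' = firing, '.' = refractory, '-' = resting
    print(''.join('|' if f else ('.' if r else '-')
                  for f, r in zip(fired, refractory)))
    next_fired = [False] * segments
    for i in range(segments):
        if refractory[i] > 0:
            refractory[i] -= 1          # recovering patches count down
        if fired[i]:
            refractory[i] = REFRACTORY_STEPS
            for j in (i - 1, i + 1):    # depolarize both neighbours, but a
                if 0 <= j < segments and refractory[j] == 0 and not fired[j]:
                    next_fired[j] = True  # refractory patch will not re-fire
    fired = next_fired

The wave moves steadily in one direction because the patch behind the active one is still recovering, just as the refractory period described above prevents the impulse from doubling back.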

Nerve impulses travel at different speeds, depending on the cellular composition of a neuron. Where speed of impulses is important, axons are insulated with a membranous substance called ‘myelin’. The insulation provided by myelin maintains the ionic charge over long distances. Nerve impulses are propagated at specific points along the myelin sheath: These points are called the ‘nodes of Ranvier’. Examples of myelinated axons are those in sensory nerve fibres and nerves connected to skeletal muscles. In non-myelinated cells, the nerve impulse is propagated more diffusely.

When the electrical signal reaches the tip of an axon, it stimulates small ‘presynaptic vesicles’ in the cell. These vesicles contain chemicals called ‘neurotransmitters’, which are released into the microscopic space between neurons (the synaptic cleft). The neurotransmitters attach themselves to specialized receptors on the surface of the adjacent neuron. This stimulus causes the adjacent cell to depolarize and propagate an action potential of its own. The duration of a stimulus from a neurotransmitter is limited by the breakdown of the chemicals in the synaptic cleft and their reuptake by the neuron that produced them. Formerly, each neuron was thought to make only one transmitter; however, recent studies have shown that some cells make two or more.

Australian physiologist Sir John Eccles discovered many of the intricacies of this electrochemical signalling process, particularly the pivotal step in which a signal is conveyed from one nerve cell to another. He shared the 1963 Nobel Prize in physiology or medicine for this work, which he described in a 1965 Scientific American article.

How does one nerve cell transmit the nerve impulse to another cell? Electron microscopy and other methods show that it does so by means of special extensions that deliver a squirt of transmitter substance.

The human brain is the most highly organized form of matter known, and in complexity the brains of the other higher animals are not greatly inferior. For certain purposes it is expedient to regard the brain as being analogous to a machine. Even if it is so regarded, however, it is a machine of a totally different kind from those made by man. In trying to understand the workings of his own brain, man meets his highest challenge. Nothing is given: There are no operating diagrams, no maker’s instructions.

The first step in trying to understand the brain is to examine its structure in order to discover the components from which it is built and how they are related to one another. After that, one can attempt to understand the mode of operation of the simplest components. These two modes of investigation - the morphological and the physiological - have now become complementary. In studying the nervous system with today’s sensitive electrical devices, however, it is all too easy to find physiological events that cannot be correlated with any known anatomical structure. Conversely, the electron microscope reveals many structural details whose physiological significance is obscure or unknown.

At the close of the past century the Spanish anatomist Santiago Ramón y Cajal showed how all parts of the nervous system are built up of individual nerve cells of many different shapes and sizes. Like other cells, each nerve cell has a nucleus and a surrounding cytoplasm. Its outer surface consists of numerous fine branches - the ‘dendrites’ - that receive nerve impulses from other nerve cells, and one relatively long branch - the axon - that transmits nerve impulses. Near its end the axon divides into branches that terminate at the dendrites or bodies of other nerve cells. The axon can be as short as a fraction of a millimetre or as long as a metre, depending on its place and function. It has many of the properties of an electric cable and is uniquely specialized to conduct the brief electrical waves called ‘nerve impulses’. In very thin axons these impulses travel at less than one metre per second; in the large axons of the nerve cells that activate muscles, they travel as fast as 100 metres per second.

The electrical impulse that travels along the axon ceases abruptly when it comes to the point where the axon’s terminal fibres make contact with another nerve cell. These junction points were given the name ‘synapses’ by Sir Charles Sherrington, who laid the foundations of what is sometimes called ‘synaptology’. If the nerve impulse is to continue beyond the synapse, it must be regenerated afresh on the other side. As recently as 15 years ago, some physiologists held that transmission at the synapse was predominantly, if not exclusively, an electrical phenomenon. Now, however, there is abundant evidence that transmission is effected by the release of specific chemical substances that trigger a regeneration of the impulse. In fact, the first strong evidence showing that a transmitter substance acts across the synapse was provided more than 40 years ago by Sir Henry Dale and Otto Loewi.

It has been estimated that the human central nervous system, which of course includes the spinal cord as well as the brain itself, consists of about ten billion nerve cells. With rare exceptions each nerve cell receives information directly in the form of impulses from many other nerve cells - often hundreds - and transmits information to a like number. Depending on its threshold of response, a given nerve cell may fire an impulse when stimulated by only a few incoming fibres or it may not fire until stimulated by many incoming fibres. It has long been known that this threshold can be raised or lowered by various factors. Moreover, it was conjectured some 60 years ago that some of the incoming fibres must inhibit the firing of the receiving cell rather than excite it. The conjecture was subsequently confirmed, and the mechanism of the inhibitory effect has now been clarified. This mechanism and its equally fundamental counterpart - nerve-cell excitation - are the subject of this article.

At the level of anatomy there are some clues to indicate how the fine axon terminals impinging on a nerve cell can make the cell regenerate a nerve impulse of its own . . . a nerve cell and its dendrites are covered by fine branches of nerve fibres that terminate in knoblike structures. These structures are the ‘synapses’.

The electron microscope has revealed structural details of synapses that fit in nicely with the view that a chemical transmitter is involved in nerve transmission. Enclosed in the synaptic knob are many vesicles, or tiny sacs, which appear to contain the transmitter substances that induce synaptic transmission. Between the synaptic knob and the synaptic membrane of the adjoining nerve cell is a remarkably uniform space of about 20 millimicrons that is termed the ‘synaptic cleft’. Many of the synaptic vesicles are concentrated adjacent to this cleft: It seems plausible that the transmitter substance is discharged from the nearest vesicle into the cleft, where it can act on the adjacent cell membrane. This hypothesis is supported by the discovery that the transmitter is released in packets of a few thousand molecules.

The study of synaptic transmission was revolutionized in 1951 by the introduction of delicate techniques for recording electrically from the interior of single nerve cells. This is done by inserting into the nerve cell an extremely fine glass pipette with a diameter of 0.5 micron - about a fifty-thousandth of an inch. The pipette is filled with an electrically conducting salt solution such as concentrated potassium chloride. If the pipette is carefully inserted and held rigidly in place, the cell membrane appears to seal quickly around the glass, thus preventing the flow of a short-circuiting current through the puncture in the cell membrane. Impaled in this fashion, nerve cells can function normally for hours. Although there is no way of observing the cells during the insertion of the pipette, the insertion can be guided by using as clues the electric signals that the pipette picks up when close to active nerve cells.

We soon found that when the nerve cell responds to the chemical synaptic transmitter, the response depends in part on characteristic features of ionic composition that are also concerned with the transmission of impulses in the cell and along its axon. When the nerve cell is at rest, its physiological makeup resembles that of most other cells in that the water solution inside the cell is quite different in composition from the solution in which the cell is bathed. The nerve cell is able to exploit this difference between external and internal composition and use it in quite different ways for generating an electrical impulse and for synaptic transmission.

The composition of the external solution is well established because the solution is essentially the same as blood from which cells and proteins have been removed. The composition of the internal solution is known only approximately. Indirect evidence indicates that the concentrations of sodium and chloride ions outside the cell are respectively some 10 and 14 times higher than the concentrations inside the cell. In contrast, the concentration of potassium ions inside the cell is about 30 times higher than the concentration outside.

How can one account for this remarkable state of affairs? Part of the explanation is that the inside of the cell is negatively charged with respect to the outside of the cell by about 70 millivolts. Since like charges repel each other, this internal negative charge tends to drive chloride ions (Cl-) outward through the cell membrane and, at the same time, to impede their inward movement. In fact, a potential difference of 70 millivolts is just sufficient to maintain the observed disparity in the concentration of chloride ions inside the cell and outside it: Chloride ions diffuse inward and outward at equal rates. A drop of 70 millivolts across the membrane therefore defines the ‘equilibrium potential’ for chloride ions.

To obtain a concentration of potassium ions (K+) that is 30 times higher inside the cell than outside would require that the interior of the cell membrane be about 90 millivolts negative with respect to the exterior. Since the actual interior is only 70 millivolts negative, it falls short of the equilibrium potential for potassium ions by 20 millivolts. Evidently, the thirty-fold concentration can be achieved and maintained only if there is some auxiliary mechanism for ‘pumping’ potassium ions into the cell at a rate equal to their spontaneous net outward diffusion.

The pumping mechanism has the still more difficult task of pumping sodium ions (Na+) out of the cell against a potential gradient of 130 millivolts. This figure is obtained by adding the 70 millivolts of the internal negative charge to the equilibrium potential for sodium ions, which is 60 millivolts of internal positive charge. If it were not for this postulated pump, the concentration of sodium ions inside and outside the cell would be almost the reverse of what is observed.
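
These figures - roughly 70 millivolts for chloride, 90 for potassium and 60 for sodium - all follow from the Nernst relation between a concentration ratio and the voltage that exactly balances it. A minimal calculation, assuming the standard body-temperature thermal factor of about 26.7 millivolts and the approximate concentration ratios quoted earlier:

from math import log

# Nernst equation: the membrane voltage (inside relative to outside) at
# which an ion's inward and outward diffusion exactly balance:
#   E = (RT / zF) * ln([outside] / [inside])
RT_OVER_F = 26.7                         # millivolts at about 310 K

def equilibrium_potential_mv(out_over_in, charge):
    return (RT_OVER_F / charge) * log(out_over_in)

print(equilibrium_potential_mv(14, -1))      # chloride (14x outside):  ~ -70 mV
print(equilibrium_potential_mv(1 / 30, +1))  # potassium (30x inside):  ~ -91 mV
print(equilibrium_potential_mv(10, +1))      # sodium (10x outside):    ~ +61 mV

The 130-millivolt gradient against which sodium must be pumped is then just the 70 millivolts of internal negative charge added to the roughly 60-millivolt sodium equilibrium potential.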

In their classic studies of nerve-impulse transmission in the giant axon of the squid, A. L. Hodgkin, A. F. Huxley and Bernhard Katz of Britain demonstrated that the propagation of the impulse coincides with abrupt changes in the permeability of the axon membrane. When a nerve impulse has been triggered in some way, what can be described as a gate opens and lets sodium ions pour into the axon during the advance of the impulse, making the interior of the axon locally positive. The process is self-reinforcing in that the flow of some sodium ions through the membrane opens the gate further and makes it easier for others to follow. The sharp reversal of the internal polarity of the membrane constitutes the nerve impulse, which moves like a wave until it has travelled the length of the axon. In the wake of the impulse the sodium gate closes and a potassium gate opens, thereby restoring the normal polarity of the membrane within a millisecond or less.

With this understanding of the nerve impulse in hand, one is ready to follow the electrical events at the excitatory synapse. One might guess that if the nerve impulse results from an abrupt inflow of sodium ions and a rapid change in the electrical polarity of the axon’s interior, something similar must happen at the body and dendrites of the nerve cell in order to generate the impulse in the first place. Indeed, the function of the excitatory synaptic terminals on the cell body and its dendrites is to depolarize the interior of the cell membrane essentially by permitting an inflow of sodium ions. When the depolarization reaches a threshold value, a nerve impulse is triggered.

As a simple instance of this phenomenon, one can record the depolarization that occurs in a single motoneuron activated directly by the large nerve fibres that enter the spinal cord from special stretch-receptors known as ‘annulospiral endings’. These receptors in turn are located in the same muscle that is activated by the motoneuron under study. Thus, the whole system forms a typical reflex arc, such as the arc responsible for the patellar reflex, or ‘knee jerk’.

To conduct the experiment we anaesthetize an animal (most often a cat) and free by dissection a muscle nerve that contains these large nerve fibres. By applying a mild electric shock to the exposed nerve one can produce a single impulse in each of the fibres: Since the impulses travel to the spinal cord almost synchronously, they are referred to collectively as a ‘volley’. The number of impulses contained in the volley can be reduced by reducing the stimulation applied to the nerve. The volley strength is measured at a point just outside the spinal cord and is displayed on an oscilloscope. About half a millisecond after detection of a volley there is a wavelike change in the voltage inside the motoneuron that has received the volley. The change is detected by a microelectrode inserted in the motoneuron and is displayed on another oscilloscope.

What is found is that the negative voltage inside the cell becomes progressively less negative as more of the fibres impinging on the cell are stimulated to fire. This observed depolarization is in fact a simple summation of the depolarizations produced by each individual synapse. When the depolarization of the interior of the motoneuron reaches a critical point, a spike suddenly appears on the second oscilloscope, showing that a nerve impulse has been generated. During the spike the voltage inside the cell changes from about 70 millivolts negative to as much as 30 millivolts positive. The spike regularly appears when the depolarization, or reduction of membrane potential, reaches a critical level, which is usually between 10 and 18 millivolts. The only effect of a further strengthening of the synaptic stimulus is to shorten the time needed for the motoneuron to reach the firing threshold. The depolarizing potentials produced in the cell membrane by excitatory synapses are called excitatory postsynaptic potentials, or EPSP’s.
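
In outline, the summation seen on the oscilloscope is a simple calculation. A minimal sketch, assuming a resting potential of -70 millivolts and a firing threshold of 12 millivolts of depolarization (a value within the 10-to-18-millivolt range just quoted):

RESTING_MV = -70.0
THRESHOLD_DEPOLARIZATION_MV = 12.0       # assumed value in the 10-18 mV range

def motoneuron_response(psp_list_mv):
    # EPSPs are positive contributions; IPSPs would enter as negative ones.
    potential = RESTING_MV + sum(psp_list_mv)
    fires = (potential - RESTING_MV) >= THRESHOLD_DEPOLARIZATION_MV
    return potential, fires

print(motoneuron_response([2.5, 3.0, 2.0]))            # (-62.5, False)
print(motoneuron_response([2.5, 3.0, 2.0, 3.0, 2.5]))  # (-57.0, True): a spike

As the article explains below, inhibitory synapses enter the same algebraic sum with the opposite sign, pulling the potential away from the threshold.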

Through one barrel of a double-barrelled microelectrode one can apply a background current to change the resting potential of the interior of the cell membrane, either increasing it or decreasing it. When the potential is made more negative, the EPSP rises more slowly to a lower peak. Finally, when the charge inside the cell is reversed so as to be positive with respect to the exterior, the excitatory synapses give rise to an EPSP that is actually the reverse of the normal one.

These observations support the hypothesis that excitatory synapses produce what amounts virtually to a short circuit in the synaptic membrane potential. When this occurs, the membrane no longer acts as a barrier to the passage of ions, but lets them flow through in response to the differing electric potential on the two sides of the membrane. In other words, the ions are momentarily allowed to travel freely down their electrochemical gradients, which means that sodium ions flow into the cell and, to a lesser degree, potassium ions flow out. It is this net flow of positive ions that creates the excitatory postsynaptic potential. The flow of negative ions, such as the chloride ion, is apparently not involved. By artificially altering the potential inside the cell one can establish that there is no flow of ions, and therefore no EPSP, when the voltage drop across the membrane is zero.

How is the synaptic membrane converted from a strong ionic barrier into an ion-permeable state? It is currently accepted that the agency of conversion is the chemical transmitter substance contained in the vesicles inside the synaptic knob. When a nerve impulse reaches the synaptic knob, some of the vesicles are caused to eject the transmitter substance into the synaptic cleft. The molecules of the substance would take only a few microseconds to diffuse across the cleft and become attached to specific receptor sites on the surface membrane of the adjacent nerve cell.

Presumably the receptor sites are associated with fine channels in the membrane that are opened in some way by the attachment of the transmitter-substance molecules to the receptor sites. With the channels thus opened, sodium and potassium ions flow through the membrane thousands of times more readily than they normally do, thereby producing the intense ionic flux that depolarizes the cell membrane and produces the EPSP. In many synapses the current flows strongly for only about a millisecond before the transmitter substance is eliminated from the synaptic cleft, either by diffusion into the surrounding regions or as a result of being destroyed by ‘enzymes’. The latter process is known to occur when the transmitter substance is ‘acetylcholine’, which is destroyed by the enzyme ‘acetylcholinesterase’.

The substantiation of this general picture of synaptic transmission requires the solution of many fundamental problems. We do not know the specific transmitter substance for the vast majority of synapses in the nervous system, and we do not know whether there are many different substances or only a few. The only one identified with reasonable certainty in the mammalian central nervous system is ‘acetylcholine’. We know practically nothing about the mechanism by which a presynaptic nerve impulse causes the transmitter substance to be injected into the synaptic cleft. Nor do we know how the synaptic vesicles not immediately adjacent to the synaptic cleft are moved up to the firing line to replace the emptied vesicles. It is conjectured that the vesicles contain the enzyme system needed to recharge themselves. The entire process must be swift and efficient: The total amount of transmitter substance in synaptic terminals is enough for only a few minutes of synaptic activity at normal operating rates. There are also knotty problems to be solved on the other side of the synaptic cleft. What, for example, is the nature of the receptor sites? How are the ionic channels in the membrane opened up?

A second type of synapse has been identified in the nervous system. These are the synapses that can inhibit the firing of a nerve cell even though it may be receiving a volley of excitation. When inhibitory synapses are examined in the electron microscope, they look very much like excitatory synapses. (There are probably some subtle differences, but they need not concern us here.) Microelectrode recordings of the activity of single motoneurons and other nerve cells have now shown that the inhibitory postsynaptic potential (IPSP) is virtually a mirror image of the EPSP. Moreover, individual inhibitory synapses, like excitatory synapses, have a cumulative effect. The chief difference is simply that the IPSP makes the cell’s internal voltage more negative than it is normally, which is in a direction opposite to that needed for generating a spike discharge.

By driving the internal voltage of a nerve cell in the negative direction, inhibitory synapses oppose the action of excitatory synapses, which, of course, drive it in the positive direction. Hence, if the potential inside a resting cell is 70 millivolts negative, a strong volley of inhibitory impulses can drive the potential to 75 or 80 millivolts negative. One can easily see that if the potential is made more negative in this way the excitatory synapses find it more difficult to raise the internal voltage to the threshold point for the generation of a spike. Thus, the nerve cell responds to the algebraic sum of the internal voltage changes produced by excitatory and inhibitory synapses.

If, as in the experiment described earlier, the internal membrane potential is altered by the flow of an electric current through one barrel of a double-barrelled microelectrode, one can observe the effect of such changes on the inhibitory postsynaptic potential. When the internal potential is made less negative, the inhibitory postsynaptic potential is deepened. Conversely, when the potential is made more negative, the IPSP diminishes; it finally reverses when the internal potential is driven below minus 80 millivolts.

In an effort to discover the permeability changes associated with the inhibitory potential, we have altered the concentration of the ions normally found in motoneurons and introduced a variety of other ions that are not normally present. This can be done by impaling nerve cells with micropipettes that are filled with a salt solution containing the ion to be injected. The actual injection is achieved by passing a brief current through the micropipette.

If the concentration of chloride ions within the cell is in this way increased as much as three times, the inhibitory postsynaptic potential reverses and acts as a depolarizing current: That is, it resembles an excitatory potential. However, if the cell is heavily injected with sulfate ions, which are also negatively charged, there is no such reversal. This simple test shows that under the influence of the inhibitory transmitter substance, which is still unidentified, the subsynaptic membrane becomes permeable momentarily to chloride ions but not to sulfate ions. During the generation of the IPSP, the outflow of chloride ions is so rapid that it more than outweighs the flow of the other ions that generate the normal inhibitory potential.

We have now studied the effect of injecting motoneurons with more than 30 kinds of negatively charged ions. With one exception the hydrated ions (ions bound to water) to which the cell membrane is permeable under the influence of the inhibitory transmitter substance are smaller than the hydrated ions to which the membrane is impermeable. The exception is the formate ion (HCOO-), which may have an ellipsoidal shape and so be able to pass through membrane pores that block smaller spherical ions.

Apart from the formate ion, all the ions to which the membrane is permeable have a diameter not greater than 1.14 times the diameter of the potassium ion - that is, they are less than 2.9 angstrom units in diameter. Comparable investigations in other laboratories have found the same permeability effects, including the exceptional behaviour of the formate ion, in fishes, toads, and snails. It may well be that the ionic mechanism responsible for synaptic inhibition is the same throughout the animal kingdom.

The significance of these and other studies is that they strongly indicate that the inhibitory transmitter substance opens the membrane to the flow of potassium ions but not to sodium ions. It is known that the sodium ion is somewhat larger than any of the negatively charged ions, including the formate ion, that are able to pass through the membrane during synaptic inhibition. It is not possible, however, to test the effectiveness of potassium ions by injecting excess amounts into the cell because the excess is immediately diluted by an osmotic flow of water into the cell.

As indicated, the concentration of potassium ions inside the nerve cell is about 30 times greater than the concentration outside, and to maintain this large difference in concentration without the help of a metabolic pump the inside of the membrane would have to be charged 90 millivolts negative with respect to the exterior. This implies that if the membrane were suddenly made porous to potassium ions, the resulting outflow of ions would make the inside potential of the membrane even more negative than it is in the resting state, and that is just what happens during synaptic inhibition. The membrane must not simultaneously become porous to sodium ions, because they exist in much higher concentration outside the cell than inside and their rapid inflow would more than compensate for the potassium outflow. In fact, the fundamental difference between synaptic excitation and synaptic inhibition is that the membrane freely passes sodium ions in response to the former and largely excludes the passage of sodium ions in response to the latter.

This fine discrimination between ions that are not very different in size must be explained by any hypothesis of synaptic action. It is most unlikely that the channels through the membrane are created afresh and accurately maintained for a thousandth of a second every time a burst of transmitter substance is released into the synaptic cleft. It is more likely that channels of at least two different sizes are built directly into the membrane structure. In some way the excitatory transmitter substance would selectively unplug the larger channels and permit the free inflow of sodium ions. Potassium ions would simultaneously flow out and thus would tend to counteract the large potential change that would be produced by the massive sodium inflow. The inhibitory transmitter substance would selectively unplug the smaller channels that are large enough to pass potassium and chloride ions but not sodium ions.

To explain certain types of inhibition, other features must be added to this hypothesis of synaptic transmission. In the simple hypothesis chloride and potassium ions can flow freely through the pores of all inhibitory synapses. It has been shown, however, that the inhibition of the contraction of heart muscle by the vagus nerve is due almost exclusively to potassium-ion flow. On the other hand, in the muscle of crustaceans and in nerve cells in the snail’s brain synaptic inhibition is due largely to the flow of chloride ions. This selective permeability could be explained if there were fixed charges along the walls of the channels. If such charges were negative, they would repel negatively charged ions and prevent their passage: If they were positive, they would similarly prevent the passage of positively charged ions. One can now suggest that the channels opened by the excitatory transmitter are negatively charged and do not permit the passage of the negatively charged chloride ion, even though it is small enough to move through the channel freely.

One might wonder whether a given nerve cell can have excitatory synaptic action at some of its axon terminals and inhibitory action at others. The answer is no. Two different kinds of nerve cells are needed, one for each type of transmission and synaptic transmitter substance. This can readily be demonstrated by the effects of ‘strychnine’ and ‘tetanus toxin’ in the spinal cord: They specifically prevent inhibitory synaptic action and leave excitatory action unaltered. As a result the synaptic excitation of nerve cells is uncontrolled and convulsions result. The special types of cells responsible for inhibitory synaptic action are now being recognized in many parts of the central nervous system.

This account of communication between nerve cells is necessarily oversimplified, yet it shows that some significant advances are being made at the level of individual components of the nervous system. By selecting the most favourable situations we have been able to throw light on some details of nerve-cell behaviour. We can be encouraged by these limited successes. Nonetheless, the task of understanding in a comprehensive way how the human brain operates staggers the imagination.

The brain functions by complex neuronal, or nerve cell, circuits. Communication between neurons is both electrical and chemical and always travels from the dendrites of a neuron, through its ‘soma’, and out its axon to the dendrites of another neuron.

Dendrites of one neuron receive signals from the axons of other neurons through chemicals known as ‘neurotransmitters’. The neurotransmitters set off electrical charges in the dendrites, which then carry the signals electrochemically to the soma. The soma integrates the information, which is then transmitted electrochemically down the axon to its tip.

At the tip of the axon, small, bubble-like structures called ‘vesicles’ release neurotransmitters that carry the signal across the synapse, or gap, between two neurons. There are many types of neurotransmitters, including ‘norepinephrine’, ‘dopamine’, and ‘serotonin’. Neurotransmitters can be excitatory (that is, they excite an electrochemical response in the dendrite receptors) or inhibitory (they block the response of the dendrite receptors).

One neuron may communicate with thousands of other neurons, and many thousands of neurons are involved with even the simplest behaviour. It is believed that these connections and their efficiency can be modified, or altered by experience.

Scientists have used two primary approaches to studying how the brain works. One approach is to study brain function after parts of the brain have been damaged. Functions that disappear or that are no longer normal after injury to specific regions of the brain can often be associated with the damaged areas. The second approach is to study the responses of the brain to direct stimulation or to stimulation of various sense organs.

Neurons are grouped by function into collections of cells called ‘nuclei’. These nuclei are connected to form sensory, motor, and other systems. Scientists can study the function of ‘somatosensory’ (pain and touch), motor, olfactory, visual, auditory, language, and other systems by measuring the physiological (physical and chemical) changes that occur in the brain when these senses are activated. For example, electroencephalography (EEG) measures the electrical activity of specific groups of neurons through electrodes attached to the surface of the skull. Electrodes inserted directly into the brain can give readings of individual neurons. Changes in blood flow, glucose (sugar), or oxygen consumption in groups of active cells can also be mapped.

Although the brain appears symmetrical, how it functions is not. Each hemisphere is specialized and dominates the other in certain functions. Research has shown that hemispheric dominance is related to whether a person is predominantly right-handed or left-handed. In most right-handed people, the left hemisphere processes arithmetic, language, and speech. The right hemisphere interprets music, complex imagery, and spatial relationships and recognizes and expresses emotion. In left-handed people, the pattern of brain organization is more variable.

Hemispheric specialization has traditionally been studied in people who have sustained damage to the connections between the two hemispheres, as may occur with a stroke, an interruption of blood flow to an area of the brain that causes the death of nerve cells in that area. The division of functions between the two hemispheres has also been studied in people who have had the connection between the two hemispheres surgically cut in order to control severe epilepsy, a neurological disease characterized by convulsions and loss of consciousness.

A ‘neural network’, in computer science, is a highly interconnected network of information-processing elements that mimics the connectivity and functioning of the human brain. Neural networks address problems that are often difficult for traditional computers to solve, such as speech and pattern recognition. They also provide some insight into the way the human brain works. One of the most significant strengths of neural networks is their ability to learn from a limited set of examples.

The neural networks that are increasingly being used in computing mimic the networks of neurons found in the nervous systems of vertebrates. The main characteristic of a biological neural network is that each neuron, or nerve cell, receives signals from many other neurons through its branching dendrites. The neuron produces an output signal that depends on the values of all the input signals and passes this output on to many other neurons along a branching fibre called an ‘axon’. In an artificial neural network, input signals, such as signals from a television camera’s image, fall on a layer of input nodes, or computing units. Each of these nodes is linked to several other ‘hidden’ nodes between the input and output nodes of the network. Each hidden node performs a calculation on the signals reaching it and sends a corresponding output signal to other nodes. The final outputs are highly processed versions of the input.

Neural networks were initially studied by computer and cognitive scientists in the late 1950s and early 1960s in an attempt to model sensory perception in biological organisms. Neural networks have been applied to many problems since they were first introduced, including pattern recognition, handwritten character recognition, speech recognition, financial and economic modelling, and next-generation computing models.

Neural networks fall into two categories: Artificial neural networks and biological neural networks. Artificial neural networks are modelled on the structure and functioning of biological neural networks. The most familiar biological neural network is the human brain. The human brain is composed of approximately 100 billion nerve cells called ‘neurons’ that are massively interconnected. Typical neurons in the human brain are connected to on the order of 10,000 other neurons, with some types of neurons having more than 200,000 connections. The extensive number of neurons and their high degree of interconnectedness are part of the reason that the brains of living creatures are capable of making a vast number of calculations in a short amount of time.

Biological neurons have a fairly simple large-scale structure, although their operation and small-scale structure are immensely complex. Neurons have three main parts: A central cell body, called the ‘soma’, and two different types of branched, treelike structures that extend from the soma, called ‘dendrites’ and ‘axons’. Information from other neurons, in the form of electrical impulses, enters the dendrites at connection points called ‘synapses’. The information flows from the dendrites to the soma, where it is processed. The output signal, a train of impulses, is then sent down the axon to the synapses of other neurons.

Artificial neurons, like their biological counterparts, have simple structures and are designed to mimic the function of biological neurons. The main body of an artificial neuron is called a ‘node’ or ‘unit’. Artificial neurons may be physically connected to one another by wires that mimic the connections between biological neurons if, for instance, the neurons are simple integrated circuits. Neural networks are usually simulated on traditional computers, however, in which case the connections between processing nodes are not physical but virtual.

Artificial neurons may be either discrete or continuous. Discrete neurons send an output signal of 1 if the sum of received signals is above a certain critical value called a ‘threshold value’; otherwise they send an output signal of 0. Continuous neurons are not restricted to sending output values of only 1s and 0s: Instead they send an output value between 0 and 1 depending on the total amount of input that they receive - the stronger the received signal, the stronger the signal sent out from the node, and vice versa. Continuous neurons are the most commonly used in actual artificial neural networks.
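
Both behaviours can be written out directly; a minimal sketch, assuming a conventional threshold of 0.5 for the discrete neuron and the usual logistic curve for the continuous one:

import math

def discrete_output(total_input, threshold=0.5):
    # A discrete neuron: 1 if the summed input exceeds the threshold, else 0.
    return 1 if total_input > threshold else 0

def continuous_output(total_input):
    # A continuous neuron: a graded value between 0 and 1 that rises
    # smoothly with the strength of the summed input.
    return 1.0 / (1.0 + math.exp(-total_input))

print(discrete_output(0.2), discrete_output(0.9))        # 0 1
print(continuous_output(-2.0), continuous_output(2.0))   # about 0.12 and 0.88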

The architecture of a neural network is the specific arrangement and connections of the neurons that make up the network. One of the most common neural network architectures has three layers. The first layer is called the ‘input layer’ and is the only layer exposed to external signals. The input layer transmits signals to the neurons in the next layer, which is called a ‘hidden layer’. The hidden layer extracts relevant features or patterns from the received signals. Those features or patterns that are considered important are then directed to the output layer, the final layer of the network. Sophisticated neural networks may have several hidden layers, feedback loops, and time-delay elements, which are designed to make the network as efficient as possible in discriminating relevant features or patterns from the input layer.
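
A forward pass through such a three-layer arrangement takes only a few lines; a minimal sketch, assuming arbitrary layer sizes and random illustrative weights:

import numpy as np

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(4, 3))      # 3 input nodes feeding 4 hidden nodes
W_output = rng.normal(size=(2, 4))      # 4 hidden nodes feeding 2 output nodes

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(signals):
    hidden = sigmoid(W_hidden @ signals)   # hidden layer extracts features
    return sigmoid(W_output @ hidden)      # output layer reports the result

print(forward(np.array([0.9, 0.1, 0.4])))  # two highly processed output values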

Neural networks differ greatly from traditional computers (for example, personal computers, workstations, mainframes) in both form and function. While neural networks use a large number of simple processors to do their calculations, traditional computers generally use one or a few extremely complex processing units. Neural networks also do not have a centrally located memory, nor are they programmed with a sequence of instructions, as are all traditional computers.

The information processing of a neural network is distributed throughout the network in the form of its processors and connections, while the memory is distributed in the form of the weights given to the various connections. The distribution of both processing capability and memory means that damage to part of the network does not necessarily result in processing dysfunction or information loss. This ability of neural networks to withstand limited damage and continue to function well is one of their greatest strengths.

Neural networks also differ greatly from traditional computers in the way they are programmed. Rather than using programs that are written as a series of instructions, as do all traditional computers, neural networks are ‘taught’ with a limited set of training examples. The network is then able to ‘learn’ from the initial examples to respond to information sets that it has never encountered before. The resulting values of the connection weights can be thought of as a ‘program’.

Neural networks are usually simulated on traditional computers. The advantage of this approach is that computers can easily be reprogrammed to change the architecture or learning rule of the simulated neural network. Since the computation in a neural network is massively parallel, the processing speed of a simulated neural network can be increased by using massively parallel computers - computers that link together hundreds or thousands of CPUs in parallel to achieve very high processing speeds.

In all biological neural networks the connections between particular dendrites and axons may be reinforced or discouraged. For example, connections may become reinforced as more signals are sent down them, and discouraged when signals are sent down them infrequently. As certain neural pathways, or dendrite-axon connections, are used more often, they become further reinforced. Paths between neurons that are rarely used slowly atrophy, or decay, making it less likely that signals will be transmitted along them.

The role of connection strength between neurons in the brain is crucial: Scientists believe that connection strengths determine, to a great extent, the way in which the brain processes the information it takes in through the senses. Neuroscientists studying the structure and function of the brain believe that various patterns of neurons firing can be associated with specific memories. In this theory, the strength of the connections between the relevant neurons determines the strength of the memory. Important information that needs to be remembered may cause the brain to constantly reinforce the pathways between the neurons that form the memory, while relatively unimportant information will not receive the same degree of reinforcement.

To mimic the way in which biological neurons reinforce certain axon-dendrite pathways, the connections between artificial neurons in a neural network are given adjustable connection weights, or measures of importance. When signals are received and processed by a node, they are multiplied by a weight, added up, and then transformed by a nonlinear function. The effect of the nonlinear function is to cause the sum of the input signals to approach some value, usually +1 or 0. If the signals entering the node add up to a positive number, the node sends an output signal that approaches +1 out along all of its connections, while if the signals add up to a negative value, the node sends a signal that approaches 0. This is similar to a simplified model of how a biological neuron functions - the larger the input signal, the larger the output signal.

Computer scientists teach neural networks by presenting them with desired input-output training sets. The input-output training sets are related patterns of data. For instance, a sample training set might consist of ten different photographs for each of ten different faces. The photographs would then be digitally entered into the input layer of the network. The desired output would be for the network to signal one of the neurons in the output layer of the network per face. Beginning with equal, or random, connection weights between the neurons, the photographs are presented to the network, and the network’s output is computed and compared to the target output. Small adjustments are then made to the connection weights to reduce the difference between the actual output and the target output. The input-output set is again presented to the network and further adjustments are made to the connection weights, because the first few times that the input is entered, the network will usually choose the incorrect output neuron. After repeating the weight-adjustment process many times for all input-output patterns in the training set, the network learns to respond in the desired manner.
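
A minimal sketch of this cycle - present a pattern, compare the output with the target, nudge the weights, repeat - using a deliberately naive adjustment rule that simply keeps any small change that shrinks the error; practical systems instead use the back-propagation scheme described below:

import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=2)             # one continuous output node, 2 inputs

def output(w, x):
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def error(w, x, target):
    return (output(w, x) - target) ** 2  # squared difference from the target

x, target = np.array([0.8, 0.2]), 1.0
for _ in range(200):                     # repeat the adjustment cycle many times
    for i in range(weights.size):
        step = np.zeros_like(weights)
        step[i] = 0.01                   # a small trial adjustment
        if error(weights + step, x, target) < error(weights, x, target):
            weights += step
        elif error(weights - step, x, target) < error(weights, x, target):
            weights -= step
print(output(weights, x))                # the output creeps toward the target 1.0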

A neural network is said to have learned when it can correctly perform the tasks for which it has been trained. Neural networks are able to extract the important features and patterns of a class of training examples and generalize from these to correctly process new input data that they have not encountered before. For a neural network trained to recognize a series of photographs, generalization would be demonstrated if a new photograph presented to the network resulted in the correct output neuron being signalled.

A number of different neural network learning rules, or algorithms, exist and use various techniques to process information. Common arrangements use some sort of system to adjust the connection weights between the neurons automatically. The most widely used scheme for adjusting the connection weights is called ‘error back-propagation’, developed independently by American computer scientists Paul Werbos (in 1974), David Parker (in 1984/1985), and David Rumelhart, Ronald Williams, and others (in 1985). The back-propagation learning scheme compares a neural network’s calculated output to a target output and calculates an error adjustment for each of the nodes in the network. The neural network adjusts the connection weights according to the error values assigned to each node, beginning with the connections between the last hidden layer and the output layer. After the network has made adjustments to this set of connections, it calculates error values for the preceding layer and makes adjustments. The back-propagation algorithm continues in this way, adjusting all of the connection weights between the hidden layers until it reaches the input layer. At this point, it is ready to calculate another output.
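
A minimal sketch of one such back-propagation cycle for a network with a single hidden layer, assuming an illustrative XOR task, layer sizes, and learning rate; error values are computed for the output layer first and then propagated back:

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # input patterns
T = np.array([[0.], [1.], [1.], [0.]])                  # target outputs (XOR)

ones = np.ones((4, 1))                   # constant bias input for each pattern
W1 = rng.normal(size=(3, 4))             # (2 inputs + bias) -> 4 hidden nodes
W2 = rng.normal(size=(5, 1))             # (4 hidden + bias) -> 1 output node
lr = 0.5                                 # learning rate (illustrative)

for _ in range(20000):
    H = sigmoid(np.hstack([X, ones]) @ W1)        # forward pass: hidden layer
    Y = sigmoid(np.hstack([H, ones]) @ W2)        # forward pass: output layer
    err_out = (Y - T) * Y * (1 - Y)               # error values, output layer
    err_hid = (err_out @ W2[:4].T) * H * (1 - H)  # errors propagated back
    W2 -= lr * np.hstack([H, ones]).T @ err_out   # adjust the last connections
    W1 -= lr * np.hstack([X, ones]).T @ err_hid   # then the layer before them
print(np.round(Y.ravel(), 2))            # outputs should approach 0 1 1 0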

Neural networks have been applied to many tasks that are easy for humans to accomplish, but difficult for traditional computers. Because neural networks mimic the brain, they have shown much promise in so-called ‘sensory processing’ tasks such as speech recognition, pattern recognition, and the transcription of hand-written text. In some settings, neural networks can perform as well as humans. Neural-network-based backgammon software, for example, rivals the best human players.

While traditional computers still outperform neural networks in most situations, neural networks are superior in recognizing patterns in extremely large data sets. Furthermore, because neural networks have the ability to learn from a set of examples and generalize this knowledge to new situations, they are excellent for work requiring adaptive control systems. For this reason, the United States National Aeronautics and Space Administration (NASA) has extensively studied neural networks to determine whether they might serve to control future robots sent to explore planetary bodies in our solar system. In this application, robots could be sent to other planets, such as Mars, to carry out significant and detailed exploration autonomously.

An important advantage that neural networks have over traditional computer systems is that they can sustain damage and still function properly. This design characteristic of neural networks makes them very attractive candidates for future aircraft control systems, especially in high performance military jets. Another potential use of neural networks for civilian and military use is in pattern recognition software for radar, sonar, and other remote-sensory devices.

Within the central nervous system, which consists of the brain and spinal cord, neurotransmitters pass from neuron to neuron. In the peripheral nervous system, which is made up of the nerves that run from the central nervous system to the rest of the body, the chemical signals pass between a neuron and an adjacent muscle or gland cell.

Nine chemical compounds - belonging to three chemical families - are widely recognized as neurotransmitters. In addition, certain other body chemicals, including adenosine, histamine, enkephalin, endorphin, and epinephrine, have neurotransmitter-like properties. Experts believe that there are many more neurotransmitters as yet undiscovered.

The first of the three families is composed of amines, a group of compounds containing molecules of carbon, hydrogen, and nitrogen. Among the amine neurotransmitters are acetylcholine, norepinephrine, dopamine, and serotonin. Acetylcholine is the most widely used neurotransmitter in the body, and neurons that leave the central nervous system, for example, those running to skeletal muscle, use acetylcholine as their neurotransmitter: Neurons that run to the heart, blood vessels, and other organs may use acetylcholine or norepinephrine. Dopamine is involved in the movement of muscles, and it controls the secretion of the pituitary hormone prolactin, which triggers milk production in nursing mothers.

The second neurotransmitter family is composed of amino acids, organic compounds containing both an amino group (NH2) and a carboxylic acid group (COOH). Amino acids that serve as neurotransmitters include glycine, glutamic and aspartic acids, and gamma-aminobutyric acid (GABA). Glutamic acid and GABA are the most abundant neurotransmitters within the central nervous system, and especially in the cerebral cortex, which is largely responsible for such higher brain functions as thought and interpreting sensations.

The third neurotransmitter family is composed of peptides, which are compounds that contain at least two, and sometimes as many as 100, amino acids. Peptide neurotransmitters are poorly understood, but scientists know that the peptide neurotransmitter called ‘substance P’ influences the sensation of pain.

In general, each neuron uses only a single compound as its neurotransmitter; however, some neurons outside the central nervous system are able to release both an amine and a peptide neurotransmitter.

Neurotransmitters are manufactured from precursor compounds like amino acids, glucose, and the dietary amine called ‘choline’. Neurons modify the structure of these precursor compounds in a series of reactions with enzymes. Neurotransmitters that come from amino acids include serotonin, which is derived from tryptophan; dopamine and norepinephrine, which are derived from tyrosine; and glycine, which is derived from threonine. Among the neurotransmitters made from glucose are glutamate, aspartate, and GABA. Choline serves as the precursor for acetylcholine.

In the nervous system, a message-carrying impulse travels from one end of a nerve cell to the other by means of an electrical impulse. When it reaches the terminal end of a nerve cell, the impulse triggers tiny sacs called ‘presynaptic vesicles’ to release their contents, chemical messengers called neurotransmitters. The neurotransmitters float across the synapse, or gap between adjacent nerve cells. When they reach the neighbouring nerve cell, the neurotransmitters fit into specialized receptor sites much as a key fits into a lock, causing that nerve cell to ‘fire’, or generate an electrical message-carrying impulse. As the message continues through the nervous system, the presynaptic cell absorbs the excess neurotransmitters and repackages them in presynaptic vesicles in a process called ‘neurotransmitter reuptake’.

Neurotransmitters are released into a microscopic gap, called a ‘synapse’, that separates the transmitting neuron from the receiving cell. The transmitting neuron is termed the ‘presynaptic cell’, while the receiving cell is termed the ‘postsynaptic cell’.

After their release into the synapse, neurotransmitters combine chemically with receptor sites in the surface membrane of the postsynaptic cell. When this combination occurs, the voltage, or electrical force, of the postsynaptic cell is either increased (excited) or decreased (inhibited).

When a neuron is in its resting state, its voltage is about -70 millivolts. An excitatory neurotransmitter alters the membrane of the postsynaptic neuron, making it possible for ions (electrically charged molecules) to move back and forth across the neuron’s membranes. This flow of ions makes the neuron’s voltage rise towards zero. If enough excitatory receptors have been activated, the postsynaptic neuron responds by firing, generating a nerve impulse that causes its own neurotransmitter to be released into the next synapse. An inhibitory neurotransmitter causes different ions to pass back and forth across the postsynaptic neuron’s membrane, lowering the nerve cell’s voltage to -80 or -90 millivolts. The drop in voltage makes it less likely that the postsynaptic cell will fire.

If the postsynaptic cell is a muscle cell rather than a neuron, an excitatory neurotransmitter will cause the muscle to contract. If the postsynaptic cell is a gland cell, an excitatory neurotransmitter will cause the cell to secrete its contents.

While most neurotransmitters interact with their receptors to create new electrical nerve impulses that energize or inhibit the adjoining cell, some neurotransmitter interactions do not generate or suppress nerve impulses. Instead, they interact with a second type of receptor that changes the internal chemistry of the postsynaptic cell by either causing or blocking the formation of chemicals called ‘second messenger molecules’. These second messengers regulate the postsynaptic cell’s biochemical processes and enable it to conduct the maintenance necessary to continue synthesizing neurotransmitters and conducting nerve impulses. Examples of second messengers, which are formed and act entirely within the postsynaptic cell, include cyclic adenosine monophosphate, diacylglycerol, and inositol phosphates.

Once neurotransmitters have been secreted into synapses and have passed on their chemical signals, the presynaptic neuron clears the synapse of neurotransmitter molecules. For example, acetylcholine is broken down by the enzyme acetylcholinesterase into choline and acetate. Neurotransmitters like dopamine, serotonin, and GABA are removed by a physical process called ‘reuptake’. In reuptake, a protein in the presynaptic membrane acts as a sort of sponge, causing the neurotransmitters to reenter the presynaptic neuron, where they can be broken down by enzymes or repackaged for reuse.

Neurotransmitters also play a role in Parkinson disease, which slowly attacks the nervous system, causing symptoms that worsen over time. Fatigue, mental confusion, a mask-like facial expression, stooping posture, shuffling gait, and problems with eating and speaking are among the difficulties suffered by Parkinson victims. These symptoms have been partly linked to the deterioration and eventual death of neurons that run from the base of the brain to the basal ganglia, a collection of nerve cells that manufacture the neurotransmitter dopamine. The reasons why such neurons die are yet to be understood, but the related symptoms can be alleviated. L-dopa, or levodopa, widely used to treat Parkinson disease, acts as a supplementary precursor for dopamine. It causes the surviving neurons in the basal ganglia to increase their production of dopamine, thereby compensating to some extent for the disabled neurons.

Many other effective drugs have been shown to act by influencing neurotransmitter behaviour. Some drugs work by interfering with the interactions between neurotransmitters and receptors. For example, belladonna decreases intestinal cramps in such disorders as irritable bowel syndrome by blocking acetylcholine from combining with receptors. This process reduces nerve signals to the bowel wall, which prevents painful spasms.

Other drugs block the reuptake process. One well-known example is the drug fluoxetine (Prozac), which blocks the reuptake of serotonin. Serotonin then remains in the synapse for a longer time, and its ability to act as a signal is prolonged, which contributes to the relief of depression and the control of obsessive-compulsive behaviour.
