Philosophy Paper Summaries

Made during PhD, UCL 2012, Sanjay Manohar

Richard Joyce Philosophical Explorations   9:133-48 2006

Metaethics and the empirical sciences

Proximal genealogy (neuroscience of moral deliberation) vs distal (evolution of the moral sense in hominids) – does not presuppose beliefs are true. Origin/genealogy of a belief can sometimes help determine its truth (‘no beliefs are innate’).

  1. Is cognitivist/noncognitivist debate an empirical matter?
  2. Can empirical data undermine moral naturalism (moral properties can supervene on natural ones, without there being a deductive relation between ‘is’ statements and ‘ought’ statements)?

Thought expt: ‘belief pill’ gives you a specific belief; what happens when you discover that you have previously been given a belief pill? The belief is rendered unjustified (or never justified) & should be revised, “demanding that…you should take the antidote”. [my italics. Does belief work in this way?] Need to have “an open mind about whether there exists anything that is morally right and wrong”.


Discovery of “inbuilt faculty for simple arithmetic” does not imply we are unjustified in holding the belief 1+1=2, because we have no grasp of how it might have improved evolutionary success, “independently of assuming its truth”. Truth is “a background assumption to any reasonable hypothesis of how this belief may have come to be innate”.


For moral beliefs, perhaps “accuracy is not the route to practical success.”


John Sutton   CUP  1998

Philosophy and Memory Traces: Descartes to connectionism

Descartes’ place in philosophy as “the demonic source of modern alienation”

Attempt to put connectionism in historical context. PDP justified as reductionism.

Memory is distributed = extended (over many units) + superposed (many in one unit)

Historical philosophers’ references to the brain have been suppressed/dismissed, as ‘confusion of the logical with the empirical, [or] the mistaken offering of… neuropsychological answers to epistemological questions’

Metaphor (e.g. in memory): generative rather than veridical, requires us to adopt and adapt familiar terms, and prevents recognising domain-peculiar problems. Also difficulty in separating ‘the literal and the figural’.


Bergson An Introd to Metaphysics   1912

  1. Absolute is that which cannot be reached by analysis or description, only by being itself. Only intuition can reach essence. But
    1. need to use multiple comparisons: “images as dissimilar as possible [to] prevent any one of them from usurping the place of intuition”, to “accustom consciousness to a particular and clearly-defined disposition…in order to appear to itself as it really is”
    2. still impossible to “reconstruct an object with concepts”: how much weight for each of the multiple facets of a description, and where to start.
    3. There is external reality, given immediately to the mind as change (anti-idealist, anti-realist)
  2. Variety of qualities: Starts with phenomenology: surface = perception + memory + action. Within is a succession of states. But not linear (lines can be compared and are superposable), as no two moments in a single consciousness are similar; successive moments ‘contain’ earlier moments. “A consciousness which could experience two identical moments would be a consciousness without memory… In what other way could one represent unconsciousness?”, c.f. a spectrum (infinite continuum) [but does this expunge mathematical continua, which require identity relations?],  but “excluding any idea of juxtaposition, reciprocal externality [of successive shades], and extension”.
  3. Duration is “survival of the past into the present”. “There is no consciousness without memory”. A “homogeneous state… which changes and endures… a perpetual becoming”
  4. “Rest is never more than apparent, or, rather, relative.”
  5. Both unity & multiplicity, continuity and direction: Consciousness as a continuous pulling action that draws out the track of time.
  6. Psychological concepts (sensations feelings ideas) are elements not parts. They are like sketches which can get you back to an intuition you have had, but can’t be used to construct an impression of something you have not experienced. Describing is an irreversible operation. Reconstructing experience from symbols ‘implies such an absurdity that it would never occur to anyone’ that in fact recollecting is dealing with fragmented symbols not fragmented things. In real life we have no difficulty passing from concepts to things because we have goals and interests. Concepts arise as “practical questions…put to reality”. “Our intellect…takes up its position in ready-made concepts, and endeavors to catch in them, as in a net, something of the reality which passes.” Instead we should “place ourselves in the moving thing itself”.
  7. Criticises Mill, Taine – applying psychological method to metaphysical objective. However much they explore concepts of psychical states, “the ego always escapes them, so that they finish by seeing in it nothing but a vain phantom.” [c.f. Dennett?]. Keep “multiplying points of contact and exploring the intervals” shrinking the personality till it has no size. “Philosophical empiricism…denies the existence of the original on the ground that it is not found in the translation”
  8. Criticises rationalism – try to create unity of self from fragmented description and concepts, until the ego becomes infinite and incapable of distinguishing individuals. “Reasoning about elements of a translation as though they were parts of the original”.
  9. “KPR ends in establishing that Platonism…becomes legitimate if Ideas are relations [rather than things]…and that the ready-made idea…is in fact, as Plato held, the common basis alike of thought and of nature.” KPR assumes the intellect can only “pour…experience into pre-existing molds” “Doctrines which have a certain basis of intuition escape the Kantian criticism”
  10. Duration for a particular thing (becoming) distinguished from time (Becoming in general); time is immobile for me. It flows only because of “continual change of quality”. Without quality, it is “merely the theatre of the change”.
  11. “So many snapshots” of a body’s movement in space “shall never make a movement”. Movements are only conceptualisable this way – sequences of positions – making movement mysterious; but in intuition, movement (passage of time) is primary. Duration is “a bottomless, bankless river, which flows without assignable force in a direction which could not be defined.”
  12. He feels that the intuition moves “between the two extreme limits” of a moving eternity and a proliferation of repetitions – i.e. between unity and multiplicity. But it is not a conceptual synthesis.


James Russell   Explaining Mental Life: Some philosophical issues in psychology   1984

6 questions of psychology

  1. ‘performance-descriptive’ – simply measurement and characterisation, e.g. content of thought and properties of behaviour
  2. ‘competence-characterising’ – what is the nature of the competence that underlies behaviour, e.g. Piaget, Chomsky, Gibson. Dependent on philosophy. Broader than hypotheses.
  3. ‘genetic question’ – purpose in evolution
  4. ‘performance question’ – by what internal procedures are tasks performed. Linguistic vs spatial vs algorithmic explanations.
  5. ‘mechanistic question’ – how the brain does it. Needs (4) to make theories.
  6. ‘differential question’ – how and why individuals differ (includes 1,3,4,5)

Functionalism is dualistic. The word ‘competence’ rather than ‘capacity’ allows us to circumvent this problem.

“Psychology is the study of the determination of mental competences.”

“The fact that we can know at all is explicable, if not reducible, to competences.”

Sceptical arguments: behaviour might not be tightly determined:

  1. empirical indeterminacy – one phenomenon could have come about in several ways, strategies, individual variation, multiple routes in development. But – there is also partial determination and constraint, and same is true of other sciences.
  2. conceptual indeterminacy – having intentions, knowing or believing things are ‘slippery concepts’

i.e. Intention does not cause behaviour (or if it does, it is in a nondeterministic way): e.g.

  • Wittgenstein’s voluntary action is marked by the absence of “certain prior determining mental events”. “Explanation of behaviour is an hermeneutical business which must proceed by placing an action within a social context”.
  • A.I. Melden distinguishes explanations of movements from those of actions.
  • Charles Taylor: psychology is a hermeneutical science, and all “systems of [behavioural] interpretation [are] enshrined in the human conceptual system” in which we are trapped. “Hermeneutic sciences are also moral sciences”.
  • J. Shotter: “we become human by way of having our actions interpreted by others” – dismissed as ‘a monstrosity’ and ‘behaviourist’.
  • Richard Peters (Concept of Motivation) vs Clark Hull (behaviourist): should not confuse ‘drive’ as basic needs causing organism movement, and, ‘drive’ as articulatable reasons in humans. Reasons vs Causes; causes are only in abnormal or simple behaviour. Cf Freud primary (unconscious promptings from the id and superego) and secondary processes (ego and conscious aspects of the superego) in mental economy.
  • Stephen Toulmin: many degrees from cause to reason; learnt in development
  • Donald Davidson (Actions reasons and causes 1963): to explain voluntary actions one needs both a ‘pro-attitude’ and a ‘belief’ in the relation with outcome. Argues that “although mental events can cause actions,… these do not permit scientific explanation”. Attribution of intention “necessarily operate[s] within a system of concepts in part determined by the structure of the beliefs and desires of the agent himself”. = anomalous monism
  • Philip Pettit (Philosophical problems in psychology 1979) – says “rationality can be no respecter of persons”, i.e. human rationality is assumed in psychology, but what happens in the case of the individual?
  • Colin McGinn distinguishes between ideographic (how is it with individuals?) and nomothetic (deduced) laws. Claims that ratiocination, and therefore psychology, are ideographic.

But in rebuttal note

  • Piaget’s ‘genetic epistemology’ that psychology can contribute to theoretical physics of temporal duration and contiguity;
  • and that “philosophers often take the model of psychological explanation to be the common-sense, everyday schema” of explanation, rather than the “quest for the determinants of the mental competence of the human species”.
  • H.P. Grice (Meaning, 1957): meaning “can be expressed ultimately in terms of the utterer’s meaning” – the intention to induce a belief plus the meta-intention that the audience should recognise this intention. Stresses pragmatic side. [does this address how one can ask ‘what does this word mean’?]

But “linguistic form and meaning qua intention are logically indissociable.”

Rorty: all that we cover by psychological terms could be covered by material terms with no loss of any significant meaning.

Cf Reductionism: replaces psychological terms with physiological ones


Foucault Abnormal  1974

Herostratus – set fire to a temple – simple-mindedness associated with malice and amorality

Alcibiades – cut tail off his dog? – character combines great qualities and many defects

Don Juanism – pathological search for new conquests


W.V.O. Quine   Two Dogmas of Empiricism   1951

Kant analytic vs synthetic = ‘containment’ (metaphorical) of predicate within a subject

Independent of fact

  • Meaning != extension (A has a heart != A has a vascular system)
  • Intension != Aristotle’s essence (man is rational: property of things); intensions (meaning) ‘is essence divorced from the object and wedded to the word’
  • Definition:
  • Extralogical synonym pairs: ‘bachelor’ != ‘unmarried man’. Carnap’s state-descriptions are useful for induction but not for general analyticity – what is a synonym?
  • Dictionaries are empirical: synonyms == if replaceable in the particular context
  • Explicit definitions: This creates a formal meta-language: “correlations between two languages, the one the part of the other”, but does not explain real synonymy
  • ‘True interchangability’ (Leibniz’s salva veritate) moves the problem to ‘true wordhood’ – is “bachelor of arts” a word?
  • Moreover, statements using “are, is, necessarily, all”, use language that presupposes a definition of “analytic” → circular, and involve counterfactual reasoning
  • Extensional predicates involve extralogical matters; extensional interchangeability does not guarantee cognitive synonymy. (bachelor ?= unmarried man)

⇒ truth interchangeability is not sufficient for analyticity, unless a language already has clearly defined intensional adverbs e.g. ‘necessarily’ – which requires a previously understood notion of analyticity

  • Cognitive synonymy could be the same as biconditional (iff)
Semantic rules
  • I do not know whether the statement 'Everything green is extended' is analytic. The trouble is not with 'green' or 'extended,' but with 'analytic.'
  • We cannot appeal to sublanguages for analyticity: Analyticity is equally hard for artificial languages:
    • Definitions like ‘S is analytic for L’ cannot be understood without understanding analyticity. That is, defining sublanguages just transfers the problem from one language to another.
    • Definitions like “analyticity in language L = being true according to a semantic rule” simply introduce the new term “semantic rule”, which itself is undefined.
    • Defining an artificial language as the pairing of (a natural language, + set of analytic statements) is bootstrapping
  • True statements could be false if either 1) factually the world happened to be different, or 2) words happened to mean different things
  • The assumption that in some statements, the factual component could be null, is “an unempirical dogma of empiricists, a metaphysical article of faith” – analytic vs synthetic
  • Synonymy = alike in method of empirical confirmation
  • Radical reductionism:
    • Locke/Hume – term definitions may only involve sense-data.
    • But: ambiguous between sense-data as sensory-events or sensory-qualities
    • Russell/Frege: reconceived with statements as units
  • Carnap (Aufbau): spatiotemporal points give experiences, and simplest rules are to be found that assign truth-values to statements “Quality q is at point-instant xyzt”.
    • But: his canons counsel us in the use, but not elimination, of the connective “is at”
  • Dogma of reductionism: for each synthetic statement, certain events increase or decrease its likelihood of being true
  • Identical with the existence of ‘analytic’ statements which are vacuously (ipso facto) confirmed come what may
  • Duality between analyticity and verificationalism → individual statements cannot be independently confirmed/disconfirmed, and definition cannot explicate natural meanings outside of a linguistic meaning system
Empiricism without dogmas
  • Any statements can be held true ‘come what may’, if we make drastic enough adjustments to the system of knowledge.
  • Even logical laws may be amendable
  • Each statement in a system has a distance from a ‘sensory periphery’
    • Statements germane to particular experiences = peripheral
    • = how recalcitrant an experience needs to be to refute it
    • Physics/logic/ontology are central
  • Physical objects and gods of Homer differ only in degree – epistemologically comparable
  • Ontological questions are on par with questions of science:
    • Notes on Existence and Necessity (J Phil 1943): Whether to countenance classes as entities = whether to quantify with respect to variables which take classes as values
    • Carnap’s reply: Question not of fact, but of choosing a convenient language form / conceptual framework
    • Quine’s reply: Scientific hypotheses in general rely on such choice. Double standard between ontological/scientific relies on analytic/synthetic distinction
  • Carnap’s pragmatism leaves off at the imagined boundary of analytic/synthetic.
  • Whether experiences alter terms themselves, or the truth of statements involving terms, is determined by conservatism, simplicity, recalcitrance etc.
  • “Analyticity at first seemed most naturally definable by appeal to a realm of meanings. On refinement, the appeal to meanings gave way to an appeal to synonymy or definition. But definition turned out to be a will-o'-the-wisp, and synonymy turned out to be best understood only by dint of a prior appeal to analyticity itself”
  • Introducing linguistic rules for analyticity does not explain the concept itself. Analytic-synthetic division is a dogma.
  • Directly connected with belief that meaning is how statements can be confirmed or disconfirmed – undisconfirmable = analytic
  • “Taken collectively, science has its double dependence upon language and experience; but this duality is not significantly traceable into the statements of science taken one by one. The unit of empirical significance is” not the statement, but “the whole of science”.
  • Distinguishing scientific facts from definitions = a matter of “pragmatic inclination”; evidence can change either.


Lawlor Philosophical Psychology   2010

Elusive reasons: a problem for first-person authority

  • “Social psychology’s unrelenting skepticism about our self-knowledge” (Nisbett & Wilson 1977)
  • “my report has a distinctive sort of authority not because I am an especially well-placed observer of my own mental states. Rather, my self-ascription is authoritative because I author the belief ascribed—I take a kind of responsibility for my belief, which no one else can take” – fallibility does not controvert the responsibility (Richard Moran 1999)
    • Wittgenstein 1953 II.10: “… the expression ‘I believe that this is the case’ is used like the assertion ‘This is the case’ ”
    • Answering ‘whether I believe that p’ calls into operation the same mechanism as for answering ‘whether p’ (Evans 1982) = “the ascent routine” for self-knowledge – “an alternative to the idea that self-knowledge requires introspection”
    • Moran says this can be applied to emotions (‘do I fear X?’ ‘is there anything to be feared by X?’) → self-ascriptions can be answered by deliberation
    • Therefore, the actual fear one feels is answerable to such considerations
    • The deliberation ends not in a true description of my state, but in the formation of an endorsement of an attitude (=non-epistemic). E.g. the ability to hold unreasonable beliefs (a therapist tells you that you believe something, and you believe it, and you know it is unreasonable): epistemic authority but not deliberative authority.
    • Self-knowledge is “based as much in asymmetries of responsibility and commitment as… in difference in capacities or in cognitive access”
  • Problems:
    • which is more important, epistemic or deliberative?
    • Do they come apart cleanly?
  • Introspection effect
    • Hodges & Wilson 1994: simply asking people for their reasons for a belief can alter their attitudes. Ss were asked their attitudes to Reagan, then asked for reasons (vs other questions), then asked their attitudes again.
    • Might undermine authority of initial self-ascriptions. But not of final ones.
    • Wilson & Schooler 1991: Introspective reasoning can have deleterious effects on attitudes. People’s opinion of jam deviates more from the “expert” opinion after deliberation.
    • Not a threat to authorship, as shallow opinions are fine as long as they’re justified by some reason.
  • Authorship without authority
    • Seligman 1980: People don’t realise their self-reports (after giving reasons) are biased by the reasons they are invited to consider. Ss completing “I date my partner because I…” (vs “…in order to…”) are more likely to say they would marry.
    • Wilson 1995: Ss given facts about a person; good facts made more accessible by a reminder; asked either to “think about your reasons for your attitude to the person” or to “recall the (more accessible) facts”; then rated the person. Ratings were more influenced by the accessible facts in the “reasons” group than the “recall” group.
    • Ss form more biased beliefs if they seek out reasons.
    • There is no suggestion that Ss are in error about their attitudes
    • But: even though they deliberated their way into their attitudes, we hear their self-reports as lacking authority. We cannot help feeling that Ss’ attitudes are misbegotten. We judge the reports to have “a curiously empty sort of authority” after deliberation.
    • “authorship just does not correlate with our commonsense judgements about whether a report is authoritative.”

We suspect that their reported attitudes, after deliberation, will be unstable in certain ways; may not guide behaviour.

If first-person authority is agential authority, deliberation may in fact hamper it.


  • Possible fix: add that one must deliberate diligently? But this doesn’t address the problem that Ss can deliberate themselves into attitudes that don’t govern behaviour – it doesn’t rescue authority


Carruthers Philosophy & phenomenological research 2010

Introspection: divided and partly eliminated

Argues that judgements and decisions are not introspectible:  only known via a process of self-interpretation.

Introspection =

  1. A higher order cognitive process
  2. Non-interpretative: we know our own states in a qualitatively different way to those of other people. But may be inferential.
    1. Lycan 1987: Inner sense modelled on outer sense

 “A broad swathe of different views will therefore have been ruled out, if it can be shown that our access to our own judgements and decisions is always interpretative”

  1. If a mental state is introspective, it is conscious. (everyone agrees)
  2. Introspection is not a necessary condition of conscious status (Tye 1995) – not relevant to this paper.

Judgments = events of forming a belief with the same content, without the contribution of any further thinking/reasoning (activated beliefs)

Decisions = acts of willing; formation of a new intention with the same content, which may lead to action without further reflection (activated intentions)

Once activated they are not introspectible.

It is possible that we only learn of a belief by interpreting an (unhesitating) utterance when answering a question. (not defended here)

  • Divide: predicts introspection for perceptual but not propositional attitude events.
  • Need to prove that “there aren’t any causal pathways from the outputs of the judgment-generating systems and the decision-making system to mindreading.”
  • = we cannot self-attribute judgements and decisions except by sensory input in a manner similar to the way we attribute other people’s.

Mindreading – innate (Baron-Cohen) or acquired (Gopnik); module (Frith & Frith)?

Evolved as a ‘machiavellian intelligence’ (Byrne & Whiten 1988). No such proposals for introspection of judgements/decisions. [but then, why should we have such strong, if incorrect, intuitions about our own introspective access? Self-narrative??]

We can introspect inner speech, though. So he assumes “that inner speech and imagery are expressive of underlying processes, but not constitutive of those processes.” [not clear this is justified!] cf Locke 1690 “there can be nothing within the mind that the mind itself is unaware of”

Wegner & Wilson: allow introspecting judgements, but give “dual process” account of self-awareness. But:  “the introspectable events in question don’t have the right sorts of causal role to count as genuine judgements or decisions.”

  1. confabulation in split brain
  2. Mark Hallett 1992: TMS to M1 biases free choice L/R hand button press. Motor cx doesn’t make decisions; but Ss aware of deciding to lift that finger. (consistent with dual process model)
  3. confabulation following hypnosis.
    1. As the confabulated explanation could be offered before, during or after the act, it can’t be explained by memory lapse at questioning.
    2. No plausible alternative route to action for which the introspective report could be veridical
    3. Bizarreness and knowing you are hypnotised both prevent confabulation
  4. Wegner & Wheatley 1999: subjects cooperatively move a mouse cursor with 2 mice. 1 is a confederate. Word is played on headphones as ‘distractor’. “When music stops, decide to stop moving mouse sometime after that”. Confederate actually stops mouse at a specific time, on a picture related to the word. Feeling of intending to stop the mouse at that picture was low if it was <1s after word, or >5s after word.
  5. Cognitive dissonance / Attribution literature. Some might be “mistakes about the causes or effects of mental states, rather than confabulation”
    1. Writing a poorly paid essay on a subject changes opinions more, than when well paid
    2. Maybe writing for inadequate pay does increase belief. But this can’t defend introspection, because:
      i. “It just pushes the failure of introspection back into the judgements through which the concluding belief is formed”
      ii. How should “increased propensity to attribute a belief to oneself… lead to greater actual belief”? “How does the higher order attribution process filter down into the ‘first-order’ judgement itself?”

    3. Wells & Petty 1980: Nodding or shaking head “to test the headphones” alters belief in the heard message.
      i. If the message is persuasive, nodding increases and shaking decreases belief, but if unpersuasive, it is the opposite!
      ii. If the message is persuasive, Ss tend to think positive thoughts, and if unpersuasive, negative thoughts (“what a terrible argument”)
      iii. → the head nodding/shaking affirms/disagrees with the thoughts going through their head at the time.

Dual-process systems:

  1. Parallel, fast, unconscious, universal, ancient
  2. Serial, slow, conscious, flexible, variable, new

“What constitutes something as a judgement, rather than a supposition or doubt, is its causal role. But the causal role of an utterance in inner speech isn’t accessible to introspection, even if the utterance is… Utterances don’t wear their attitudes upon their sleeves.” [but, isn’t this tantamount to utterances not being propositional?]


What about “even if we can’t know …we have made a judgement or decision… but we can nevertheless introspect the judgement or the decision.”?

He argues, that what we access by introspection, is but an image or inner-speech rehearsed sentence – not the judgement/decision itself. It can’t have the right sort of causal role to be a J/D.


Can we introspect wordless propositional thoughts (e.g. ‘I wonder where my key is’)? Can’t say, because when asked to report, the mindreading faculty may provide explanations.


Wimmer & Perner   Cognition 13:103-128

Beliefs about beliefs: representation and constraining function of wrong beliefs in young children’s understanding of deception

  • Woodruff & Premack 1979: taught chimps to deceptively point. Took 5/12
  • By 30 months most children spontaneously use words of volition and knowledge.
  • Hood & Bloom 1979: These words are used equally for self and others. At 3yo, commonly answer ‘why’ with intentions.
  • Shultz 1980: 3-5yo distinguish between intended acts from mistakes, reflexes, passive movements.
  • Macnamara 1976: 4yo understand some but not all aspects of ‘know’, ‘guess’, ‘remember’, ‘forget’
  • Representing the difference between self and other’s relation to the same proposition – is harder: “late-arriving bystander”. 4yo can differentiate between own knowledge and absence of this knowledge in another person.

E1: This study: can they discriminate other person’s definite belief from what they know is true?

  1. Belief question: “Where will M look for the chocolate?”
  2. Truth utterance question: “M’s grandpa comes and M asks him to lift him to a cupboard, which one does he choose?”
  3. Lie utterance question:  “M tells his brother a lie so he will look in the wrong place”
  4. Reality question: “Where is the chocolate really?”
  5. Memory question: “Do you remember where M put the chocolate in the beginning?”


  • 80% of children who answered 1) Belief correctly also gave correct 3) deceit and correct 2) truth utterances: they correctly understood M’s false belief.
  • 80% of incorrect answers to 1) Belief were accompanied by correct answers to 5) Memory: not because they didn’t recall what M did.
  • 80% of these incorrect answers to 1) Belief were also accompanied by incorrect 3) lying (but consistent with their own belief) and incorrect 2) truth utterances (but consistent with their own belief).
  • Only 30% of 3-4yo answer 5) Memory correctly


E2: Could be due to impulsivity in 4-5yo. “Stop think what happened before – where will he look?” instruction doesn’t help, but it helps 5-6yo.

What if the chocolate is all used up?

  • 4-5yo and 5-6yo performance improves. However 50% of children give non-specific answers initially (‘M will look for the chocolate’).
  • 10% of 4-5yo and 85% of 3-4yo say ‘M searches behind the scenes’ (=the real location of the chocolate)
  • Memory responses unchanged, so not due to interference from the extra location.

E3: What if M’s belief is actually correct, can children construct lies and truth utterances? (Experimenter replaces item in same location)

  • Even with correct beliefs, only 45% of 4-6yo correctly lie – no better than with false belief.
  • High correlation between correct ‘Belief’ answers and lying without false belief; but not very dependent on age.

Piaget 1932: children age 4-5 have no difficulty judging an utterance as false if it doesn’t correspond to the world.

E4: prev expts specify “intentions & goals”, “explicit intention to deceive”, then infer utterance. This expt specifies “intentions & goals”, “the deceitful utterance”, then infer the plan to deceive – “why did M tell mum his foot hurts?”

4-5y: 50% treat the deceiver as telling the truth

5½ y: 94% appropriately treat the speaker depending on the context (very deceptive vs deceptive vs informative). – better at inferring deceptive goals than constructing deceptive utterances.


Gopnik & Wellman   Mind & Language   2007

Why the child’s theory of mind really _is_ a theory

“Theory theory”. Theories:

1) “appeal to abstract unobservable entities with coherent interrelations”,

2) “invoke characteristic explanations phrased in” these abstract terms,

3) “lead to characteristic patterns of predictions” not just ↑accuracy,

4) the predictions are distinctive from people with a different theory.

  • Change from “mentalist psychological theory” to a “belief-desire psychology” theory with “existence of representational states”. Transitional state 3-4y: children recognise representational states exist, but this “is peripheral to their central explanatory theory”.
  • Desires and percepts are directed immediately towards objects.
  • 3yo “understanding of belief seems like their earlier understanding of perception”: a Gibsonian theory / situation theory – causal links between objects and believers; beliefs directly reflect the world. Notion is “embedded in the nonrepresentational desire-perception framework of the earlier theory”
  • Then: 3-4yo develop notion that beliefs can be wrong – acknowledge existence of false belief. But they rarely “construe actions as stemming from false beliefs” – belief remains non-representational.
  • Even at this stage they may have but not fully understand “pretences, dreams and images”: they can distinguish them from desires and perceptions, but they don’t play an explanatory role. But they may play a role in the eventual development of TOM

Explanation: explanations of somebody’s action in mental-state terms (3-4yo).

Prediction: first age 2-3: can predict whether other person can see item, but not that it will look different to the 2 people. Also not about certainty or source of beliefs.

Interpretations: unable to interpret someone else’s utterance “I believe X” if X is contrary to fact.

Simulation theory:

Understanding of mind is more closely linked to the phenomenal than to the theoretical. Your mind is central to the understanding of other minds.

1) Assumes that it is possible for you “to read out and report the output of your own mental states.” “false interpretations of your own mental states should not occur”

2) Easier predictions when the states concerned are similar to the child’s own mental states. “Beliefs and desires are equally available to the child” and should be equally easy.

Evidence against

1)      3yo make false attributions to themselves, that exactly parallel their false attributions to others [how does this go against the theory, since it just means the simulator is faulty] – they can report their own past changed-perceptions but not past false-beliefs

2)      3yo make correct non-egocentric attributions to themselves and others for some mental states

3)      Children refer only to some mental states in their explanations, and refer to different mental states at different stages. i.e. they always attribute beliefs to actions

4)      Children’s understanding of other psychological phenomena changes in parallel with their understanding of false belief. E.g. knowing the source of their own beliefs “how do you know X”

Non-Innate: They also do not behave “in the way that, say, a Chomskyan account would propose” – 3yo may spend “virtually all his waking hours and quite possibly many of his sleeping ones as well” working on the theory of mind


Fodor Modularity of Mind 1983

  • Neocartesianism vs Empiricism
    • = Chomsky vs Piaget/Skinner
    • Faculties = Belief structures
      • is propositional (can be the value of a propositional variable, e.g. P in ‘I know that P’)
      • but P is not known – rather, P is ‘cognized’
      • this allows transformations respecting semantic relations
      • e.g. Meno, slave boy, ‘knows’ geometry
    • How does structure of thought come to mirror propositional structure?
      • E.g. 7+12=19: I know this because of propositions about numbers
      • But how do I come to know these? By some other mechanism
      • Faculties e.g. “representation, retention, retrieval, inferential elaboration” which are “patently not mental organs”
    • Cf: Faculties = psychological mechanisms
      • E.g. memory: Miller’s 7+/-2 is not a cognized proposition
  • Innate information (nativism) vs Empiricism
    • Locke’s tabula rasa: belief-structures = empirical, but ‘closet nativist with respect to’ psychological mechanisms
  • Horizontal vs Vertical faculty psychology
    • Horizontal faculties are applicable to any content, e.g. moral, perceptual, scientific.
      • Judgement, memory
      • Theaetetus’s memory = a bird-cage containing pieces of knowledge
      • Relies upon distinguishing content domains independently of individuating cognitive faculties (!)
    • Vertical faculties (Gall) -  domain specific
      • If musical aptitude is distinct from mathematical aptitude, then psychological mechanisms subserving them are often different
      • Birdsong, nest-building, visual vs auditory vs intellectual acuity (as opposed to a general faculty of acuity);
      • the general concepts of intellect, volition, attention etc. are syncategorematic. They are “only attributes common to the fundamental…faculties, but not faculties in themselves”.
      • New organology (nativist vertical): talents, instincts, propensities
      • Assumes: horizontalism predicts someone good at memory in maths should also be good at memory in music. But: differences in capacity across domains could be differences in employment of the faculty.
      • Argues: if A is better at maths than B, and B is better at music than A, then maths and music must be subserved by different faculties. But: it could just be that a certain mix of horizontal faculties is more important for music
      • Unlike Chomsky, they are autonomous.
  • Associationism: if faculties exist, they are constructs from a more fundamental entity
    • Based on Hume’s comparison with gravitation - mental attraction “shows itself in as many and various forms”
    • Parsimony: the only fundamental faculty is to form associations
    • But, if trying to build traditional faculty from associations (constructivism), you inevitably need to build a computational “functional architecture” (Pylyshyn 1980), and “you thereby give up a lot of what distinguishes Hume’s picture of the mind from, say, Kant’s”
    • Virtualisation of architecture: what can be built from faculties can be built from associations. This gains no power, but means that identical architecture can be assembled rather than hardwired – which promises an empiricist theory of cognitive development
    • But, simply modelling/learning from the environment can’t create complex (Turing-like) mental architecture in ontogeny, even though it is constructible in logical principle: computational associationism currently lacks a theory of learning, which undermines the motivation for viewing mental structures as assembled.
  • Is faculty psychology vacuous – invoking a terpsichorean faculty to explain dancing?
    • functional analysis allows the “individuation of mental constructs to steer a proper course between the unacceptable ontological alternatives of eliminative materialism on the one hand and dualism on the other.”
    • Ned Block: “Functionalists can be physicalists in allowing that all the entities (things, states, events) that exist are physical, denying only that what binds certain things is a physical property” – pain is what normally causes pain-behaviour.
    • “goal is to get the maximum amount of psychological explanation out of the smallest possible inventory of postulated causal mechanisms”

Input systems:

  • Representation
    • Any system whose internal states covary with environmental ones can be said to represent
    • Computational systems: representation depends on format – i.e. the access processes are syntactic.
    • Perception represents the world so as to make it accessible to thought
  • (Input system + language) gives input to cognition
    • Satisfies criteria for a vertical faculty


  • Domain specific
    • All input systems are ipso facto domain specific – trivially. “it is entirely compatible with the cow specificity of cow perception, that recognizing cows should be mediated by the same mechanism that effects language perception”
    • But real domain specificity:
      • out of context speech sounds are not heard as speech
      • prototype classification
      • application of theory of language to obtain meaning
    • plausibility is proportional to the eccentricity of the domain – e.g. elaborate and complex theory of universals in language.
    • But: “chess playing exploits a vast amount of eccentric information, but nobody wants to postulate a chess faculty.”
  • Mandatory operation
    • Insensitive to utility: we are obliged to take on the computational burden
      • Marslen-Wilson & Tyler 1981: even when focusing attention on acoustic-phonetic properties, listeners are unable to avoid identifying words
      • Dichotic listening
      • Highly skilled painters and phoneticians learn how to undo it
      • You know what Chinese sounds like; but what does English sound like?
    • Central cognitive processes don’t have this obligation – unless you are Freudian.
  • Limited central access to intermediate representations
    • Cannot remember people’s clothes etc while talking
    • Cannot remember exact wording of sentences
    • Cannot remember appearance of watch numerals
    • is it just transfer from short- to long-term memory?
    • Ba-pa: sounds that are indistinguishable can still affect RT
  • Fast
    • Marslen-Wilson 73: Shadowing speech speed is limited only by when phonemes become acoustically distinguishable
    • Haber 1980: 2500 images for 10s each, 90% recognition after 1h
    • Image memory encoding asymptotes at 167ms
    • “computational complexity of the problem does not predict the difficulty of solving it; or rather, if it does, the difference between a hard problem and an easy one is measured not in months but in milliseconds” – this distinguishes perceptual from cognitive tasks
  • Informationally encapsulated
    • Illusions show that perception is resistant to knowledge
      • Saccades ⇒ no movement of world, due to efference copy. But, pushing eye with finger – world moves! Despite knowledge of feedback. Perceptual system has access to eye commands and no other information
    • But: perceptual filling-in (scotoma, cough in speech) ⇒ top-down models, e.g. Neisser, Bruner ‘New Look’ account of perception
    • Top down is inconsistent with modularity
    • But: “Feedback works only to the extent that the information which perception supplies is redundant.” And the point of perception is to find out what we don’t expect! Novelty = bottom-up.


Bernard Williams Phil Review 79:2  1970

The self and the future

Body swapping: body such that at first, we see the expression of personality/memories of A, then changes to have personality of B. Conditions:

1)      A & B must be sufficiently alike for us to imagine all B’s dispositions in A’s body – e.g. sex.

2)      There must be a causal connection from A’s old experiences, into his new body, e.g. brain transplant (Shoemaker 1963), or just information transfer

  1. Consider “do you want A-body to receive pleasure and B-body pain, or vice versa, after the op?” – getting what you want vs what you chose ⇒ “to care about what happens to me in the future is not necessarily to care about what happens to this body” = Descartes’ distinction between “I” and “my body”.
  2. B has a wooden leg, after op, B-body-person says “It feels a bit uncomfortable”; A-body person replies “I remember it felt uncomfortable at first but one gets used to it” – is it brain or body that habituates?
  3. Can A get away from his anxious personality by having the op? Perhaps “the person who is not bodily continuous with him will not have the anxiety, while the other” body-person will have, in some sense, his anxiety. But afterwards, asking the A-body person whether he got rid of his anxiety, he will say ‘what are you talking about?’

“philosophical arguments designed to show that bodily continuity was at least a necessary condition of personal identity would seem to be just mistaken.”


a)      Consider “you will be tortured tomorrow evening” ⇒ fear

b)      “tomorrow morning, we’ll make you forget that I told you this” ⇒ still fear now, because I can imagine unexpected torture

c)      “you shall not remember any of the things you are now in a position to remember” ⇒ fear, because I can imagine being in a completely amnesiac state (e.g. after an accident) and also being in great pain

d)     “you shall have all the memories of a different person’s past” ⇒ fear still seems appropriate, because I can imagine madness/delusions of being someone else and being in pain

“the whole question seems now to be totally mysterious” – but premise is “what memories I have will not have any effect on whether I undergo the pain or not”. Not always true e.g. acrophobia+mountain. But physical pain is minimally dependent on character. “no degree of predicted change in my character can unseat the fear”.

  • (Orthogonal issue of ‘horror’: “Paradox of hedonic utilitarianism” – assurance of machine-generated eternal (& varied) pleasure ⇒ fear if compulsorily imposed. People are appalled by contented vegetableness = not a function of how things would be for them.)

Should we use ‘me’ in the second example? OK because I can follow the argument well.

Is it different if we mention that another person will receive all my memories? We have “left out the very person for whom you are fearful”.  So:

  1. Consider my memories replaced by those of a fictitious person
  2. Memories replaced by those designed to be appropriate to another real person B (does not satisfy initial condition of causal relation to previous experience, so maybe I am not B – even though asking the A-body-person gives remarks relevant to B)
  3. Actual injection of information and memory from B, which leaves B the same as he was before (adds in causal relation)
  4. B is also operated on to receive memories that were in me before. (how can what happens to ‘another person’ [B-body] affect whether A experiences pain?)

[The argument is based entirely on the fact it seems appropriate for me to fear pain when I have none of my own memories; isn’t this untrue?]

Putting A’s memories back into B “involves the reintroduction of A himself”, and due to his reappearance, it is for this person that A will have his expectations. Therefore, in situations 2 and 3, “it seems, A just does not exist”.

  • Borderline case acceptable? i.e. can it be “conceptually undecidable” whether it will be me or not who is hurt? No.

1) A future situation can be indeterminate in whether it involves me or not, e.g. fear is about who of two people will be tortured. Indeterminate fear is clearly appropriate. But in the body-switch, fear “seems neither appropriate nor inappropriate, nor appropriately equivocal”.

2) Ought I to project myself? If I think I cannot engage in such thinking (about whether it is me), I am also implicitly answering the question, that I cannot be the same person. Perhaps I must refrain from thinking about it?

3) there is no place for ‘ambivalent’ pain. I still must, now, conceive as someone in the future having the pain.

4) If I am A, and decide that B-body should be tortured after the experiment, it is “risky. That there is room for the notion of risk here is itself a major feature of the problem.”

5) conventionalist decision as to whether it is me or not? Implausible as it does not explain how to divert fears based on situations which may be unknowable or far away (whether A’s memories are planted somewhere)


Normally, mentalistic considerations ⇒ first-person aspect of personhood; bodily-continuity ⇒ third-person aspect. But here, it is the opposite: the third-person argument about A & B changing bodies leads us to be mentalists about personhood, whereas the first-person argument about future fear leads us to support bodily-continuity.

[presumably cf. foetus & vegetative-state considerations ⇒ body, and conceivability of BIV ⇒ mind]


The body-switch experiment is an ideal special case, constructed carefully to make it plausible. Therefore, does not provide a serious model of how personhood can move.



Grice 1941: memory continuity as the condition of personhood: equivalence class of ‘person-stages’ under the relation ‘has a memory of an experience had by’.
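Grice’s construction is, in effect, a partition of person-stages into equivalence classes. A minimal computational sketch of the idea (the stage names and memory links below are invented purely for illustration, and union-find stands in for taking the symmetric-transitive closure of the raw memory relation):

```python
# Grice (1941): a person = an equivalence class of momentary 'person-stages'
# under the closure of the relation "has a memory of an experience had by".
# All stage names and memory links are hypothetical examples.

from collections import defaultdict

def equivalence_classes(stages, memory_links):
    """Partition stages into persons: union-find over the memory relation,
    which yields its reflexive-symmetric-transitive closure."""
    parent = {s: s for s in stages}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path compression
            s = parent[s]
        return s

    def union(a, b):
        parent[find(a)] = find(b)

    # (later, earlier): the later stage remembers an experience of the earlier
    for later, earlier in memory_links:
        union(later, earlier)

    classes = defaultdict(set)
    for s in stages:
        classes[find(s)].add(s)
    return sorted(map(frozenset, classes.values()), key=min)

stages = ["a1", "a2", "a3", "b1", "b2"]
links = [("a2", "a1"), ("a3", "a2"), ("b2", "b1")]
print(equivalence_classes(stages, links))
```

Note that a3 need not directly remember a1: the closure handles exactly the chained-memory cases (Reid’s brave-officer objection to Locke) that motivate defining personhood over the ancestral of the memory relation rather than the relation itself.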


Jackson   Phil Quarterly 32(127):127-36 1982

Epiphenomenal Qualia

3 arguments against physicalism

1.      Knowledge argument:

a)      Fred who sees a fourth colour – we cannot know certain properties of Fred’s experience, unless we modify our own brains to match his.

b)      Mary – she cannot know certain properties about her patient even though she knows everything about his brain

2.      Modal argument: A possible world that is physically identical, with no experience. But this ‘rests on a disputable modal intuition’. Qualia-freaks can accept the modal intuition, but only as a consequence of already having an argument that qualia are left out.

3.      “What is it like to be”: distinct from (1) in that it says we can never know from the point of view of Fred. Separate from “there are facts about Fred that I cannot know”. Nagel speaks as though the problem with physicalism is that we can’t extrapolate ‘what it is like’ (like Hume’s shades of blue) to alien experiences. But in Fred, we know all the physical facts so extrapolation isn’t the problem.


Arguments why qualia must be causally efficacious

1)      the quale of pain is, just obviously, responsible for my reaction; I am reacting to the pain itself. But, Hume’s scepticism about causality cannot rule out a common confounding cause, i.e. brain causes quale and brain causes behaviour

2)      Can’t evolve without effects. But: non-beneficial traits can evolve if they are a biological consequence of a beneficial one, e.g. polar bear’s coat is heavy. So, qualia may be a by-product of traits that are highly conducive to survival.

3)      I know of other peoples’ qualia only through their behaviour. So they must be causal. But: the real inference we make is ‘backwards then forwards’: we see behaviour, infer brain states, then forwards-infer the existence of others’ qualia. Objection: this is a dubious chain of reasoning. True: but that is the whole philosophical problem of other minds. “there is no special problem of Epiphenomenalism as opposed to, say, Interactionism here.”


Objection: “so there is no knockdown refutation of the existence of epiphenomenal qualia. But the fact remains that they are an excrescence. They do nothing, they explain nothing, they serve merely to soothe the intuitions of dualists, and it is left a total mystery how they fit into the world view of science”


Response: physicalism is overly optimistic; it is unlikely that we evolved such that our minds could conceive of everything there is to know. That would be like assuming that “everything in the Universe be of a kind that is relevant in some way or other to the survival of homo sapiens”. There might exist slugs or super-beings whose perspectives we cannot adopt.


Block   BBS  18:227   1995

On a confusion about a function of consciousness

Is it right to reason that, because patients lose consciousness and integrative functions together, consciousness has an integrative function?

Blindsight, prosopagnosia with preserved priming, masked semantic priming, semantic priming during shadowing, alexia: all show subconscious fast processing ⇒ same argument

Dennett & Kinsbourne 1992: Cartesian materialism – consciousness has a brain substrate

Block: Cartesian modularism – consciousness is individuated as a module, by its function

3 different claims:

  1. consciousness is correlated with an information processing function,
  2. consciousness should be identified with the information processing function;
  3. consciousness is efficacious (itself does something) in the information processing; it “is (at least conceptually) distinct from that information-processing function, but is part of its implementation”

Farah 1994: assumes phenomenal consciousness doesn’t do anything.

P-consciousness (phenomenal): there is no noncircular definition.

  • Has “experiential properties” = what it is like. Differences in intentional content (what it is ‘about’) make a P-conscious difference. Distinct from cognitive, intentional and functional properties.
  • Characterised by explanatory gap. E.g. Crick & Koch: the 35-75Hz oscillation “deals with the informational aspect of the binding problem”, but how does it explain what it is like to see something as red? Why don’t 110Hz oscillations underlie experience?
  • These may all be contingent properties of P-consciousness (i.e. we may know better one day). McGinn 1991: we can never know.
  • May also be ‘pointed at’ by inverted/absent qualia, Jackson (1986) Mary, what it is like to be a bat.

A-consciousness (access): the representation of its content is inferentially promiscuous (Stich 1978), poised for rational control of action, poised for rational control of speech  = reportability.

Can allow that chimps have A-conscious states.

P-zombie: A-consciousness without P-consciousness – robot computationally identical to person, but silicon brain does not support P-consciousness.


Super-blindsight: Learn to know that they are seeing X from imagining forced choices. They have as much knowledge about the world as normals, and can behave exactly like normals, but no phenomenal awareness. [but they don’t behave the same, since they report not consciously seeing!]



JJC Smart   Phil Review   1959

Sensations and brain processes

Wittgenstein: Expressive account of sensation-statements.

  • reporting after-images does not report anything; it expresses a temptation to say “there is x”.
  • Reporting a pain is similar – “a sophisticated sort of wince”, “replaces crying and does not describe it”. = rival to “brain-process thesis” and to old-fashioned dualism.
  • “Sensation is not a something”, but “not a nothing either” = the words have uses.

“it seems that even the behavior of man will one day be explicable in mechanistic terms” [explicable to whom? I don’t have enough brain to understand it in those terms]

to say brain and consciousness “are correlated is to say that they are something ‘over and above’”. Implausible that only consciousness is not explained (Occam) ⇒ consciousness as a “nomological dangler” (Feigl).

If laws for consciousness were to be discovered, they would “be like nothing so far known in science” and “have a queer smell to them” [poor argument!]

Sensations are brain processes:

Not: “S means the same as a certain kind of brain process”,

Rather, “insofar as S is a report of a process, it is a report of a process that happens to be a brain process”. So:

  • Ss can’t be translated into brain-process statements,
  • S-statements follow a different logic from brain statements.
  • But S are nothing over and above brain processes
  • Copula = strict identity. Not spatiotemporal identity. Not “necessary identity” though.
Objections to the thesis “Ss are brain processes”:
  1. “I know about S but not about brain processes”. Reply: a) two people can use different terms to refer to the same thing and not know it. b) one can talk of lightning (the publicly observable physical object, not the sense datum) without knowing of electricity
  2. It is only a contingent fact (only today’s wisdom) that when we have S, there is a brain process. Therefore when we report S, we are not reporting a brain process. Reply: this is a “Fido”=Fido theory of meaning – the meaning of an expression is the object it names. Although “S=B” is a contingent fact, that doesn’t make identity impossible.
  3. There must be irreducibly psychic properties at least – the properties of S are different to those of the brain process. Reply: when a person says “I see S”, he is saying something like “there is something going on which is like what is going on when I am in certain other situations”. This also means that “raw feels are colorless, for the very same reason that something is colorless”; they have properties, but they are brain properties, and “in speaking of them as being like or unlike one another we need not know or mention these properties”. Relies on being able to report like/unlikeness without being able to state the respect in which they are alike; “not sure whether this is so or not”.
  4. After-images are not in physical space but brain processes are, so they cannot be identical. Reply: this is an “ignoratio elenchi” – not claiming that after-images are brain processes – rather the experience of having them is. Similarly, brain-processes can’t themselves be coloured, but they could be the experience of colour. “Trees can be green, but not the experience of seeing a tree”.
  5. Brain processes have properties of motion etc.; it makes no sense to say this of experiences. Reply: Ss and brain processes don’t mean the same thing, and don’t have the same logic. These are conventional. Talk about experience “leaves it open as to what sort of thing is going on”.
  6. a) Sensations are private. b) I cannot be wrong about a percept. c) It does not make sense to say 2 people are reporting the same inner experience. Reply: Just shows that language of introspective report has a different logic. There are currently no criteria for saying “I have S” except my introspective reports – “we have adopted a rule of language that whatever I say goes”.
  7. I can imagine no body but still experiencing. Reply: I can imagine that Hesperus is not Phosphorus, but it is! Cf “what can be composed of nothing (atoms) cannot be composed of anything” argument for either infinite divisibility or nothingness. Bad argument for the soul as being made of “ghost stuff”.
  8. If “I have S” is a genuine private report, how can it get a foothold in language? (= Wittgenstein’s beetle in a box). Reply: Saying “It seems like I have S” rather than “I have S” is: a) conservatism about assumptions, and b) permitting a response that is normally considered an unreliable guide to the environment. There is therefore no language of private qualities.

Compares dualism to Gosse’s theory that God created the world in 4000BC with all the strata and fossils in place. “dualism involves a large number of irreducible psychophysical laws of a queer sort, that just have to be taken on trust.” [just doesn’t seem to get subjectivity?]