0-2 Prolegomena

Eye-Stomach Dualism

The most curious aspect of Becker's books is that he writes of "dualism" matter-of-factly. This little term detains him not in the least even as it derails many otherwise sympathetic readers. The only really controversial thing about Becker's dualism, though, is that it arrives stamped with this contested label, a label which it can quite well do without.

Basically, he is saying of human beings that their eyes are bigger than their stomachs.

Presumably even card-carrying materialists have observed this phenomenon, if they have not evinced it themselves. If they would deny even the "idealism" of going back for a second piece of cake after one's stomach is already full, then materialism indeed cannot explain much at all. If the truism that all cognition is "embodied" serves merely to repress awareness of everything that remains confounding about human behavior, then the so-called Cognitive Revolution is not revolutionary but reactionary.

Simply put,

  1. If an organism must eat in order to live, then eating is living.
  2. An organism can have too much food but it cannot have too much life.

That's it. That's Becker's dualism of the "creaturely" and the "symbolic." The theory of desire starts here, "embodied" in reaching for the Alka-Seltzer.

What of it?

Suppose that a guest at a dinner party must explain to the host that a spouse or a child suddenly is not feeling well after going back for a second and third dessert. The guest's rhetorical options are constrained by whatever norms prevail among those present. Only in the paradisiacal and mindfully rationalistic milieu of professors, scientists, and public intellectuals could the existentialist version perhaps, maybe, find practical outlet in genteel company.

"After a full evening of recreation, entertainment, libations, and nutriment, a human being may have eaten too much and yet still not lived nearly enough. And indeed, dear Carruthers the Younger, though he has been neither seen nor heard for at least ninety minutes, has seized the opportunity to live all too profligately. As a consequence, he must retire, along with those members of his kinship group presently charged with his custody."

Meanwhile, in most any other milieu, the necessity to avoid meeting generosity with ingratitude, to avoid risking even fleeting misconstrual to this effect, takes priority over all else; and so a genteel metaphor serves the occasion more gracefully.

"I'm so sorry, but we have to go. The kid's eyes were bigger than his stomach."

This ensures that fault is laid on the guest rather than the host, which is what is most important. The quality of the explanation is much less important. And indeed, this formulaic utterance, even if it were somehow literal rather than metaphorical, merely suggests a proximate cause. Rank and Becker propose an ultimate cause: a living thing can (quite easily) have not enough life, but it cannot have too much life. It does not take a laboratory scientist to see that, but it does take a certain tolerance of wounded vanity for a human being to accept, as they utter such a thing about others, that it applies also to themselves.

It is from the inability to tolerate such unhappy realities, from the various efforts to escape or deny them, that the entire absurdity of the human condition follows. That is most of what Becker is trying to say. Whether human life really can be split in two, in any conventional sense, is far less important.

The Essentiality of Essentialism

"Man's tragic destiny," writes Becker, is that "he must desperately justify himself as an object of primary value in the universe." Hence, "what the anthropologists call 'cultural relativity' is thus really the relativity of hero-systems the world over."

This is the comparative view. The view from the inside of any one culture is of course very different. As Francisco Gil-White observes, ca. 2001,

These days “good” anthropologists do not essentialize groups, and therefore no self-proclaimed essentialists are found in anthropology journals. But ordinary folk are not good anthropologists or sophisticated constructivist scholars. Quite to the contrary, they are naive essentialists, and I will try to explain why.

In other words, as human knowledge marches forward, human beings stay pretty much the same. Every scholarly advance widens the knowledge-cognition gap, even if the advance is modest and its practical applications innocuous. And so "sophisticated constructivist scholars" find the ground shifting beneath their feet. Constructivists easily find themselves on the right side of truth but on the wrong side of justice. Justice, after all, is a "local epistemology." Justice is "constructed" only in a crude metaphorical sense. Really justice is essentialist, the ultimate essentialism perhaps. If justice is the end, relativism cannot be the means. Every relativist conclusion converged upon by advancing knowledge, therefore, is a problem created, whether or not it is also a problem solved.

If it requires dozens of additional pages for an astute scholar to "try to explain why" essentialism persists among the folk, it is easier to explain why they do not become constructivists: because it is very difficult for anyone at all to do so. It has taken many generations and much painstaking litigation of methodology in order for social science to begin to become constructivist without just imagining itself to be that way. The simplicity and elegance of the best advanced thinking too easily conceals a background process that is anything but simple and elegant. It seems unreasonable, then, to expect everyone else (or anyone else) to simply bootstrap themselves out of whatever "naive" beliefs they have already developed and into the "sophisticated constructivist" truth. This is asking too much of the average person, for one thing; and for another thing, advanced thinking per se appears to guarantee no concurrent advance in morality or consciousness, as Hannah Arendt and Christopher Lasch (among others) have observed.

In his address on "Cargo Cult Science," Richard Feynman reflects upon this delicacy of empirical investigation:

there have been many experiments running rats through all kinds of mazes, and so on—with little clear result. But in 1937 a man named Young did a very interesting one. He had a long corridor with doors all along one side where the rats came in, and doors along the other side where the food was. He wanted to see if he could train the rats to go in at the third door down from wherever he started them off. No. The rats went immediately to the door where the food had been the time before.

The question was, how did the rats know,...? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and, still the rats could tell.

He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.

Now, from a scientific standpoint, that is an A‑Number‑1 experiment. That is the experiment that makes rat‑running experiments sensible, because it uncovers the clues that the rat is really using—not what you think it’s using. ...

I looked into the subsequent history of this research. The subsequent experiment, and the one after that, never referred to Mr. Young. They never used any of his criteria of putting the corridor on sand, or being very careful. They just went right on running rats in the same old way, and paid no attention to the great discoveries of Mr. Young, and his papers are not referred to, because he didn’t discover anything about the rats. In fact, he discovered all the things you have to do to discover something about rats.

The anecdote itself is evergreen, but there is an ultimate pessimism at its core: "ordinary" human existence cannot proceed this way. Whether it should is moot. It has not and cannot. Still, knowledge advances all the same and seems only to be accelerating its advance. This suggests that whatever possibilities exist for progress can only entail working around essential human frailties, not in trying to change them.

One way of addressing inner human faults, of course, is to simply leave them where they are. This does not mean merely to ignore or deny them, but to leave them on the "inside" of the human being, to render unto Caesar that which is Caesar's, and then to turn attention to everything that is "outside" of human beings. That is, "constructivist" attention can more profitably turn to the literal "construction" of the lived environment and of social institutions. These outer constructions are very responsive to advances in human knowledge whereas the insides of human beings are not. But this, somehow, is not what anyone means by "constructivism." This is just the old "essentialism," now trailed by a motley of other impish -isms, and it all gives constructivists the heebie-jeebies. Constructivists know how to build buildings but they would rather build humans. Only the first of these aims is coherent, but unfortunately it is the second which is the more intense object of constructivist desire.

Human beings have far greater ability to crystallize their knowledge in the form of built environment and institutions than they do to persuade or coerce each other. By crystallizing knowledge on the "outside" rather than trying to force it "inside" of each and every person, advancing knowledge can be applied and passed on to large populations without demanding that each individual account intellectually for each advance in all of its granular detail. In other words, knowledge can be transfigured into practice. Knowledge per se thereby stands or falls in relation to some practical aim. Knowledge then need not be operationalized as persuasion or coercion. The pantheon of knowledge need not be a caste construction any longer. Untouchables may become as gods if they can get an important person's car started after the car has died; and of course, when important people cannot even get their own cars started, they are not as important as they think.

Materialists of the dialectical persuasion have plenty of their own slogans about practice. For ardent capitalists, meanwhile, there is the more recent example of Nassim Nicholas Taleb:

you need a name for the color blue when you build a narrative, but not in action—the thinker lacking a word for "blue" is handicapped; not the doer. (I've had a hard time conveying to intellectuals the intellectual superiority of practice.)

As always, Becker has the key to why this works:

one of the most vital facts about all objects is that they have both an inside and an outside... But, says [G.H.] Mead, dawning consciousness has no awareness of this dualism; the organism knows its insides by direct experience, but it can know its outside boundaries only in relation to others. ...

...the self cannot come into being without using the other as a lever. As the noted sociologist Franklin Giddings once put it: It is not that two heads are better than one, but that two heads are needed for one.

Consciousness, then, is fundamentally a social experience... A self-reflexive animal, after all, can only get the full meaning of its acts by observing them after they have happened.

One reason for the "superiority of practice," then, is social: social "practice" entails "using the other as a lever." This is how human beings "give outsides to ourselves, and confer insides upon others." Intellectual learning can help, but only through practice are insides and outsides "conferred."

Moreover, the imperative of the social human being to "get the full meaning of its acts by observing them after they have happened" is fulfilled, obviously, only by engaging in some kind of "acts." This is true not just socially but more generally.

This road, nonetheless, is fraught with danger, as Becker elaborates:

We come into contact with people only with our exteriors—physically and externally; yet each of us walks about with a great wealth of interior life, a private and secret self. ... The child learns very quickly to cultivate this private self because it puts a barrier between him and the demands of the world. ...it seems that the outer world has every right to penetrate into his self and that the parents could automatically do so if they wished... But then he discovers that he can lie and not be found out: it is a great and liberating moment, this anxious first lie—it represents the staking out of his claim to an integral inner self, free from the prying eyes of the world.

By the time we grow up we become masters at dissimulation, at cultivating a self that the world cannot probe. But we pay a price. ...we find that we are hopelessly separated from everyone else. ... We touch people on the outsides of their bodies, and they us, but we cannot get at their insides and cannot reveal our insides to them. This is one of the great tragedies of our interiority—it is utterly personal and unrevealable.

The "superiority of practice" implies an inferiority of learning or of cognition, and the above passage from Becker captures one aspect of this. Human "interiority" is "unrevealable" but cognition-about-interiority gives the illusion of revealing something (or perhaps everything). Practice, meanwhile, transfigures cognition into action, ideas into objects, relationships into institutions. Practice does not "reveal" anything either; what it does, rather, is to save human beings the trouble of really needing to reveal idealistically anything that has not already been revealed materially. From there, certain inscrutable social behavior remains in play, but the problem of the knowledge-cognition gap can at least be moderated. Constructivists and essentialists may disagree about almost everything intellectually and culturally, but they can perfectly well work together to clear a fallen tree, organize and deploy a trading party, or paint a mural.

These ideas have been put forward many times before, but of course the ground has never stopped shifting underneath intellectuals' feet.

In the twenty-first century there is, to start, the problem that myriad basic social "acts" (arguably the vast majority of them) no longer permit the ascertaining of "meaning" at all. Contemporary life, increasingly conducted "remotely" or "parasocially," is something of a scream into a void, with the result that even mindful, persistent, refined efforts by a human being to locate "outside boundaries" can fail completely. Already in 1979, Christopher Lasch identifies the leading edge of this shift and accounts for it brilliantly through the lens of then-current Freudian theory.

The scream-into-a-void problem is quite transparently a result of the "outside," i.e., of environment and institutions; and so this is what must be changed, at minimum, if humanity is to extricate itself from its present predicament. It is not possible to go directly to changing the "inside" of human beings if everything outside of them remains as it is. "Transhumanism" proposes to make this intervention directly, on the inside, using technology which is younger than the persons who are to intervene upon each other; but this merely evinces the unfreedom of a refusal to accept limits.

As if all of this were not enough, there is a peculiar irony in the course that psychology has taken ever since it pried itself from Freud's cold, dead hands. The research paradigm which took the eschewing of "insides" in favor of "outsides" as its guiding principle, namely the paradigm called Behaviorism, is, in the twenty-first century, considered discredited and superseded by a subsequent paradigm, called Cognitivism, which tries very hard to figure out what is happening on the inside of human beings. Cognitivism thus chose a very, very strange time to crash the world stage, right when the expansion of the knowledge-cognition gap was becoming self-sustaining, right when advancing knowledge began to run away and hide from rational human control.

Not coincidentally, the knowledge-cognition gap became co-extensive with who knows how many other "gaps," not just rhetorically but often in reality; this not so much because of the cognition part (which does not change much anyway) as the knowledge part (which only ever changes into something more and more difficult and expensive and grueling). The most widely-demanded remedy for the gap problem, of course, is to get rid of everything that remotely resembles a gap: all boundaries of practice, of habitation, of knowledge; all borders between countries, between art forms, between businesses, smashed; all real and ideal boundaries, not just some of them, smashed and getting smasheder. The problem with this, though, is that there never was any "boundary" in the "gap," but by this time all the other boundaries have been smashed up trying to close the gap, so no problems are solved this way and many are created, including new gaps which are not boundaries but will surely be smashed anyway, in due time, without being closed. And now things have become a bit too abstract, so it is time to regroup.

The crown on top of the present human predicament is that human beings have lost even what control they might have over their "outside" because there are too many of them and because they are too connected. The bottom line, then, of ever-advancing knowledge, exploding population, and global interconnectedness is that even residents of representative democracies actually are no more self-governing than rats in a maze. They may reflexively slam the footwell with their braking foot, but they will not run into any brake pedals. They may dutifully masquerade as Greeks, but they live as Romans, whether they really want to or realize that they do. They may know that not just culture but time and space too have been relativized, but this does not solve their problem; rather, it is their problem.

Cognitivism plus Progressivism equals Behaviorism

Contemporary cognitive science, for all its explanatory and predictive power, cannot penetrate very deeply into existential matters, not even when they are amenable (as the Eye-Stomach problem is) to a cognitivist account. This impasse traces back to an "existential" conflict which is cognitivism's own:

  1. In the teleology of laboratory psychology, and arguably also of Western progressivism writ large, cognitivism itself originates as the necessary and unimpeachable successor to behaviorism.
  2. Despite this privileged position in the psycho-teleology, and despite its real practical successes, cognitivism's findings cannot be operationalized other than by way of behaviorist interventions.

Stated tersely: cognitivism plus progressivism equals a reversion to behaviorism. Try to list the ways that cognitivist findings can be operationalized without being built into the very floors, ceilings and walls of culture, or just built into chips and implanted in people's skulls. Merely explaining to people how to think-about-thinking guarantees nothing and can even be tautological. Coercion is effective only temporarily and hence must ramify into full-blown repression if it is to be sustained. Appealing rhetorically or moralistically to the better angels of our nature fails at scale even if it succeeds in awakening those angels in an elite psycho-minority. And so on down the long list of interventions available to progressivism, from the gentle to the severe, until finally it is tacitly admitted that better floor plans and organizational hierarchies can be created but better people cannot be.

And of course, the kicker: every new piece of cognitivist knowledge also is heaped on the pile of general knowledge, which is already so big that even specialists cannot manage their ration. Specialists, confined even to their speciality, can see straightaway that their stomachs hold only a fraction of what is on their plates, and that there is even more generalism that they ought, ideally, also to digest but cannot even see. And it is the specialists, though they cannot keep up with knowledge either, who see why they cannot keep up: they see the reality of the problem and the terror of it, because they have drawn tight boundaries around themselves.

The broad progressive intervention on behalf of cognitivism, then, ends up as a panoply of imperceptible microinterventions in environment and conditioning, something Skinner could have drawn up, or perhaps precisely what he drew up, depending on how (or if) he is read. This cannot be merely an intervention in knowledge, in morals, or in public opinion. Like dessert and recreation, knowledge and morals are too existentially bound up with life itself to be subject to mere rhetorical persuasion and transactional bargaining. The work of scholars such as William Stephenson and George Lakoff amply shows that even "public opinion" per se also is bound up with nothing less than existence itself. Rank and Becker explain precisely why this is, and precisely what it is about human beings that binds them so tightly to such "superficial" matters; on which point it is best, by now, to refer the still-skeptical reader to the original texts.

Yet another way to put this is that every christening of a new "cognitive bias" is a tacit confession by cognitivists that a certain antisocial behavior is amenable only to behaviorist remedies. Thaler and Sunstein's Nudge is emblematic of this. They are smart enough not to propose that people should try to think differently than people have always thought. They are not the kind of progressives who propose that better people can be created. Rather, they make the humble recommendation that "choice architecture" be expertly designed to favor certain high-level outcomes over others. This is Skinner's brand of progressivism after it has lowered its ambitions and skimmed some cognitivist abstracts.

Now, is this a bad thing? Who is to say? Skinner is a brilliant polymath who demands to be heard. Could it actually be a good thing that all of this has unfolded just as it has? It is a thing, that is all.

As a layperson, blissfully free of "constructivist" rigor and refinement, one can always make what one will of science, as with statistics. The trick, though, is for communities as communities to make the right thing of it. To learn to do this (or, channeling Feynman, to learn what one must learn in order to do this), the really urgent question is: how is any given piece of science to be politically operationalized? Laypeople cannot hope to wade into any methodological briar patches, and thankfully, once this point about operationalization is understood, they do not need to. The political operationalization of science is something anyone can understand. Provided one can catch a glimpse of the operationalization, it is superfluous to examine the methodology, which will soon be changing anyway.

For purposes of mere recreational learning, it is better, then, that the research findings be accepted rather uncritically than that laypeople should arrogate to counterpose such anecdotes as are available to them. This is not to say, however, that anyone should be eager to wager anything important on a science that is younger than they are. That goes for "revolutionary" cognitivism, too, which is less a "practice" than an ideology-about-practice, an ideology which says: if people can be coaxed into "revealing" their "insides," then the "outside" can be rejiggered with this in mind. But if human beings really needed to know what was "inside" of each other, the species would have fallen by the wayside long ago. If cognitivists are serious about explaining human beings' incomplete and variable ability to "mindread" as an artifact of natural selection, then they have to admit, concurrently, that natural selection has not been half as demanding in this regard as have the cognitivists themselves.

Human social wiring is a means to the end of "group living," not the other way around. The point of group living is not to learn about what is "inside" of the other group members, pleasant as that activity may be for so many of those who find themselves thus situated. It is the reverse: the point is to use the "outside" world "as a lever," to crystallize the resulting knowledge as built environment and social institutions, and to take note of the "practical" outcomes.

Communion happens through culture. There is no direct, frictionless communion available to human beings; there must be some intermediary "cultural" practice that makes communion possible. Cultural practices present varying degrees of friction, however; that is, varying degrees of autonomy. This means that a culture is never fully "reflected" in its practices and traditions: that kind of picture is always rather distorted.

To learn from practice, then, which includes learning about each other, people need to be confronted by the "outside" of things. That is how people have confronted the world for the million-or-so years prior to the Cognitive Revolution. Indeed, "we find that we are hopelessly separated from everyone else" this way, in spite of whatever else we learn about the inanimate world; and that is why some people, desperate to effect their own merger with the entire world of things and people, fight so hard to "reveal" the "inside" of everything. That is why psycho-voyeurism has become as prominent in art and entertainment as it has in laboratory psychology. It is "the great tragedy of our interiority," as Becker puts it, which is responsible for the "inside" having become the intense object of desire that it is. But people do not learn very much or very well when they think they are getting a direct look at the "inside." This is what people desire, but it is not how they learn. One reason, simply, is that no ascribed cognition leads inexorably to any given behavior; hence even perfect knowledge of cognition is something of an answer in search of a question. And anyway, even now people and their behavior do not make up very much of the lived environment. Much of that environment is inanimate or nonhuman; it may have been made by humans but it does not have the same things "inside" of it as a human being does. The countervailing fact of ever-upwelling "animism" and "anthropomorphism" in culture, even now, with the material triumph of the scientific method long in the rear-view mirror, itself constitutes most of the "cognitive science" anyone really needs to know.

Of course cognitivists can explain animism (and much else) quite beautifully, but the scientists themselves are not usually in charge of practically deploying their own findings. Deployment customarily falls to scientifically naive politicians and their stable of kept experts, charged with bringing the head honcho up to speed, in a matter of minutes, with science that has taken centuries to bear fruit. That is the whole problem.

And so, much as a work of art is not complete until some audience has received it, and much as "a self-reflexive animal can only get the full meaning of its acts by observing them after they have happened," science is not science until it is operationalized. Science always requires some further elaboration in order to be reconciled with the truly unscientific world of social norms and competing value systems, all the more so for a globalized population of Billions-with-a-B whose specialists cannot even hope to keep up with the advance of specialist knowledge. This task of reconciling the scientific and the social is something that philosophy, still, can assist with. Philosophy, though it has been "dead" for a while now, may nonetheless be the best science-of-science remaining; or perhaps the only one.


NOTES

. . . "creaturely ... symbolic ... man's tragic destiny ... cultural relativity" . . .

Ernest Becker, The Denial of Death (pp. 2-5).

If you took a blind and dumb organism and gave it a self-consciousness and a name, if you made it stand out of nature and know consciously that it was unique, then you would have narcissism. In man, physio-chemical identity and the sense of power and activity have become conscious.

In man a working level of narcissism is inseparable from self-esteem, from a basic sense of self-worth. ... But man is not just a blind glob of idling protoplasm, but a creature with a name who lives in a world of symbols and dreams and not merely matter. His sense of self-worth is constituted symbolically... And this means that man's natural yearning for organismic activity, the pleasures of incorporation and expansion, can be fed limitlessly in the domain of symbols and so into immortality. ...

We like to speak casually about "sibling rivalry," as though it were some kind of byproduct of growing up... But it is too all-absorbing and relentless to be an aberration, it expresses the heart of the creature: the desire to stand out, to be the one in creation. ... An animal who gets his feeling of worth symbolically has to minutely compare himself to those around him, to make sure he doesn't come off second-best. ...it is not that children are vicious, selfish, or domineering. It is that they so openly express man's tragic destiny: he must desperately justify himself as an object of primary value in the universe...

When we appreciate how natural it is for man to strive to be a hero, how deeply it goes in his evolutionary and organismic constitution, how openly he shows it as a child, then it is all the more curious how ignorant most of us are, consciously, of what we really want and need. In our culture anyway, especially in modern times, the heroic seems too big for us. ... We disguise our struggle by piling up figures in a bank book to reflect privately our sense of heroic worth. ... But underneath throbs the ache of cosmic specialness, no matter how we mask it in concerns of smaller scope. Occasionally someone admits that he takes his heroism seriously, which gives most of us a chill... We may shudder at the crassness of earthly heroism, of both Caesar and his imitators, but the fault is not theirs, it is in the way society sets up its hero system and in the people it allows to fill its roles. The urge to heroism is natural, and to admit it honest. For everyone to admit it would probably release such pent-up force as to be devastating to societies as they now are.

The fact is that this is what society is and always has been: a symbolic action system, a structure of statuses and roles, customs and rules for behavior, designed to serve as a vehicle for earthly heroism. Each script is somewhat unique, each culture has a different hero system. What the anthropologists call "cultural relativity" is thus really the relativity of hero-systems the world over. But each cultural system is a dramatization of earthly heroics; each system cuts out roles for performances of various degrees of heroism: from the "high" heroism of a Churchill, a Mao, or a Buddha, to the "low" heroism of the coal miner, the peasant, the simple priest; the plain, everyday, earthy heroism wrought by gnarled hands guiding a family through hunger and disease.




. . . "no self-proclaimed essentialists are found in anthropology journals" . . .

Francisco Gil-White, "Are Ethnic Groups Biological 'Species' to the Human Brain? Essentialism in Our Cognition of Some Social Categories" (p. 516).




. . . " "ordinary" human existence cannot proceed this way" . . .

Dan Williams, "Why do people believe true things?"
https://substack.com/@conspicuouscognition/p-146361251

Williams, following Joseph Heath, says,

the ability to appreciate “explanatory inversions” is one of the things that most sharply distinguishes a scientifically informed worldview from a “commonsense” one.

For Heath, these inversions

arise as a consequence of discoveries or theoretical insights that have the effect of changing, not our specific explanations of events, but rather our fundamental sense of what needs to be explained.

e.g., Perhaps it is not "poverty" and "crime" that "need to be explained" but rather "wealth" and "law-abidingness."

Heath:

Common sense is wrong on this point because we are all reasonably well-socialized adults, living in a well-ordered society, and so we take for granted the institutional arrangements that secure our compliance with the rules. But the underlying mechanisms are ones that we do not really understand, as a result of which it is difficult to explain why more people do not break the law more often.

Williams argues broadly:

Social epistemology—very, very roughly, the study of phenomena such as knowledge, belief, and understanding in society—similarly needs an explanatory inversion.

Why?

In complex, modern societies, the relationship between reality and our representations of reality—between what Lippmann called the “real environment” and the “pseudo-environments” that make up our mental models of the real environment—is heavily mediated by complex chains of trust, testimony, and interpretation.

...

Moreover, to organise all that socially acquired information, you relied on simplifying categories, schema, and explanatory models that reduce reality's complexity to a tractable, low-resolution mental model.

In this heavily mediated process, there are countless sources of error and distortion. ...

Just as importantly, the people from whom you have acquired your information about the world are similarly flawed, fallible, and biased. ...

For these reasons, the truth is not the default when people form beliefs about the world beyond their immediate material and social environment.

For Williams, then,

At least relative to a modern scientific worldview, almost everything people have ever believed about the world they are not in close perceptual contact with has been completely wrong.

Both the similarities and the differences with Becker's account of "cultural illusion" are striking here: Becker suggests that even "close perceptual contact" has produced hardly any "truth," because human beings always have one eye on the most distant, cosmic, existential matters, even when the other eye is fixed upon a simple plant or rock.

For Williams it is people's propensity to "treat their mental model of reality as an unproblematic mirror of nature" ("naive realism") which leads directly to an "explanatory" fixation on "why some people believe in misinformation", e.g., as opposed to asking how and why they might believe in "true things." If one proceeds in this manner, then "false beliefs really do seem puzzling" and exceptional; but "false beliefs" really are, in most every meaningful sense, the rule rather than the exception.

Also taken for granted are the "centuries of cultural and institutional development designed to overcome the many sources of ignorance and misperception in human judgement." New generations thrust headlong into such currents may, of course, take for granted just what an "extraordinarily fragile achievement" this has been: the ongoing overproduction of unsynthesized, unapplied knowledge is supremely effective in enabling them to do so, ever more so when that knowledge is "true" than when it is "false." That is why it eventually becomes imperative not to try to teach everything to everyone.




. . . "Cargo Cult Science" . . .

Richard P. Feynman, "Cargo Cult Science: Some remarks on science, pseudoscience, and learning how to not fool yourself. Caltech’s 1974 commencement address."
https://calteches.library.caltech.edu/51/2/CargoCult.htm




. . . "the intellectual superiority of practice" . . .

Nassim Nicholas Taleb, Antifragile (pp. 108-109).




. . . "an inside and an outside" . . .

Ernest Becker, The Birth and Death of Meaning: An Interdisciplinary Perspective on the Problem of Man (pp. 23-24).




. . . "We come into contact with people only with our exteriors" . . .

Ibid. (pp. 28-29).




. . . "the leading edge of this shift" . . .

Christopher Lasch, The Culture of Narcissism.




. . . "a subsequent paradigm, called Cognitivism" . . .

Francesca Happé, Jennifer Cook and Geoffrey Bird, "Exploring the Structure of Social Cognition."
https://www.researchgate.net/publication/298214811

These authors define "cognition" as "the level of explanation lying between neural processes and behavior" (sec. 1).




. . . "too many of them ... too connected" . . .

Yuval Noah Harari, Homo Deus (Ch. 1).

When people realise how fast we are rushing towards the great unknown, and that they cannot count even on death to shield them from it, their reaction is to hope that somebody will hit the brakes and slow us down. But we cannot hit the brakes,...

...nobody knows where the brakes are. ... Nobody can absorb all the latest scientific discoveries, nobody can predict how the global economy will look in ten years, and nobody has a clue where we are heading in such a rush. Since no one understands the system any more, no one can stop it.




. . . "too existentially bound up with life itself ... Stephenson ... Lakoff" . . .

William Stephenson, The Play Theory of Mass Communication.

George Lakoff, "What Orwell Didn't Know About the Brain, Mind, and Language." In András Szántó, ed., What Orwell Didn't Know: Propaganda and the New Face of American Politics.

Stephenson, who should be widely read but cannot be, was one of the first true "media theorists," and perhaps one of the last. Much of the above work suffers from the same inscrutability as does Otto Rank's, and it contains the same flashes of insight.

e.g., From the above work's myriad summative "postulates":

The self is differently involved in conditions of social control and convergent selectivity. I distinguish self from ego. The former is overtly attitudinal, and the latter a matter of mental structure.
Self-attitudes are developed largely in interactions under social control. (The boy who wins a prize at school adds to his self-stature thereby, and almost all that we are in selfhood respects is given to us in relation to social controls.) But the self so put upon us is to a degree false—a façade only. The person has to be what custom or status demands of him.
Convergent selectivity is an opportunity for the individual to exist for himself. Such existence is experienced as enjoyment, contentment, serenity, or the like. Certain free aspects of self are possible outcomes of convergent play.
The mass media, plays, art, and the theater generally offer opportunities for convergent selectivity. The self so involved is enhanced. There is an increase of self-awareness—typical, for example, of the mountain climber. There is no gain in social or material respects but much gain in one's self-existence.

...

Ordinary life would be impossible without communication, in school, church, business, on the farm, and so on. ... It is important, however, to distinguish between that part of communication supporting social control and that part of it offering opportunities to convergent selectivity.
... Communication in conditions of social control is a "mover" in national and individual development: it informs a nation of its work, its five-year plans; it teaches literacy and technology; it develops industry and extends markets. ...
Mass communication, literature, drama, and the like serve instead for sociability and self-existence. These are vehicles for communication-pleasure—directly in the enjoyment they enjoin, and indirectly in the social conversations they support.

... Convergent communication, being communication-pleasure, serves mainly as a "fill" in mass communication. The "important" communication concerns social control matters. The "fill" serves to maintain status quo position, since it serves no "work" purposes. It pleases, entertains, and projects fashions and fads. It is basically aesthetical, and amoral, a-ethical. Its function is not to relieve anxieties but to increase the sum total of self-existing possibilities.
(The "human interest slant" given to popular "news" put the reader in the position of a confidant, reflecting inner-experience, inducing reverie about himself and so on—all pointed toward more existence for oneself.)

... Culture develops in play, and play enters into social control and convergent selectivity situations alike. But the play in religious practices, the armed forces, the law courts, in diplomacy, professional practices, is always more or less subject to internalized belief systems; deeply held values, loyalties, needs, and ethical matters are everywhere evident.
The play in convergent selective situations is at best indifferent to such values, needs, and beliefs.

And of course,

The mass media, in much that pertains to social control as well as convergent selectivity, do not communicate truth or reality but only a semblance of it—of a fictional, representational, or charismatic character. Reaching the truth is a matter for science, technology, reason, and work. Charisma, imagery, and fiction are characteristic of convergencies.
But this is not to be despised. On the contrary, reality is so complex that its symbolical representation is essential to give it meanings that ordinary people can appreciate. Politics is conversation about freedom, democracy, liberty...issues which need bear little relation to ongoing real conditions or legislative actions. But all these can be good fun, that is, good communication-pleasure.

(pp. 192-195).

Mostly Stephenson draws boundaries here between the "work" and the "fun," but it is not too hard to see how (and why) those boundaries have been getting blurrier: for one thing, someone has to make or "create" the "content," and that person, per Rank, is undergoing "experience," always, as they do so; and for another, when something is such great fun, it will be sacralized and moralized; and to be sure, few things have so delighted human beings as has The News in both its creation and its reception. Unfortunately, then, this wellspring of delight is also the stalking horse of the darkest hours.

Further on, Stephenson argues that "advertising has been blamed for social effects that belong, instead, to the contrary principles of social control." (203) The reason why

television can sell soap but not, it seems, citizenship...lies in the part played by mediating mechanisms in advertising; in between the advertisement on the one hand and the consumer who reads it there are the facilitating factors of supermarkets, shopping habits, and the ready availability of spending money which make it relatively easy for a consumer to be "sold" a new brand of soap.

(p. 204).

Finally, Stephenson also conducted public opinion research and is not above broaching a certain kind of behaviorist essentialism, or at least skepticism, when it comes to changes of opinion and behavior.

New Yorkers moving to California or Texas want to behave like everyone around them; they do so in terms of the trivia of modern consumer goods—cars, homes, dress, barbecue pits, swimming pools, and the rest—not out of any sense of shame but out of dissonance, followed by self-expansion, self-respect, and self-expression. They change their ways, and their social character follows suit. Whether their deeper value-systems fall in line as well is another matter; our own view is that it would be well to recognize that early internalizations remain untouched.

(p. 83).

This leads to Lakoff, who needs no further introduction.

What are words? Words are neural links between spoken and written expressions and frames, metaphors, and narratives. When we hear the words, not only their immediate frames and metaphors are activated, but also all the high-level worldviews and associated narratives—with their emotions—are activated. Words are not just words—they activate a huge range of brain mechanisms. Moreover, words don't just activate neutral meanings; they are often defined relative to conservative framings. And our most important political words—freedom, equality, fairness, opportunity, security, accountability—name "contested concepts," concepts with a common shared core that is unspecified, which is then extended to most of its cases based on your values. Thus conservative "freedom" is utterly different than progressive "freedom"...

(p. 70).

Perhaps Stephenson also did not "know" this, but he certainly understood it.




. . ."Thaler and Sunstein" . . .

Richard H. Thaler and Cass R. Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness.

To be fair to the authors, their book is well-conceived, well-executed, and chock full of wisdom, homely and sophisticated alike. It is for the better that as many people read it as possible. But, see all above for why this cannot be expected to change very much at all.




. . . "natural selection has not been half as demanding" . . .

Max van Duijn, The lazy mindreader: a humanities perspective on mindreading and multiple-order intentionality.
https://scholarlypublications.universiteitleiden.nl/handle/1887/38817

we will follow Apperly’s suggestion to drop the term ‘theory of mind’ (to avoid the implication that attributing mindstates is like having a theory) and refer to the set of mechanisms, routines, and tricks that humans apply to form understandings of other’s mindstates as ‘mindreading’.
(p. 77)

The need for this shift in terminology is easy enough to understand in light of this work's central concerns. The need is less easy to understand in any other context.

The "implication" of the term "mindreading" is total, whereas what is actually at issue is a mere set of "mechanisms, routines, and tricks." van Duijn's Chapter 1 explains why these "tricks" can serve their pragmatic purpose quite well in spite of how easily they miscarry from an abstract epistemic standpoint; but this also seems to render the term "mindreading" totally inapt.

The important "implication" overlooked above is that "theories" per se are prospective and provisional. Theories await their proof. van Duijn's "alternative view" of so-called mindreading "focuses on economy and least effort." Theory's connotation of conscious, effortful deliberation, then, must be abandoned. In this narrow matter the paper is very convincing. But the fact remains that one can have only "theories" about other people's thoughts and feelings, and indeed about one's own also. One cannot really "read" people like a book, even though that is also a phrase that is uttered from time to time. It is precisely the "least effort" approach that suggests the limits of "mindreading," namely that it is limited to the "lazy" version which arises when people "live in a socio-cultural environment that allows us to be" this way. It allows this because it "contains the coagulated experience of many generations." (p. 13) Without that crystallized experience and culture as a guide, the primate mind runs wild.

Stephanie D. Preston and Frans B. M. de Waal, "Empathy: Its Ultimate and Proximate Bases."
https://www.researchgate.net/publication/10866840

According to McDougall's theory, sympathy "is the cement that binds all animal societies together, renders the actions of all members of a group harmonious, and allows them to reap some of the prime advantages of social life."
(Sec. 1.1., "Perception-action processes facilitate group living")

Again the suggestion is that social wiring is more means than end. After all, what are "the prime advantages of group living"? The crystallized practical knowledge of culture.

Of course the catch is that "volatile, erotic primates" (Becker) either have to enjoy something or be coerced into it; otherwise the behavior will not predominate. So, people need people, the same in 2024 as at the dawn of humanity. To live in groups, the very first condition, actually, is that people want to do this, and they probably have to want this for its own sake, apart from any expectable practical benefits. All of that granted, the point is: the Eye-Stomach problem wreaks the same havoc here as at the dessert stand.

The meaning of life is to take only as much as you absolutely need, not a farthing more. Psycho-voyeurism is cake for extraverts, both in its visceral appeal and in its ultimate hazards. Introverts and ambiverts, to extend the analogy, would be gluten-intolerant, or diabetic, or in a few exceptional cases they would be people who truly do not like the taste of cake. People who do not share in others' taste for excess make excellent scapegoats whenever the junkies cannot get their fix.

Further along, Preston and de Waal enumerate a few of the competing cognitivist accounts of empathy and reconcile them with their own preferred account of "perception-action processes."

The AIM [Active Intermodal Mapping Hypothesis] is proposed to explain early facial imitation, and lay the groundwork for empathy. According to the AIM, the object's expression is perceived and compared to the subject's own current expression (from proprioceptive feedback) in a supramodal representational space. ... According to a perception-action view the perception of the object's expression automatically activates a similar motor expression in the subject (in contrast to AIM), but through a representation (in agreement with AIM). ...

Simulation theory has also been proposed to be a mechanism for empathy, where the subject understands the mental and emotional state of the object by simulating the object's state internally. Generally, the perception-action mechanism and simulation theory are not in conflict. Some descriptions of the simulation process seem more explicit and cognitive than a perception-action model would suggest, but most postulate implicit as well as explicit processes.

In the literature, simulation theory stands in contradistinction to the theory-theory, which postulates that individuals understand the world through theories that they develop. With the PAM, the two theories are compatible; simulation theory is a description at a level between metaphor and mechanism that is interested in how the state of the object is imparted to the subject while theory-theory is a description at the level of metaphor that is interested in the ways that these perceptions change during development...

(Sec. 2.1., "Existing theories")

(Remarkably, "theory-theory" is not a typo.)

Keith Oatley, "Why Fiction May Be Twice as True as Fact: Fiction as Cognitive and Emotional Simulation."
https://www.researchgate.net/publication/232509401

Keith Oatley, Raymond A. Mar, and Maja Djikic, "The psychology of fiction: Present and future."
https://www.yorku.ca/mar/Oatley%20et%20al%20in%20press_Cognition%20of%20Literature%20Chapter_preprint.pdf

It is no coincidence that "simulation theory," being comparatively "explicit and cognitive," has been seized upon by Oatley, the "cognitive literary" theorist, to describe what happens when people read novels.

Oatley and coauthors dismiss, in a sentence, the deconstructionist account of literature, calling this account "jejune." According to Preston and de Waal's perception-action model, however,

there is no empathy that is not projection, since you always use your own representations to understand the state of another.
(Sec. 3.1.1., "Representations change with experience")

If the brand-name French Theorists had chosen simpler verbiage, perhaps it would sound a lot like this.




. . . "to use the "outside" world ... to crystallize ... to take note" . . .

Joseph Henrich and Richard McElreath, "The Evolution of Cultural Evolution" (pp. 123-124).
https://henrich.fas.harvard.edu/files/henrich/files/henrich_mcelreath_2003.pdf

From the abstract:

it seems certain that the same basic genetic endowment produces arctic foraging, tropical horticulture, and desert pastoralism, a constellation that represents a greater range of subsistence behavior than the rest of the Primate Order combined. The behavioral adaptations that explain the immense success of our species are cultural in the sense that they are transmitted among individuals by social learning and have accumulated over generations.
(p. 123)

The paper begins:

In 1860, aiming to be the first Europeans to travel south to north across Australia, Robert Burke led an extremely well-equipped expedition of three men (King, Wills and Gray) from their base camp in Cooper’s Creek in central Australia with five fully loaded camels (specially imported) and one horse. Figuring a maximum round trip travel time of three months, they carried twelve weeks of food and supplies. Eight weeks later they reached tidal swamps on the northern coast and began their return. After about ten weeks their supplies ran short and they began eating their pack animals. After twelve weeks in the bush, Gray died of illness and exhaustion, and the group jettisoned most of their remaining supplies. A month later, they arrived back in their base camp, but found that their support crew had recently departed, leaving only limited supplies. Still weak, the threesome packed the available supplies and headed to the nearest outpost of “civilization,” Mt. Hopeless, 240km south. In less than a month, their clothing and boots were beyond repair, their supplies were again gone, and they ate mostly camel meat.

Faced with living off the land, they began foraging efforts and tried, unsuccessfully, to devise means to trap birds and rats. They were impressed by the bountiful bread and fish available in aboriginal camps, in contrast to their own wretched condition. They attempted to glean as much as they could from the aboriginals about nardoo, an aquatic fern bearing spores they had observed the aboriginals using to make bread. Despite traveling along a creek and receiving frequent gifts of fish from the locals, they were unable to figure out how to catch them. Two months after departing from their base camp, the threesome had become entirely dependent on nardoo bread and occasional gifts of fish from the locals. Despite consuming what seemed to be sufficient calories, all three became increasingly fatigued and suffered from painful bowel movements. Burke and Wills soon died, poisoned and starved from eating improperly processed nardoo seeds. Unbeknown to these intrepid adventurers, nardoo seeds are toxic and highly indigestible if not properly processed. The local aboriginals, of course, possess specialized methods for detoxifying and processing these seeds. Fatigued and delusional, King wandered off into the desert where he was rescued by an aboriginal group, the Yantruwanta. He recovered and lived with the Yantruwanta for several months until a search party found him.

The planning for this expedition could not have been more extensive, and these men were not unprepared British schoolboys out on holiday. However, despite their big brains, camels, specialized equipment, training, and seven months of exposure to the desert environment prior to running out of supplies, they failed to survive in the Australian desert. This bit of history makes a simple point: Humans, unlike other animals, are heavily reliant on social learning to acquire large and important portions of their behavioral repertoire. No evolved cognitive modules, “evoked culture,” or generalized cost-benefit calculators delivered to these men the knowledge of how to detoxify nardoo spores or how to make and use rat traps, bird snares, or fishing nets from locally available materials. Unlike social learning in other animals, human cultural abilities generate adaptive strategies and bodies of knowledge that accumulate over generations. Foraging, as it is known ethnographically, would be impossible without technologies such as kayaks, blowguns, bone tools, boomerangs, and bows. These technological examples embody skills and know-how that no single individual could figure out in his lifetime. Nonmaterial culture, such as seed processing techniques, tracking abilities, and medicinal plant knowledge, reveals similar locally adaptive accumulations. Interestingly, this adaptive information is often embodied in socially learned rules, techniques, and heuristics that are applied with little or no understanding of how or why they work.

Thus, understanding a substantial amount of human adaptation requires understanding the cultural learning processes that assemble our behavioral repertoires over generations. This is not, however, a call to separate humans from the rest of nature. A productive approach should seat humans within the broader context of mammalian and primate evolution while at the same time being able to explain how and why humans are so different in the diversity and nature of their behavioral adaptations.

(pp. 123-124)




. . . "varying degrees of autonomy" . . .

Lasch, Culture of Narcissism.

Sport does play a role in socialization, but the lessons it teaches are not necessarily the ones that coaches and teachers of physical education seek to impart. The mirror theory of sport, like all reductionist interpretations of culture, makes no allowance for the autonomy of cultural traditions. In sport, these traditions come down from one generation of players to another, and although athletics do reflect social values, they can never be completely assimilated to those values. Indeed, they resist assimilation more effectively than many other activities, since games learned in youth exert their own demands and inspire loyalty to the game itself, rather than to the programs ideologues seek to impose on them.
(p. 115)

In its simplest form, the argument for the existence of some such "autonomy of cultural traditions" goes: no two people are exactly alike, hence no two relationships between a person and a "cultural tradition" can ever be exactly alike either, not even if the people and the tradition never seem to change very much.

This is yet another way of saying that some of practice (or perhaps all of it) lives "outside" of the human beings who comprise it. Practice cannot be perfectly amenable to their will, not even if they are its creators; most of it is made by and for someone else.

Obviously this does not quite jibe with the "social theory of art." The tradition is always something more than the sum of its social parts.


UPDATED: 6 Dec 2024

