Category Archives: Evolution


In Search of Dennett’s Free-Floating Rationales

I’ve decided to take a closer look at Dennett’s notion of free-floating rationale. It strikes me as being an unhelpful reification, but explaining just why that is has turned out to be a tricky matter. First I’ll look at a passage from a recent article, “The Evolution of Reasons” [1], and then go back three decades to a major exposition of the intentional stance as applied to animal behavior [2]. I’ll conclude with some hints about metaphysics.

On the whole I’m inclined to think of free-floating rationale as a poor solution to a deep problem. It’s not clear to me what a good solution would be, though I’ve got some suggestions as to how that might go.

Evolving Reasons

Dennett opens his inquiry by distinguishing between “a process narrative that explains the phenomenon without saying it is for anything” and an account that provides “a reason–a proper telic reason” (p. 50). The former is what he calls a how come? account and the latter is a what for? account. After reminding us of Aristotle’s somewhat similar four causes Dennett gets down to it: “Evolution by natural selection starts with how come and arrives at what for. We start with a lifeless world in which there are lots of causes but no reasons, no purposes at all.” (p. 50).

Those free-floating rationales are a particular kind of what for. He introduces the term on page 54:

So there were reasons before there were reason representers. The reasons tracked by evolution I have called “free-floating rationales” (1983, 1995, and elsewhere), a term that has apparently jangled the nerves of more than a few thinkers, who suspect I am conjuring up ghosts of some sort. Free-floating rationales are no more ghostly or problematic than numbers or centers of gravity. There were nine planets before people invented ways of articulating arithmetic, and asteroids had centers of gravity before there were physicists to dream up the idea and calculate with it. I am not relenting; instead, I am hoping here to calm their fears and convince them that we should all be happy to speak of the reasons uncovered by evolution before they were ever expressed or represented by human investigators or any other minds.

That is, just as there is no mystery about the relationship between numbers and planets, or between centers of gravity and asteroids, so there is no mystery about the relationship between free-floating rationales and X.

What sorts of things can we substitute for X? That’s what’s tricky. It turns out those things aren’t physically connected objects. Those things are patterns of interaction among physically connected objects.

Before taking a look at those patterns (in the next section), let’s consider another passage from this article (p. 54):

Natural selection is thus an automatic reason finder that “discovers,” “endorses,” and “focuses” reasons over many generations. The scare quotes are to remind us that natural selection doesn’t have a mind, doesn’t itself have reasons, but is nevertheless competent to perform this “task” of design refinement. This is competence without comprehension.

That’s where Dennett is going, “competence without comprehension” – a recent mantra of his.

It is characteristic of Dennett’s intentional stance that it authorizes the use of intentional language, such as “discovers,” “endorses,” and “focuses”. That’s what it’s for, to allow the use of such language in situations where it comes naturally and easily. What’s not clear to me is whether or not one is supposed to treat it as a heuristic device that leads to non-intentional accounts. Clearly intentional talk about “selfish” genes is to be cashed out in non-intentional talk, and that would seem to be the case with natural selection in general.

But it is one thing to talk about cashing out intentional talk in a more suitable explanatory lingo. It’s something else to actually do so. Dennett’s been talking about free-floating rationales for decades, but hasn’t yet, so far as I know, proposed a way of getting rid of that bit of intentional talk.


Dennett’s Astonishing Hypothesis: We’re Symbionts! – Apes with infected brains

It’s hard to know the proper attitude to take toward this idea. Daniel Dennett, after all, is a brilliant and much honored thinker. But I can’t take the idea seriously. He’s running on fumes. The noises he makes are those of engine failure, not forward motion.

At around 53:00 into this video (“Cultural Evolution and the Architecture of Human Minds”) he tells us that human culture is the “second great endosymbiotic revolution” in the history of life on earth, and, he assures us, he means that “literally.” The first endosymbiotic revolution, of course, was the emergence of eukaryotic cells from the pairwise incorporation of one prokaryote within another. The couple then operated as a single organism and of course reproduced as such.

At 53:13 he informs us:

In other words we are apes with infected brains. Our brains have been invaded by evolving symbionts which have then rearranged our brains, harnessing them to do work that no other brain can do. How did these brilliant invaders do this? Do they reason themselves? No, they’re stupid, they’re clueless. But they have talents that permit them to redesign human brains and turn them into human minds. […] Cultural evolution evolved virtual machines which can then be installed on the chaotic hardware of all those neurons.

Dennett is, of course, talking about memes. Apes and memes hooked up and we’re the result.

In the case of the eukaryotic revolution the prokaryotes that merged had evolved independently and prior to the merging. Did the memes evolve independently and prior to hooking up with us? If so, do we know where and how this happened? Did they come from meme wells in East Africa? Dennett doesn’t get around to explaining that in this lecture as he’d run out of time. But I’m not holding my breath until he coughs up an account.

But I’m wondering if he’s yet figured out how many memes can dance on the head of a pin.

More seriously, how is it that he’s unable to see how silly this is? What is his system of thought like that such thoughts are acceptable?


Underwood and Sellers 2015: Beyond narrative we have simulation

It is one thing to use computers to crunch data. It’s something else to use computers to simulate a phenomenon. Simulation is common in many disciplines, including physics, sociology, biology, engineering, and computer graphics (CGI special effects generally involve simulation of the underlying physical phenomena). Could we simulate large-scale literary processes?

In principle, of course. Why not? In practice, not yet. To be sure, I’ve seen the possibility mentioned here and there, and I’ve seen an example or two. But it’s not something many are thinking about, much less doing.

Nonetheless, as I was thinking about How Quickly Do Literary Standards Change? (Underwood and Sellers 2015) I found myself thinking about simulation. The object of such a simulation would be to demonstrate the principal result of that work, as illustrated in this figure:

[Figure: the direction of change in 19th-century poetic diction, from Underwood and Sellers 2015]

Each dot, regardless of color or shape, represents the position of a volume of poetry in a one-dimensional abstraction of a 3200-dimensional space – though that’s not how Underwood and Sellers explain it (for further remarks see “Drifting in Space” in my post, Underwood and Sellers 2015: Cosmic Background Radiation, an Aesthetic Realm, and the Direction of 19thC Poetic Diction). The trend line indicates that poetry is shifting in that space along a uniform direction over the course of the 19th century. Thus there seems to be a large-scale direction to that literary system. Could we create a simulation that achieves that result through ‘local’ means, without building a telos into the system?

The only way to find out would be to construct such a system. I’m not in a position to do that, but I can offer some remarks about how we might go about doing it.

* * * * *

I note that this post began as something I figured I could knock out in two or three afternoons. We’ve got a bunch of texts, a bunch of people, and the people choose to read texts, cycle after cycle after cycle. How complicated could it be to make a sketch of that? Pretty complicated.
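As a first pass at that sketch, here’s a toy agent-based cycle in Python. Everything in it is invented for illustration – the dimensionality, the taste and appeal functions, the population sizes – so it’s a minimal sketch of what one round of “people choose to read texts” might look like in code, not a claim about how the actual literary system worked:

```python
import random

random.seed(1)
DIMS = 8  # toy stand-in for a high-dimensional space of literary features

def new_text(model):
    """A writer produces a new text as a noisy copy of a model text."""
    return [x + random.gauss(0, 0.1) for x in model]

def appeal(text, taste):
    """A reader's appraisal of a text: closeness to her own taste."""
    return -sum((t - s) ** 2 for t, s in zip(text, taste))

# Readers whose tastes lean, on average, in a shared direction.
tastes = [[random.gauss(0.5, 1.0) for _ in range(DIMS)] for _ in range(30)]

# The initial stock of texts, scattered around the origin.
texts = [[random.gauss(0.0, 1.0) for _ in range(DIMS)] for _ in range(20)]

for cycle in range(200):
    # Each cycle: readers collectively favor the texts nearest their tastes...
    texts.sort(key=lambda t: sum(appeal(t, reader) for reader in tastes))
    favorites = texts[-5:]
    # ...and writers model their next texts on what was well received.
    texts = [new_text(random.choice(favorites)) for _ in range(20)]

mean_coord = sum(sum(t) for t in texts) / (20 * DIMS)
print(f"mean coordinate after 200 cycles: {mean_coord:.2f}")
```

Run it and the stock of texts drifts steadily toward the readers’ shared lean. But notice that the direction is smuggled in through the tastes themselves, which is exactly the kind of telos-in-disguise a serious simulation would have to interrogate.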

What follows is no more than a sketch. There’s a bunch of places where I could say more and more places where things need to be said, but I don’t know how to say them. Still, if I can get this far in the course of a week or so, others can certainly take it further. It’s by no means a proof of concept, but it’s enough to convince me that at some time in the future we will be running simulations of large scale literary processes.

I don’t know whether or not I would create such a simulation given a budget and appropriate collaborators. But I’m inclined to think that, if not now, then within the next ten years we’re going to have to attempt something like this, if for no other reason than to see whether or not it can tell us anything at all. The fact is, at some point, simulation is the only way we’re going to get a feel for the dynamics of literary process.

* * * * *

It’s a long way through this post, almost 5000 words. I begin with a quick look at an overall approach to simulating a literary system. Then I add some details, starting with stand-ins for (simulations of) texts and people. Next we have processes involving those objects. That’s the basic simulation, but it’s not the end of my post. I have some discussion of things we might do with this system, followed by suggestions about extending it. I conclude with a short discussion of the E-word.


Could Heart of Darkness have been published in 1813? – a digression from Underwood and Sellers 2015

Here I’m just thinking out loud. I want to play around a bit.

Conrad’s Heart of Darkness is well within the 1820-1919 time span covered by Underwood and Sellers in How Quickly Do Literary Standards Change?, while Austen’s Pride and Prejudice, published in 1813, is a bit before. And both are novels, while Underwood and Sellers wrote about poetry. But these are incidental matters. My purpose is to think about literary history and the direction of cultural change, which is front and center in their inquiry. But I want to think about that topic in a hypothetical mode that is quite different from their mode of inquiry.

So, how likely is it that a book like Heart of Darkness would have been published in the second decade of the 19th century, when Pride and Prejudice was published? A lot, obviously, hangs on that word “like”. For the purposes of this post likeness means similar in the sense that Matt Jockers defined in Chapter 9 of Macroanalysis. For all I know, such a book may well have been published; if so, I’d like to see it. But I’m going to proceed on the assumption that such a book doesn’t exist.

The question I’m asking is about whether or not the literary system operates in such a way that such a book is very unlikely to have been written. If that is so, then what happened that the literary system was able to produce such a book almost a century later?

What characteristics of Heart of Darkness would have made it unlikely/impossible to publish such a book in 1813? For one thing, it involved a steamship, and steamships didn’t exist at that time. This strikes me as a superficial matter given the existence of ships of all kinds and their extensive use for transport on rivers, canals, lakes, and oceans.

Another superficial impediment is the fact that Heart is set in the Belgian Congo, but the Congo hadn’t been colonized until the last quarter of the century. European colonialism was quite extensive by that time, and much of it was quite brutal. So far as I know, the British novel in the early 19th century did not concern itself with the brutality of colonialism. Why not? Correlatively, the British novel of the time was very much interested in courtship and marriage, topics not central to Heart, but not entirely absent either.

The world is a rich and complicated affair, bursting with stories of all kinds. But some kinds of stories are more salient in a given tradition than others. What determines the salience of a given story and what drives changes in salience over time? What had happened that colonial brutality had become highly salient at the turn of the 20th century?


Underwood and Sellers 2015: Beyond Whig History to Evolutionary Thinking

In the middle of their most interesting and challenging paper, How Quickly Do Literary Standards Change?, Underwood and Sellers have two paragraphs in which they raise the specter of Whig history and banish it. In the process they take some gratuitous swipes at Darwin and Lamarck and, by implication, at the idea that evolutionary thinking can be of benefit to literary history. I find these two paragraphs confused and confusing and so feel a need to comment on them.

Here’s what I’m doing: First, I present those two paragraphs in full, without interruption. That’s so you can get a sense of how their thought hangs together. Second, and the bulk of this post, I repeat those two paragraphs, in full, but this time with inserted commentary. Finally, I conclude with some remarks on evolutionary thinking in the study of culture.

Beware of Whig History

By this point in their text Underwood and Sellers have presented their evidence and their basic, albeit unexpected finding, that change in English-language poetry from 1820-1919 is continuous and in the direction of standards implicit in the choices made by 14 selective periodicals. They’ve even offered a generalization that they think may well extend beyond the period they’ve examined (p. 19): “Diachronic change across any given period tends to recapitulate the period’s synchronic axis of distinction.” While I may get around to discussing that hypothesis – which I like – in another post, we can set it aside for the moment.

I’m interested in two paragraphs they write in the course of showing how difficult it will be to tease a causal model out of their evidence. Those paragraphs are about Whig history. Here they are in full and without interruption (pp. 20-21):

Nor do we actually need a causal explanation of this phenomenon to see that it could have far-reaching consequences for literary history. The model we’ve presented here already suggests that some things we’ve tended to describe as rejections of tradition — modernist insistence on the concrete image, for instance — might better be explained as continuations of a long-term trend, guided by established standards. Of course, stable long-term trends also raise the specter of Whig history. If it’s true that diachronic trends parallel synchronic principles of judgment, then literary historians are confronted with material that has already, so to speak, made a teleological argument about itself. It could become tempting to draw Lamarckian inferences — as if Keats’s sensuous precision and disillusionment had been trying to become Swinburne all along.

We hope readers will remain wary of metaphors that present historically contingent standards as an impersonal process of adaptation. We don’t see any evidence yet for analogies to either Darwin or Lamarck, and we’ve insisted on the difficulty of tracing causality exactly to forestall those analogies. On the other hand, literary history is not a blank canvas that acquires historical self-consciousness only when retrospective observers touch a brush to it. It’s already full of historical observers. Writing and reviewing are evaluative activities already informed by ideas about “where we’ve been” and “where we ought to be headed.” If individual writers are already historical agents, then perhaps the system of interaction between writers, readers, and reviewers also tends to establish a resonance between (implicit, collective) evaluative opinions and directions of change. If that turns out to be true, we would still be free to reject a Whiggish interpretation, by refusing to endorse the standards that happen to have guided a trend. We may even be able to use predictive models to show how the actual path of literary history swerved away from a straight line. (It’s possible to extrapolate a model of nineteenth-century reception into the twentieth, for instance, and then describe how actual twentieth-century reception diverged from those predictions.) But we can’t strike a blow against Whig history simply by averting our eyes from continuity. The evidence we’re seeing here suggests that literary-historical trends do turn out to be relatively coherent over long timelines.

I agree with those last two sentences. It’s how Underwood and Sellers get there that has me a bit puzzled.


Underwood and Sellers 2015: Cosmic Background Radiation, an Aesthetic Realm, and the Direction of 19thC Poetic Diction

I’ve read and been thinking about Underwood and Sellers 2015, How Quickly Do Literary Standards Change?, both the blog post and the working paper. I’ve got a good many thoughts about their work and its relation to the superficially quite different work that Matt Jockers did on influence in chapter nine of Macroanalysis. I am, however, somewhat reluctant to embark on what might become another series of long-form posts, which I’m likely to need in order to sort out the intuitions and half-thoughts that are buzzing about in my mind.

What to do?

I figure that at the least I can just get it out there, quick and crude, without a lot of explanation. Think of it as a mark in the sand. More detailed explanations and explorations can come later.

19th Century Literary Culture has a Direction

My central thought is this: Both Jockers on influence and Underwood and Sellers on literary standards are looking at the same thing: long-term change in 19th Century literary culture has a direction – where that culture is understood to include readers, writers, reviewers, publishers and the interactions among them. Underwood and Sellers weren’t looking for such a direction, but have (perhaps somewhat reluctantly) come to realize that that’s what they’ve stumbled upon. Jockers seems a bit puzzled by the model of influence he built (pp. 167-168); but in any event, he doesn’t recognize it as a model of directional change. That interpretation of his model is my own.

When I say “direction” what do I mean?

That’s a very tricky question. In their full paper Underwood and Sellers devote two long paragraphs (pp. 20-21) to warding off the spectre of Whig history – the horror! the horror! In the Whiggish view, history has a direction, and that direction is a progression from primitive barbarism to the wonders of (current Western) civilization. When they talk of direction, THAT’s not what Underwood and Sellers mean.

But just what DO they mean? Here’s a figure from their work:

[Figure: the direction of change in 19th-century poetic diction, from Underwood and Sellers 2015]

Notice that we’re depicting time along the X-axis (horizontal), from roughly 1820 on the left to 1920 on the right. Each dot in the graph, regardless of color (red, gray) or shape (triangle, circle), represents a volume of poetry, and its position on the X-axis is the volume’s publication date.

But what about the Y-axis (vertical)? That’s tricky, so let us set that aside for a moment. The thing to pay attention to is the overall relation of these volumes of poetry to that axis. Notice that as we move from left to right, the volumes seem to drift upward along the Y-axis, a drift that’s easily seen in the trend line. That upward drift is the direction that Underwood and Sellers are talking about. That upward drift was not at all what they were expecting.

Drifting in Space

But what does the upward drift represent? What’s it about? It represents movement in some space, and that space represents poetic diction or language. What we see along the Y-axis is a one-dimensional reduction or projection of a space that in fact has 3200 dimensions. Now, that’s not how Underwood and Sellers characterize the Y-axis. That’s my reinterpretation of that axis. I may or may not get around to writing a post in which I explain why that’s a reasonable interpretation.
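That reinterpretation is easy to state concretely. The sketch below uses made-up numbers (50 dimensions standing in for the 3200, and an arbitrary drift rate): each volume is a point that carries a small common drift along one fixed direction, the Y-value is its dot product with that direction (the one-dimensional reduction), and the trend line is an ordinary least-squares fit of that projection against publication year:

```python
import random

random.seed(42)
DIMS = 50  # stand-in for the 3200-dimensional diction space
YEARS = range(1820, 1920)

# A fixed, arbitrary direction of change in the space (unit vector).
direction = [random.gauss(0, 1) for _ in range(DIMS)]
norm = sum(d * d for d in direction) ** 0.5
direction = [d / norm for d in direction]

def volume(year):
    """A volume of poetry: random position plus a small drift along the direction."""
    drift = (year - 1820) * 0.02
    return [random.gauss(0, 1) + drift * d for d in direction]

def project(vec):
    """One-dimensional reduction: dot product with the direction."""
    return sum(v * d for v, d in zip(vec, direction))

points = [(year, project(volume(year))) for year in YEARS]

# Least-squares slope of the projection against year: the trend line.
n = len(points)
mx = sum(x for x, _ in points) / n
my = sum(y for _, y in points) / n
slope = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x, _ in points))
print(f"trend slope: {slope:.4f} per year")
```

The slope comes out positive: once the points share even a small common drift, projecting onto that direction reveals it, no matter how many dimensions the surrounding noise occupies.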


On the Direction of Cultural Evolution: Lessons from the 19th Century Anglophone Novel

I’ve got another working paper available (title above):

Most of the material in this document was in an earlier working paper, Cultural Evolution: Literary History, Popular Music, Cultural Beings, Temporality, and the Mesh, which also has a great deal of material that isn’t in this paper. I’ve created this version so that I can focus on the issue of directionality, and so I’ve dropped all the material that didn’t relate to that issue. The last section, The Universe and Time, is new, as is this introduction.

* * * * *

Abstract: Matthew Jockers has analyzed a corpus of 19th century American and British novels (Macroanalysis 2013). Using standard techniques from natural language processing (NLP), Jockers created a 600-dimensional design space for a corpus of 3300 novels. There is no temporal information in that space, but when the novels are grouped according to close similarity that grouping generates a diagonal through the space that, upon inspection, is aligned with the direction of time. That implies that the process that created those novels is a directional one. Certain (kinds of) novels are necessarily earlier than others because that is how the causal mechanisms (whatever they are) work. This result has implications for our understanding of cultural evolution in general and of the relationship between cultural evolution and biological evolution.

1. Introduction: Direction in Design Space, Telos?
2. The Direction of Cultural Evolution: The Child is Father of the Man
3. Nineteenth Century English-Language Novels
4. Macroanalysis: Styles
5. Macroanalysis: Themes
6. Influence and Large Scale Direction
7. The 19th Century Anglophone Novel
8. Why Did Jockers Get That Result?
9. What Remains to be Done?
10. Literary History, Temporal Orders, and Many Worlds
11. The Universe and Time

Introduction: Evolving Along a Direction in Design Space

In 2013 Matthew Jockers published Macroanalysis: Digital Methods & Literary History. I devoted considerable blogging effort to it in 2014, including most, but not all, of the material in this working paper. In Jockers’ final study he operationalized the idea of influence by calculating the similarity between each pair of texts in his corpus of roughly 3300 19th century English-language novels. The rationale is obvious enough: If novelist K was influenced by novelist F, then you would expect her novels to resemble those of F more than those of C, whom K had never even read.

Jockers examined this data by creating a directed graph in which each text was represented by a node and each text (node) was connected only to those texts to which it had a high degree of resemblance. This is the resulting graph:

[Figure 9.3 from Macroanalysis: Jockers’ directed graph of similarity among novels]

It is, alas, almost impossible to read this graph as represented here. But Jockers, of course, had interactive access to it and to all the data and calculations behind it. What is particularly interesting, though, is that the graph lays out the novels more or less in chronological order, from left to right (notice the coloring of the graph), though there was no temporal information in the underlying data. Much of the material in the rest of this working paper deals with that most interesting result (in particular, sections 2, 6, 7, 8, and 10).
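A toy version of that construction (the feature vectors and threshold below are invented, not Jockers’ data) shows how chronological order can fall out of similarity alone: if style drifts gradually, each text’s nearest neighbors in feature space are its temporal neighbors, so keeping only high-similarity edges yields a roughly chronological chain:

```python
from itertools import combinations
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Toy feature vectors (stand-ins for Jockers' 600 stylistic and thematic
# features), constructed so that style shifts gradually with publication year.
novels = {
    "A-1810": [9, 1, 0, 0],
    "B-1830": [7, 3, 1, 0],
    "C-1850": [4, 5, 3, 1],
    "D-1870": [1, 5, 6, 3],
    "E-1890": [0, 3, 5, 8],
}

THRESHOLD = 0.8  # keep only edges between highly similar texts
edges = [(a, b) for a, b in combinations(novels, 2)
         if cosine(novels[a], novels[b]) >= THRESHOLD]
print(edges)
```

Here the surviving edges are exactly (A, B), (B, C), (C, D), (D, E): a chain in publication order, even though no date entered into the similarity computation.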

What I want to do here is, first of all, reframe my treatment of Jockers’ analysis in terms of something we might call a design space (a phrase I take from Dan Dennett, though I believe it is a common one in certain intellectual circles). Then I emphasize the broader metaphysical implications of Jockers’ analysis.


Has Dennett Undercut His Own Position on Words as Memes?

Early in 2013 Dan Dennett had an interview posted at John Brockman’s Edge site, The Normal Well-Tempered Mind. He opened by announcing that he’d made a mistake early in his career, that he’d opted for a conception of the brain-as-computer that was too simple. He’s now trying to revamp his sense of what the computational brain is like. He said a bit about that in that interview, and a bit more in a presentation he gave later in the year: If brains are computers, what kind of computers are they? He made some remarks in that presentation that undermine his position on words as memes, though he doesn’t seem to realize that.

Here’s the abstract of that talk:

Our default concepts of what computers are (and hence what a brain would be if it was a computer) include many clearly inapplicable properties (e.g., powered by electricity, silicon-based, coded in binary), but other properties are no less optional, but not often recognized: Our familiar computers are composed of millions of basic elements that are almost perfectly alike – flipflops, registers, or-gates – and hyper-reliable. Control is accomplished by top-down signals that dictate what happens next. All subassemblies can be designed with the presupposition that they will get the energy they need when they need it (to each according to its need, from each according to its ability). None of these is plausibly mirrored in cerebral computers, which are composed of billions of elements (neurons, astrocytes, …) that are no-two-alike, engaged in semi-autonomous, potentially anarchic or even subversive projects, and hence controllable only by something akin to bargaining and political coalition-forming. A computer composed of such enterprising elements must have an architecture quite unlike the architectures that have so far been devised for AI, which are too orderly, too bureaucratic, too efficient.

While there’s nothing in that abstract that seems to undercut his position on memes, and he affirmed that position toward the end of the talk, we need to look at some of the details.

The Material Mind is a Living Thing

The details concern Terrence Deacon’s recent book, Incomplete Nature: How Mind Emerged from Matter (2013). Rather than quote from Dennett’s remarks in the talk, I’ll quote from his review, “Aching Voids and Making Voids” (The Quarterly Review of Biology, Vol. 88, No. 4, December 2013, pp. 321-324). The following passage may be a bit cryptic, but short of reading the relevant chapters in Deacon’s book (which I’ve not done) and providing summaries, there’s not much I can do, though Dennett says a bit more both in his review and in the video.

Here’s the passage:

But if we are going to have a proper account of information that matters, which has a role to play in getting work done at every level, we cannot just discard the sender and receiver, two homunculi whose agreement on the code defines what is to count as information for some purpose. Something has to play the roles of these missing signal-choosers and signal-interpreters. Many—myself included—have insisted that computers themselves can serve as adequate stand-ins. Just as a vending machine can fill in for a sales clerk in many simplified environments, so a computer can fill in for a general purpose message-interpreter. But one of the shortcomings of this computational perspective, according to Deacon, is that by divorcing information processing from thermodynamics, we restrict our theories to basically parasitical systems, artifacts that depend on a user for their energy, for their structure maintenance, for their interpretation, and for their raison d’être.

In the case of words the signal choosers and interpreters are human beings and the problem is precisely that they have to agree on “what is to count as information for some purpose.” By talking of words as memes, and of memes as agents, Dennett sweeps that problem under the conceptual rug.


Glossary of Terms for Cultural Evolution

This is a short list of terms that I have come to treat as terms of art in thinking about cultural evolution. I have no idea how stable these terms and definitions will prove to be. I am posting them to a page at New Savanna so that they can be readily referenced. Most of these terms are relatively recent, but my thinking about cultural evolution is broadly scattered across many posts, working papers, and a handful of formal articles.

Coordinator: The genetic element in cultural processes. Coordinators are physical traits of objects or processes. The emic/etic distinction in linguistics is a useful reference point. Phonetics is the study of language sounds. Phonemics is the study of those sound features, phonemes, that are active in a language.

The notion of a coordinator is, in effect, a generalization of the phoneme. A coordinator is a physical trait that is psychologically active/salient in cultural processes.

If you want to think in terms of computation, observe that computers, both abstract and real, operate on data. Some bits of data are special in that they directly influence processing by supplying the values of operating parameters. Coordinators are data of that type. Coordinators supply the values to parameters of mental “software.”
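To make that computational analogy concrete, here is a toy processor (the example and all its names are mine, purely illustrative) whose input stream mixes ordinary data with items that set an operating parameter; the parameter-setting items play the role the glossary assigns to coordinators:

```python
def process(stream):
    """Process a stream in which some items set operating parameters."""
    params = {"scale": 1}  # operating parameters of the mental "software"
    out = []
    for kind, value in stream:
        if kind == "param":
            # A coordinator-like item: it shapes subsequent processing
            # rather than being processed itself.
            params["scale"] = value
        else:
            # An ordinary datum, processed under the current parameters.
            out.append(value * params["scale"])
    return out

stream = [("data", 2), ("param", 10), ("data", 3), ("data", 4)]
print(process(stream))  # [2, 30, 40]
```

The same datum yields different results depending on which parameter-setting items preceded it; that is the sense in which such data influence processing rather than merely undergoing it.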

Note that coordinators are not, in this sense, Dawkinsian replicators. Nor is it obvious to me that they form lineages. Finally, where the genetic material of biology exists everywhere in the same substrate – DNA molecules – coordinators can exist on any publicly accessible substrate, with most of them being either visible or audible.

Coupler: A kind of coordinator through which the temporal activities of two or more nervous systems are synchronized. When soldiers march in step the rate and length of their strides couple their motions together into a coherent ensemble. The conventions of the blues, or of a particular raga, are more complex couplers. Conversational turn taking is a coupling function too.

Cover (paint): Objects, artifacts, actions and processes, that is, actors in the mesh, are said to be covered or to be painted with coordinators.

Cultural Being: A package or envelope of coordinators along with its trajectory in the minds of all who use it. As such, cultural beings are the object on which cultural selection operates. They are thus the phenotypic entities of culture. If participating in a cultural being was pleasant, then one would be motivated to do so again. Otherwise not.

The consequences of this definition are not obvious and will require careful consideration. I’ll give an example from music to give a sense of what I’ve got in mind.


Dennett on the De-Darwinizing of Culture

This is Dennett at his best on cultural evolution, which, given the peculiar nature of his gifts, is also Dennett at his worst on cultural evolution.

This recent video (talk given 19 March 2015) gathers many of Dennett’s recent themes and examples. The central thread is worthwhile – Dennett’s only idea on culture that’s caught my interest – but it is festooned with his typical assembly of brilliant obfuscating rhetorical ornamentation. One has the impression that he’s thought more, and more deeply, about biology than about culture. And so he’s using biology as a vehicle for understanding culture. That’s not unreasonable provided, of course, that you have a robust understanding of culture that is not piggybacking on biology. Dennett seems rather poor in that sort of understanding of culture.

Dennett’s rhetoric in this video would reward a close analysis, but not by me, not at this time. I’ve already done some of that in my working paper, Cultural Evolution, Memes, and the Trouble with Dan Dennett; see the appendix, “Turtles All the Way Down: How Dennett Thinks: An Essay in Cognitive Rhetoric”.

As for De-Darwinizing, the idea seems to be something like this: There are things whose design is the result of what we might call a “full Darwinian” process, what Donald Campbell characterizes as blind variation and selective retention (BVSR). A lot of language seems rather like that, but in the cultural rather than the biological sphere. But there are also words that have been deliberately coined and introduced into the language, such as “meme”. So, not “full Darwinian”. It’s been De-Darwinized.