Statistics and Symbols in Mimicking the Mind

MIT recently held a symposium on the current status of AI, which apparently has seen precious little progress in recent decades. The discussion, it seems, ground down to a squabble over the prevalence of statistical techniques in AI and a call for a revival of work on the sorts of rule-governed models of symbolic processing that once dominated much of AI and its sibling, computational linguistics.

Briefly, from the early days in the 1950s up through the 1970s, both disciplines used models built on carefully hand-crafted symbolic knowledge. The computational linguists built parsers and sentence generators, and the AI folks modeled specific domains of knowledge (e.g. diagnosis in selected medical domains, naval ships, toy blocks). Initially these efforts worked like gang-busters. Not that they did much by Star Trek standards, but they actually did something, and they did things never before done with computers. That’s exciting, and fun.

In time, alas, the excitement wore off and there was no more fun. Just systems that got too big, failed too often, and still didn’t do a whole heck of a lot.

Then, starting, I believe, in the 1980s, statistical models were developed that, yes, worked like gang-busters. And these models actually did practical tasks, like speech recognition and then machine translation. That was a blow to the symbolic methodology because these programs were “dumb.” They had no knowledge crafted into them, no rules of grammar, no semantics. Just routines that learned while gobbling up terabytes of example data. Thus, as Google’s Peter Norvig points out, machine translation is now dominated by statistical methods. No grammars and parsers carefully hand-crafted by linguists. No linguists needed.

What a bummer. For machine translation is THE prototype problem for computational linguistics. It’s the problem that set the field in motion and has been a constant arena for research and practical development. That’s where much of the handcrafted art was first tried, tested, and, in a measure, proved. For it to now be dominated by statistics . . . bummer.

So that’s where we are. And that’s what the symposium was chewing over.

* * * * *

All that’s just a set-up for some slightly older observations by Martin Kay. Martin Kay is one of the grand old men of computational linguistics. He was on the machine translation team that David Hays assembled at RAND in the late 1950s and has done seminal work in the field. In 2005 the Association for Computational Linguistics gave him a lifetime achievement award. And he gave them an acceptance speech. Here’s a passage near the end of that speech, which is worth reading from start to end (PDF); Kay is talking about the statistical vs. the symbolic approach:

Now I come to the fourth point, which is ambiguity. This, I take it, is where statistics really come into their own. Symbolic language processing is highly nondeterministic and often delivers large numbers of alternative results because it has no means of resolving the ambiguities that characterize ordinary language. This is for the clear and obvious reason that the resolution of ambiguities is not a linguistic matter. After a responsible job has been done of linguistic analysis, what remain are questions about the world. They are questions of what would be a reasonable thing to say under the given circumstances, what it would be reasonable to believe, suspect, fear or desire in the given situation.

This, BTW, has come to be known as the common sense problem. Once AI started trying to model how we reason about the world in general, it discovered that we had thousands and tens of thousands of little bits of knowledge we relied on all the time. Like, you know: rain is wet, being wet is often unpleasant, people don’t like to get wet, umbrellas keep the rain off you, so you don’t get wet, which you don’t like, and that’s why you took an umbrella with you when you went out because you looked out the door and saw dark clouds in the sky and clouds are a sign of rain meaning that you might get wet while walking to the grocery store so better have an umbrella with you. Like that. Just endless piles and piles of such utterly trivial stuff. All of which had to be carefully hand-coded into computerese. And, while the knowledge itself is trivial, the hand-coding is not. And so, as I indicated above, that particular enterprise ground to a halt.
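That umbrella chain is exactly the sort of knowledge the symbolic systems had to hand-code. Here, for flavor, is a toy sketch of forward chaining over such rules; the rules and fact strings are my own invention for illustration, not anything from any historical system:

```python
# Hand-coded toy rules: each maps a set of premise facts to one conclusion.
rules = [
    ({"dark clouds"}, "rain likely"),
    ({"rain likely", "going out"}, "might get wet"),
    ({"might get wet", "getting wet is unpleasant"}, "take umbrella"),
]

def forward_chain(facts):
    """Repeatedly fire any rule whose premises are all known, until nothing new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"dark clouds", "going out", "getting wet is unpleasant"})
print("take umbrella" in derived)  # True
```

Three rules get you one umbrella; the trouble is that real common sense needs millions of them, every one typed in by hand.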

Kay continues:

If these questions are in the purview of any academic discipline, it is presumably artificial intelligence. But artificial intelligence has a lot on its plate and to attempt to fill the void that it leaves open, in whatever way comes to hand, is entirely reasonable and proper. But it is important to understand what we are doing when we do this and to calibrate our expectations accordingly. What we are doing is to allow statistics over words that occur very close to one another in a string to stand in for the world construed widely, so as to include myths, and beliefs, and cultures, and truths and lies and so forth.

That, I believe, is a very important point. We’re using statistics about actual language use, based on crunching billions of words of text, as a proxy for detailed and systematic knowledge of the world. How do people get such knowledge? First, through living in the world, perceiving it, moving in it, doing things. And then there’s book learning, which builds on a necessary foundation of direct physical experience.
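To make Kay’s “statistics over words that occur very close to one another in a string” concrete, here is a toy sketch of the basic move, counting which words co-occur; the three-sentence mini-corpus is made up for illustration, and real systems crunch billions of words:

```python
from collections import Counter
from itertools import combinations

# Made-up mini-corpus; real systems use terabytes of text.
corpus = [
    "dark clouds mean rain",
    "rain makes you wet",
    "umbrellas keep rain off",
]

window = Counter()
for sentence in corpus:
    words = sentence.split()
    # Count each unordered pair of words that occur near one another.
    for a, b in combinations(words, 2):
        window[frozenset((a, b))] += 1

# "clouds" and "rain" turn out to be associated purely from distribution,
# with no hand-coded rule saying that clouds are a sign of rain.
print(window[frozenset(("clouds", "rain"))])  # 1
```

The counts stand in for the world: no rule anywhere says why clouds go with rain, only that they do in the data.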

Kay concludes his thought:

As a stop-gap for the time being, this may be as good as we can do, but we should clearly have only the most limited expectations of it because, for the purpose it is intended to serve, it is clearly pathetically inadequate. The statistics are standing in for a vast number of things for which we have no computer model. They are therefore what I call an “ignorance model”.

And that’s where we are. The question is: How can we do better? As far as I can tell, there’s no obvious answer to that question. As far as I can tell, we somehow need to get all that commonsense knowledge into computerese. I don’t think we know how to do it.

I don’t think we can hand-code it. For reasons I couldn’t explain very well even if I tried, I don’t think hand-coding can, even in principle, be very effective, no matter how clever the formalism, nor how diligent the coders. The machine is going to have to acquire that knowledge by learning it. We can hand-code the learning device, but then we’re going to have to set it free in the world and let it learn for itself.

We can certainly think about doing such things. Indeed, we are doing them in limited yet often interesting ways. But full-scale, all-out, multi-modal (seeing, hearing, touching, handling, smelling) common sense knowledge of the world? Nope, we’re not there yet.

But we can dream, and we can scheme. And the obvious thing to scheme about is linking the symbolic and statistical approaches into a single system. Ideas anyone?

21 thoughts on “Statistics and Symbols in Mimicking the Mind”

  1. I wonder if there is a developmental piece to this. Lots of irregular grammatical forms are common, learned young, and particularly difficult for non-native speakers. Could it be that early on we acquire language through simple statistical association, and that grammatical, rule-based language comes later as a “Plan B” approach to expressing oneself when there is no off-the-shelf answer in the statistical database — the database having been built up at a developmental stage before more formal conscious reasoning, as opposed to mere statistical absorption of the norm?

    This could also explain why immersion works better than classroom instruction in a foreign language even for comparable numbers of hours of work. Immersion may force people into a “childlike” statistical mode of language acquisition as opposed to the more rational approach of classroom learning.

  2. Makes sense to me. Though I do think we have some structure ‘built in,’ I also think that statistical processing is the ‘leading edge’ of our encounter with the world. The structure is imposed on and carved out of a less structured mesh of associations and confluences.

  3. I suppose it’s difficult if you are working so close to this to appreciate the question I’m about to ask. How is it that you’re half a century on from Chomsky’s early models, there are whole university faculties all over the world studying this stuff, there are even basic 2D models of the structure of language/cognition. So what is stopping you producing a 3D (or more-D) model of the Deep Structure underlying language?

    Feck (not a swear in my country), I’m only doing some basic English writing skills and one of my characters has to make a DS model like this. So for authenticity I researched on the net to see what this DS model is, and I find no one’s actually produced one in real life yet.

    Yet it’s so obvious how to do one if you stand back and think about it.

    Can’t get me breath, I’m a flippin’ brickie and I can see it.

    F.

  4. So let your character make the DS model and be done with it. And if your character sticks with it long enough, maybe it’ll fall apart, just like that enterprise has done for much of linguistics. It’s not that there’s no DS, but it sure doesn’t seem like Chomsky envisioned it 50 years ago. Heck, Chomsky abandoned the notion of a ‘classical’ DS at some point in his thinking, though I can’t tell you just when because I’d stopped paying attention to him by the early 1970s.

  5. That’s an interesting response. Note the rider “and be done with it.” That’s the kind of response I get when I try and discuss anything with people who do this stuff for a living. You guys have an incredible pool of knowledge and resources at your disposal. I have very little of those things, but I have time to think.

    I just don’t get it. I can visualise this model, a structure made of nodal connections. It’s utterly massive, exists deeper than the awareness of the conscious minds that carry and add their little bit to it, from one generation to the next. Language is only part of the structure and, like DNA, the structure contains all the information dating back to when consciousness first arose. It shifts ‘state’ and symmetry to give rise to meaning, cognition and awareness. Interconnected areas, like an on-off switch, reflected in measurable neurological processes. I can see how to extrapolate such a model from a variety of current interdisciplinary approaches and convert this ‘classical’ model into wave and possibly algorithm.

    It’s not a classical ‘solid’ structure I’m describing, but a more abstract representational one which can be represented structurally and then classically. I think ‘mapping’ it out would be useful for guys like you and I can’t understand why you haven’t done it. To me it’s totally fascinating. You’ve got 3D computer modeling now, you have formulas for converting from classical structure to wave, you now have the kind of maths, for example, that Penrose developed, which is required for describing structures that possess five and more plane symmetries, and you’ve got computers that crunch numbers into algorithms or other expressive formulae.

    Why don’t you just do this? I don’t get it. Where you are currently at is not much to show for fifty-odd years’ worth of research papers, is it?

  6. One last thought before I forget this: someone’s going to say “Oh, you seem to be describing Dawkins’s memes.” No I’m not, really. Meme was, as he himself suggested, a vague suggestion of a much bigger idea. I’m suggesting a mapping out, a ‘coherence’ that is arrived at and represented by a variety of interdisciplinary approaches and can be described and represented by each of the systems; it would be something that ‘translates’ and has validity beyond a single disciplinary approach. You would begin to have a ‘map’ of the ‘lingnome’, which is one aspect of the whole model, taking into account pre-linguistics and innate and learned adaptive behaviors. I’m suggesting a structure that began simply and, in replicating simple patterns, developed the necessary complexity to give rise to language.

    I stress this is not the ‘same’ as DNA because it is a more abstract model than a physically observable, measurable structure. However, I would suggest it might find reflection in measurable, quantifiable neurological processes. It might be that the same kind of structure/wave ‘form’ is observable in a macro model mapping its evolutionary development and is also observable in the microcosm of an individual brain. How else is this structure going to be carried and modified from one species, or generation of the same species, to the next?

    Anyway, I have to go and lay a patio and make riven edges tessellate today.

  7. Where you are currently at is not much to show for fifty-odd years’ worth of research papers, is it?

    Well, now, just how does one evaluate this assertion? Because in an obvious way you’re right. Fifty-odd years of research and what? Well, what we know, for example, is how to get a pile of digital electronics to play a championship game of Jeopardy, that’s what we know. And another pile of digital electronics uses statistical techniques to produce lame, but often useful, translations from one language to another. But the DS of language and thought, that eludes us.

    Looks like we’ve spent 50 years laying out in exquisite detail lots of ideas that looked promising at the ‘I have a dream stage’ but that failed when it came time to build something. Still, we’ve accomplished something, just not what we set out to. But then, Columbus was trying to get to India. That didn’t work out too well on that score, did it?

  8. I stumbled into thinking about this by accident. I didn’t want to make a total load of mumbo-jumbo up because (as in Dan Brown’s work) it annoys me. I wanted something factual. I got interested in the whole subject to the extent that I was looking at the history of DARPA research tenders to see how far they had got with it.

    1. I am completely aware that being able to visualise something existing is no guarantee that it does exist nor that it might be a useful representation of anything at all.

    2. Columbus trying to find a route to India. I’m tempted to say that if he made the journey today he would be able to use GPS and find wherever he was going a lot more easily.

    Maybe Chomsky was wanting to do for linguistics what Crick and Watson had done for the biological sciences. That double helix became like a tantalizing symbol, but it was never going to be as simple as that. DS might be worth revisiting with the technology available today.

    In my ‘essay’ the main character is blind. Her classroom assistant makes Chomsky trees out of wire coat hangers and sticks Braille typewriter pads on them. They are hung from the classroom ceiling.

    Coming at it from a different perspective, she realizes that when the coat hangers move (like children’s mobiles) there is potentially a 3D structure that can be made from their alignments.

    They raid the science lab for old bits of a plastic molecule-building kit. Armed with more Braille sticky pads they almost build the 3D model and, like a Rubik’s cube, at first only achieve one aligned face. Messing with it more, they start to realise that to make the structure align (link nodes) requires something like a 5-plane symmetry in places, and when you move one section up to this plane another section has to drop down to four.

    My character then asks the question of whether this higher-plane alignment is a kind of on-off switch which triggers the seemingly automatic flow of language, cognition and awareness.

    As the story develops they translate the structure into wave patterns, then into mathematical formulae, and realize they have the basis of a universal language, the kind of interface systems that DARPA is contracting for (in real life) but which lack this common ‘language’ to work effectively because so much is lost ‘in translation’.

    F.

  9. Well, you know, running in parallel to ‘mainstream’ linguistics we have computational linguistics, which started with the problem of machine translation. My teacher, the late David Hays, was one of the pioneers there, and, in fact, coined the term “computational linguistics.” His buddy, Sydney Lamb, also did early work in MT. And Lamb was one of the first to use generalized directed graphs — rather than just trees — to represent language structure. If you google Lamb’s name you should find your way to a website presenting his theory of stratificational linguistics. That might give you a different take on ‘DS’.

  10. Like thanks for that link! That is awesome! I’m going to digest it later.

    When I first started looking I found this guy’s work, Kataja.

    http://www.youtube.com/watch?v=UcH9Drp0FpE&feature=related

    I can now figure he’s sort of at a midpoint between Chomsky and Lamb.

    I kind of dismissed Kataja’s elastic node software because I was looking for a 3D node model rather like Simon Kirby’s diagrams of Meaning Space (which I found after much searching), but ‘flexible’ and with multiple nodal links so it formed a section of the whole structure. I couldn’t believe Kirby and his team had stopped right there with this possible starting point in 2006 and not taken it further, if not just out of curiosity to see what emerges when you join up more and more dots.

    One thing, and it’s just a conversational observation: have you noticed how “the mechanism for reflecting items in sequence” is a bit like the plot for the film PRIMER? 🙂

    Lamb’s stuff maybe could be expanded into more complex models, and at some point it might be possible to muck about with applying different types of translational symmetry and other rules to a defined section just to see what happens (if just for the hell of it).

    Question: Can lines pass through/over each other without forming a new node at the intersection? Can there be a sub-rule, “just passing through, no node to be created here,” if certain lines cross? Or do they all have to follow a rule that they avoid each other, and hence follow symmetry rules?

    Cheers very much for that.
    F.

  11. Think of the lines as insulated wires transmitting electrical impulses (that’s how Lamb thinks about them). So, the wires can cross without connecting. They only connect at nodes.

    I’ve not seen PRIMER, but mirror reflection is one form of plot organization; it’s generally called ring form. I’ve blogged about it at New Savanna.

  12. Hey, did you read Colin Harrison’s thesis? Jeez, I’ve just scanned it. Like wow!

    Exactly what you’ve just said: Colin Harrison, p. 34: paths can cross in these diagrams without necessitating new node formation.

    P. 227: notes parallels in natural, organic structures.

    P. 198: bilateral symmetry required for structure.

    There’s something running through the middle of all this, to do with a rule I read somewhere else, especially when Harrison starts to examine “flow” in these systems and kind of tests Lamb’s proximity hypothesis. This sounds very much like something called Feynman’s path from physics.

    Did Harrison or anyone else go on to convert this into anything other than the structure they describe here?

    As I said, well interesting read, and thanks for the pointer.

    The Primer plot is here BTW, and the system needs to discard one ‘item’ in order for the loop to work.

    http://en.wikipedia.org/wiki/File:Time_Travel_Method-2.svg

    F

  13. I don’t know whether Harrison’s done anything since the dissertation. If you really want to get into this, you should email Lamb. There should be an email address somewhere on that site. Tell him I sent you.

    I’ve worked with Lamb’s notions myself, taking them deep into semantics. Go here

    http://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=604819#show1501784

    and download the two documents on Attractor Nets. Neither is a formal publication and one is even more obscure than the other. But there’s a bunch of diagrams in there and maybe an idea or two.

  14. Google Feynman’s path. Just look at it; I’ve checked it again, and it’s almost describing the same thing as Harrison’s generalized term “flow” / Lamb’s “proximity” in a different, non-classical way.

    Why would Feynman apply to a linguistic-neurological model? Unless (of course!) their model can also be described in a non-classical way.

    This is what I’m trying to track: someone, somewhere in the ten years that have passed since Lamb’s and Harrison’s work, must have tried to convert these into, or develop independent, non-classical descriptions, most likely neurologists, but I’d bet they had a linguistic scientist working in conjunction.

    This is so bloomin’ interesting I don’t get why half of academia isn’t into it.

    I would be into it. (We have a saying in the building trade:) “like a rat up a drainpipe”. 🙂

    I’m going to check your blog and then the few things you’ve mentioned. I don’t want to bother anyone until I’m a bit clearer on this.

    Catch you later.

    F.

  15. Hi Bill,

    I got to thinking about the idea of Lamb’s model being like insulated wires. Something bugged me about the notion of this, and then I thought about some stuff I’d read earlier this week when I’d been looking for waves. Got chugging over this in my head at work today:

    Penrose and Hameroff made a good call in a related field. They weren’t looking for clues about DS but a more general explanation of consciousness arising at a point in evolution. They were looking for the critical threshold between pre-consciousness and consciousness in an organic ‘computational system’, that system being a cytoskeleton of microtubules belonging to an organism that may have lived 540 million years ago, a nematode worm with 300 neurons (3 × 10^9 tubulins), which would be enough to break the pre-conscious threshold.

    Let’s return to our hunt for a useful symbolism for DS and look at microtubules again, because for a long time they have been thought to act like insulated wires carrying information, but more recent studies show something else, much more interesting, is going on.

    Penrose wasn’t the only one thinking around this area of microtubules, and remember, none of these guys was looking for a representational model of DS. They were and are all doing their own thing.

    1. 1999, Mavromatos NE:

    “We focus on potential mechanisms for ‘energy-loss-free’ transport along the microtubules, which could be considered as realizations of Frohlich’s ideas on the role of solitons for superconductivity and/or biological matter”

    They were looking for a specific type of wave.

    2. Let’s run on to 2004:

    Ionic Wave Propagation along Actin Filaments. J. A. Tuszyński, S. Portet, J. M. Dixon, C. Luxford, and H. F. Cantiello.

    “Bearing in mind the sheath of counterions around the actin filament, we see that effectively actin polymers may act as biological “electrical wires”, which can be modelled as non-linear inhomogeneous transmission lines that are able to propagate non-linear dispersive solitary wave.”

    3. 2011, and they’ve developed the idea of the non-linear wave further:

    “We also demonstrate that the origin of non-linearity stems from the non-linear capacitance of each tubulin dimer. This brings about conditions required for the creation and propagation of solitonic ionic waves along the microtubule axis. We conclude that a microtubule plays the role of a biological nonlinear transmission line for ionic currents. These currents might be of particular significance in cell division and possibly also in cognitive processes taking place in nerve cells.”

    D.L. Sekulić, B.M. Satarić, J.A. Tuszynski and M.V. Satarić

    “So what?” you guys ask. Here’s what:

    “It is tantalizing to speculate that ionic waves surrounding these filaments may participate or even trigger the rearrangement of intracellular actin networks” (J. A. Tuszyński, 2004)

    Now that, to me, means we have the notion of a wave-structure interaction. Think about it: ripples on the shoreline tell us the nature of the waves that have interacted with grains of sand. This ‘ionic’ wave interacts with structure, which might be a starting point to consider for a representative model of language.

    Why?

    I’m not suggesting DS of language could be visualized as anything like this:

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1304047/figure/fig1/

    but it does look rather temptingly like an expanded Lamb diagram, though, doesn’t it? Apply Rule 1, Part 2: just because it ‘feels’ like it might represent something that may exist, doesn’t mean it does.

    It might be possible, if the ionic wave can travel from ‘node to node’ without energy dissipation, i.e. it can effectively ‘jump’ from node to node, that this wave could interact, shaping the structure once it’s reached the intended node to which it’s traveling, and (as other measured wave/particle dualities leave a trail of where they have been and where they are going) these pathways/interactive structures could be mapped.

    If this wave did play a role in ‘DS’ shaping, then perhaps we would have one component from which we can deduce the rest of the model, bearing in mind we would be trying to conceive of a model in both wave and structural state. (Think: ice shapes and water ripples, but simultaneously.)

    Here’s the thing that strikes me again and again in this question about a structure underlying language, and it goes back to symbolism, or rather what symbolism is contemporary at the time of the author’s reasoning.

    Chomsky badly wanted a structural model as valid as DNA for linguistics, one which mapped out DS (1960s).

    Lamb represented his model like neurons (1990s); MRI made this a big, promising science developing at the time.

    Kirby’s 1990s team use nodal maps (straight lines in meaning space); meaning space feels like the concept of “space-time” and their lines look like Schwarzschild geometry.

    In the field of broader studies of consciousness, Penrose is looking for a quantum explanation in the 1990s (interestingly, he postulates quantum wormholes as a means by which necessary levels of inter-connectivity occur: node jumping taken to the extreme?).

    Examine the correlation between the big ideas in natural science at the time of the author’s writing and the models of cognition and linguistics they developed.

    So here’s the question: since you guys started looking for DS, are you actually representing anything with original models, or might you have been borrowing concepts and models from science and applying them as an interim means of representation in linguistics?

    If this is the case, then this, to me, seems more like a very meaningful, often beautiful art than a science. And that’s not a negative criticism, just a statement of how things are.

    F.

  16. 1. The quantum folks are working at a different level of analysis and explanation from Lamb.

    2. The insulation around axons is imperfect. Where neurons meet, the synapses, the junctures are open to intercellular space.

    3. Impulses propagate along axons without dissipation.

  17. Sorry this is my fault, a writer must always take responsibility for not communicating their intended meaning effectively.

    Basically, when I started looking into this, I was astounded that no one had developed a visual representative model of language that maintained consistency with developments in other fields and represented it either as a wave model (say, like the images of BEC waves) or as some kind of macro organic or crystal structure. It didn’t make sense.

    What seems to happen is that those researching this area borrow models from science (DNA, quantum physics, classical physics, neurology) and see if their ‘surface readings’ of language make any more sense when measured within the framework of these models. I included Penrose because language is one aspect of consciousness.

    I made the point that non-statistically, symbolically representing language, borrowing a framework from another discipline, may be more akin to an artist visualizing meaning than an application of a hard science, but at the same time it links the field to other disciplines and facilitates communication.

    I suggested that if we were looking for a model today that complemented statistical and other disciplinary approaches, then maybe some kind of wave-structure model could be visualized, if only to give us less numerate folk a picture in our heads; that might be a series of images visualized as waves as well as rendered structural models and simplified diagrams. I suggested using an ionic wave as an information carrier, hence nodal link, because it was an interesting thing to think about if you are trying to visualise a model that gets beyond lines and wires.

    To see how close this comes (and I’m suggesting a positive, consensual ‘inspiration’ from the hard sciences): I don’t think, for example, that crystallographers lifted models from linguistics to describe the structures revealed by diffraction, or that the geometry of something like a quantum spin/hypercube came about because a physicist looked at something like Dunn and Greenhill’s diagrams and adapted their ideas to 4D; they might have, but that would be very implausible. But there is a striking resemblance between the development of 3D geometry models in science and those used as proposed representations of a structure of language. Again we are back to borrowing symbols, and I suggest you need a new symbol.

    Maybe the next step is to visualize those linguistic models, say, ‘4-dimensionally’ and see what happens, and to visualize them also as waves (there are graphics to do all this with). You might get gobbledygook when the surface is plotted against the nodal connections, but you don’t know, because you haven’t tried it.

    I’d do it for the hell of it, and I can’t understand why you lot aren’t mucking about with this yet, even for fun, to see what happens when you do, because it’s all so interesting.

    Maybe Wittgenstein was right, language can’t adequately describe itself. I hope not.

    F.

  18. 1. FWIW, the late Kenneth Pike was talking about waves and particles in language some 30 or 40 years ago. Whether or not he used diagrams, I don’t know.

    2. If you take a look at Dominic Widdows, Geometry and Meaning (2004), you’ll find a bit of math, a bunch of diagrams of one sort or another, and a chapter entitled “Logic with Vectors: Ambiguous Words and Quantum States”, which I’ve book-marked but not read.

    3. The people who do statistical work on language are likely to use spaces with 10s, 100s, or 1000s of dimensions.

    Linguistics is not in need of math and visualizations from other disciplines, much less quantum mechanics. Someone or other’s been fiddling with a bit of all this for decades. What we need are models that do something useful, like explaining patterns of empirical observations, or allowing effective computer simulation of language processes. That’s rather more difficult to come up with.
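    For a toy picture of what those high-dimensional spaces do, here is a sketch with made-up three-dimensional co-occurrence vectors; real statistical work uses hundreds or thousands of dimensions, and these particular numbers are invented purely for illustration:

```python
import math

# Invented toy co-occurrence vectors; real systems use 100s or 1000s of dimensions.
vec = {
    "rain":   [4.0, 1.0, 0.0],
    "snow":   [3.0, 1.0, 0.5],
    "sonnet": [0.0, 0.5, 4.0],
}

def cosine(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Words with similar distributions sit close together in the space.
print(cosine(vec["rain"], vec["snow"]) > cosine(vec["rain"], vec["sonnet"]))  # True
```

    The geometry does the work that hand-coded semantics once did: “rain” lands near “snow” because their contexts overlap, not because anyone wrote a rule saying so.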

  19. Bill, you’ve been incredibly patient with me here and you are quite welcome to cite this discussion as an example of that quality if you ever go for a lecturing interview.

    It’s been a total, genuine buzz following the leads you gave me, and I’m going to follow them up further. I’m going to have a massive hangover when I force myself to stop thinking about this, but I’ve got a 5000-word sample of a novel to prepare and a sort of deadline.

    I’m not going to try to start to work through statistical models yet, but I do get the idea.

    I’m left with a starting point and a little more than a tantalizing glimpse of a beautiful structure, and a lot more insight into it. I can’t believe it’s not possible for it to be represented, or at least that it shouldn’t be.

    Thanks, mate, I’ll drop in on your excellent blog from time to time and I’ll get my lads to do the same. One son’s on for a First in Scriptwriting and he loves AN; the other is well up on Banksy and street art. Me? I’ll be stood in a windblasted field surrounded by random bits of stone, making them all tessellate into a sometimes excellent structure, and thinking about all this.

    Take care,

    F.

  20. Good chatting with you.

    I hope all this tessellates into something interesting for your novel. Perhaps you can send it along when you’re done?

    The thing about linguistics, or any intellectual discipline, is that it’s got inertia to burn. Academic institutions are conservative by nature; they have to be, otherwise energies would be scattered chasing every little idea that comes down the pike. This means, however, that it’s very difficult to make fundamental changes when the time comes. Linguistics, and a whole bunch of other disciplines, need to make deep and fundamental changes, NOW. But it’s very, very difficult getting it to happen. That’s why I’m without academic post, taking potshots from the periphery. But at least I’ll not go down when the ship finally sinks.
