Tag Archives: Brain and Behavior

Darwin’s Unfinished Symphony

Darwin’s Unfinished Symphony: How Culture Made the Human Mind is a new book on cultural evolution in humans from a biological perspective.

This is an interesting and good book, and I recommend it, but with a strong caveat. The author could have made a more compelling argument had he more carefully studied and used some of the prior work that makes a similar argument. He cites Terry Deacon in two places (once, incorrectly, as a psychologist) for work Deacon has done, but seems to ignore Deacon’s key thesis, which is pretty much the same as Laland’s. (See: Incomplete Nature: How Mind Emerged from Matter.) There are other examples of prior work apparently not known about or incorporated. Nonetheless, Laland presents a reasonable stab at how to think about human culture in relation to evolution, and an interesting “theory” of how it all came to be, even if it is presented as more original than it actually is.

From the publisher’s description:

Humans possess an extraordinary capacity for cultural production, from the arts and language to science and technology. How did the human mind–and the uniquely human ability to devise and transmit culture–evolve from its roots in animal behavior? Darwin’s Unfinished Symphony presents a captivating new theory of human cognitive evolution. This compelling and accessible book reveals how culture is not just the magnificent end product of an evolutionary process that produced a species unlike all others–it is also the key driving force behind that process.

Kevin Laland shows how the learned and socially transmitted activities of our ancestors shaped our intellects through accelerating cycles of evolutionary feedback. The truly unique characteristics of our species–such as our intelligence, language, teaching, and cooperation–are not adaptive responses to predators, disease, or other external conditions. Rather, humans are creatures of their own making. Drawing on his own groundbreaking research, and bringing it to life with vivid natural history, Laland explains how animals imitate, innovate, and have remarkable traditions of their own. He traces our rise from scavenger apes in prehistory to modern humans able to design iPhones, dance the tango, and send astronauts into space.

This book tells the story of the painstaking fieldwork, the key experiments, the false leads, and the stunning scientific breakthroughs that led to this new understanding of how culture transformed human evolution. It is the story of how Darwin’s intellectual descendants picked up where he left off and took up the challenge of providing a scientific account of the evolution of the human mind.

Further discussion of the apparent 2016 plethora of celebrity deaths

A few days ago I posted this item asking if it was really true that more celebrities have died this year than usual. That post went viral, so of course, the famous Doug McIntyre (who is, by the way, originally from Minnetonka, Minnesota) asked me to join him on KABC, McIntyre in the Morning, an LA based drive time radio show, 790 on your dial.

We had an interesting conversation, along with Randy Wang, and here it is:

Is 2016 really killing more celebrities than other years did?

Before we address this question, let us recognize that years do not really kill people. That’s just a poetic way of putting it, in common use.

I believe that every year, starting around September or October, there is a random spate of celebrity deaths (spates, like small droughts, are generally random), which leads to conjecture that more celebs are dying off than usual. This idea is then reinforced every time yet another celebrity dies for the remainder of the year, until we finally get to late December, and then everyone is trying to have that year arrested for mass murder. Strangely, people forget that this happened the year before.
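That intuition about random spates is easy to check with a quick simulation. This is just an illustrative sketch: the figure of roughly 300 celebrity deaths per year is the ballpark from the Wikipedia lists, and the uniform-randomness assumption is mine.

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

DAYS = 365
N_DEATHS = 300  # ballpark annual count of listed celebrity deaths

def max_weekly_cluster(n_deaths=N_DEATHS, days=DAYS):
    """Scatter deaths uniformly over the year and return the
    death count in the busiest 7-day window."""
    counts = [0] * days
    for _ in range(n_deaths):
        counts[random.randrange(days)] += 1
    return max(sum(counts[i:i + 7]) for i in range(days - 6))

expected_per_week = N_DEATHS * 7 / DAYS  # about 5.8
busiest = max_weekly_cluster()
print(f"average week: {expected_per_week:.1f}, busiest week: {busiest}")
```

Even a purely random scatter produces weeks noticeably busier than the average, and a cluster that happens to land in late December is exactly what gets a year accused of mass murder.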

And, of course, it is happening again now.

I briefly looked at the list of dead celebrities in Wikipedia for this year and last year, and found out two things: 1) about 300 celebrities die each year, and 2) the vast majority of “celebrities” listed in these Wikipedia entries are people I’ve never personally heard of, so it is unlikely that they are all really celebrities. I assume this is just another case of Wikipedia, which does an amazing and wonderful job at many things, running into a matter of definition, the sort of thing Wikipedia generally handles poorly.

______________
UPDATE: Yes, it is true that Debbie Reynolds, the mother of Carrie Fisher, has also died.

See this homage to Carrie Fisher.
______________

As for Wikipedia, I think they simply list the individuals who have Wikipedia pages and who died that year. That is probably not very meaningful in the context of the current conversation.

So, I went back to Google and searched around for Celebrity Death Lists. I found one list of people who were expected to die in the upcoming year. That is a whole ‘nuther story. Eventually, I discovered the TV Guide list of dead stars. That, I figure, has got to be a useful source for this. If anybody knows who the stars are, it’s TV Guide. The list is published every year. Here is what TV Guide says for the last few years:

[Figure: celebrity deaths per year, according to TV Guide]
According to TV Guide, 2016 is not a very big year for celebrity deaths at all. 2013 was remarkably more deathy, and 2015, last year, was on the high end of average. If this is true, I wonder how much the extra deathosity in 2015 is spilling over onto 2016. There were a lot of deaths in December 2015, so maybe that matters. Remember that last year included Leonard Nimoy, Maureen O’Hara, Oliver Sacks, and others whose names may have lingered in our minds to add to the perception of 2016 as a killer year.

I checked some other sources to see if the TV Guide pattern held.

[Figure: celebrity deaths per year, according to CNN and MSN]

Look first at the CNN data. I assume that the very low number for 2011, and the lack of a clear page on this topic in 2010, indicate that we should ignore those years and look only at 2012 onward. If we do that, it is confirmed that 2016 is nothing special: relatively low, or maybe average.

I also looked at MSN’s pages, and there the numbers are reversed. 2016 is very high, and much higher than the earlier years.

Excepting CNN in 2011, the CNN and TV Guide numbers seem to fall within a believable (using my gut instincts only) range of variation. In fact, the difference between the two data sets is believable if we simply assume that CNN includes more people than TV Guide because it is an international news agency with a broader focus. In this context, MSN makes no sense, and I would argue that it should be ignored.

Or, perhaps, MSN is the truth and everything else is a lie.

Personally, I think there is something else going on. I think 2016 was not an exceptional year for celebrity deaths in terms of numbers. About the same number died as usual; this was not a record year.

But, the particular celebrities that did die included a disproportionate number of people that those who inhabit Facebook and Twitter, or perhaps, who simply exist in the modern Western world, were attached to. (See the graphic I made for the top of the post for a sampling of iconic individuals who died in 2016.)

I can think of ways to test that idea, but they all involve data collection, calibration, analysis, etc., at a master’s level. I’ll leave it to a communications or marketing graduate student, or an anthropologist, to work that out!

Very Smart Birds, Very Smart Bird Book

Crows are smart. Anyone who watches them for a while can figure this out.

But that is true of a lot of things. Your baby is smart (not really). Your dog is smart (not really). Ants are smart (sort of).

It takes a certain degree of objective research, as well as some serious philosophy of intelligence (to define what smart is) to really address this question. But when the research is done and the dust settles, crows are smart.

We were all amazed (or not, because we already knew that crows are smart) to find that New Caledonian crows made and used tools. Now, we know (see my most recent post at 10,000 Birds) that a nearly extinct Hawaiian crow is also a tool user. The interesting thing about this new finding is that it is highly unlikely that the Hawaiian crow and the New Caledonian crow descend from a tool using ancestor, according to the researchers who did this work. Rather, tool use arose independently in the two species. But, really, not so independently.

They are all crows, and crows are smart, and both of these species live in a particular habitat where this tool use makes sense, and competing species of bird that might otherwise be going after the resources the tool use allows access to are absent. So, the trait evolved twice, but not unexpectedly.

The Evolution and Development of Bird Intelligence

I want to point out two things about birds that you probably know. First, they share modalities with humans to a greater degree than most other species, even our fellow mammals. Second, many birds live under conditions where complex behavior would be selected for by long term Darwinian processes.

Most mammals are solitary, small and nocturnal, or if large, are diurnal herd animals or some sort of predator. They tend to be olfactory and have varying degrees of vision, etc. We, on the other hand, are highly visual, not very olfactory, diurnal, and have a complex social system, and so on. We share these traits, for the most part, with our fellow primates, but humans live in many non-primate habitats these days, so we tend to stand out as a bit odd. If you are reading this blog post, chances are that the nearest non-pet and non-human mammal that you could locate right now is a squirrel, and the actual nearest mammal is some sort of rodent that you would have a hard time finding.

But, the nearest animal with an interesting brain, and interesting behavior, is a bird. Go look out your window and report back. I’ll study this diagram on the evolution of intelligence while I await your return.


[Figure: the evolution of intelligence, from Nathan Emery’s Bird Brain]

OK, I hope that was fun. Let us know what species it was in the comments, please.

The visual orientation, together with that second trait of smartness, combines to make bird smartness akin to human smartness, to the degree that we subjectively see birds as “intelligent,” and that alone is interesting. But likely, we are both intelligent, by objective criteria, about certain things.

Bird Brain: An Exploration of Avian Intelligence was written by Nathan Emery, who is a Senior Lecturer (that’s like a Professor of some sort, in America) at Queen Mary University of London. He researches the evolution of intelligence in animals, including primates and various birds, and yes, including the crows!

He and his team “…have found striking similarities in the behaviour, ecology, neurobiology and cognitive mechanisms of corvids (crows, rooks, jackdaws and jays) and apes. [Suggesting that] these similarities are adaptations for solving similar social and ecological problems, such as finding, protecting and extracting food and living in a complex social world.”

The book is really great, the best book out there right now on animal intelligence, and possibly the best book so far this year on birds. This is the kind of book you want lying around the house or classroom to learn stuff from. If you are writing or teaching about anything in evolution or behavior, this is a great way to key into the current work on bird intelligence.

Bird Brain is also going to earn a place on my Holiday Shopping Guide in the “Best gifts to give a science oriented youngster or your local life science teacher to encourage thinking about evolution” category. Yes, this is definitely a gift level book. Nobody will not like this book.

This is like a coffee table book in that it is in a slightly larger (not huge, just a little big) format, full of great pictures, and the kind of book you can pick up and start reading anywhere. But it is also a book with a story, in a sense, or at least an arc organizing the research being reported on. It is engaging and well written and, very importantly, written by an expert.

I do respect journalists who become very interested in a topic and learn all about it and write it up, but there are limitations to such work. It is possible for various errors, minor or not, to sneak into such a work because the author is not deeply engaged in the way that a lifelong commitment to a field allows. Bird Brain is written by an expert, so that is not going to happen here.

I highly recommend Bird Brain, for anyone who does not want to be a bird brain about birds, intelligence, evolution, or the evolution of intelligence in birds.

Here’s the TOC:

  • Foreword by Frans de Waal
  • Introduction
  • 1 From Bird Brain to Feathered Ape
  • 2 Where Did I Hide that Worm?
  • 3 Getting the Message Across
  • 4 Feathered Friends (and Enemies)
  • 5 The Right Tool for the Job
  • 6 Know Thyself, and Other
  • 7 No Longer Bird-Brains
Reductionism in Art and Science

    In the old days, the words “art” and “science” did not mean the same thing they mean today, at least in academia. Today, unfortunately, they have almost come to mean opposites. You can’t be doing both at once. Or, at least, that’s what people who haven’t thought about it much may think.

    Art can be used to engage people in science, and science can provide a subject for art, and in various ways, the twain shall meet.

    But in Reductionism in Art and Brain Science: Bridging the Two Cultures, Eric Kandel does something both more extreme and more specific than simply joining the two endeavors. Kandel has had a long career in the neurosciences and a long-standing interest in art, and he’s combined these two lived experiences to make a very interesting book.

    Reductionism is the distillation of something complex into something simpler while still maintaining central or key meaning. Grab the nearest art book and find two pictures of the same thing, one with nearly photographic detail and the other using just a few colors and shapes. Like this:

    [Image: two paintings of a bull, one in near-photographic detail, one reduced to a few colors and shapes]

    See the difference? Two bulls, not the same picture.

    I won’t show you a picture of science being reductionist because science is reductionist most of the time.

    You can reduce art, and you can reduce science. And, you can artfully reduce science and scientifically reduce art. The New York School of abstract art and other abstract traditions (people like Turner, Monet, Pollock, de Kooning, Rothko, Louis, Turrell, Flavin, Kandinsky, Schoenberg, and Mondrian) scientifically reduced art, which forms a good part of the focus of Kandel’s book. A major contribution of this work is a deep and unique understanding of the origin of what we generally call modern art.

    Kandel explains this.
    Kandel examines cognition and perception through a radically reduced, bottom-up approach, much as early 20th-century artists did, and examines art in the same way. His book is full of understanding of the evolution of thinking about cognition and of art.

    The book includes excellent illustrations, is carefully documented, and is a scholarly work accessible to any interested party.

    Here’s the TOC:

    Part I: Two Cultures Meet in the New York School
    Introduction
    1. The Emergence of an Abstract School of Art in New York
    Part II: A Reductionist Approach to Brain Science
    2. The Beginning of a Scientific Approach to the Perception of Art
    3. The Biology of the Beholder’s Share: Visual Perception and Bottom-Up Processing in Art
    4. The Biology of Learning and Memory: Top-Down Processing in Art
    Part III: A Reductionist Approach to Art
    5. Reductionism in the Emergence of Abstract Art
    6. Mondrian and the Radical Reduction of the Figurative Image
    7. The New York School of Painters
    8. How the Brain Processes and Perceives Abstract Images
    9. From Figuration to Color Abstraction
    10. Color and the Brain
    11. A Focus on Light
    12. A Reductionist Influence on Figuration
    Part IV: The Emerging Dialogue Between Abstract Art and Science
    13. Why Is Reductionism Successful in Art?
    14. A Return to the Two Cultures

    Are Pigs Really Like People?

    We hear this all the time. Pig physiology is like people physiology. Pigs and humans have the same immune system, same digestive system, get the same diseases. Pigs are smart like people are smart. Pigs are smarter than dogs. And so on. Ask a faunal expert in archaeology or a human paleoanatomist: Pig teeth are notoriously like human teeth, when fragmented. Chances are most of these alleged similarities are overstated, or are simply because we are all mammals. Some are because we happen to have similar diets (see below). None of these similarities occur because of a shared common ancestor or because we are related to pigs evolutionarily, though there are people who claim that humans are actually chimpanzee-pig hybrids. We aren’t.

    But what if it is true that pigs and humans ended up being very similar in a lot of ways? What if many of the traits we attribute to our own species, but that are rare among non-human animals, are found in pigs? Well, before addressing that question, it is appropriate to find out if the underlying assumption has any merit at all. A new study by Lori Marino and Christina Colvin, “Thinking Pigs: A Comparative Review of Cognition, Emotion, and Personality in Sus domesticus,” published in the International Journal of Comparative Psychology, provides a starting point.

    There are two things you need to know about this study. First, it is a review, looking at a large number of prior studies of pigs. It is not new research and it is not a critical meta-study of the type we usually see in health sciences. The various studies reviewed are not uniformly evaluated and there is no attempt at assessing the likelihood that any particular result is valid. That is not the intent of the study, which is why it is called a review and not a meta-study, I assume. But such reviews have value because they put a wide range of literature in one place, which forms a starting point for other research. The second thing you need to know is that the authors are heavily invested in what we loosely call “animal rights,” as members of the Kimmela Center for Animal Advocacy and the Someone Project (Farm Sanctuary). From this we can guess that a paper that seems to show pig-human similarities would ultimately be used to advocate for better treatment of domestic pigs, which are raised almost entirely for meat. There is nothing wrong with that, but it should be noted.

    In a moment I’ll run down the interesting findings on pig behavior, but first I want to outline the larger context of what such results may mean. The paper itself does not make an interpretive error about pig behavior and cognition, but there is a quote in the press release that I’m afraid will lead to such an error, and I want to address this. The quote from the press release is:

    Dr. Marino explains that “We have shown that pigs share a number of cognitive capacities with other highly intelligent species such as dogs, chimpanzees, elephants, dolphins, and even humans. There is good scientific evidence to suggest we need to rethink our overall relationship to them.”

    What does that mean? In particular, what does the word “relationship” mean? In a behavioral comparative study, “relationship” almost always refers to the evolutionary structure of the traits being observed. For example, consider the question of self awareness, as often tested with the Gallup Test, which measures Mirror Self Recognition (MSR). If a sufficient sample of test animals, when looking in a mirror, almost always perceives a conspecific, then that species is considered not to have MSR. If most, or even many, individuals see themselves, then that species is said to have MSR, a kind of self awareness that is linked to a number of other important cognitive capacities.

    Humans have MSR. So, do our nearest relatives, the chimps, have it? Do the other apes? Other primates? Is this a general mammalian capacity, or is it a special-snowflake trait of our own species? It turns out that all the great apes have MSR, but other primates generally do not. So MSR likely reflects something that evolved in the common ancestor of humans and all the other apes. The relationship among the primates with respect to MSR, phylogenetically, is that MSR is a shared derived trait of the living apes, having evolved in or prior to that clade’s last common ancestor.

    But we also see MSR in other species, including, for example, elephants. The presence of MSR in elephants does not mean MSR is a widespread trait that humans and elephants both have because a common ancestor had it. Rather, in some cases (the great apes), MSR is clustered in a set of closely related species because it evolved in their ancestor, while it also appears here and there among other species for either similar reasons, or perhaps even for different reasons.
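The parsimony logic behind phrases like “shared derived trait” can be sketched with a toy version of Fitch’s algorithm. The tree and species sample below are drastically pruned and purely illustrative; only the algorithm itself is standard. Given MSR scores at the tips, it counts the minimum number of gains or losses of the trait the tree requires:

```python
def fitch(node, states):
    """Fitch parsimony, bottom-up pass on a binary tree.
    Returns (possible ancestral states, minimum number of changes)."""
    if isinstance(node, str):  # leaf: a species name
        return {states[node]}, 0
    (s1, c1), (s2, c2) = (fitch(child, states) for child in node)
    if s1 & s2:
        return s1 & s2, c1 + c2      # children agree: no change needed here
    return s1 | s2, c1 + c2 + 1      # children conflict: one change somewhere

# Mirror self-recognition at the tips (toy values for illustration)
MSR = {"human": True, "chimp": True, "gorilla": True, "orangutan": True,
       "macaque": False, "lemur": False,
       "elephant": True, "dog": False, "cat": False}

apes = (("human", "chimp"), ("gorilla", "orangutan"))
primates = (apes, ("macaque", "lemur"))
others = ("elephant", ("dog", "cat"))
tree = (primates, others)

states, changes = fitch(tree, MSR)
print(changes)  # minimum number of gains/losses of MSR on this toy tree
```

On this toy tree the minimum is two changes, most naturally read as one origin of MSR in the ape ancestor and a separate, convergent origin in the elephant lineage, rather than a single inheritance from a deep common ancestor.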

    This is why the word “relationship” is so important in this kind of research.

    It is clear that Dr. Marino does not use the word “relationship” in that press release to mean that pigs and humans share interesting cognitive and behavioral traits because of common ancestry, but rather, I assume, the implication is that we may want to think harder about how we treat pigs because they are a bit like us.

    One could argue, of course, that a species that is a lot like us for reasons other than shared evolutionary history is a bit spooky. Uncanny valley spooky. Or, one could argue that such a species is amazing and wonderful, because we humans know we are amazing and wonderful so they should be too. Indeed one could argue, as I have elsewhere, that similarity due to shared ancestry and similarity due to evolutionary convergence are separate and distinct factors in how we ultimately define our relationship to other species, how we treat them, what we do or not do with them. The important thing here, that I want to emphasize, is that human-pig similarity is not the same thing as human-chimp similarity. Both are important but they are different and should not be conflated. I honestly don’t think the paper’s authors are conflating them, but I guarantee that if this paper gets picked up by the press, conflation will happen. I’ll come back to a related topic at the end of this essay.

    I’ve been interested in pigs for a long time. I’ve had a lot of interactions with wild pigs while working in Africa, both on the savanna and in the rain forest. One of the more cosmopolitan species, an outlier because it is a large animal, is the bush pig. Bush pigs live in very arid environments as well as the deepest and darkest rain forests. There are more specialized pigs as well. The forest pig lives pretty much only in the forest, and the warthog does not, preferring savanna and somewhat dry habitats. Among the African species, the bush pig is most like the presumed wild form of the domestic pig, which for its part lived across a very large geographical area (Eurasia) and in a wide range of habitats. I would not be surprised if their populations overlapped at some times in the past. This is interesting because it is very likely that some of the traits reviewed by Marino and Colvin allow wild pigs to live in such a wide range of habitats. There are not many large animals that have such a cosmopolitan distribution. Pigs, elephants, humans, a few others. Things that know something about mirrors. Coincidence? Probably not.

    Pigs (Sus domesticus and its wild form) have an interesting cultural history in the West. During more ancient times, i.e., the Greek and Roman classical ages, pigs were probably very commonly raised and incorporated into high culture. One of Hercules’ twelve labors was to mess with a giant boar. Pigs are represented in ancient art and iconography as noble, or important, and generally with the same level of importance as cattle.

    Then something went off for the pigs. Today, two of the major Abrahamic religions view pigs as “unclean.” Ironically, this cultural insult is good for the pigs, because it also takes them right off the menu. In modern Western culture, most pigs are viewed as muddy, dirty, squealing, less than desirable forms. Bad guys are often depicted as pigs. One in three pigs don’t understand their main predator, the wolf. There are important rare exceptions but they are striking because they are exceptions. This denigration of pigs in the West is not found globally, and in Asia pigs have always been cool, sometimes revered, always consumed.

    I should note that I learned a lot of this stuff about pigs working with my good friend and former student Melanie Fillios, who did her thesis (published here) on complexity in Bronze Age Greece, which involved looking at the role of pigs in the urban and rural economies. At that time Melanie and I looked at the comparative behavioral and physical biology of cattle vs. pigs. This turns out to be very interesting. If you started out with two thousand pounds of pig and two thousand pounds of cattle, and raised them as fast as you could to increase herd size, in a decade you would have a large herd of cattle, but if you had been raising pigs, you’d have enough pigs to cover the earth in a layer of them nine miles thick. OK, honestly, I just made those numbers up, but you get the idea: pigs can reproduce more than once a year, have large litters, come to maturity very quickly, and grow really fast. Cattle don’t reproduce as fast, grow slower, take longer to reach maturity, and have only one calf at a time.
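To put slightly less made-up numbers behind that comparison, here is a toy, females-only herd model. Every parameter is a rough, hypothetical round figure (litter sizes, maturity ages, no mortality), not livestock science; the point is only how differently the two sets of parameters compound over a decade.

```python
def herd_after(years, litter_size, litters_per_year, maturity_years, start=10):
    """Toy model: females only, no deaths, every mature female
    breeds at the maximum rate. cohorts[i] = females aged i years."""
    cohorts = [0] * (years + maturity_years + 1)
    cohorts[maturity_years] = start
    for _ in range(years):
        mothers = sum(cohorts[maturity_years:])
        female_newborns = mothers * litter_size * litters_per_year // 2
        cohorts = [female_newborns] + cohorts[:-1]  # everyone ages one year
    return sum(cohorts)

# Hypothetical round numbers, starting from ten breeding females each:
pigs = herd_after(10, litter_size=10, litters_per_year=2, maturity_years=1)
cattle = herd_after(10, litter_size=1, litters_per_year=1, maturity_years=2)
print(pigs, cattle)
```

The pig herd ends up several orders of magnitude larger than the cattle herd, which is the real point: reproducing more than once a year, in large litters, with fast maturity, compounds explosively.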

    On the other hand, if you have cattle, you also have, potentially, milk (and all that provides), hoof and horn (important in ancient economies) and maybe better quality leather. I’ll add this for completeness: Goats are basically small cows with respect to these parameters.

    Now, having said all that, I’ll summarize the material in the paper so you can learn how amazing pigs are. From the press release:

  • have excellent long-term memories;
  • are whizzes with mazes and other tests requiring location of desired objects;
  • can comprehend a simple symbolic language and can learn complex combinations of symbols for actions and objects;
  • love to play and engage in mock fighting with each other, similar to play in dogs and other mammals;
  • live in complex social communities where they keep track of individuals and learn from one another;
  • cooperate with one another and show signs of Machiavellian intelligence such as perspective-taking and tactical deception;
  • can manipulate a joystick to move an on-screen cursor, a capacity they share with chimpanzees;
  • can use a mirror to find hidden food;
  • exhibit a form of empathy when witnessing the same emotion in another individual.

    Pigs are very snout oriented. They have lots of nerve endings in their snouts and can use the information they get from this tactile organ for social interactions and finding food. They can tell things apart very easily, learn new classifications, and remember objects and things about them. This makes sense for an animal that forages at the ground surface, including underground, for a very wide range of food types.

    One of the cool human traits we often look for in other animals is the ability to time travel. We don’t actually travel in time, but in our minds we can put ourselves in other places and other times, and run scenarios. Some of the basic capacities required to do this include a sense of the lengths of time to future events or situations, and an understanding of those differences. Pigs can learn, for example, that of two enclosures they can choose from, one will let them out sooner than the other.

    Pigs have excellent spatial memory and can learn where things are and how to find them. They can do mazes as well as other animals that have been tested in this area.

    Pigs have individual personalities, to a large degree, and can discriminate among other individuals and recognize certain aspects of their mental state. This applies to other individual pigs as well as individuals of other species (like humans).

    Pigs have a certain degree of Machiavellian intelligence. This is rare in the non-human animal world. If a pig has the foraging pattern for a given area down well, and a potential competitor pig is introduced, the knowledgeable pig will play dumb about finding food. Pigs don’t have MSR, but they can use mirrors to find food.

    Now, back to the evolutionary context. I’ve already hinted about this a few times. Pigs and humans share their cosmopolitan distribution, with large geographic ranges and a diversity of habitats. We also share a diverse diet. But it goes beyond that, and you probably know that I’ve argued this before. Pigs are root eaters, as are humans, and this feature of our diet is probably key in our evolutionary history. From my paper, with Richard Wrangham, on this topic:

    We propose that a key change in the evolution of hominids from the last common ancestor shared with chimpanzees was the substitution of plant underground storage organs (USOs) for herbaceous vegetation as fallback foods. Four kinds of evidence support this hypothesis: (1) dental and masticatory adaptations of hominids in comparison with the African apes; (2) changes in australopith dentition in the fossil record; (3) paleoecological evidence for the expansion of USO-rich habitats in the late Miocene; and (4) the co-occurrence of hominid fossils with root-eating rodents. We suggest that some of the patterning in the early hominid fossil record, such as the existence of gracile and robust australopiths, may be understood in reference to this adaptive shift in the use of fallback foods. Our hypothesis implicates fallback foods as a critical limiting factor with far-reaching evolutionary effects. This complements the more common focus on adaptations to preferred foods, such as fruit and meat, in hominid evolution.

    Pigs and humans actually share dental and chewing adaptations, in part for root eating. The pig’s snout and the human’s digging stick have been suggested (see the paper) as parallelisms. And so on.

    Yes, humans and pigs share an interesting evolutionary relationship, with many of our traits held in common. But this is not because of shared ancestry; rather, it reflects similar, independent adaptive changes in our evolutionary histories. This whole root eating thing arose because of a global shift from forests to mixed woodland and otherwise open habitats, which in turn encouraged the evolution of underground storage organs among many species of plants, which in turn caused the rise of a number of above-ground root eaters, animals that live above the surface but dig. Not many, but some. Pigs, us, and a few others.

    That does not make us kin, but it does make us kindred.

    Axon Growth Possible in Central Nervous System

    I don’t have time to read the original or make much comment on this, but since this topic has come up here before, I thought I’d pass on the press release from Burke Rehabilitation and Research:

    Burke Medical Research Institute Scientists Show Axon Growth Possible in Central Nervous System

    White Plains, NY – May 21, 2014 – Recent findings by Burke Medical Research Institute scientists could one day pave the way for new treatments for spinal cord injuries. The study, published as a cover story, with commentary, in the current issue of the Journal of Experimental Medicine, found, for the first time, that activating a protein known as B-RAF promotes the regeneration of injured axons in the central nervous system of mice. Until now, it was thought that axons—which conduct signals between neurons—could not re-grow or be restored after an injury in higher animals such as mice, or in humans. Injuries, such as those affecting the spinal cord, can damage these axons, making their regeneration an important first step towards possible recovery.

    Since earlier studies found that axon growth can be blocked by disabling B-RAF, the researchers wanted to find out if activating B-RAF could—in contrast—help promote axon growth and regeneration.

The team, led by Jian Zhong, Ph.D., director of the Molecular Regeneration and Neuroimaging Laboratory at the Burke Medical Research Institute in White Plains and assistant professor of neurology and neuroscience at Weill Cornell Medical College in New York City, found that axon growth was promoted in three distinct scenarios: in a developing mouse embryo that didn’t have an important normal axon-growth signal, in injured sensory neurons whose axons grow into the central nervous system, and in an injured optic nerve, which is part of the central nervous system.

    “Not very long ago, we were not sure if neurons in the mammalian central nervous system could ever regrow axons to any useful lengths at all,” said Dr. Zhong. “Now, we see that by activating the B-RAF protein, the possibility is there. And that possibility could lead to exciting progress in the field of spinal cord injury treatment and rehabilitation.”

While there is no conclusive data on spinal cord injury at the moment, the optic nerve data makes it very likely that the B-RAF activation will also stimulate regeneration after spinal cord injury—though additional research needs to be done, said Dr. Zhong.

    “These significant findings represent the importance of basic research for rehabilitation and the effects it will continue to have on how we approach treatment and help patients with various injuries, including those to the spinal cord,” says Rajiv R. Ratan, M.D., Ph.D, executive director of Burke Medical Research Institute and professor of neurology and neuroscience at Weill Cornell Medical College.

    Scientists from the Burke Medical Research Institute included Dr. Zhong as well as Kevin J. O’Donovan, Ph.D., Kaijie Ma, B.M., and Hengchang Guo, Ph.D. Also contributing to the study were scientists from Harvard Medical School, Temple University School of Medicine, Icahn School of Medicine at Mount Sinai, and Centre Hospitalier Universitaire de Quebec in Canada. The study was supported by the National Institutes of Health, the Whitehall Foundation and the Burke Foundation.

    Why Do Men Hunt and Women Shop?

The title of this post is, of course, a parody of the sociobiological, or in modern parlance, “evolutionary psychology,” argument linking behaviors that evolved in our species during the long slog known as The Pleistocene with today’s behavior in the modern, predator-free, food-rich world. And it is a very sound argument. If, by “sound,” you mean “sounds good unless you listen really hard.”

I list this argument among the falsehoods that I write about, but really, this is a category of argument with numerous little sub-arguments, one about which I could write as many blog posts as I have fingers and toes, which means at least twenty. (Apparently there was some polydactylism in my ancestry, and I must admit that I’ll never really know what they cut off when I was born, if anything.)

Before going into this discussion I think it is wise, if against my nature, to tell you what the outcome will be: there is not a good argument to be found in the realm of behavioral biology for why American women shop while their husbands sit on the bench in the mall outside the women’s fashion store fantasizing about a larger TV on which to watch the game. At the same time, there is a good argument to be made that men and women should have different hard-wired behavioral proclivities, if there are any hard-wired behavioral proclivities in our species. And, I’m afraid, the validity, from an individual’s perspective, of the various arguments that men and women are genetically programmed to be different (in ways that make biological sense) is normally determined by the background and politics of the observer, not the science. I am trained in behavioral biology, I was taught by the leading sociobiologists, I’ve carried out research in this area, and I was even present, somewhat admiringly, at the very birth of Evolutionary Psychology, in Room 14A in the Peabody Museum at Harvard, in the 1980s. So, if anyone is going to be a supporter of evolutionary psychology, it’s me.

    But I’m not. Let me ‘splain….
    Continue reading Why Do Men Hunt and Women Shop?

Kurzweil: How to Create a Mind

How to Create a Mind: The Secret of Human Thought Revealed is Ray Kurzweil’s latest book. You may know him as the author of The Singularity Is Near: When Humans Transcend Biology. Kurzweil is a “futurist” and has a reputation as one of the greatest thinkers of our age, or as one of the greatest hucksters of the age, depending on whom you ask. In his new book…

    Kurzweil presents a provocative exploration of the most important project in human-machine civilization—reverse engineering the brain to understand precisely how it works and using that knowledge to create even more intelligent machines.

    Kurzweil discusses how the brain functions, how the mind emerges from the brain, and the implications of vastly increasing the powers of our intelligence in addressing the world’s problems. He thoughtfully examines emotional and moral intelligence and the origins of consciousness and envisions the radical possibilities of our merging with the intelligent technology we are creating.

    Certain to be one of the most widely discussed and debated science books of the year, How to Create a Mind is sure to take its place alongside Kurzweil’s previous classics.

I think there are three key ideas in this book, about which I have varying opinions. First, he presents a model of how the brain works. Second, he suggests that we can, in essence, reverse engineer the brain using computing technology. Third, he discusses the rate at which computing technology becomes more capable of doing such a thing, both qualitatively and quantitatively. He ties these ideas together with reference to artificial intelligence theory.

Regarding the second point, I have no doubt that we will someday be able to produce a non-biological brain. Brains are physical entities that emerge with very little specification as to architecture, have incredibly dense circuitry that carries enough information for otherwise reasonable people to assert that its information storage capacity is infinite (which it is not, of course), and involve interactivity among components that allows for some amazing things to happen. I think that when we get close to making a mechanical brain, we would probably want to set aside many of the ways in which actual brains function, in order to create a more effective computing solution, because the brain is a product of Natural Selection and is thus not necessarily all that well designed. The trick will be sorting out that which is good design for mechanical implementation of human braininess from that which is not. Regarding the third point, the expansion of computational abilities, I’m sure the basic ideas Kurzweil lays out are reasonable, but futurism about technology seems to run into the same problem over and over again: somebody invents a qualitatively distinct way of doing something that totally changes the game, and after that, the new way of doing things quantitatively evolves. Predicting the qualitative shifts has been difficult.

My biggest problem with Kurzweil’s book is in relation to the first point, his theory about how the brain’s cortex works. He asserts that the cortex is a self-organizing entity that responds to information, creating an ability to manage and recognize patterns. My problem with this is that Kurzweil seems to have not read Deacon’s work (such as The Symbolic Species: The Co-evolution of Language and the Brain and Incomplete Nature: How Mind Emerged from Matter). I’m not saying that Kurzweil is wrong in thinking of the cortex as self-organizing in response to the challenges and inputs of pattern recognition. I’m simply saying that this property of the cortex, and of the human mind, has already been identified (mainly by Deacon) and that Kurzweil should sit down with Deacon and have a very long conversation before writing this book! (Well, ok, the next book.) I don’t think they’ve done that yet.

    No new nose neurons?

Elizabeth Norton has an interesting write-up in Science Now. Some years ago, after a long period of suspicion, it was seemingly demonstrated that neurogenesis (the formation of new neurons) happens in the human nose. This research was based on the identification of proteins that would be associated with the early formation of baby neurons. It therefore could not prove that full-grown and functioning neurons were being generated in the nose, but that was assumed to be a reasonable possibility.

However, it really isn’t a reasonable possibility. If there were an Intelligent Designer, then sure: why would baby neurons pop up and then not turn into functioning adult neurons? But if there is no Intelligent Designer, and instead things evolved, then it is quite possible that the lack of novel, fully formed and hooked-up neurons in an adult human (which seems to be the general rule of thumb, for whatever reason) is not achieved via some highly sensible, planned-out feature. Rather, it is most likely that an evolved feature is a kludge. If it turns out that neurogenesis occurs in the adult human nose but that those nascent neurons never innervate anything, well, that is what we might expect evolution, which is not intelligent but, rather, pragmatic, to come up with.

The method of testing this idea, applied by Jonas Frisén of the Karolinska Institute in Stockholm, is just as interesting as the finding itself. The idea is to date the neurons in the nose. One way to date organic tissue might be to use C-14 dating as archaeologists do, but that method is not precise enough: the neural tissue in a living human might date to something like “50 years old plus or minus 80 years,” which would not be too useful. But there is a way to use C-14 after all. Since atomic testing started, a LOT more C-14 has been pushed into the atmosphere, and the added radiocarbon allows for a more precise atomic clock, if the clock is properly calibrated. This method was pioneered a few years ago in the forensic case of two sisters who were found dead, long after they had expired, in their home in Vienna. Both sisters had considerable wealth, and the one who died first would have passed her wealth to the second, living sister. The relatives of the second sister to die would therefore receive a considerably larger inheritance than the relatives of the first to die. The two sisters’ bodies were found semi-mummified, a couple of years after death, in their apartment, surrounded by neighbors who never noticed they were no longer around.

    The post-A-bomb calibrated C-14 method was used to determine that the sisters had in fact died about a year apart. This method has subsequently been used for other fine-tuned post atomic dating. (There is a write-up of this here.)
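To see why ordinary radiocarbon dating fails here while the bomb pulse works, a rough back-of-the-envelope sketch helps. The decay constants below are the standard ones for C-14; the measurement-precision figure is an illustrative assumption, not a number from the forensic study.

```python
import math

# C-14 decays with a half-life of ~5730 years, i.e. a mean life of
# tau = 5730 / ln(2), roughly 8267 years. Conventional dating recovers
# age from the surviving fraction f of C-14: t = -tau * ln(f).
TAU = 8267.0  # C-14 mean life, in years

def age_from_fraction(f):
    """Years elapsed, given the surviving fraction f of the original C-14."""
    return -TAU * math.log(f)

# Error propagation: dt ~ tau * (df / f). Assume an (optimistic,
# illustrative) measurement precision of +/-0.4% on f.
df_over_f = 0.004
print(round(age_from_fraction(0.99), 1))  # a tissue a few decades old
print(round(TAU * df_over_f, 1))          # decades of age uncertainty

# Decades of uncertainty cannot separate two deaths a year apart. The
# bomb pulse changes the game: atmospheric C-14 jumped sharply around
# 1963 and has declined steeply since, so matching a tissue's C-14
# level to that steep, well-measured atmospheric curve can pin its
# formation year down to roughly a year or two.
```

The point of the sketch is only the order of magnitude: decay is far too slow a clock for recent tissue, while the bomb-pulse curve is steep enough to serve as a calendar.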

    OK, now back to the nose.

    In the new study, published this week in Neuron, Frisén, Spalding, and colleagues measured levels of 14C in olfactory bulb tissue taken during autopsy from the brains of 15 subjects who were born either before or after the atomic testing period. The researchers found that the neurons in the olfactory bulb were all the same age: the age of the individual they came from. “[That’s] evidence that in humans, in this area, neurogenesis doesn’t occur,” says Frisén.

There is still evidence, e.g., from mice, that neurogenesis of useful neurons does happen in some mammals. The question of novel nose neurons is not entirely settled. But when the question “Do humans generate new neurons as adults?” comes up, please make sure that the assumption that they do is not based on this earlier nose research, or on any studies that merely looked for new-neuron proteins.

    In addition, Macklis points out that the tissue samples may have biased the results. The donors in the study died at the Karolinska Institute, he notes, and some had a history of substance abuse or psychiatric illness, both of which have been shown to decrease neurogenesis. He says that a better test would be to repeat the experiment in healthy people constantly exposed to new scents—chefs, sommeliers, perfumers, or travelers to exotic locales.

Face it: there is still some head scratching going on. We will need to keep an eye on this nose research before sealing our lips on it, and in the meantime, keep your chin up.


    Photo courtesy of flickr user Lawrence Whittemore

    IQ Varies with Context

In a very interesting way.

As a regular reader of this blog, you know that IQ and similar measures are determined by a number of factors, and that for most “normal” (modal?) individuals, one’s heritage (genes) is rarely important. Putting it another way, variation across individuals in IQ and other measures has been shown again and again to be determined by things like home environment, diet and nutrition, and even immediate social context. Here’s another finding supporting this:
    Continue reading IQ Varies with Context

    The argument that different races have genetically determined differences in intelligence

    The presumption being examined here is that humans are divisible into different groups (races would be one term for those groups) that are genetically distinct from one another in a way that causes those groups to have group level differences in average intelligence, as measured by IQ. More exactly, this post is about the sequence of arguments that are usually made when people try to make this assertion.

The argument usually starts out noting that there are dozens of papers that document group differences in IQ. I’ll point out right now that most of those papers are published in journals with editorial boards staffed in part or in total by well-known racist scientists such as J. Philippe Rushton. That fact is not too important to what I have to say here, but since the usual argument about race and IQ starts out with “Hey, look at all these papers in these great journals,” it is worth noting.

Heritability of IQ measures is then proffered, often in reference to the famous “twin studies,” which show a high heritability for IQ. Heritability is a measure derived from covariance between relatedness and some phenotype. Heritability is not genetic inheritance. It is scientifically incorrect, and probably academically dishonest, to assume or insist that a high heritability value means that something is genetic. It often is, but it need not be. The truth is that there are many things that could have a high heritability value but that we know are not genetic, so we don’t bother making a heritability estimate. There are other things for which we have strong a priori biological arguments that they are genetic, and we thus make heritability estimates as part of the research on those things. Then there are things whose cause we don’t know, and in those cases, making an estimate of heritability is useful as an exploratory tool. But, and this is important, arriving at a high value for heritability does not indicate genetic inheritance.

If you apply the methodology of the twin studies to language, you would find that the heritability of having the capacity for language is similar to that of having one head (as opposed to zero or two heads, for instance): undefined. The number of heads does not vary, and heritability is a measure of covariation (I use the term “covariation” in a non-technical sense here). If you apply these methodologies to which language someone speaks, the heritability of that trait is very high, much higher than for IQ. If you apply the same method to the heritability of geography (the latitude and longitude of where someone lives), it is higher still, especially for babies or people living in traditional societies.

    Does everyone understand why that is the case? Familial or cultural causes may be very strong but not genetic. Using this method, if high heritability means that IQ is genetic, then so is which language you speak and so is what part of the world you live in.

    The smoke and mirror part of this is equating heritability with inheritance. We speak the language we speak because it is the language of the culture we grow up in, not because of a gene for speaking French vs. a gene for speaking Sumerian.
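A toy simulation makes the point concrete. Here a trait is transmitted entirely by the household (think of it as “which language you speak,” coded as a number), with zero genetic input; the sibling resemblance that twin-study logic reads as heritability is nonetheless enormous. Every number below is invented for illustration, not drawn from any study.

```python
import random

random.seed(1)

# 500 households. Each household sets the trait value for its children
# (purely cultural transmission); each child adds small individual noise.
families = []
for _ in range(500):
    family_trait = random.gauss(100, 15)  # set by the household, not genes
    kids = [family_trait + random.gauss(0, 2) for _ in range(2)]
    families.append(kids)

def correlation(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    xs = [a for a, b in pairs]
    ys = [b for a, b in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Sibling-pair correlation: the raw material of a heritability estimate.
sib_r = correlation([(k[0], k[1]) for k in families])
print(round(sib_r, 2))  # ~0.98, despite genes playing no role at all
```

The sibling correlation comes out near 1 because almost all of the variation is between households, not within them. That is exactly the situation with language: the "heritability" machinery returns a huge number for a trait everyone agrees is not genetically inherited.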

This makes sense because we know how a person acquires language, so no one even tries to measure the heritability of which language someone speaks. (Same with the heritability of geographic location. It would be an absurd measure.) But people make the assumption that intelligence is inherited. Why do they make that assumption? Because lots of people for a long time wanted to, and in some cases needed to, believe this, and thus it has become part of our culture. It is part of our uncriticized received knowledge, along with other racialized ideas, various sexist ideas, and so on. But recent research (meaning over the last 30 years) has shown us that, other than in the case of inherited neuro-developmental diseases, it is impossible to imagine how intelligence could be inherited in such a way as to explain most of the inter-group variability we see. Maybe a little, but not that much. That there is some genetic component is not impossible, but it is very hard to maintain the idea that it is genetic and ethnic, or genetic and racial, or genetic and explanatory of more than a few IQ points in most people. No genes, and no developmental mechanisms, have been identified. So, to many, the issue of inheritance (not heritability, but inheritance via genes) of intelligence is not really an issue.

    However, there are many who still need to hang on to this belief. Why they need to hang on is itself an interesting question. I can’t say for a given individual but I’ve been engaged in this conversation for 30 years and in my experience it is very often because of a desire to support a racialized model of human behavior.

The evidence for the usual IQ/group/race/ethnicity/genetics model is always given first as group differences. When the language and geography analogs are brought up, we always see the twin studies brought in. But these twins are raised together in the same environment. So they have the same language, the same cultural customs, the same geography, etc. That they have the same IQ is not surprising.

There is an interesting set of interactions between familial effects and environmental effects in any of these twin-study results, but it has to be understood that heritability is not inheritance. If you have a genetic mechanism that is real (not inferred or made up) that integrates with a developmental process that can manifest a phenotype from a genotype (again real, not made up or inferred), then you can translate heritability into genetic inheritance, roughly. We seem to see this in a number of psychological conditions and diseases, for instance, and obviously we see it for a lot of physical traits. If, on the other hand, you have familial effects that would cause offspring to resemble their parents without genes, then cultural/social/familial context is more likely to be the explanation.

Variation in IQ across groups in a single society like the US (which is not the same as a single culture) is known to be primarily caused by SES and home environment, and is indicated by such things as parents’ educational level. Educational levels of Americans have been going up for a hundred years. So has IQ. IQ can jump up in a generation if that generation is educated and changes its home environment, SES, and so on; thereafter those offspring and grand-offspring have higher IQs. No new alleles were introduced to cause those changes. Cultural differences were introduced, and we have a concept of the mechanism by which that works.

The difference in IQ across time within a given population is sometimes much greater than the difference in IQ across the usual groupings of people (i.e., “race”). When scientists seek societal, cultural, nutritional, and educational explanations for differences in IQ, they find them easily. When scientists who need group differences to be genetic seek genetic explanations for differences in IQ, they have to invent new and, shall we say, “interesting” statistical techniques to justify how their usually cooked data underlie their biologically implausible explanations. The latest is “there are thousands of genes, and there are so many we can’t see the pattern, and that is the pattern.” Funny, that. The number of genes with tiny variants that “must be” the cause of variation in IQ keeps going up and up, while the number of genes estimated to actually exist in the human genome has gone way down. At this point, we are very close to saying that individual variation in IQ is best explained by … which individual you measured the IQ in!

Let me explain that in another way, with an analogy that looks like a statistical argument (don’t mistake the two). If I show you two points on a graph, I can describe the line indicating their relationship with the formula Y = mX + b (the formula for a line). I can use the same formula to describe a line through a scatter of points, but the line might describe the scatter poorly. How poorly may be indicated by a statistic (a correlation value, an “R” value, or something similar). If I change the formula to Y = m1X + m2X^2 + b, I get a curved line that may match the points better, though still imperfectly. But if I keep adding coefficients until there is one coefficient per point, I get back a (nearly) perfect description once again, because I’ve drawn a line (more or less) that starts at the first point, then goes to the second point, then to the third point, and so on.

    And that would be cheating.

And that would be pretty close to what some of the more recently deployed statistical models of genes and IQ do. If I include every allelic variation in humans (hypothetically) and correlate it with individually measured IQ, I’ve drawn a line from one person’s genetic value (along one axis) and IQ value (along another axis) to the next person, the next person, and the next person, on down the line. At this point, ladies and gentlemen, we have shown that IQ correlates (almost) perfectly with fingerprints.
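The one-coefficient-per-point cheat is easy to demonstrate. The snippet below fits polynomials of increasing degree to pure random noise; once there are as many coefficients as data points, the fit becomes essentially perfect even though, by construction, there is no relationship at all. This is just an illustration of the statistical point, not a reanalysis of any IQ data set, and it assumes NumPy is available.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(10.0)
y = rng.normal(size=10)  # pure noise: no real relationship by design

# A degree-9 polynomial has 10 coefficients for 10 points: one per point.
for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    r_squared = 1 - residuals.var() / y.var()
    print(degree, round(r_squared, 3))

# The degree-9 fit reports an R^2 of essentially 1.0 on noise. Enough
# free parameters will fit anything, including fingerprints to IQ.
```

The low-degree fits report the modest R² you would expect from noise, and the fit only looks impressive once the model has stopped summarizing the data and started memorizing it.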

The next argument in favor of the genetic inheritance of intelligence is often to link IQ to head size or brain size. However, much of the data behind this research is made up or cooked, and the causal arrow is problematic. Also, a third- or fourth-level factor in IQ is diet, which may affect brain size. Separately, a primary factor in skull shape and bone thickness is also diet (though in totally unrelated ways), which in turn is ethnic/regional… Bottom line: the system is complex, but the data do not support the assertion unless you make a big part of the data up, and Rushton has famously done so.

Another argument often made to salvage the genetic determination (by racial group) of intelligence is the between-nation data that has more recently been assembled and foisted on us. This is no different from ethnic groups in the US. IQ is a standard measure, and groups vary in this value. Other measures will also show variation. The variation is there, and the group-level distinction is there. But finding more examples of it does not lead toward the conclusion that it is racial or genetic. Across nations we see a lot of measures that we know change (often in predictable directions) over time with industrialization or various other transitions. National IQ, fertility, various health measures, and so on all do this. And of all these measures, the most suspect in terms of quality of data is IQ (excepting some of the more obscure health-related data). These IQ comparisons don’t tell us much.

The final argument in favor of the inheritance of IQ via genes passed on from parent to offspring is usually to cite the twins-separated-at-birth studies. These studies, however, simply do not show this. The twins are not separated at birth in the way most people think they are. Usually, the twins knew each other as they grew up, and/or knew family members in common. They lived in the same culture, usually in the same city, often in the same neighborhood, and sometimes even in the same physical house. They went to the same school and had the same diet. “Separated at birth” in these studies usually means grandma and grandpa took one of the twins to raise because mom and dad were strapped. Grandma and grandpa may have lived down the street. The kids may have attended the same school, even the same classes, and spent a lot of time together outside of school.

    I was separated (though not from birth) from my older brother, because he lived on the second floor of a two family house, and I lived on the first floor. By the exact criteria of the twin studies, we would be counted as separated because it happened early enough in my life. But, that household I grew up in was a single household that happened to be set up in a two family house. The two floors were connected by an internal rear stairway that led to locked doors (had we locks). I was rather shocked to realize at one point as a child that we were the only family in my neighborhood with two kitchens. (Or two bathrooms, for that matter.)

There may be a small component of intelligence that is inherited, but it seems to be swamped by other factors. The insistence that genes determine intelligence, and that these genes are divided up in our species by groups that are often defined racially, is misguided and scientifically wrong. The supra-ultimate argument, after the final argument, brought up in this sort of conversation is usually that the anti-racist argument is a Politically Correct argument, yada yada yada. But it is actually a scientific argument, and the racialized intelligence argument is not. That makes the latter the politically incorrect argument.

    Which is kind of funny.