Superlative: The Biology of Extremes is almost as extreme, or shall we say hopeful, in its cover-marketing claims as the animals it discusses are outlandish. If the cure for cancer were going to be found in a shark, we would have found it already. But despite what the book promises on its cover, Matthew D. LaPlante’s book is a detailed, engaging, and informative look at ongoing and recent scientific research from the perspective of an experienced journalist.
There are three categories of science book authors: scientists, who write the best ones most of the time; science-steeped (often trained-as-scientists) science writers, who can write some pretty good books; and journalists who delve into the science and sometimes write amazing books, other times books that are good books but not necessarily good science books. Superlative: The Biology of Extremes is at the higher end of the last category. It is about the scientists, the teams, and the work more than the cells and polymers.
Also, LaPlante has another set of credentials: He is deeply, severely, hated by Bill O’Reilly and Glenn Beck. Oh, also, the book is at present deeply on sale.
This is a series of essays by biologist Christiane Nüsslein-Volhard, engagingly and skillfully illustrated by Suse Grützmacher (and translated by Jonathan Howard), about the aesthetic sense discussed by Darwin: its evolution, distribution, function, and meaning across animals. The essays take a Tinbergian approach to explore most aspects of how things look or are looked at, how patterns, colors, and other features play a role in sexual selection, and how the underlying genetics connect to these important surface features, allowing us to understand the phylogeny of this physical-behavioral nexus. This is the scientist talking about the science. The book itself is also a bit unusual, as it is designed to fit comfortably in a pocket or purse. Take it to the dentist’s office or hair stylist! (When the pandemic is over.)
If you read a lot of books about cosmology and the universe, you will not find much new in this book, but you will find new ways to think about all that old stuff. If you really do have a new theory of everything, this book will give you some useful advice on how to buy your ticket into the physics game. Like, that you have to make sure your theory of everything works in a way that does not result in the night sky being as bright as the day sky, or make light do something it does not do, and so on. Also, do not use many different TYPEFACES AND all caps in your write-up.
Interestingly, one of the things the actual-cosmologist authors do NOT say is something I often hear from pro-physicists about TOE-pushers. They don’t say “if you don’t have a mathematical formula for your theory, it isn’t a theory.” I hear that all the time, and I always thought there was something wrong with it. It seems to me that a totally wrong yet fully mathematical theory is all too likely.
The best overview of this book, which you SHOULD read, is from the authors themselves who made a video talking about the book. Here:
It is an interesting idea, taking a classic work and rewriting it for a modern audience, with adjustments. I took on the task of doing this with a Lovecraft tale a few years ago. I’m still working on it. I wanted to eliminate the racism and the misogyny, and I did. But that helped reveal the fact that the story itself was more of an interesting treatment than a fully formed story, so my work has expanded considerably.
Award-winning author Angela McAllister did the opposite with Charles Dickens. Instead of expanding, the stories in A World Full of Dickens Stories, re-written by McAllister and illustrated by Jannicke Hansen, distill, or in literary terms, digest, Dickens’s classics, including Oliver Twist, The Old Curiosity Shop, David Copperfield, Great Expectations, Hard Times, A Christmas Carol, Nicholas Nickleby, and A Tale of Two Cities.
How does it come out? Pretty good, given that giant novels, when summarized, tend to dry up or lose power. The stories are still good stories, and the writing is good, so the short versions convey the sense Dickens was going for, and the reader learns what these classics are about. The reading level is rated at 9-11. I agree with the low end of the range, just because littler kids tend to like sillier stories. But I would not put the upper end at 11; older kids and adults can enjoy these stories as well. The combination of writing and illustrations is designed to be read to 8-year-olds, and read back (like David Copperfield reading to Peggotty) at older ages.
The book itself is large format and very well designed and printed.
To give you an idea of what the book looks like, here’s a typical page layout:
A World Full of Dickens Stories is a good book to get, and would make a very presentable gift for a reading kid or a family with children in that age range, or any adult who happens to be a major Dickens fan.
Sometimes it is because my glasses are dirty. Sometimes it is a sundog, or a light pillar, or, if I happen to be near an exploding volcano….
Nature’s Light Spectacular: 12 stunning scenes of Earth’s greatest shows, written by Katy Flint and illustrated by Cornelia Li is a top notch earth and science book across a very wide age range, but classed as a 4-8 year old book. This book sits across that divide of read to vs. read by, and I think it is a great read-by (the kid) book for up to 10 or 11 years old.*
There is a story that pulls it all together, about two young explorers who improbably encounter several different light phenomena in nature, some common like sun dogs, some much less common, like the waterfall of fire at Yosemite National Park. Each two page layout demonstrates the phenomenon, with additional graphics and well written text to explain the science behind it. The format is large and guess what: The book glows in the dark.
Good science, good teaching, good book. The glowing in the dark part is not the reason you want this book, but you do want this book if you have a kid in elementary school.
What is a regular expression? We can talk about that in detail some other time. Briefly, it is a string of symbols that is designed to match a specified set of symbols, or a range of a set of symbols, in a larger body or stream of text. For example, if you pass a stream of information (say, all your emails) through a filter with the regular expression:
then any part of that stream of information that looks like a phone number (not using parens), such as 636-555-3226, will be isolated.
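For concreteness, a pattern of that sort (matching the three-digit, three-digit, four-digit shape of a US phone number) might look like this in Python; this is a sketch of the idea, since the exact pattern is not reproduced in the text:

```python
import re

# A simple pattern for US-style phone numbers such as 636-555-3226:
# three digits, a hyphen, three digits, a hyphen, four digits.
phone_pattern = re.compile(r"\d{3}-\d{3}-\d{4}")

text = "Call 636-555-3226 tomorrow, or try 636-555-0199 after five."
print(phone_pattern.findall(text))  # → ['636-555-3226', '636-555-0199']
```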
The new edition includes pattern matching with regular expressions, input validation, reading and writing files, organizing files, web scraping, manipulating Excel spreadsheets and Google Sheets, PDF and Word documents, CSV and JSON files, email, images, and automating your keyboard and mouse.
The great benefit of a book like this is that you learn Python (the first part of the book gives you all you need to know to program in Python) in the context of things you actually want to do with Python. If you are interested in learning Python, or coding in general, this can be your first book.
The book is well done, as all in this series are, and fun. There are strong online resources, including all the code, and that information is regularly updated. Generally, No Starch Press books are great, and this is one of those!
I would like to have seen at least sidebars on manipulating documents with LibreOffice, but note that since the book focuses on the documents themselves, and OpenSource software works with standard Excel and Word files, the material still applies.
The second edition adds a new chapter on input validation. The Gmail and Google Sheets sections, and the information on CSV files is also new. I plan on using the software tips and tricks to develop my own highly specialized and targeted search software. I’m often looking for files that have specific extensions, and certain kinds of content, in certain locations. Just the ability to hard-wire where to search for files will save me a lot of time and trouble.
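That kind of targeted search is straightforward to script. A minimal sketch of the idea (the helper function and directory names are mine, not from the book), using Python’s standard pathlib:

```python
from pathlib import Path

def find_files(root, extensions, must_contain=None):
    """Yield files under `root` with one of `extensions`, optionally
    keeping only those whose text contains a given substring."""
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in extensions:
            if must_contain is None:
                yield path
            else:
                try:
                    if must_contain in path.read_text(errors="ignore"):
                        yield path
                except OSError:
                    pass  # unreadable file; skip it

# Hypothetical usage: all .py and .md files under ~/projects mentioning "regex"
# for hit in find_files(Path.home() / "projects", {".py", ".md"}, "regex"):
#     print(hit)
```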
Author Al Sweigart is a professional software developer who teaches programming to kids and adults, and who is author of Invent Your Own Computer Games with Python, Cracking Codes with Python, and Coding with Minecraft, all of which are quite nice. We need a new edition of Coding with Minecraft, by the way, that looks at a wider range of coding options and keeps up with the major advances in that software environment! So, get to work, Al!
It is said that scientists are lousy at communication, lousy at telling everyone else about their science, in understandable and compelling terms.
This is of course absurd. There are tens of millions of scientists, and dozens of them are really excellent communicators!
Among the many sciences, there is a science of science communication. It overlaps, unironically, with the science of conspiracy ideation, and borrows a great deal from the broader communication fields.
One of the leading science communicators of the day is cognitive scientist John Cook. John is at George Mason University. He is so tightly linked to the founding and development of the Skeptical Science project that “Skeptical Science” is the name of his Wikipedia entry. This binds John and his mission to a lot of us. Where we once might have said, “I am Spartacus,” we now say, “I am Skeptical. Science!” For John, it is just “I am SkepticalScience.”
Cook is likely known to you for the Consensus Project. There were two main projects, a few years back, in which scientists attempted to measure the degree of consensus over the idea that anthropogenic climate change is real. (It is real, and the consensus is near 100% in both the peer reviewed literature and the conclusions of actual scientists.) John and his colleagues did one of those and, beyond that, widely promoted the results so that everyone knows about it.
Like I said above, there are tens of millions of scientists. Developing and disseminating the results of consensus research in climate science was equivalent to being the only guy sticking your head up out of the trench in that movie, 1917. Science deniers, both avocational and bought-and-paid-for, got all over Cook like skin on a grape. Didn’t faze him, though. He continued to develop a series of new projects including a massive online course (Making Sense of Climate Science Denial), an artificial intelligence system for detecting fake science, and most recently, the Cranky Uncle project.
I don’t have a cranky uncle anymore (he died). But I do have a lot of neighbors who like to write in ALL CAPS. They show up when I give a talk on climate change, and they bring their conspiracy theories, logical fallacies, cherry picked “facts”, absurd expectations, and references to fake research done by fake experts. It is a lot to deal with. But now, I can use the Lewis Black technique for dealing with evolution deniers, but instead of pulling out a trilobite, holding it up and saying “Fossil!” I can pull out a copy of Cranky Uncle vs. Climate Change and say “Oh yeah? Imma look up what you just said in this BOOK!” or words to that effect.
Rachel Maddow is the Charles Darwin of Cable News.
Darwin’s most important unsung contribution to science (even more important than his monograph on earthworms) was to figure out how to most effectively put together multiple sources into a single argument — combining description, explanation, and theory — of a complex phenomenon in nature. His first major work, on coral reefs, brought together historical and anecdotal information, prior observation and theory from earlier researchers, his own direct observations of many kinds of reefs, quasi experimental work in the field, and a good measure of deductive thinking. It took a while for this standard to emerge, but eventually it did, and this approach was to become the normal way to write a PhD thesis or major monograph in science.
Take any major modern news theme. Deutsche Bank. Trump-NATO-Putin. Election tampering. Go to the standard news sources and you’ll find Chuck Todd following the path of “both sides have a point.” Fox News will be mixing conspiracy theory and right wing talking points. The most respected mainstream news anchors, Lester Holt, Christiane Amanpour, or Brian Williams perhaps, will be giving a fair airing of the facts but moving quickly from story to story. Dig deeper, and find Chris Hayes with sharp analysis, Joy Reid contextualizing stories with social justice, and Lawrence O’Donnell applying his well-earned, in-the-trenches biker wisdom.
I’m an American who has spent considerable time in South Africa, so I enjoy a good novel that is set there. Harbinger by Louis du Toit and CL Raven is set, instead, in the memory of that fraught and beautiful country, written by a South African author. I live in a place where racial tension, especially anti-Muslim or anti-Middle Eastern feelings rest at a low level below the surface, and this is also a place where I accompany my son to the bus stop where he is the only child who is NOT an immigrant, a Muslim, a Hindu, or, egads, a French Canadian Catholic. I consider us both lucky to be among such diverse friends.
Superlative: The Biology of Extremes by Matthew D. LaPlante is not just about extremes, but about all the things in between that make the extremes extreme. LaPlante looks at size, speed, age, intelligence. For all the various subtopics that come up in such an exploration, LaPlante does a great job of bringing in the latest research. Mostly, this is a collection of interesting evolutionary and biological stories that happen to involve tiny things, giant things, old things, fast things, or things that are in some other way — superlative.
Go for a swim with a ghost shark, the slowest-evolving creature known to humankind, which is teaching us new ways to think about immunity. Get to know the axolotl, which has the longest-known genome and may hold the secret to cellular regeneration. Learn about Monorhaphis chuni, the oldest discovered animal, which is providing insights into the connection between our terrestrial and aquatic worlds.
I’m not endorsing every idea or story in this book. One cannot write a book about adaptations without any evolutionary biologist worth their salt bumping on a few things. But the author does an honest and straightforward job of representing the research; you’ll learn quite a bit that is new, see new perspectives on things you’ve considered in the past, and enjoy LaPlante’s writing.
I will probably be recommending this volume as a holiday gift for the uncle who has everything or the teenager who likes natural history. Teachers of wildlife biology, evolution, or related topics will be able to mine this volume for stories. The use of footnotes is notable.* I recommend Superlative.
For many years, scientists who studied biology, behavior, and ecology (under the name of various disciplines) looked at resources, including and especially food, as a major determinant of social structure in social animals, herd structure in herd animals, and so on. Then, there was a revolution and it quickly became apparent that sex, not food, underlies everything and is the ultimate explanation for the variation we see in nature. That pair of dimes lasted for a while, then the other penny dropped and thanks to key research done by a handful of people (including me, in relation to human evolution), it became apparent that there was a third significant factor, that ultimately trumped sex as an organizing force. Food. Continue reading Food Or War by Julian Cribb: Excellent new book→
Aside from evolutionary theory itself, the teaching of human evolution involves physiology and reproductive biology, behavioral biology, genetics, and the fossil record itself, with the details of its attendant history.
And finally, there is a children’s book that addresses the latter, in amazing detail!
There are very few good (or even bad) children’s books about evolution, and far fewer about human evolution. And when a children’s book touches on human evolution, it is usually just about Neanderthals.
When We Became Humans: The Story of Our Evolution by Michael Bright with illustrations by Hannah Bailey is a very good book on human evolution. The book is over 60 pages long in large format, and my copy is cloth bound. The production quality of the book is outstanding. (That is generally the case with this publisher.)
I am impressed with this title, and I strongly recommend it for anyone looking for a book for a kid of a certain age to read, or a younger kid to be read to.
What is that certain age? I’m thinking 10 plus or minus 2, depending on the kid. The publishers say 8-11. So somewhere around there. A 10 year old who absorbs the material in this book will do OK on an intro college human evolution midterm that focuses on the fossil and archaeological record. Or at least, the child will be able to effectively challenge the professor in a grade grubbing situation.
When We Became Humans: The Story of Our Evolution covers primate evolution, key moments in hominin history, bipedalism, early tools, brain evolution, the origin of fire (nice to see my research embodied as fact in an actual children’s book!), Homo erectus and Neanderthals, modern humans, foragers, early agriculture, Holocene history, language, art, early burial, and other things such as hobbits.
There are only four places where I would take issue with the facts as presented here. The root hypothesis for the human-chimp split is left out, I would discuss early tools differently, the author embraces the scavenging hypothesis too kindly, and the great global diversity and overall craziness of the agricultural transition is glossed in favor (mostly) of the old Fertile Crescent story, which is not wrong, just limited. Given that this book presents roughly 165 facts or perspectives, my disagreeing with only this small number is rather remarkable.
The art is great, the typefaces are well chosen, and the layout is artful, foregrounding the aforementioned art and the facts.
You can preorder this book now; it will be out mid July.
Is that just art that is rendered in raster? Not exactly. Pixel art is the sort of art you draw for digital cartoons or similar things. The skills and tools of making pixel art would apply to designing icons or logos used in electronic products as well.
This book will give you an introduction to the tricks of the trade of making technologically simple but artistically potent drawings, including ways to animate them.
The non-OpenSource (boo) software that is used throughout the book is not expensive and is easy to use, and yes, OpenSource alternatives are suggested and briefly discussed. The book relies on Aseprite and Pro Motion, with GraphicsGale (Windows only, boo) being a free alternative.
Techniques covered include shading, texture, proper use of color, motion and animation, and making things look sentient. Apparently, you can make money doing this sort of thing! This book is probably a good investment, at the very least to see if you have the talent and interest.
Author Jennifer Dawe is an animator and character designer who has been a professional pixel artist for the past 15 years. Author Matthew Humphries is Senior Editor at PCMag.com and a professional game designer.
OpenSource science means, among other things, using OpenSource software to do the science. For some aspects of software this is not important. It does not matter too much if a science lab uses Microsoft Word or if they use LibreOffice Write.
However, it does matter which spreadsheet you use, and as long as you are eschewing proprietary spreadsheets anyway, you might as well adopt the OpenSource office package LibreOffice or an equivalent, and then use its presentation software, word processor, and spreadsheet together.
OpenSource programs like Calc, R (a stats package), and OpenSource friendly software development tools like Python and the GPL C Compilers, etc. do matter. Why? Because your science involves calculating things, and software is a magic calculating box. You might be doing actual calculations, or production of graphics, or management of data, or whatever. All of the software that does this stuff is on the surface a black box, and just using it does not give you access to what is happening under the hood.
But, if you use OpenSource software, you have both direct and indirect access to the actual technologies that are key to your science project. You can see exactly how the numbers are calculated or the graphic created, if you want to. It might not be easy, but at least you don’t have to worry about the first hurdle in looking under the hood that happens with commercial software: they won’t let you do it.
Direct access to the inner workings of the software you use comes in the form of actually getting involved in the software development and maintenance. For most people, this is not something you are going to do in your scientific endeavor, but you could get involved with some help from a friend or colleague. For example, if you are at a university, there is a good chance that somewhere in your university system there is a computer department that has an involvement in OpenSource software development. See what they are up to, and find out what they know about the software you are using. Who knows, maybe you can get a special feature included in your favorite graphics package by helping your newfound computer friends cop an internal university grant! You might be surprised as to what is out there, as well as what is in there.
In any event, it is by design easy to get involved in OpenSource software projects, because they are built that way. Or, at least, they usually are and always should be.
The indirect benefit comes from the simple fact that these projects are OpenSource. Let me give you an example from the non-scientific world. (It is a made-up example, but it could reflect reality and is highly instructive.)
Say there is an operating system or major piece of software competing in a field of other similar products. Say there is a widely used benchmark standard that compares the applications and ranks them. Some of the different products load up faster than others, and use less RAM. That leaves both time (for you) and RAM (for other applications) that you might value a great deal. All else being equal, pick the software that loads faster in less space, right?
Now imagine a group of trollish deviants meeting in a smoky back room of the evile corporation that makes one of these products. They have discovered that if they leave a dozen key features that all the competitors use out of the loading process, so they load later, they can get a better benchmark. Without those standard components running, the software will load fast and be relatively small. It happens to be the case, however, that once all the features are loaded, this particular product is the slowest of them all, and takes up the most RAM. Also, the process of holding back functionality until it is needed is annoying to the user and sometimes causes memory conflicts, causing crashes.
In one version of this scenario, the concept of selling more of the product by using this performance tilting trick is considered a good idea, and someone might even get a promotion for thinking of it. That would be something that could potentially happen in the world of proprietary software.
In a different version of this scenario the idea gets about as far as the water cooler before it is taken down by a heavy tape dispenser to the head and kicked to death. That would be what would certainly happen in the OpenSource world.
You collect and manage data. You write code to process or analyze data. You use statistical tools to turn data into analytically meaningful numbers. You make graphs and charts. You write stuff and integrate the writing with the pretty pictures, and produce a final product.
The first thing you need to understand, if you are developing or enhancing the computer side of your scientific endeavor, is that you need the basic GNU tools and the command line access that comes automatically if you use Linux. You can get the same stuff with a few extra steps if you use Windows. The Apple Mac system is in between, with the command line tools already built in, but not quite as in-your-face available.
You may need to have an understanding of Regular Expressions, and how to use them on the command line (using sed or awk, perhaps) and in programming, perhaps in python.
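As a taste of the programming side, here is the sort of one-off cleanup job you might otherwise hand to sed, done instead with Python’s re module (the date-normalizing task is a made-up example):

```python
import re

# Hypothetical cleanup task: normalize US-style dates (MM/DD/YYYY)
# in a text stream to ISO format (YYYY-MM-DD), the kind of job one
# might otherwise do with sed on the command line.
def to_iso(text):
    # Capture month, day, and year, then reorder them.
    return re.sub(r"\b(\d{2})/(\d{2})/(\d{4})\b", r"\3-\1-\2", text)

print(to_iso("Sampled on 07/04/2019 and 12/25/2019."))
# → Sampled on 2019-07-04 and 2019-12-25.
```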
You will likely want to master the R environment because a) it is cool and powerful and b) a lot of your colleagues use R, so you will want to have enough under your belt to share code and data now and then. You will likely want to master Python, which is becoming the default scientific programming language. It is probably true that anything you can do in R you can do in Python using the available tools, but it is also true that the most basic statistical stuff you might be doing is easier in R than Python, since R is set up for it. The two systems are relatively easy to use and very powerful, so there is no reason to not have both in your toolbox. If you don’t choose the Python route, you may want to supplement R with GNU plotting tools.
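To give a sense of the tradeoff: in R the basic two-sample test is a one-liner, t.test(a, b), while in Python you either reach for a library or spell it out yourself. A sketch of the latter, using only the standard library and made-up numbers:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic, computed by hand.
    In R this is just t.test(a, b); here it is done with
    nothing but the Python standard library."""
    m1, m2 = mean(a), mean(b)
    v1, v2 = variance(a), variance(b)  # sample variances
    return (m1 - m2) / math.sqrt(v1 / len(a) + v2 / len(b))

print(round(welch_t([1, 2, 3], [2, 3, 4]), 3))  # → -1.225
```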
You will need some sort of relational database setup in your lab, some kind of OpenSource SQL-language-based system.
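For a small lab, even Python’s built-in sqlite3 module can qualify; a minimal sketch with made-up sample data:

```python
import sqlite3

# An in-memory SQLite database standing in for a small lab data store.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE samples (id INTEGER PRIMARY KEY, site TEXT, mass_g REAL)"
)
conn.executemany(
    "INSERT INTO samples (site, mass_g) VALUES (?, ?)",
    [("north", 12.4), ("north", 11.9), ("south", 14.2)],
)

# Summarize: mean mass per site.
for site, mean_mass in conn.execute(
    "SELECT site, AVG(mass_g) FROM samples GROUP BY site ORDER BY site"
):
    print(site, round(mean_mass, 2))
```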
You will have to decide on your own if you are into LaTeX. If you have no idea what I’m talking about, don’t worry, you don’t need to know. If you do know what I’m talking about, you probably have the need to typeset math inside your publications.
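For those who do know what I’m talking about, the payoff looks like this: a minimal document that typesets one biology-flavored equation (logistic growth, chosen as an arbitrary example):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Population growth under a carrying capacity $K$:
\begin{equation}
  \frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)
\end{equation}
\end{document}
```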
Finally, and of utmost importance, you should be willing to spend the upfront effort making your scientific work flow into scripts. Say you have a machine (or a place on the internet or an email stream if you are working collaboratively) where some raw data spits out. These data need some preliminary messing around with to discard what you don’t want, convert numbers to a proper form, etc. etc. Then, this fixed-up data goes through a series of analyses, possibly several parallel streams of analysis, to produce a set of statistical outputs, tables, graphics, or a new highly transformed data set you send on to someone else.
If this is something you do on a regular basis, and it likely is because your lab or field project is set up to get certain data certain ways, then do certain things to it, then ideally you would set up a script, likely in bash but calling gnu tools like sed or awk, or running Python programs or R programs, and making various intermediate files and final products and stuff. You will want to bother with making the first run of these operations take three times longer to set up, so that all the subsequent runs take one one hundredth of the time to carry out, or can be run unattended.
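A sketch of what one step of such a scripted pipeline might look like in Python (the field names and cleaning rules here are hypothetical):

```python
#!/usr/bin/env python3
"""Sketch of one step in a scripted workflow: take raw records,
discard what you don't want, convert numbers to a proper form,
and pass the cleaned data on to the next stage."""

def clean_rows(rows):
    """Drop malformed or impossible records; convert grams to kilograms."""
    for row in rows:
        try:
            mass = float(row["mass_g"])
        except (KeyError, ValueError):
            continue              # discard rows with missing or non-numeric mass
        if mass <= 0:
            continue              # discard physically impossible values
        yield {"site": row.get("site", ""), "mass_kg": mass / 1000.0}

raw = [
    {"site": "north", "mass_g": "1500"},
    {"site": "north", "mass_g": "oops"},   # typo in the raw data
    {"site": "south", "mass_g": "-2"},     # impossible value
]
print(list(clean_rows(raw)))  # → [{'site': 'north', 'mass_kg': 1.5}]
```

In a real run this step would read its rows with csv.DictReader and write output for the next stage, with a short bash script chaining several such steps together.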
Nothing, of course, is so simple as I just suggested … you will be changing the scripts and Python programs (and LaTeX specs) frequently, perhaps. Or you might have one big giant complex operation that you only need to run once, but you KNOW it is going to screw up somehow … a value that is entered incorrectly or whatever … so the entire thing you need to do once is actually something you have to do 18 times. So make the whole process a script.
Aside from convenience and efficiency, a script does something else that is vitally important: it documents the process, both for you and for others. This alone is probably more important than the convenience, in many cases.
Being small in a world of largeness
Here is a piece of advice you won’t get from anyone else. As you develop your computer working environment (the set of software tools and stuff that you use to run R or Python and all that), you will run into opportunities to install some pretty fancy and sophisticated development systems that have many cool bells and whistles, but that are really designed for team development of large software projects, and for continual maintenance of versions of that software as it evolves as a distributed project.
Don’t do that unless you need to. Scientific computing is often not that complex or team oriented. Sure, you are working with a team, but probably not a team of a dozen people working on the same set of Python programs. Chances are, much of the code you write is going to be tweaked to be what you need it to be and then never change. There are no marketing gurus coming along and asking you to make a different menu system to attract millennials. You are not competing with other products in a market of any sort. You will change your software when your machine breaks and you get a new one, and the new one produces output in a more convenient style than the old one. Or whatever.
In other words, if you are running an enterprise level operation, look into systems like Anaconda. If you are a handful of scientists making and controlling your own workflow, stick with the simple scripts and avoid the snake. The setup and maintenance of an enterprise level system for using R and Python is probably more work before you get your first t-test or histogram than it is worth. This is especially true if you are more or less working on your own.
Another piece of advice. Some software decisions are based on deeply rooted cultural norms or fetishes that make no sense. I’m an emacs user. This is the most annoying, but also, most powerful, of all text editors. Here is an example of what is annoying about emacs. In the late 70s, computer keyboards had a “meta” key (it was actually called that), which is now the alt key. Emacs made use of the meta key. No person has seen or used a meta key since about 1979, but emacs refuses to change its documentation to use the word “alt” for this key. Rather, the documentation says something like “here, use the meta key, which on some keyboards is the alt key.” That is a cultural fetish.
Using LaTeX might be a fetish as well. Obviously. It is possible that for some people, using R is a fetish, and they should rethink and switch to Python for what they are doing. The most dangerous fetish, of course, is using proprietary scientific software because you think your numbers will only be acceptable if you pay hundreds of dollars a year for SPSS or BMDP for stats, as opposed to zero dollars a year for R. In fact, the reverse is true. Only with an OpenSource stats package can you really be sure how the stats or other values are calculated.
This book focuses on Python rather than R, and covers LaTeX, which, frankly, will not be useful for many. This also means that the regular expression work in the book is not as useful for all applications, as might be the case with a volume like Mastering Regular Expressions. But overall, this volume does a great job of mapping out the landscape of scripting-oriented scientific computing, using excellent examples from biology.
This book can and should be used as a textbook for an advanced high-school-level course to prep young and upcoming investigators for when they go off and apprentice in labs at the start of their careers. It can be used as a textbook in a short seminar in any advanced program to get everyone in a lab on the same page. I suppose it would be great if Princeton came out with a version for math and physical sciences, or geosciences, but really, this volume can be generalized beyond biology.
Stefano Allesina is a professor in the Department of Ecology and Evolution at the University of Chicago and a deputy editor of PLoS Computational Biology. Madlen Wilmes is a data scientist and web developer.