
Books On The Energy Transition

Be informed, have a look.

Drawdown: The Most Comprehensive Plan Ever Proposed to Reverse Global Warming edited by Paul Hawken.

This is a great resource for understanding the diverse strategies available to decarbonize. There is a flaw, though, and I think it is a fairly significant one. Drawdown ranks the different strategies, so you can see what (seemingly) should be done first. But the ranking is highly sensitive to how the data are organized. For example, onshore and offshore wind, if combined, would probably rise to the top of the heap, but counted separately they are merely in the top several. Also, the rankings change quickly over time, in part because as we actually do some of these things they inevitably move lower in the ranking. So don’t take the ranking too seriously.

Free Market Environmentalism for the Next Generation by Terry Anderson and Donald Leal.

I mention this book because I hope it can help the free market do what it never actually does. The energy business is not, never was, and can’t really be a free market, so expecting market forces to do much that is useful is roughly the same as expecting the actual second coming of the messiah. Won’t happen. This book is not an ode to those market forces, though, but rather a third stab (I think), and a thoughtful one, at a complex problem.

Related, of interest: Windfall: The Booming Business of Global Warming by McKenzie Funk. “Funk visits the front lines of the melt, the drought, and the deluge to make a human accounting of the booming business of global warming. By letting climate change continue unchecked, we are choosing to adapt to a warming world. Containing the resulting surge will be big business; some will benefit, but much of the planet will suffer. McKenzie Funk has investigated both sides, and what he has found will shock us all.”

Designing Climate Solutions: A Policy Guide for Low-Carbon Energy by Hal Harvey, Robbie Orvis, and Jeffrey Rissman. “A small set of energy policies, designed and implemented well, can put us on the path to a low carbon future. Energy systems are large and complex, so energy policy must be focused and cost-effective. One-size-fits-all approaches simply won’t get the job done. Policymakers need a clear, comprehensive resource that outlines the energy policies that will have the biggest impact on our climate future, and describes how to design these policies well.”

Little Myth on the Prairie

I have been slowly and steadily working on a project that involves an old topic of interest: the dynamic changes in society, economy, and settlement pattern as Euro-Americans ensnared the middle and western parts of the continent in their material and political net of civilization, a process sometimes known as the Westward Expansion. For this reason, I came across a book of interest, a NYT Book Review “Best Ten” pick for 2017, which also happens to be on sale cheap in Kindle format.

Prairie Fires: The American Dreams of Laura Ingalls Wilder by Caroline Fraser.

As pointed out in a review by Patricia Nelson Limerick, the exploitation and eastward shipping, for profit, of bison hides and precious metals (and everything in between) was not the only gig in the west. The story itself, the stories of pioneering, gun fighting, Indian, er, relations, and everything else, collected in situ and refined through the myth-mills of the publishing industry, amounted to a significant and valuable commodity. One of the most productive ore lodes of daring narrative in the plains and midwest was the one tapped by Laura Ingalls Wilder via the Little House series, and other tales. Also, her daughter was in on it.

Prairie Fires pulls back the switch-grass curtain. To quote from PNL’s review:

Rendering this biography as effective at racking nerves as it is at provoking thought, the story of Wilder’s emergence as a major sculptor of American identity pushes far past the usual boundaries of probability and plausibility. For anyone who has drifted into thinking of Wilder’s “Little House” books as relics of a distant and irrelevant past, reading “Prairie Fires” will provide a lasting cure. Just as effectively, for readers with a pre-existing condition of enthusiasm for western American history and literature, this book will refresh and revitalize interpretations that may be ready for some rattling. Meanwhile, “Little House” devotees will appreciate the extraordinary care and energy Fraser brings to uncovering the details of a life that has been expertly veiled by myth. Perhaps most valuable, “Prairie Fires” demonstrates a style of exploration and deliberation that offers a welcome point of orientation for all Americans dismayed by the embattled state of truth in these days of polarization.

-Patricia Nelson Limerick, review, The New York Times

Check it out!

How to do science with a computer: workflow tools and OpenSource philosophy

I have two excellent things on my desk, a Linux Journal article by Andy Wills, and a newly published book by Stefano Allesina and Madlen Wilmes.

They are:

Computing Skills for Biologists: A Toolbox by Stefano Allesina and Madlen Wilmes, Princeton University Press.

Open Science, Open Source, and R, by Andy Wills, Linux Journal

Why OpenSource?

OpenSource science means, among other things, using OpenSource software to do the science. For some aspects of software this is not important. It does not matter too much if a science lab uses Microsoft Word or if they use LibreOffice Write.

However, it does matter if you use LibreOffice Calc as your spreadsheet. And as long as you are eschewing proprietary spreadsheets anyway, you might as well use the full OpenSource office package, LibreOffice or equivalent, and get the OpenSource presentation software and word processor along with the spreadsheet.

OpenSource programs like Calc and R (a stats package), and OpenSource-friendly software development tools like Python and the GNU C compilers, do matter. Why? Because your science involves calculating things, and software is a magic calculating box. You might be doing actual calculations, or production of graphics, or management of data, or whatever. All of the software that does this stuff is, on the surface, a black box, and just using it does not give you access to what is happening under the hood.

But if you use OpenSource software, you have both direct and indirect access to the actual technologies that are key to your science project. You can see exactly how the numbers are calculated or the graphic created, if you want to. It might not be easy, but at least you don’t have to worry about the first hurdle in looking under the hood that you hit with commercial software: they won’t let you do it.

Direct access to the inner workings of the software you use comes in the form of actually getting involved in the software development and maintenance. For most people, this is not something you are going to do in your scientific endeavor, but you could get involved with some help from a friend or colleague. For example, if you are at a university, there is a good chance that somewhere in your university system there is a computer department involved in OpenSource software development. See what they are up to, and find out what they know about the software you are using. Who knows, maybe you can get a special feature included in your favorite graphics package by helping your newfound computer friends cop an internal university grant! You might be surprised at what is out there, as well as what is in there.

In any event, it is intentionally easy to get involved in OpenSource software projects, because they are designed that way. Or at least they usually are, and always should be.

The indirect benefit comes from the simple fact that these projects are OpenSource. Let me give you an example from the non-scientific world. (It is a made-up example, but it could reflect reality, and it is highly instructive.)

Say there is an operating system or major piece of software competing in a field of other similar products. Say there is a widely used benchmark standard that compares the applications and ranks them. Some of the different products load up faster than others, and use less RAM. That leaves both time (for you) and RAM (for other applications) that you might value a great deal. All else being equal, pick the software that loads faster in less space, right?

Now imagine a group of trollish deviants meeting in a smoky back room of the evil corporation that makes one of these products. They have discovered that if they leave out of the loading process a dozen key features that all the competitors load up front, deferring them until later, they can get a better benchmark score. Without those standard components running, the software will load fast and be relatively small. It happens to be the case, however, that once all the features are loaded, this particular product is the slowest of them all and takes up the most RAM. Also, the practice of holding back functionality until it is needed is annoying to the user and sometimes causes memory conflicts, causing crashes.

In one version of this scenario, the concept of selling more of the product by using this performance-tilting trick is considered a good idea, and someone might even get a promotion for thinking of it. That is something that could potentially happen in the world of proprietary software.

In a different version of this scenario the idea gets about as far as the water cooler before it is taken down by a heavy tape dispenser to the head and kicked to death. That would be what would certainly happen in the OpenSource world.

So, go OpenSource! And read the article on this topic from Linux Journal, which, by the way, has been producing some great articles lately.

The Scientist’s Workflow and Software

You collect and manage data. You write code to process or analyze data. You use statistical tools to turn data into analytically meaningful numbers. You make graphs and charts. You write stuff and integrate the writing with the pretty pictures, and produce a final product.

The first thing you need to understand, if you are developing or enhancing the computer side of your scientific endeavor, is that you need the basic GNU tools and command line access that come automatically if you use Linux. You can get the same stuff with a few extra steps if you use Windows. The Apple Mac system is in between, with the command line tools already built in, but not quite as in-your-face available.

You may need an understanding of regular expressions, and of how to use them on the command line (using sed or awk, perhaps) and in programming, perhaps in Python.
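To make that concrete, here is a minimal sketch in Python; the sample text and the pattern are invented for illustration. It pulls ISO-style dates out of a messy string, the same kind of job you might otherwise hand to grep or sed.

    import re  # Python's built-in regular expression module

    # A made-up line of field notes with dates buried in the prose.
    text = "Collected 12 samples on 2018-06-03; re-checked the traps 2018-06-11."

    # \d{4}-\d{2}-\d{2} matches four digits, a dash, two digits, a dash, two digits.
    dates = re.findall(r"\d{4}-\d{2}-\d{2}", text)
    print(dates)  # ['2018-06-03', '2018-06-11']

The same pattern, dropped into sed or awk, does the same work on the command line.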

You will likely want to master the R environment, because a) it is cool and powerful and b) a lot of your colleagues use R, so you will want to have enough under your belt to share code and data now and then. You will likely want to master Python, which is becoming the default scientific programming language. It is probably true that anything you can do in R you can do in Python using the available tools, but it is also true that the most basic statistical stuff you might be doing is easier in R than in Python, since R is set up for it. The two systems are relatively easy to use and very powerful, so there is no reason not to have both in your toolbox. If you don’t choose the Python route, you may want to supplement R with plotting tools such as gnuplot.
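As a sketch of that “easier in R” point: a two-sample t-test in R is the one-liner t.test(a, b), while the Python version, using scipy and some invented numbers, looks like this.

    from scipy import stats

    # Two invented samples, say measurements from control and treatment plots.
    a = [5.1, 4.9, 5.6, 5.0, 5.4]
    b = [5.9, 6.1, 5.7, 6.3, 5.8]

    # Welch's t-test, which does not assume equal variances.
    t, p = stats.ttest_ind(a, b, equal_var=False)
    print("t =", t, "p =", p)

Not hard, but R hands you the whole report (confidence interval and all) without the imports.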

You will need some sort of relational database setup in your lab, some kind of OpenSource SQL-language-based system.

You will have to decide on your own if you are into LaTeX. If you have no idea what I’m talking about, don’t worry, you don’t need to know. If you do know what I’m talking about, you probably have the need to typeset math inside your publications.
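For the curious uninitiated, here is a taste of what LaTeX source looks like, using a generic textbook formula purely for illustration: you type markup, and the typesetting engine turns it into properly set math.

    % LaTeX source for a numbered, typeset equation (the sample mean)
    \begin{equation}
      \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i
    \end{equation}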

Finally, and of utmost importance, you should be willing to spend the upfront effort turning your scientific workflow into scripts. Say you have a machine (or a place on the internet, or an email stream if you are working collaboratively) where some raw data spit out. These data need some preliminary messing around with, to discard what you don’t want, convert numbers to a proper form, etc. Then this fixed-up data goes through a series of analyses, possibly several parallel streams of analysis, to produce a set of statistical outputs, tables, graphics, or a new highly transformed data set you send on to someone else.

If this is something you do on a regular basis, and it likely is, because your lab or field project is set up to get certain data in certain ways and then do certain things to them, then ideally you would set up a script, likely in bash but calling GNU tools like sed or awk, or running Python or R programs, producing various intermediate files and final products along the way. It is worth making the first run of these operations take three times longer to set up, so that all the subsequent runs take one one-hundredth of the time to carry out, or can be run unattended.

Nothing, of course, is so simple as I just suggested … you will be changing the scripts and Python programs (and LaTeX specs) frequently, perhaps. Or you might have one big giant complex operation that you only need to run once, but you KNOW it is going to screw up somehow … a value that is entered incorrectly or whatever … so the entire thing you need to do once is actually something you have to do 18 times. So make the whole process a script.
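Here is a minimal sketch of the shape of such a pipeline, in Python; every file name, column name, and cleaning rule is hypothetical, a stand-in for whatever your data stream actually requires.

    #!/usr/bin/env python3
    """Hypothetical pipeline: raw instrument dump -> cleaned CSV -> summary."""
    import csv
    import statistics

    # Step 1: read the raw dump, discard flagged rows, coerce numbers.
    clean = []
    with open("raw_data.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row["flag"] == "bad":
                continue  # discard what you don't want
            clean.append({"site": row["site"], "value": float(row["value"])})

    # Step 2: write the fixed-up data for the downstream analyses.
    with open("clean_data.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["site", "value"])
        writer.writeheader()
        writer.writerows(clean)

    # Step 3: one of the parallel analysis streams, a simple summary.
    values = [r["value"] for r in clean]
    print("n =", len(values), "mean =", round(statistics.mean(values), 3))

Run it once to get it right; after that, the whole operation is a single command, repeatable and unattended.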

Aside from convenience and efficiency, a script does something else that is vitally important: it documents the process, both for you and for others. This alone is probably more important than the convenience part of scripting your science, in many cases.

Being small in a world of largeness

Here is a piece of advice you won’t get from anyone else. As you develop your computer working environment, the set of software tools and stuff that you use to run R or Python and all that, you will run into opportunities to install some pretty fancy and sophisticated development systems that have many cool bells and whistles, but that are really designed for team development of large software projects, and for continual maintenance, over time, of versions of that software as it evolves as a distributed project.

Don’t do that unless you need to. Scientific computing is often not that complex or team-oriented. Sure, you are working with a team, but probably not a team of a dozen people working on the same set of Python programs. Chances are, much of the code you write is going to be tweaked until it is what you need it to be, and then never changed. There are no marketing gurus coming along and asking you to make a different menu system to attract millennials. You are not competing with other products in a market of any sort. You will change your software when your machine breaks and you get a new one, and the new one produces output in a more convenient style than the old one. Or whatever.

In other words, if you are running an enterprise-level operation, look into systems like Anaconda. If you are a handful of scientists making and controlling your own workflow, stick with the simple scripts and avoid the snake. The setup and maintenance of an enterprise-level system for using R and Python is probably more work than it is worth before you get your first t-test or histogram. This is especially true if you are more or less working on your own.

Culture

Another piece of advice. Some software decisions are based on deeply rooted cultural norms, or fetishes, that make no sense. I’m an emacs user. This is the most annoying, but also the most powerful, of all text editors. Here is an example of what is annoying about emacs. In the late 70s, computer keyboards had a “meta” key (it was actually called that), which is now the alt key. Emacs made use of the meta key. No person has seen or used a meta key since about 1979, but emacs refuses to change its documentation to use the word “alt” for this key. Rather, the documentation says something like “here, use the meta key, which on some keyboards is the alt key.” That is a cultural fetish.

Using LaTeX might be a fetish as well. Obviously. It is possible that for some people, using R is a fetish, and they should rethink and switch to using Python for what they are doing. The most dangerous fetish, of course, is using proprietary scientific software because you think your numbers will only be acceptable if you pay hundreds of dollars a year to use SPSS or BMD for stats, as opposed to zero dollars a year for R. In fact, the reverse is true. Only with an OpenSource stats package can you really be sure how the stats or other values are calculated.

And finally…

And my final piece of advice is to get and use this book: Computing Skills for Biologists: A Toolbox by Allesina and Wilmes.

This book focuses on Python rather than R, and it covers LaTeX, which, frankly, will not be useful for many readers. This also means the book’s regular expression coverage is not as broadly applicable as it might be in a volume like Mastering Regular Expressions. But overall, this volume does a great job of mapping out the landscape of scripting-oriented scientific computing, using excellent examples from biology.

Computing Skills for Biologists can and should be used as a textbook for an advanced high school level course, to prep young and upcoming investigators for when they go off and apprentice in labs at the start of their careers. It can be used as a textbook in a short seminar in any advanced program to get everyone in a lab on the same page. I suppose it would be great if Princeton came out with a version for math and physical sciences, or geosciences, but really, this volume can be generalized beyond biology.

Stefano Allesina is a professor in the Department of Ecology and Evolution at the University of Chicago and a deputy editor of PLoS Computational Biology. Madlen Wilmes is a data scientist and web developer.

Time itself as a resource that drives evolution

Many of the key revolutions, or at least overhauls, in biological thinking have come as a result of the broad realization that a variable theretofore treated as background is not simply background, but central and causative.

I’m sure everyone always thought, since the idea was first recognized, that if genes are important then good genes would be good. Great, even. But it took a while for Amotz Zahavi and some others to insert good genes into Darwin’s sexual selection as the cause of the sometimes wild elaboration of traits, rather than a female aesthetic or mere runaway selection.

Violence in the United States Congress

There is probably a rule, in the chambers of the United States Congress, that you can’t punch a guy. Living rules are clues to the past. Where I live now, there is probably one Middle or High School age kid across 130 homes, but we have a rule: You can’t leave your hockey goals or giant plastic basketball nets out overnight. So all the old people who live on my street have to drag those things into the garage at the end of every day, after their long sessions of pickup ball. Or, more likely, years ago, there were kids everywhere and the “Get off my lawn” contingent took over the local board and made all these rules. So, today, in Congress, you can’t hit a guy.

But in the old days, that wasn’t so uncommon. You have heard about the caning of Charles Sumner. Southern slavery supporter Preston Brooks beat the piss out of Senator Charles Sumner, an anti-slavery man from Massachusetts. They weren’t even in the same chamber. Brooks was in the House; Sumner was in the Senate. Sumner almost didn’t survive the ruthless and violent beating, which came after a long period of bullying and ridicule by a bunch of southern bullies. Witnesses described a scene in which Brooks was clearly trying to murder Sumner, and seems to have failed only because the cane he was using broke into too many pieces, depriving the assailant of the necessary leverage. Parts of that cane, by the way, were used to make pendants worn by Brooks’s allies to celebrate this attempted murder of a Yankee anti-slavery member of Congress.

Here’s the thing. You’ve probably heard that story, or some version of it, because it was a major example of violence in the US Congress. But in truth, there were many other acts of verbal and physical violence carried out among our elected representatives, often in the chambers, during the decades leading up to the Civil War. Even a cursory examination of this series of events reveals how fisticuffs, sometimes quite serious, can be a prelude to a bloody fight in which perhaps as many as a million people, all told, were killed. Indeed, the number of violent events, almost always southerner against northerner, may have been large enough to never allow the two sides, conservative, southern, and right-wing on one hand vs. progressive, liberal, and not as southern on the other, to equalize in their total level of violence against each other. Perhaps there are good people on both sides, but the preponderance of thugs reside on one side only.

Which brings us to this. You have heard of the caning of Sumner, but you probably have not read The Field of Blood: Violence in Congress and the Road to Civil War by Yale historian Joanne B. Freeman.

Professor Freeman is one of the hosts of a podcast I count among my top three favorites, Backstory, produced by Virginia Humanities. Joanne is one of the “American History Guys,” along with Ed Ayers (19th century), Brian Balogh (20th century), Nathan Connolly (immigration history, urban history), and emeritus host Peter Onuf (18th century). Freeman’s newest book covers the first half of the 19th century, but her primary area of interest heretofore has been the 18th century, and her prior works have focused, among other things, on Alexander Hamilton: Affairs of Honor: National Politics in the New Republic, about the nastiness among the founding fathers, and two major collections focused on Hamilton, The Essential Hamilton: Letters & Other Writings: A Library of America Special Publication and Alexander Hamilton: Writings.

I strongly urge you to have a look at Freeman’s book, in which she brings to light a vast amount of information about utter asshatitude among our elected representatives, based on previously unexplored documents. I also strongly urge you to listen to the podcast. The most recent episode as of this writing is on video games and American history. The previous episode covers the hosts’ book picks for the year.

Gift Guide: Books About Trump And The Fall of America

Might as well admit it. America has been ruined. Oh, it is fixable, not “totaled” like your car after you roll it down a hill during an ice storm. More like you failed to set the parking brake and it got loose and crashed into a brick wall, then some hoodlum broke through the window and ripped out your radio, then there was a hail storm…

Anyway, here is a carefully selected list of books related to Trump and the Trump fake Presidency, integrated with a list of books that are NOT about that, but rather about leadership in history. The former are to get you steamed up; the latter are the control rods. A few are just about attacks on democracy by the elite and powerful.

I thought it would be fun if everybody gave at least one of these books to somebody as a holiday gift this year. I’ll be giving a few.

Learn SQL and Tell Stories With Your Data

First, let’s get this one thing out of the way: how do you pronounce “SQL”?

Donald Chamberlin, the co-developer of SQL, pronounces it by saying the letters out loud: Ess Cue Ell. However, many computer science teachers prefer “sequel,” and in at least one poll, the latter won out. One of the most common implementations of the database language is MySQL, and that piece of software is officially pronounced “My Ess Cue Ell,” not “My Sequel.”

I myself have never once uttered the word “sequel” when referring to this database system. I have also never once uttered either the term “Jiff” or “Giff” in relation to *.gif files. They are, to me, “Gee Eye Eff” files. I admit, however, to calling *.jpg files “Jay pegs” even when they are not *.jpeg.

But I digress. We are here to talk about a new book, on SQL.

The book is Practical SQL: A Beginner’s Guide to Storytelling with Data by the award-winning journalist Anthony DeBarros. DeBarros is as much a writer as he is a database geek, which gives this book a pleasant twist.

The book provides what you need to know to create databases and set up relationships. But don’t get excited, this is not a dating book.

See, a “database” isn’t really a thing, but a collection of things. Normally, at the root of a database is a set of tables, which look like squared-off sections of spreadsheets, or highly organized lists, if you lay them out. But then the different fields (columns) of the tables are related to each other. This is important. Let’s say you have a table of the individuals in your club, and each individual has a set of skills they bring to the table. It is a model railroad club, so you’ve got engineers, artificial vegetation experts, landscape sculptors, background and sky painters, and so on. Also, each club member has a known set of days of the week and hours when they are available to meet or to manage some event you are having. Plus, they each have lunch food and drink preferences for when you order out. Three of the members use wheelchairs. And so on.

You have a table of dates and times that will be when your club will meet over the next year. You have a list of venues you will meet in. Each venue is associated with a different deli where you order out. Some of the venues are not wheelchair friendly, while some are.

Imagine putting together a big chart that shows all the events, who is going to them, what everyone will eat, what everyone will do, and special needs requirements, for the next ten years.

If that were one single giant structured table, each time a given member was included on a sublist for whatever reason, there would also be all the information about that person’s address, phone number, email, food preference, skill, and so on.

So you don’t do that. Instead, the database is taught to associate the name of each member with that member’s personal file, including all that personal information, in a way that lets you selectively ignore or include that information. Then the database lets you construct new, novel, virtual tables that combine the information in a clever way.

For instance, for an upcoming event, you can have a to-do list that includes which materials to order for a build of a new model, and whether or not the person who helps Joe with the wheelchair thing should be sent a note to remind him to definitely come, and a precise list to send to the corner deli, as well as the phone number of the deli, for lunch, and so on.

Tables, linked together with relationships, are then mined to make these novel virtual tables, which are called queries.
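To see what that looks like in practice, here is a tiny sketch using Python’s built-in sqlite3 module; the club tables and names are invented, and the SQL is the same kind of thing you would write in PostgreSQL. Two related tables, and a query that joins them into one of those novel virtual tables:

    import sqlite3

    db = sqlite3.connect(":memory:")  # a throwaway in-memory database
    db.executescript("""
        CREATE TABLE members (id INTEGER PRIMARY KEY, name TEXT,
                              skill TEXT, lunch TEXT);
        CREATE TABLE events (id INTEGER PRIMARY KEY,
                             member_id INTEGER REFERENCES members(id),
                             meet_date TEXT);
        INSERT INTO members VALUES (1, 'Joe', 'engineer', 'pastrami'),
                                   (2, 'Sue', 'sky painter', 'falafel');
        INSERT INTO events VALUES (1, 1, '2019-01-12'), (2, 2, '2019-01-12');
    """)

    # The "novel table": who is coming on January 12, and what the deli order is.
    query = """
        SELECT m.name, m.skill, m.lunch
        FROM events AS e
        JOIN members AS m ON m.id = e.member_id
        WHERE e.meet_date = '2019-01-12'
    """
    for name, skill, lunch in db.execute(query):
        print(name, skill, lunch)

The member details live in one place, the schedule in another, and the join assembles them on demand.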

You may need to import data, export data, clean up errors, you may be using a GIS system, creating automatic emails or mail merge documents, and at some point you might even want to analyze the data.

Practical SQL: A Beginner’s Guide to Storytelling with Data tells you all the stuff you need to do in order to carry out these tasks. As is the usual case with No Starch Press books, the code that is used in the book is downloadable.

The book assumes you are using PostgreSQL, which is free (and there are instructions to get it) but all SQL systems are very similar, so that really doesn’t matter too much.

Everybody who works with data should know some SQL. All desktop operating systems (Linux, MacOS, Windows) run this sort of software, and it runs about the same way on all of them. Just so you know, you are using SQL right now, reading this blog, because SQL or something like it lies at the base of pretty much every common way of serving up web pages. Prior to you clicking on the link, these very words were in a database file, along with the name of the post, a link to the graphic used, and so on. A bit of PHP code accessed the data from the SQL database and rendered it into HTML, which was then fed to your browser. SQL is one of those things that lie at the root of how we communicate online, and the basics of how it works and what you can do with it have not changed in a long time. The first relational models go back to 1970. Remember dBase? That was an early relational database program, deployed in the early 1980s. By the mid 1980s, nearly everything you can do with modern SQL, to speak of, was implemented.

Enjoy and learn from Practical SQL: A Beginner’s Guide to Storytelling with Data.

Birds of Central America: Review

Belize, Guatemala, Honduras, El Salvador, Nicaragua, Costa Rica, and Panama make up Central America. Notice that had I not used the Oxford comma there, you’d be thinking “Costa Rica and Panama” was a country like Trinidad and Tobago. Or Antigua and Barbuda. Or Bosnia and Herzegovina. Anyway, those countries have about 1,261 species of birds, and the newly minted Birds of Central America: Belize, Guatemala, Honduras, El Salvador, Nicaragua, Costa Rica, and Panama (Princeton Field Guides) by Andrew Vallely and Dale Dyer covers 1,194 of them (plus 67 probable accidentals). Obviously, many (nearly all) of those birds exist outside that relatively small geographic area, up into North America and down into South America. But I’ll remind you that there are some 10,000 bird species, so this region has a bird list that represents over 10% of that diversity. Nothing to shake a beak at.

This is a classic Peterson/Petrides style guide, with the usual front matter about bird ID, geography, habitats, etc. Species drawings are on one leaf, with descriptions and range maps on the facing page. The drawings do not have Peterson pointer lines, but there are a lot of drawings to clarify regional versions and life history stages. In fact, the attention to regional variation is a notable and outstanding feature of this field guide.

There is also an extensive bibliography with over 600 references. The book is medium format, not pocket but not huge, and just shy of 600 pages long. Also, last time I clicked through it was on sale. Know somebody going to Central America over winter break? Get this for them as their holiday gift!

Like the Princeton guides tend to be, this is a very nice book, well written, well constructed, and likely to become the standard for that region for the foreseeable future.

Now is your last chance to read Isaac Asimov’s Foundation Trilogy

… before it gets made into a TV show.

There have been, I think, two earlier failed starts for a project to film what might be the number one interstellar long-history science fiction work ever written. This project looks like it is going to happen. The producer is Apple, so you will probably have to buy their latest computing device to get permission to watch it. (And therein could lie the plot for a very Asimov-like science fiction story…)

Anyway, you need to read the books before you watch the show, so get started. There is no information available as to when this series will be released. And, despite my snark above, it is not known where it will be shown, but it will be streamed. And it will be in 10 parts.

The three books in the Foundation Trilogy are:

Foundation
Foundation and Empire
Second Foundation

There is a complex publication history, and there are other stories and books, but that is the central bunch of words.

Alternatively, the Foundation Trilogy plus: The Foundation Trilogy (Foundation, Foundation and Empire, Second Foundation), The Stars, Like Dust; The Naked Sun; I, Robot

These may also be among the most commonly available used books in science fiction, so check your local used book store, if you can still find one. (Hint: online, because of their continued popularity, a used copy of one of these volumes runs above $4.00 with shipping, with the shipping price dropping as the volume price increases, making the actual cost per volume between $8.00 and $12.00. So don’t bother with used copies online.)

When the Uncertainty Principle goes to 11

When the Uncertainty Principle Goes to 11: Or How to Explain Quantum Physics with Heavy Metal is a new book by the amazing Philip Moriarty. You may know Moriarty from the Sixty Symbols Youtube Channel.

You can listen to an interview Mike Haubrich and I conducted with Philip Moriarty here, on Ikonokast. Our conversation wanders widely through the bright halls of education, the dark recesses of philosophy of science and math(s), and the nanotiny, and we even talk about the book a bit.

Moriarty, an experienced and beloved teacher at the University of Nottingham, uses heavy metal to explain some of the most difficult to understand concepts of nano science. Much of this has to do with waves, and when it comes to particle physics, waves are exactly half the story. This idea came to him in part because of what he calls the great overlap in the Venn diagram of aspiring physicists and intense metal fans. Feedback, rhythm, and guitar strings twanging (or not) are all explained by the same theories that help us understand the quantum world, and are touchstones for explaining that world.

I’ve read all the books that do this, that attempt to explain this area of physics, and they are mostly pretty great. When the Uncertainty Principle Goes to 11 does it the best. Is this because it is the most recent? Does Philip Moriarty stand on the shoulders of giants? Or is it because the author has hit on a better way of explaining this material, and thus, owes his greatness to the smallness of his contemporaries? We may never know, but I promise you that When the Uncertainty Principle Goes to 11 is a great way to shoulder your way into the smallness of the smallest worlds.

As you will understand if you check out the Ikonokast interview, Moriarty has taken the risk of using math in this book. The math is straightforward and accompanied by explanation, so you do not have to be a math-trained expert to use and understand it. Most importantly, while Moriarty uses music, metal, and other real-life things to explain quantum physics, these analogies are more than just analogies. They are examples of similar phenomena on different scales. As Philip told me during the interview, we don’t diffract when we walk through a doorway, because the things that happen on nano scales don’t scale up. But wave functions serve to pick apart both quantum mechanics and Metallica, so why not explore guitar strings, feedback, and mosh pits together with condensed matter physics?
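For the flavor of that parallel, in standard textbook notation (these formulas are not quoted from the book): a guitar string of length L and a quantum particle confined to a “box” of width L are quantized by the same boundary condition, a whole number of half-wavelengths fitting into L.

    f_n = \frac{n v}{2L}, \quad n = 1, 2, 3, \ldots
    \qquad \text{(standing waves on a string, wave speed } v\text{)}

    E_n = \frac{n^2 h^2}{8 m L^2}, \quad n = 1, 2, 3, \ldots
    \qquad \text{(particle in a box, particle mass } m\text{)}

Same integer n, same geometry; only the scale changes.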

I strongly recommend this book. Just get it, read it. Also, the illustrations by Pete McPartlan are fun and enlightening. Even if you think you understand quantum physics very well already, and I know most of my readers do, you will learn new ways of thinking or explaining.

Philip Moriarty is a professor of physics, a heavy metal fan, and a keen air-drummer. His research focuses on prodding, pushing, and poking single atoms and molecules; in this nanoscopic world, quantum physics is all. Moriarty has taught physics for more than twenty years and has always been struck by the number of students in his classes who profess a love of metal music, and by the deep connections between heavy metal and quantum mechanics. He’s a father of three (Niamh, Saoirse, and Fiachra) who have patiently endured his off-key attempts to sing along with Rush classics for many years. Unlike his infamous namesake, Moriarty has never been particularly enamored of the binomial theorem.

The Nature and History of Presidential Leadership: A book of olden times.

The timing of Doris Kearns Goodwin’s latest book is perfect.

She is an excellent historian and writer, and you probably know of her as the author of several of the best, or at least very nearly the best, volumes on a range of key subjects in American History. She wrote Team of Rivals: The Political Genius of Abraham Lincoln about Lincoln, Lyndon Johnson and the American Dream: The Most Revealing Portrait of a President and Presidential Power Ever Written about Johnson, No Ordinary Time: Franklin and Eleanor Roosevelt: The Home Front in World War II about FDR, and The Bully Pulpit: Theodore Roosevelt and the Golden Age of Journalism about TR.

And now, we have Leadership: In Turbulent Times by Doris Kearns Goodwin.

Are leaders born or made? Where does ambition come from? How does adversity affect the growth of leadership? Does the leader make the times or do the times make the leader?

In Leadership, Goodwin draws upon the four presidents she has studied most closely—Abraham Lincoln, Theodore Roosevelt, Franklin D. Roosevelt, and Lyndon B. Johnson (in civil rights)—to show how they recognized leadership qualities within themselves and were recognized as leaders by others. By looking back to their first entries into public life, we encounter them at a time when their paths were filled with confusion, fear, and hope.

Leadership tells the story of how they all collided with dramatic reversals that disrupted their lives and threatened to shatter forever their ambitions. Nonetheless, they all emerged fitted to confront the contours and dilemmas of their times.

No common pattern describes the trajectory of leadership. Although set apart in background, abilities, and temperament, these men shared a fierce ambition and a deep-seated resilience that enabled them to surmount uncommon hardships. At their best, all four were guided by a sense of moral purpose. At moments of great challenge, they were able to summon their talents to enlarge the opportunities and lives of others.

This seminal work provides an accessible and essential road map for aspiring and established leaders in every field. In today’s polarized world, these stories of authentic leadership in times of apprehension and fracture take on a singular urgency.

John Le Carre’s Smiley Books

This started out as one of those posts I put up pointing to a cheap book on the Kindle. And it still is a post pointing to a cheap book, but then, I have a pitch for you to read John le Carré’s Smiley series (and one other book).

If you have never read John le Carré’s Smiley series, you should. Well, you may or may not like le Carré’s writing style. He requires work on the part of the reader, and he can be dense and intense. The stories can be grueling in their detail. But all that makes them very realistic. If you have been keeping up with all the newspaper accounts and findings regarding the Trump-Russia scandal, and if you have been doing so over the last two years, then you are experiencing something much like reading all of le Carré’s novels in sequence, except that a) le Carré is a better writer than reality and b) reality is much scarier.

The Smiley series happens in the context of the Cold War (as do all of le Carré’s books up until the Cold War ends, more or less). You pretty much need to read them in sequence, then, when you are done, watch the various movies and TV series based on them.

I bring this all up now because the seventh book in the series, Smiley’s People: A George Smiley Novel (George Smiley Novels Book 7), is now in Kindle form for cheap.

And, for general reference, John le Carré’s Smiley books in order:

Pre-Karla Trilogy, from the author’s page-turner period:

Call for the Dead: A George Smiley Novel (which is also JlC’s first novel.)

A Murder of Quality: A George Smiley Novel

The Spy Who Came in From the Cold

The Looking Glass War: A George Smiley Novel

Karla Trilogy:

At this point the novels shift in several ways. The dynamic at the British intelligence agency is set up around factions that involve class and ethnic differences (in this case, “ethnic” means one kind of British white guy vs. a different kind of British white guy), and Karla (Soviet) emerges as the main bad guy. The next three books are a trilogy. You can read them without having read the above, but this is the point where le Carré’s writing style changes from something you might really like/not like to something else you might really like/not like, so I’d not skip the titles listed above. (I think what happened is, le Carré made it big enough that he was able to tell his editors what to do, instead of the other way round. Sort of like J.K. Rowling after Harry Potter and the Chamber of Secrets.)

Tinker, Tailor, Soldier, Spy: A George Smiley Novel

The Honourable Schoolboy

Smiley’s People: A George Smiley Novel

Latter day Smiley novels

The Secret Pilgrim: A Novel

A Legacy of Spies: A Novel

Le Carré wrote several other novels (and continues to do so) but they vary a lot in how much I like them. I won’t discuss them here. But, there is one book I want to mention.

If the Smiley series (above) is one of the greatest stories ever told set in the world of spies and espionage of the 20th century, then it is possible that A Perfect Spy: A Novel, by John le Carré, is one of the single best books in this genre (and beyond). It is shocking, wrenching, and fascinating, and while you read it, you should know that it is autobiographical to a certain extent. It is likely that John le Carré, who was (under a different name) an officer in the British intelligence agency MI6, would be dead or in prison for life had he committed all the acts of his counterpart in this book. But otherwise it is pretty autobiographical, including the character of the “perfect spy’s” over-the-top father. I recommend reading the Smiley series first; then, if you like le Carré’s writing, read and enjoy A Perfect Spy.