The Ebola Test: Civilization Fails

We really only know things work when we test them to the limit and see what it takes to make them fail, or nearly fail. All those airplanes and spaceships and regular ships and nice cars that usually don’t fail have a pedigree of prototypes, or of prototype parts, that were pushed until they broke. Chickens fired into running Boeing 757 engines with a special Chicken Cannon. Crash dummies driving vehicles into specially built walls. Rocket engines exploding on test ranges. But many systems are never tested that way, and really can’t be. We build the systems and convince those who need convincing that they are stable, adaptable, appropriately designed, and ready. Then real life comes along and pulls the fire alarm. It is not a drill. The system is stressed, and if it fails, that may be the first time we learn it wasn’t good enough.

Obamacare’s computer nightmare is a good example. It ultimately worked, but at launch it was one of the largest interactive computer services ever built and brought to so many users in so short a time. There is general agreement that the system failed at first because it was built improperly, but I don’t think that is necessarily the case. It may simply be that we can’t know whether such a large and complex system is going to work when it is deployed; we should probably expect failure, and we should probably be ready to jump in and patch and repair and redo as needed. And, as a society, be a bit more grown up about the failure.

Three systems have been tested by the current Ebola outbreak and found wanting. One is the system of rational thinking among people. That is just not working very well. We have people in villages in West Africa thinking that the health care workers who have come to help them are the cause of the scourge. We have tin-hat-wearing Internet denizens insisting that Ebola has already gone airborne, and that the US Government has a patent on the virus, and somehow it all makes sense, thanks Obama Benghazi! The failure of rational thought, a system supported by homegrown culture and formal education, has been stressed and found wanting. We are not surprised, of course. I bring it up mainly to point out that this is a general human failure, not just a failure among the victims in Africa, who are so easily and overtly blamed.

The global public health system has been tested and proved an utter failure. The WHO and the CDC and the rest have done a pretty good job with earlier, smaller outbreaks of Ebola and other diseases, when they can fly in more people than even live in some remote African village, and when the hardest part of the mission is most likely the logistics of getting to the field. That has been facilitated in the past by on-the-ground aid workers, missionaries, and in some cases public health researchers who already knew the terrain. But they had a plan, they had gear, and it all mostly worked very well. We assumed the plan and gear and expertise and personnel were in place for a major outbreak. They weren’t. That system has been tested and it failed.

And now we are seeing a third system show itself to be a failure, and this one is actually kind of surprising. In discussions of the problem of screening possible Ebola carriers coming into the US on planes, we learn that there isn’t a way to keep track of people flying to the US from other countries. From CNN:

“All options are on the table for further strengthening the screening process here in the U.S., and that includes trying to screen people coming in from Ebola-affected countries with temperature checks,” a federal official said… “It’s not as easy as it sounds. There aren’t that many direct flights from Ebola-affected countries to the U.S. anymore. Many passengers are arriving on connecting flights from other parts of the world, and then they come here, so that makes it more of a challenge.”

So, a couple of dozen well funded and well trained terrorists get on airplanes and destroy the World Trade Center and mess up the Pentagon, etc. This makes us consider more carefully the threat of terrorists attacking the US. We set up draconian laws and expensive systems whose net effect is to measurably remove freedoms from Americans, annoy people in other countries, and nudge us closer to a police state than ever before. We’ve even closed the border with Canada to anyone without a passport, and even then, US and Canadian citizens can no longer assume they can travel freely back and forth. We fly drones over villages in other countries and blow people up (it’s OK, they were all bad), and we keep closer track of everything, all the time, everywhere, than ever before.

But we can’t tell where a person getting off an international flight originated? Wut? I would have thought that would be the number one thing implemented as part of the Homeland Security Upgrade. The first thing.

Homeland Security in the US, the biggest, shiniest, newest system on Earth, fails the Ebola test.

In some ways, that is actually a bit comforting. But it is also terribly annoying.


41 thoughts on “The Ebola Test: Civilization Fails”

  1. Re No. 3, here you are:

    Airline companies! You have 1 week to begin supplying country-of-origin information (which we all know you have in your computer systems) on every international traveller, or…

    …those possessing West African passports will be put on buses from their arrival terminals and driven to your corporate headquarters for a free tour and a handshake with your executives.

    There. Fixed that for ya.

  2. “It’s in the computer.” Getting it out, organized, and reported is merely a matter of *motivation*.

    But we know that in the case of this industry, motivation takes the form of tombstones.

    ::FAIL!::

  3. “But many systems are never tested that way, and really can’t be. We build the systems and convince those who need convincing that they are stable, adaptable, appropriately designed, and ready.”

    I agree with you, and I think computer scientist Edsger Wybe Dijkstra explained why complex computer-based systems still suffer from this problem. In “On the cruelty of really teaching computing science” (1988), he wrote:

    A number of these phenomena have been bundled under the name “Software Engineering”. As economics is known as “The Miserable Science”, software engineering should be known as “The Doomed Discipline”, doomed because it cannot even approach its goal since its goal is self-contradictory. Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter “How to program if you cannot.”
    http://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD1036.html

  4. There exist some very good public health systems that are robust and reliable. Drug trial procedures, for example, are very good.

    When GMO cheerleaders insist that these protocols not be followed for corporate-patented technology duplicitously called both “natural food” and “proprietary secrets”, their boo-hooing over others’ lack of “rationality” rings a bit hollow.

  5. They’re real chickens, cosmiccomics. In fact, they don’t even thaw them out! Water on its own is sort of a shear-thickening fluid {stick your finger in, no problem; slap it (the water, not your finger) and it hurts}. At those cannon speeds, it doesn’t matter that they’re a little undercooked.

    ^^ I know that’s true for windscreens; maybe not for engines.

  6. Now would be the perfect time to ‘opt out’ {we’ve got an opt-out!} of the body scanners (X-ray and THz) and see if rubdowns, cavity searches, and sniffing liquids are still only for ‘mericuns and their kids.

    WTC7 won’t go away.

  7. Buck, I probably should have been more narrow or explicit in my definition. I’m talking specifically about systems that are designed to handle large-scale problems that arise suddenly (like starting enrollment in the largest health care insurance system ever and expecting it to be mostly done in a matter of months) and that can’t really be tested until D-day. Imagine no one ever having done a large-scale double-blind study, then implementing the largest one ever. It would not go too well.

    1. Hi Greg,

      With that clarification, I don’t think the Obamacare computer services system is a very good example, since it was set up to offer a portfolio of services. We would probably be better served invoking the customer support systems needed to deal with massive numbers of diverse problems. True, these would require IT support (which was lacking), but even so, I doubt the assertion that “real” (i.e., severe and robust) testing was not possible.

      As the primary designer of Citigroup’s e-commerce division (now the largest in the world), I can say my work frequently does not allow us the luxury of such assertions/assumptions.

  8. cosmicomics: that’s interesting.

    Yes, they are real chickens. In fact, my brother-in-law worked at one time on that team. And yes, they are frozen, as Tim stated. They used to use cooked chickens but the engineers kept eating them, I heard. (Only kidding.)

  9. Civilization Fails? Let’s see. In response to the Ebola outbreak, the government of Cuba sent hundreds of trained doctors into Liberia. The government of the U.S. sent 3000 armed soldiers. Which of these two is the ‘civilized’ response?

  10. Buck, I think you are speaking of what could have happened rather than what did happen.

    Again, a clarification: I’m not suggesting that it is inevitable that large, complex, not-yet-deployed systems have to fail. Just that they sometimes do.

    I don’t know about Citigroup’s e-commerce example. I can cite PeopleSoft, though. This was a huge enterprise system deployed at the UMN to handle everything from HR needs to registrar needs (which are at least as complicated as HR needs) to library needs. When it was deployed, more or less all at once, it was a disaster. Paychecks not sent out, students unable to register because they had huge library fines that did not exist, etc. After a year and a half it calmed down, but it was bad. (And I’m not speaking from personal end-user experience, which can be biased. I was on the two oversight committees for that work, and heavily involved in using the registrar end of it, and a few other features.)

    Was Citigroup’s case a large-scale, short-term deployment of a huge system that wasn’t there before, or was it more incremental?

    1. Hi Greg,

      The Citigroup deployment was huge, and by standard measures far more complex than a brand-new system, because existing, incompatible systems were already in place. A system to unify global operations was widely considered impossible by most national heads and Board members.

      Proving them wrong was my job.

      My objection is not to the relatively mild claim that solid testing often isn’t done. We probably agree that good testing is the exception rather than the rule.

      However, the first paragraph seems to make the stronger claim that there exist “many systems” which simply *can’t* be robustly tested in a manner sufficient to identify stress or failure points, and I don’t believe that’s true.

      For example, I doubt that UMN’s PeopleSoft could not have been tested in this manner, while I’m happy to acknowledge the weaker claim that it *wasn’t*.

      If there are purported examples of the stronger claim, I’m very interested, since my job usually depends on managing risk, and in my biz we study root causes a lot. Inherent complexity is no excuse! 🙂

  11. OK, so that is an interesting and important question: the difference between “can’t” and “aren’t”.

    Either way, there is the phenomenon of “we know it will work” without suitable testing, whether because the testing wasn’t done or because in some cases it can’t be.

    Likely it is theoretically possible to pre-test every system, but inadequate testing isn’t just a function of not doing it or not knowing how; some systems may have built-in forces that work against knowing in advance how something will come out.

    1. Focus on the existence of uncertainty typically seems to be used to justify inaction, something we might want to be instinctively skeptical of.

      If there’s a complex system with uncertainty which we cannot take as a prompt to build more flexible response plans, I’d be interested to learn of it.

  12. Greg, the PeopleSoft implementation manual has you covered:

    “System failure can be difficult to overcome.”

    (I used to collect PS literature inanities – that one’s my favourite.)

  13. Some systems can only be tested “by proxy” — that is, by making multiple instances and testing some lots, then “shipping” the other lots (untested) if the tested ones passed. (A minimal sketch of this follows at the end of this comment.)

    Example: Rockets/rocket engines, the Mars descent vehicles, the lunar landers, etc. Anything where “use” causes the article to either be destroyed or so fundamentally altered that you can never “go back to square zero” with it.

    These systems are very complex themselves. (Google “seven minutes of terror” to get an idea.)

    When you put a man on top of a rocket and light it up, you damn well better “know it will work” even though you can in no way test his actual vehicle beforehand.

    Failure is not an option… when the world is watching and national interests are on the line. How motivated are you?
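
    To make the idea concrete, here is a minimal sketch of that kind of lot-acceptance testing, in Python. The lot size, sample size, defect rate, and zero-failure acceptance threshold are illustrative assumptions, not anyone’s actual QA procedure:

        import random

        def destructive_test(article):
            """Test the article to destruction; it cannot be shipped afterward.
            The outcome is simulated here with a hidden defect flag."""
            return not article["defective"]

        def acceptance_sample(lot, sample_size, max_failures=0):
            """Destructively test a random sample from the lot; 'ship' the
            untested remainder only if failures stayed within tolerance."""
            indices = set(random.sample(range(len(lot)), sample_size))
            failures = sum(1 for i in indices if not destructive_test(lot[i]))
            remainder = [a for i, a in enumerate(lot) if i not in indices]
            return failures <= max_failures, remainder

        # A lot of 100 engines with a hidden 2% defect rate.
        lot = [{"defective": random.random() < 0.02} for _ in range(100)]
        accepted, shipped = acceptance_sample(lot, sample_size=10)
        print(f"lot accepted: {accepted}; shipped untested: {len(shipped)}")

    The catch is that a small sample gives only probabilistic assurance: with a 2% defect rate, a sample of 10 will usually pass while a bad engine still ships in the untested remainder.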

  14. We have a “system of rational thinking”?

    I know I’d like to believe that, which is why I do not watch TV, but my power for self-deception is limited and so I am sceptical about there being any rational basis for my optimism.

    I don’t see any “system of rational thinking” delivering us government, nor do I see it present in the media drivel that is presented to us and informs not just what people believe but even the conversations they think we should be having.
    I see academia lost in a tsunami of navel-gazing while rationalists such as Michael Mann have to fight tooth and nail against deceptive propaganda that paints them as malefactors.

    Incidentally, I quite enjoyed this article:
    http://20committee.com/2014/10/03/the-ebola-crisis-and-medical-intelligence/
    And I’m sure you will enjoy reading yet again that we apparently fear Ebola “going airborne”.

    1. Craig,

      Your comments suggest you would reject the existence of frameworks within which producing deceptive propaganda (etc.) is rational.

      Is that a fair interpretation?

  15. I think it is irrational behaviour.

    After all, in formal religion there is structure and method as well as hierarchy, but the underpinning motivation is irrational; therefore all output is logically fallacious (even when occasionally, accidentally, correct).

    The amorphous collective of anti-science drivellers may include some minority of people who actually have a plan, even a plan that they think addresses their best interests, but I do not believe they have correctly identified their best interests, and their use of dishonesty and fallacious logic paints a big picture of utter irrationality.

    1. I would suggest any motivation at all could be plausibly categorized as irrational, if we regard motivation as a desire.

  16. And of course these people enjoy pretty solid institutional support within our society, which is what makes me doubt Greg’s “system of rational thinking” is much more than aspirational.

  17. As for other uses of “deceptive propaganda”, I have mixed feelings. Chapter 6 of le Carré’s “The Secret Pilgrim” describes how I am inclined to think about it.

  18. Perhaps classification by the time frame under consideration?

    Many self-centered thinkers are thinking rationally for short-term gains — even if, when considered in the longer-term “what then?” time frame, their thinking is quite irrational.

    The climate change deniers are just such people: crush the policy changes that climate research indicates are necessary, in order to achieve the short-term gain of not having to give up cherished lifestyles. Rational…

    …Until you begin considering the longer-term “what about when all the coasts flood and species go extinct and crops fail later?”, which shows that ultimately, their thinking is irrational.

    This is childish thinking, but remember that to the child it makes perfect sense. “And wishing makes it so.” It’s the adults in the room who have the perspective and wisdom to know otherwise.

    One of our problems with climate change is that we have too many children who can vote, write blogs, libel scientists, and influence politicians in their short-term rational ways — which are long-term irrational to the mature folks who know how to plan for consequences.

    I do hope the adults win this one.

    1. @Brainstorms It hardly seems irrational to point to the most complex climate change models, with their monstrously large error bars, and observe that we can’t say what weather will result from climate change.

      In the denier camp, it appears difficult to provide objective criteria that, evenly applied, distinguish between “rational” views we like as good science and “irrational” ones we don’t. AFAICT there exists a Goldilocks problem, where we can include only the instances we want, and exclude only those we want.

  19. >Do we buy insurance?
    Some people, some of the time.

    > Why?
    A million reasons, or no reason other than “feeling”.

    >Is it rational to do so?
    Sometimes.

    >To avoid doing so?
    Sometimes.

    Your point?

  20. If it is rational to purchase insurance (i.e., planning for an uncertain outcome), then the “large error bars” should call for “purchasing insurance” to cover those unknown consequences when it comes to what climate change may bring. Especially when high stakes are involved.

    In this case, we need not worry about the Goldilocks problem; we acknowledge that there exists a risk of (great) loss, and so we spend money (in the form of public policy) to mitigate what research indicates has a significant chance of happening.

    The deniers are of the camp “I don’t want to buy insurance, I’d rather spend the money on myself.” Short-term thinking, but may have catastrophic long-term consequences.

  21. The same point seems to hold for those in government charged with protecting against an Ebola epidemic. Resistance to spending the money to create a sufficient in-place system to deal with the threat, vs “I don’t want to spend the money” to ensure public safety.

    The “it can’t happen to us” mentality. Just like AGW deniers.

  22. You are still asserting the quality in question: rationality.

    What criteria do we use for a decision to qualify as “rational”?

    Bigfoot & ghost hunters can (and do) plan long term.

  23. I’ve been asserting that there is no universal, absolute way to define or use “rational”. It’s too subjective and “in the eye of the decider”.

    A schizophrenic may think he’s rational — but his therapist would not agree. Someone watching their discussion would likely choose sides on this issue. Who’s correct?

    So I suggest that planning for ways to mitigate undesirable events that have a (perceived) significant chance of occurring is being rational. Denying that there is any need to address this *in the face of evidence to the contrary* (first- or second-hand) is irrational.

    In the U.S., there are laws that compel automobile drivers to purchase insurance even if they are irrational enough to deny that there is significant risk involved in driving. There is too much evidence at hand.

  24. I really don’t think that pointing at some large error bars on some of the results of rational, fact-based analysis in order to assert “uncertainty” as a prescription for inaction is in any way rational.

    Don’t forget – error bars cut both ways, and this is what the anti-science mob refuses to accept.

  25. A schizophrenic might even consciously modify his behaviour in order to deceive his doctor into a course of action such as reducing his prescription. The surface motivation is an unwillingness to submit to medication, leading to an apparently rational course of action, but the underlying motivation is a psychological condition known as “denial”, which is irrational.

  26. Buck:
    “any motivation at all could be plausibly categorized as irrational, if we regard motivation as a desire.”

    Great observation.

    This has been the natural state of thinking in our species. Only in the last 350 years has a movement developed which aims to convert the natural human thinking process to a new method, one that discards irrationality in favour of an objective scientific method.
    A million years of evolution is not going to vanish quite so quickly.
    It is notable that even today a scientist is capable of, for example, running an irrational anti-science blog where right-wing fruitloops and their nutty ideas are courted and rational, objective scientists and their science are denigrated.
    If a trained scientist can behave like this, what hope for the rest of humanity?

  27. This is why I have a problem with the binary label of Rational vs Irrational; it’s so often used as a cudgel against anyone who doesn’t agree with the speaker. Is it rational to buy insurance? If your goal is to maximize the statistically average amount of money you will end up with – not my goal in life, but what some economists and their ilk in other contexts seem to think is the only thing that counts – then no, it is not! After all, the premiums charged by your home or life insurance company include all the costs an average policy-holder will incur, plus a fat profit. That means that the average policyholder, like the average casino gambler, must lose money. (Note that car insurance is different in that an auto driver risks inflicting a future loss on an unconsenting third party, which he is morally and legally bound to pay to alleviate. I speak only of insurance that is meant to protect the policyholder himself or his own family.)

    OTOH: if your goal is to avoid the possibility of incurring devastating financial losses, even by accepting the certainty of a much smaller loss, then it is totally rational to buy insurance. But what if you have a working-class income and have a certain amount of money that can be used to protect you against some possible future loss or to meet real current needs, but not both? Then you have a value judgement to make, just as you did in making the original decision to prefer risk-avoidance over average ultimate savings. The same applies to decisions about playing the lottery, or accepting medical interventions that save a few lives while causing nonfatal harm to many more, or any number of situations in which reasonable people’s goals might conflict. Neither logic nor Reason can dictate which goals we should prefer; they only help us pursue the goals we have settled upon, with the aid of our emotions and values, more efficiently.

    1. Jane, I would assert that reason and logic can “dictate” decisions based on the existence of tools which have been designed to help us with just such problems.

      In the field of portfolio management, for example (a type of decision science), it is the job of people in my line of work to advise leaders on how to make such evaluations.

      In fact, your post points out examples of some of the critical factors, such as relative risk-aversion bias, probabilities of future states of nature, and downside risk severity.
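
      To make that concrete, here is a minimal sketch of the expected-value versus expected-utility comparison at the heart of jane’s insurance example, in Python. The wealth, premium, loss size, loss probability, and logarithmic risk-aversion model are illustrative assumptions, not actuarial figures:

          import math

          wealth = 100_000   # current wealth
          loss = 80_000      # potential catastrophic loss
          p_loss = 0.01      # chance the loss occurs
          premium = 1_200    # above the expected loss of 800 (insurer profit)

          def expected(outcomes):
              """Expected value of (probability, amount) pairs."""
              return sum(p * x for p, x in outcomes)

          uninsured = [(p_loss, wealth - loss), (1 - p_loss, wealth)]
          insured = [(1.0, wealth - premium)]

          # In expected dollars, the average policyholder loses (jane's point):
          print(expected(uninsured), expected(insured))   # ~99200 vs 98800

          # Under a risk-averse (logarithmic) utility, insurance wins anyway:
          eu_uninsured = expected([(p, math.log(x)) for p, x in uninsured])
          eu_insured = expected([(p, math.log(x)) for p, x in insured])
          print(eu_insured > eu_uninsured)   # True

      Swap in a different utility function, or a binding budget constraint, and the “rational” choice flips, which is jane’s point exactly: logic optimizes toward goals we have already chosen.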

  28. Mother Nature will, um, “bake it out of us”, Craig.

    Well, I hope not. But that’s not rational, I suppose, given the evidence of sufficient mitigation action being taken…
