Tag Archives: Linux

This year’s biggest ripoff is also this year’s best gift idea

Here’s an idea. You have an old beat up computer running, say, Windows. You want to make it faster, crisper, more secure, and generally, better.

What can you do short of buying a new computer? Well, install Linux. Linux is so much more efficient as an operating system, your computer will simply run better. Guaranteed. Continue reading This year’s biggest ripoff is also this year’s best gift idea

A really good computer setup

I’ve reached a very nice resting point in the ongoing effort to develop a very useful, powerful, stable, and cool computer setup.

This started a while back when I built a computer. In particular, this computer. There are several advantages to building a computer. You can save money, or get more bang for your buck even if you don't end up paying less. On the saving money side, maybe you have components on hand that you don't have to buy. I did, mainly mass storage. (The case I already had, which I thought would save me some money, ended up not working out.) You get more bang for the buck because the parts you buy will be better than the ones in the equivalent but cheaper off-the-shelf computer, and you'll have more control over what happens in future upgrades. Continue reading A really good computer setup

The KDE Dolphin File Manager

I’ve noticed that many file managers in Linux are changing in the way many Linux desktop environments are changing. They are becoming simpler. That is a bad thing. File management has not gotten simpler. If anything, it has gotten more complicated. I need a powerful tool, not a dumbed down stick. That’s why I like the KDE file manager, Dolphin.

Here are a few tips and tricks for tweaking Dolphin.

Some of the most obvious things you can do are right in front of you, but this will depend on exactly what version of Dolphin you have. When I open Dolphin, I see a "preview" and a "split" icon on top. The utility of these buttons is obvious, and both are useful. Note that with preview turned on you can use the scaling bar at the bottom of the window to change the size of the previews.

I also see a “control” button over to the right. That leads to the menu that does most of the work in configuring the software. Play around with it. Below are a few specific suggestions that generally involve diving deepish into the menu structure.

Also, don’t forget to right click on everything until you’ve found out what all the right clicking mojo is. There is a lot of right-click mojo in KDE generally, and in Dolphin in particular.

Adding information to icons in Icon View

Dolphin has three modes for viewing files. One is "details," and to be honest, that's all I want most of the time. But if you use the icon view you may want to see, for each item, the file size under the filename (below the icon) and the number of files under each folder icon. Or you might want other info, like creation date or file type. If you do, change the view mode to Icon, then under Control pick "Additional Information" and click whatever info you want to appear with the icon.

Obviously this also works with the detailed view to give you new columns on which to sort things. It also works with “compact” mode.

Making all folders behave the same way

One of the nice things about Dolphin, and this is true of many file managers, is that you can set individual folders to have specific behaviors, and those behaviors will still be there when you reopen the folder later. You can always change the behavior if you need to. But an even nicer thing in Dolphin (and not in many other file managers), for some people, is this: you can tell Dolphin to use the same settings everywhere by going to Control, then Configure Dolphin, then, under the "General" page, the Behavior tab, and checking "Use common properties for all folders" as opposed to "Remember properties for each folder."
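If you prefer poking at config files to clicking through menus, the same switch lives in Dolphin's configuration file, ~/.config/dolphinrc. The key name below is from memory and may differ between Dolphin versions, so treat this as a sketch and check your own file first:

    [General]
    GlobalViewProps=true

Set it back to false to return to per-folder memory.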

Personally I prefer to have each folder remember its properties, but you may prefer the consistency across all folders.

Group folders and files

Under “Control” you can select “show in groups.” Interesting things happen. Mainly, the files and folders get grouped up either alphabetically or by time. It is kinda freaky. You might like it.

Panels that show more information

Under Control >> Panel you can select several cool options.

A "details" panel appears on the right, and is like a "properties" tab, with information like file size, type, when the file was modified, etc. There is additional information that depends on file type: a picture will show the dimensions of the picture, a video the length of the video, and so on. This panel also allows you to add tags and comments, and to rate a file.

A "terminal" panel appears below the file manager. As you switch around between directories, the terminal changes directory along with you (and you can even see the cd [directory] command being executed).

The "Places" panel is probably on by default; that is the strip to the left of the icon or details area. A navigation tree is also a panel, and it appears on the left.

You can resize the panels. And, if you unlock them (right click) you can move them around to a certain extent!

Depending on where in Dolphin you right click, you can bring up a menu with the various panels listed, as well as the main toolbar.

So, one thing you can easily do is set Dolphin to show previews, open a folder with pictures you want to review, then turn off all the panels so your pictures can take up maximum room on your screen. Then you've got what acts like a dedicated photo album viewer, sort of.

KDE Window Behavior

I usually do nothing fancy with my windows. I open them. Later, I close them. In between, I may maximize them or unmaximize them. I move them around the screen.

The two fancy things I do are: 1) "maximize" a window onto a portion of a screen using drag magic of some kind (most Linux desktop environments have this), or 2) tell a window to move itself to a different workspace.

Most, nay, all, desktop environments have a larger set of fancy window behavior controls than this. The whole idea of controlling, or even having, windows in which software runs is fundamental to the *Nix environment, and Linux is the modern and most widely used version of *Nix. But I think it is possible that KDE has the mostest and bestest of these abilities.

For example, you can right click on the top bar of a window and pick "more actions" from the context menu. This gives you "move," "resize," and such, which you have access to in other ways. But it also gives you check boxes to "keep above others" or "keep below others," which is very handy when your multiple monitors start to fill up with stuff because your workflow has gone fractal.

Burrow deeper and you can get to “special window settings.” This allows you to control behavior of a particular window in very detailed and even scary ways. You should probably not do any of this, but you should have a look.

In between these two cantos of configuration, you can find "Window Manager Settings" in the window title bar context menu. This allows you to mess with window decorations, screen edges, desktop effects, etc. You can get to all of this via other configuration tools, but this is a handy way to make adjustments on the fly while you are actually using software.

One thing you may want to adjust here is when and how windows become translucent. I never used that feature before, and having windows go semi-translucent while being moved is the default in KDE. I think people like this because it is a quick and dirty way to see what is behind the window. I find it a bit disconcerting because I'm sometimes still reading what's in a window while I'm moving it. I wondered if there is a way to make a window go translucent only on demand. Probably. OK, just checked: there is.

A key feature you will want to adjust is active screen edges and corners. Here you can turn on or off features that maximize, either to a full screen or a "tile," the window you drag to an edge. This allows for quarter tiling. Right now I can maximize a window by dragging its title bar to the top middle of a screen, tile it over the left or right half by dragging it to the appropriate side edge, or quarter-tile it by dragging it to a corner. It is a bit funky when I drag towards the second monitor; KDE can get confused about which monitor to tile the window on.

Windows. Not just for Windows any more. Never were, really.

KDE Icon Magic (Linux)

In some Linux desktops, what you get is what you get when it comes to desktop icons.

You can usually specify whether you want network locations or storage devices shown as icons, or maybe a trash can, but not much else. This is where Linux looks stupid compared to at least some earlier versions of Windows and the Mac, where you can do more with icons.

But in KDE, icons are very very configurable.

(See this post for a short diatribe on why you should try KDE even if you haven’t considered it lately … I myself am a recent convert to this particular desktop.)

In KDE, you can right click on the desktop, then choose "icons" from the context menu.

You can then arrange the icons horizontally or vertically on the screen.

You can align them to the left or right of the screen.

You can sort them by the usual sorting criteria.

You can specify sizes, ranging from “tiny” to “huge.”

And you can lock them. When unlocked, you can move them around.

The images shown here are exemplars of some of these options, in various combinations.

Using KDE

I’m pretty sure the very first Linux desktop I ever used was KDE. I didn’t realize that it was actually a bit painful until I later discovered Gnome. I switched to Gnome because it worked better for me, and seemed to use fewer resources.

I never left Gnome, but Gnome left me. I won’t go into the details here, but as most Linux users know, Gnome 2.x was the high point of that particular world of Linux desktops (see THIS POST for definition of term “desktop”). With the demise of Good Gnome, mainly caused by Ubuntu (a distribution I otherwise have a great deal of respect for), I poked around among the various Gnome 2.0 desktop alternatives. Among them eventually emerged Mate, which at first, I thought was great. I used it as my main desktop for several years, until just recently.

But Mate had two major flaws. The first flaw was an attempt to simplify everything. Mate never made an application of its own to be part of its desktop environment; rather, it took old Gnome applications, then broke them slightly or failed to maintain them (though the Mate project developers did rename them all, to take credit for them and add confusion). The second flaw was not fully maintaining the parts of the desktop environment it was responsible for, or fixing basic problems. For example, it has always been true that most people have a hard time grabbing window boundaries with the mouse in Mate. To fix this you have to go down into configuration files and manually change numbers. That is a bug that should have been fixed three years ago. I can only assume that the maintainers of Mate don't have that problem on their particular desktops.

Among the main functions of a maintained desktop environment is keeping basic system configuration tools clean and neat and functional, but Mate messed that up from the beginning. I vaguely remember that an early version of Mate left out the screen saver software, so in order to have or use a screen saver, you had to install the old Gnome screensaver. The configuration and settings capabilities of the Mate desktop are spread across three or four different applications, at least one of which you have to find out about, find, install, and learn to use yourself just to carry out simple functions. Basic categories of settings are distributed among these applications in a haphazard way. To do basic things like change the desktop appearance or mess with screen savers, you have to be a power user.

But I thought Mate was still better than KDE, partly because KDE was so strange. For one thing, single clicking in KDE was like double clicking everywhere else in the universe. Yes, you could reconfigure that, but it was still strange. The nature of the desktop, of panels, of widgets, of all of it, was just a little odd for me. Everything felt a little funny.

But over time, KDE did two things that Mate did not do. First, KDE continued to maintain, develop, improve, and debug all of its software, making it more efficient and powerful. Instead of key software components going brain dead, losing maintenance, or losing functionality like in Mate, KDE software got more powerful and more useful. At the same time, the software, and the overall desktop environment, got slicker, cleaner, more like the old Gnome 2.0 in many ways, leaner, and less strange (single clicking is no longer the default!).

In the old days, it was probably true that Gnome used fewer of your computer's resources than KDE. But the most current versions of Gnome and Gnome-like alternatives such as Mate probably use something like 25% more resources than KDE out of the box. And KDE out of the box is more configurable and overall cooler than Mate and many other desktops.
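Don't take my word for the numbers; they vary by distro, by version, and by what you have running. A rough way to compare is to log into each desktop fresh, open a terminal, and look for yourself (these are standard commands, nothing KDE specific about them):

    free -h                            # total, used, and available RAM at a glance
    ps aux --sort=-%mem | head -n 15   # the fifteen biggest memory consumers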

Here's the key thing. When I first started using Linux, the feature I fell in love with was the workspace switcher, which lets you maintain a number of virtual desktops, each with various things open on them. This is how I organize my work. It isn't all that systematic, but in a given day, I'll organically end up with all my stuff related to one project on one virtual desktop, and another project on a different virtual desktop. Gnome and the Gnome variants have actually moved away from this standard. You can still have virtual desktops in current Gnome, but they are not there by default. Mate still has them by default, but I don't trust the Mate maintainers to maintain that.

But it is easily done in KDE, and with extra (mostly unnecessary) perks. In the KDE desktop environment, I can have the desktop background be different on my different virtual desktops on my desktop computer. Which sits on my desk. I can have other things be different on the different desktops. For me, this doesn’t do much because, as noted, my virtual workspaces evolve organically over time frames of hours or days. But someday, I may want a special desktop configured all special for some special purpose.

A couple of months ago, I had some problems with Mate. I uncovered an important and easily fixed bug. I told the maintainers about it. They told me to screw off. So I told them to screw off, and I started to explore other desktop environments. After realizing that they had been too rude to me, the Mate maintainers, to their credit, did fix the bug and tried to make nice. But I had already moved on. It did not take me long to get KDE up and running and configured as I like. And, I’ve hardly explored all the cool stuff it can do.

But I am exploring it now, and I’ll keep you posted.

See: KDE Icon Magic

Things to do after installing Ubuntu Mate 18.04

1) Uninstall it. It is flawed in key ways. It will be difficult to get your Dropbox working, if you use that, and installing software from .deb files is not automatic and requires hacking (the usual workaround is sketched after this list). There are some other problems too.

2) Check back here in a few months, see if I’ve updated with good news. Meanwhile, get back to whatever you were doing, because you don’t want to be doing this.
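For the record, the usual workaround for the .deb problem, assuming the command line tools on your install are behaving (which was not a given for me), is to hand the file straight to apt so it can pull in dependencies; the filename here is just a placeholder:

    sudo apt install ./some-package.deb   # the ./ matters; apt resolves dependencies for you
    # or the older two-step route:
    sudo dpkg -i some-package.deb
    sudo apt -f install                   # fix up any missing dependencies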

Added:

I tried to get info from Mate about the problems I encountered. They really provide no way to do that, so I tweeted about it, mentioning their handle so they would see it.

They tweeted back two responses. The first one said a combination of "nothing is broken" and "tough luck." The second tweet is shown in this screen grab:

That is a moving GIF with the boy’s eyes blinking. It is intended to mean, “tough shit, sucker!” or words to that effect.

(I provide a screen shot because I assume cooler heads will prevail, maybe, at Mate HQ, and the immature dickhead who tweeted that will be countermanded. Or maybe not. We don’t ever hear anything good about their development community. Only bad things.)

So, don’t look for an Ubuntu Mate explainer on this blog.

Preparing to install Xubuntu right now!

After you install Ubuntu 18.04 Bionic Beaver

Installing Bionic Beaver

I’m not going to tell you how to install the latest stable release of Ubuntu’s Linux desktop. For that, just go to Ubuntu and follow the appropriate instructions. I recommend using a bootable USB stick, and how you manage that depends on exactly what computer you are going to make it on. All three major operating systems have their own way of doing it. A quick google search will find simple instructions. The general pattern is to download the “DVD image” onto your hard drive, put a large enough USB stick that contains nothing of value into the slot, open the correct program, and tell it to put the DVD image on that stick in bootable form.
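On a Linux box, for instance, the "correct program" step can be a single command. This is a sketch, not a recipe: the image filename is whatever you downloaded, and /dev/sdX must be replaced with the actual device name of your USB stick (check with lsblk first, because dd will cheerfully overwrite whatever disk you point it at):

    lsblk                                    # identify the USB stick, e.g. /dev/sdb
    sudo dd if=ubuntu-18.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress
    sync                                     # make sure everything is written before you pull the stick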

I do have this advice. If you ultimately want a certain desktop (such as XFCE or Mate or KDE or whatever), use the Ubuntu "flavor" built around that desktop; things will go more smoothly. For this particular iteration, I decided to install the main Ubuntu desktop, and I'm going to try Gnome 3 for a while and see if I end up liking it.

Ubuntu 18.04 walks away from 32-bit support, and ditches Unity. The default desktop is Gnome, but this is the modern Gnome that is not that different from Unity. I generally prefer a Gnome 2.0 style desktop, so I usually use Mate (pronounced "mah-tay").
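If you are not sure whether the machine you're about to upgrade is running a 32-bit install, you can check before you commit; either of these will tell you (i686 or i386 means 32 bit, x86_64 or amd64 means 64 bit):

    uname -m                    # the kernel's architecture
    dpkg --print-architecture   # what the package system thinks it is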

Post Installation

Most of these suggestions are pretty standard for any install of any Linux system. Also, you can ignore much of this. Continue reading After you install Ubuntu 18.04 Bionic Beaver
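The full list is in the post itself, but the one step that belongs on any such list, sketched here, is bringing the fresh install up to date before you do anything else:

    sudo apt update        # refresh the package lists
    sudo apt full-upgrade  # apply all pending updates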

I knew it, I saw this coming! (Microsoft-Linux)

Some time ago it dawned on me that a future Microsoft operating system, a version of Windows, would be based on Linux. It only makes sense. There is no better operating system to base a desktop, server, or other specialized OS on, for normal hardware. Eventually, this would dawn on Microsoft. I thought it might have a few years ago, when Microsoft went from being openly aggressive against Linux and OpenSource, to being neutral, to being nice, to eventually contributing.

And now… Continue reading I knew it, I saw this coming! (Microsoft-Linux)

Writing Software for Writers

This is especially for writers of big things. If you write small things, like blog posts or short articles, your best tool is probably a text editor you like and a way to handle Markdown. Chances are you use a word processor like MS Word or LibreOffice, and that is both overkill and problematic for other reasons, but if it floats your boat, happy sailing. But really, the simpler the better for basic writing, composition, and file management. If you have an editor or publisher that requires that you only exchange documents in Word format, you can shoot your Markdown text file into Word format easily, or just copy and paste into your word processor and fiddle.

(And yes, a “text editor” and a “word processor” are not the same thing.)
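One common way to do that Markdown-to-Word conversion, assuming you have pandoc installed (it is in most distros' repositories), is a single command; the filenames are placeholders:

    pandoc chapter.md -o chapter.docx   # Markdown in, Word document out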

But if you have larger documents, such as a book, to work on, then you may have additional problems that require somewhat heroic solutions. For example, you will need to manage sections of text within a large structure, move things around, leave large sections undone, and finally settle on a format for headings, chapters, parts, sections, etc., after trying out various alternative structures.

You will want to do this effectively, without the necessary fiddling taking too much time, or ruining your project if something goes wrong. Try moving a dozen different sections around in an 80,000 word document file. Not easy. Or, if you divide your document into many small files, how do you keep them in order? There are ways, but most of the ways are clunky and some may be unreliable.

If you use Windows (I don’t) or a Mac (I do sometimes) then you should check out Scrivener. You may have heard about it before, and we have discussed it here. But you may not know that there is a new version and it has some cool features added to all the other cool features it already had.

The most important feature of Scrivener is that it has a tree that holds, as its branches, what amount to individual text files (with formatting and all, don’t worry about that) which you can freely move around. The tree can have multiple hierarchical levels, in case you want a large scale structure that is complex, like multiple books each with several parts containing multiple chapters each with one or more than one scene. No problem.

Imagine the best outlining program you’ve ever used. Now, improve it so it is better than that. Then blend it with an excellent word processing system so you can do all your writing in it.

Then, add features. There are all sorts of features that allow you to track things, like how far along the various chapters or sections are, or which chapters hold which subplots, etc. Color coding. Tags. Places to take notes. Metadata, metadata, metadata. A recent addition is a "linguistic focus," which allows you to choose a particular construct such as nouns, verbs, or dialog (stuff in quotation marks) and have it all highlighted in a particular subdocument.

People will tell you that the index card and cork board feature is the coolest. It is cool, but I like the other stuff better, and rarely use the index cards on the cork board feature myself. But it is cool.

The only thing negative about all these features is that there are so many of them that there will be a period of distraction as you figure out which way to have fun using them.

Unfortunately for me, I like to work in Linux, and my main computer is, these days, a home built Linux box that blows the nearby iMac out of the water on speed and such. I still use the iMac to write, and I’ve stripped most of the other functionality away from that computer, to make that work better. So, when I’m using Scrivener, I’m not getting notices from twitter or Facebook or other distractions. But I’d love to have Scrivener on Linux.

If you are a Linux user and like Scrivener, let them know that you'd buy Scrivener for Linux if it were available! There was a beta version of Scrivener for Linux for a while, but it stopped being developed, then stopped being maintained, and now it is dead.

In an effort to have something like Scrivener on my Linux machine, I searched around for alternatives. I did not find THE answer, but I found some things of interest.

I looked at Kit Scenarist, but it is freemium, which I will not go near. I like OpenSource projects the best, but if they don't exist and there is a reasonable paid alternative, I'll pay (like Scrivener, which has a modest price tag and is worth it). Bibisco is an entirely web based thing. I don't want my writing on somebody's web cloud.

yWriter looks interesting and you should look into it (here). It isn’t really available for Linux, but is said to work on Mono, which I take to be like Wine. So, I didn’t bother, but I’m noting it here in case you want to.

oStorybook is Java based and violates a key rule I maintain: when software is installed on my computer, there has to be a way to start it up, like telling me the name of the program, or putting it on the menu, or something. I think Java based software is often like this. Anyway, I didn't like its old fashioned menus and I'm not sure how well maintained it is.
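If you do get stuck with a Java application like this that puts nothing on the menu, the generic fix is to write a small launcher file yourself and drop it in ~/.local/share/applications. The jar path below is hypothetical; point Exec at wherever the program actually lives:

    [Desktop Entry]
    Type=Application
    Name=oStorybook
    Exec=java -jar /opt/ostorybook/oStorybook.jar
    Categories=Office;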

Writers Cafe is fun to look at and could be perfect for some writers. It is like yWriter in that it is a set of solutions someone thought would be good. I tried several of the tools and found that some did not work so well. It costs money (but trying it is free) and isn't quite up to it, in my opinion, but it is worth a look just to see for yourself.

Plume Creator is apparently loved by many, and is actually in many Linux distros. I played around with it for a while. I didn’t like the menu system (disappearing menus are not my thing at all) and the interface is a bit quirky and not intuitive. But I think it does have some good features and I recommend looking at it closely.

The best of the lot seems to be Manuskript. It is in beta form but seems to work well. It is essentially a Scrivener clone, more or less, and works in a similar way with many of the same features. In terms of overall slickness and oomph, Manuskript is maybe one tenth or one fifth of Scrivener (in my subjective opinion), but it is heading in that direction. And, if your main goal is simply to have a hierarchy of scenes and chapters and such that you can move around in a word processor, then you are there. I don't like the way the inline spell checker works, but it does exist and it does work. This software is good enough that I will use it for a project (already started), and I do have hope for it.

Using Scrivener on Linux the Other Way.

There is of course a way to use Scrivener on Linux, if you have a Mac lying around, and I do this for some projects. Scrivener has a mode that stores the subdocuments in your project as text files that you can access directly and edit with a text editor. If you keep these in Dropbox, you can use emacs (or whatever) on Linux to do your writing, and Scrivener on the Mac to organize the larger document. Sounds clunky, is dangerous, but it actually works pretty well for certain projects.

Scrivener can look like this.

How to share keyboard and mouse between two computers?

I use two different computers, each with a different operating system, to do my stuff. Actually, I use five, but only two where I would ideally like to switch between them while I’m using them. I’ve experimented with some solutions, so I can offer some advice. Continue reading How to share keyboard and mouse between two computers?

What is New in Ubuntu 17.10, the Artful Aardvark

The next release of Ubuntu, the version of Linux most commonly used, and most commonly thought of, by normal people (and a few others), is set to be released on Thursday, October 19th. The exact set of changes and improvements is not known, but a few key ones are, and some can be guessed at from the multiple pre-release builds.

This is a momentous occasion because this will be the first version of Ubuntu’s main flavor that does NOT include Unity as its default desktop.

If you don't know, Unity was a menu and control system for the desktop, your main interface when working with the computer other than, obviously, while using a particular application. It was the look and feel, the essence, of the operating system. Unity was supposed to unify things, like the diverse features of a typical desktop, across Ubuntu running on a cell phone, a desktop, a laptop, a whatever.

Unity used a modus operandi that many other interfaces were shifting towards. I hear there are versions of Windows that look a bit like this, and Gnome from version 3.0 onwards has had this basic approach. Continue reading What is New in Ubuntu 17.10, the Artful Aardvark

How To Avoid Future WannaCry Style Ransomware Attacks

This is very simple, and it has more to do with the philosophy and marketing of operating systems than with the technology of the operating systems themselves, though the technology does matter a great deal as well. First, let's have a look at how this ransomware attack was allowed to happen to begin with.

The vast majority of affected systems in this latest worldwide cyber attack were Windows based computers that had not been updated with a recently available and easily deployed patch. The attack did not affect other operating systems, and Windows systems that had the recently released security patch were not affected. (I was going to put a link here to direct people to the Microsoft web page with info on what to do if you were attacked, but a minute or two of perusal on the Microsoft site mostly told me about Microsoft's new products, and I did not find any such page. If you have a link, please place it in a snark free comment below.)

Why was the patch not deployed on so many computers? For several reasons.

Some of the operating systems were running under administrative policies that did not allow patching for some reason or another. I've only heard rumors of this, but it sounds like a blind-future style pre-decision, in the same category as other bone-headed human processes like zero-tolerance policies for knives in schools and three-strikes sentencing policies. It works like this: you remove thinking from the process by making all decisions in advance, and then get the heck out of there with limited liability, and whatever happens happens. If you do this you are probably a member of congress or a school board member planning on retiring soon. It never goes well. Telling security IT people in advance what they can and can't do because of HR or personnel regulations is like going to a doctor and telling them, in advance, what your diagnosis and treatment are going to be. You will eventually die of something curable if you do that regularly.

Some of the operating systems were running on computers that are, in theory, never supposed to be turned off. This is similar to the first reason in its stupidity level. For one thing, making it impossible to ever patch an OS is really not smart. For another thing, that computer you plan to never turn off is going to turn itself off now and then anyway.

But it is also bad at another level, the level of the operating system. Windows has operated, for years, under the principle that when enough stuff goes wrong, you turn off the computer and start again, and if that does not work you reinstall the operating system from scratch. Now, I know, you Windows lovers will jump in at this point and tell me that "Windows doesn't work that way any more," but you know what? After decades of hearing how Windows Past is not Windows Present, when it really is, I don't care what you say. Also, actual on-the-ground Windows users have been trained, by Microsoft policy, to reboot or reinstall for decades. Anyway, the point is, Windows cannot be updated on the fly, and thus the system utterly fails in a situation where updating is critical, which by the way is all the time and all machines, because even a computer you use for nothing but curating muffin recipes, if hooked to the Internet (where all the good muffin recipes are), can still be the platform for launching a secondary cyber attack.

Some of those operating systems were in health related fields (referring here to both of these first two excuses) and that is why so many health related facilities were hit initially.

Another reason, which is a bit tricky, is the problem of updating stolen software. If you stole the OS, it might be hard to get an update or patch. It seems like a good idea, from the OS maker's point of view, to withhold updates from pirated copies, since that encourages buying the product and discourages stealing it. Yet many tens of thousands of computers, maybe hundreds of thousands, are currently locked down by WannaCry because they were pirated and not updated. This becomes a public health (cyber-health, eHealth) risk. It is like vaccination. We all suffer because so many others get the disease, even those of us who did not fail to do the right thing.

This is a moment when we look at something like computer operating systems and realize that they are actually a public good as much as, or more than, they are a commercial product. Think of roads and canals in the old days. Roads and canals were often privately owned (as were fire departments and police forces in many cases), and eventually it became apparent that these are all public goods, so they were essentially taken over by the government. Similarly, power companies and railroads: not exactly taken over, but made into quasi-public entities through integration with public agencies and heavy regulation.

I’ve often argued that things like Google, Amazon.com, Facebook, Twitter, etc. have become the equivalent of public goods, like roads and the post office, etc., in a similar way. To some extent, this is also true of operating systems.

There is of course a solution to all of this. What we need is an operating system that is made by the public itself. If all interested parties simply became involved, and maybe large companies with a lot at stake in computing put aside a meaningful amount of their own software development resources, and a few public and private agencies provided some grants and bounties and such, we could have an operating system that was free, easily installed, and updated every week (like, maybe, on Sunday evenings or something) through a very easy and easily automated update system. That would be great.

Ideally, most software would come from well maintained and secure repositories that were checked for malicious code. There could be several different such repositories, more or less redundant with each other but maybe tweaked to cater to different types of users. The added advantage of several different but similar repositories is this: even if some bad code gets into one repository, the fact that different users draw from different repositories would slow its spread.

With an operating system that is free, easy, effective, powerful, flexible, and easily updated, almost every one of the limitations in the way we do things that allowed WannaCry to spread and ruin everything would simply not exist. A few people would be hit, and it would be a minor story.

On top of this, let’s make this new operating system have a few other security related features.

For instance, monitoring code. The way it works now with Windows is that a finite number of paid and, I'm sure, brilliant individuals are in charge of coding and maintaining the operating system, and its updates and patches, while a much larger number of criminal-minded, nefarious, but also brilliant individuals are focused on breaking its security. This means there is an uneven arms race where, day to day, Microsoft will always be a step ahead of the bad guys, except every now and then when the bad guys jump ahead and make a huge mess.

I propose that this ratio be reversed, that the arms race between the good and the evil become totally one-sided in the other direction. Have a very large number of individuals, a portion of the above mentioned community of private individuals and interested corporations and agencies, working on security, swamping out the nefarious bad guys. There would be very few moments when the bad guys got very far ahead of the good guys.

In addition, the operating system itself could have other security related features. For example, the basic tools inside the operating system could be well maintained, highly traditional, really clean and neat code, and free to use. So, for example, the basic tasks that any new software might need are already figured out, so you don't have to write your own new version of the code to do them. This means that new code will generally be fast, effective, clean, easier to maintain, and more secure.

Also, the operating system can work more like a prison than, say, a food court. In a food court, you do what you want to do (eat, meet your friends, hang out) in a rather chaotic environment where you can move freely from place to place. Someone puts their food down on a table to go back to the Azian Kuizine window to get the chopsticks they forgot, and you can grab their pot stickers, sit down at a nearby table, and no one can really figure out that you just stole their food. And so on.

In a prison, you can theoretically leave your cell and walk down the hall to the gym, then go to the cafeteria, then the law library. But the entire route is blocked by a series of doors that only specific people have permission to open, at specific times, for specific reasons. Everything you do requires having permission at every step of the way, and it is all constantly being carefully watched.

Life should be more like the food court. What happens inside computers should be more like the prison.

Of course, by now, most of you have figured out that I’m talking about Linux. Linux is an operating system that is already widely used when certain conditions pertain. Since the Android OS is based on Linux, and the majority of servers run Linux, and Linux is becoming the preferred desktop in China, it may well be that Linux is more widely deployed right now than any other operating system, though most Westerners think of it as nearly non-existent on desktops.

Critical tasks are often trusted to Linux or similar operating systems (Unix, BSD, etc.) because of reliability and security. When efficiency is required, Linux is often tapped because it can be deployed in a very efficient manner. Linux acts internally like the prison, not the food court. The system itself is open source code under constant scrutiny, and most of what runs on it is openly scrutinized as well. Software is usually distributed via secure repositories. The system is free and easily updated; there is no such thing as a pirated copy of Linux. And updates come out on a regular basis, through a system that can be almost entirely automated.
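As one concrete example of that automation, on Debian and Ubuntu systems the unattended-upgrades mechanism will apply updates on its own once enabled; a minimal sketch of the config that turns it on (it usually lives in /etc/apt/apt.conf.d/20auto-upgrades):

    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";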

Another important feature of Linux is the separation between the operating system and the surface appearance of the system. The operating system itself comes in a number of varieties, but most people use something from one of two families: Red Hat or Debian (there are others). The surface of the OS, the part the user sees, is not tied to that at all. Most people use a "desktop," which provides the windows and stuff, the parts you interface with, and there are several versions of this, from which users can more or less pick and choose.

Here is why this is important: the desktop provides the user experience, and the user experience sells the product. If you develop a proprietary operating system like Windows, many of your decisions, including when to produce major updates, are driven by the marketing department. The development and deployment of the operating system is a complex process where designers and marketing gurus are at the same table, essentially, as security experts and developers concerned with efficiency.

In the Linux system, the security people and efficiency and functionality developers work most of the time independently from the equivalent of “marketers” or “designers” because of this two layer aspect of the system. It is quite interesting to visit the communities of desktop developers and hear what they are saying to each other, then visit the community of system developers and hear what they are saying to each other. They are pretty much two distinct conversations. There will never be a marketing or design decision about Linux that impacts security.

Linux is the female operating system in a patriarchal world. Refer to the appropriate John Lennon song for a starker analogy. It does a lot of the work, maybe most of the work, but is usually not recognized. When people make comparisons, Linux has to dance backwards and in high heels.

If I say, like I just said here, that "if Linux were widely in use, the WannaCry attack would have been much less severe," people will respond "Linux can be attacked too," and that will be taken by others, and possibly meant to begin with, as saying "Linux and Windows are the same, it's just that attackers attack Windows and not Linux." That is a pernicious falsehood that feels a lot like many sexist comments about the limitations of women. Yes, Linux could in theory be attacked. No, Linux is pretty much not attacked very often, or ever, so your fantasy about how it can be attacked has no empirical backing. No, Linux and Windows are not the same in the way they are developed, designed, maintained, deployed, updated, or secured, and every single one of those differences gives Linux a huge leg up on security and hands Windows one or more disadvantages.

If a cyber attack is a mugger, Windows is a physically small drunken person with wads of money sticking out of his pockets, staggering down a dark alley near the convention hall during a muggers' conference, while Linux is a hundred sober, smart, well trained Navy Seals, each driving a separate armored car in an undisclosed location.

Yes, you can attack the Navy Seals. But if you do that, they’ll make you wanna cry.