The next version of the Linux Kernel will be very noticeably faster on computers with limited memory.
Most “improvements” to certain unnamed operating systems (such as Microsoft Windows) place more demands on hardware, so upgrades slow your computer down and eventually you have to throw it out and buy a new one.
Making each major improvement come with better rather than worse performance is not Linux's primary objective, but it is a happy side effect of excellent Open Source engineering.
The latest?
Currently, desktop software slows down when execution jumps to a part of the code that is not cached in memory and has to be paged in from disk. That can be caused by poor memory management that doesn't scale all that well to the desktop environment.
In Linux kernel version 2.6.31, developers have added some heuristics to make it much harder for ‘mapped executable pages’ to get moved out of the list of active pages.
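For the curious, here is a toy sketch in C of roughly what that heuristic amounts to. This is not the actual kernel source (the real logic lives in the page-reclaim code, and every name below apart from VM_EXEC is made up for illustration); it just shows the rule: pages that have been referenced, are backed by a file, and are mapped executable get to stay on the active list instead of being demoted toward eviction.

```c
/*
 * Toy model of the 2.6.31 heuristic -- NOT the real kernel code.
 * Only VM_EXEC corresponds to an actual kernel flag; the struct
 * and helpers below are invented for illustration.
 */
#include <stddef.h>
#include <stdbool.h>
#include <stdio.h>

#define VM_EXEC 0x4   /* "this mapping is executable" */

struct page_info {
    const char   *name;
    unsigned long vm_flags;    /* flags of the mapping holding the page */
    bool          referenced;  /* touched since the last scan?          */
    bool          file_backed; /* backed by a file rather than swap     */
};

/* Decide whether a page scanned off the active list stays active. */
static bool keep_active(const struct page_info *p)
{
    /* The heuristic: referenced + file-backed + executable => protect. */
    return p->referenced && p->file_backed && (p->vm_flags & VM_EXEC);
}

int main(void)
{
    struct page_info pages[] = {
        { "firefox .text",  VM_EXEC, true,  true  },  /* mapped executable */
        { "mp3 file cache", 0,       true,  true  },  /* plain file data   */
        { "idle heap page", 0,       false, false },  /* anonymous memory  */
    };

    for (size_t i = 0; i < sizeof pages / sizeof pages[0]; i++)
        printf("%-15s -> %s\n", pages[i].name,
               keep_active(&pages[i]) ? "stays on the active list"
                                      : "demoted toward eviction");
    return 0;
}
```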
The expected improvements are huge: fifty percent or better increases in speed on low-memory machines.
Hat Tip: Virgil Samms
And that’s obviously why the OS must die. We can’t have people doing more with older, still functional, but otherwise obsolete hardware! How then would companies soak consumers for everything they’re worth?
What do you mean by “low memory machines”? These days, that could be considered 1-2 GB of memory. It wasn't so long ago that “low memory” was considered 128-256 MB.
For a little further info, I made that earlier comment because I've got an old PIII laptop with 128 MB of memory and a 20 GB hard disk. It works fine (although the CD-ROM finally died last week), except it's become painfully slow under XP. I was thinking of tossing some random version of Linux on it instead of junking it, now that I've actually played with Kubuntu a bit on my PC and am a tiny bit less fearful of Linux.
sinned34:
Putting on my “old fart” hat, I remember when “Low Memory” meant 16KB, and only one page of “hires” graphics on an Apple II. I remember, and would agree with the question, “who could possibly need more than 640KB of memory?” I’m sure that in another twenty-five to thirty years having only 16 terabytes of main memory will seem impossibly confining.
I had a 700MHz P3 with 384MB of RAM that I was using for the majority of my websurfing just a month ago. It worked pretty well most of the time, but with about ten tabs open, things would get sluggish.
With an upgrade to 3GHz P4 w/2GB RAM, well, sky’s the limit. But I’d expect this to help a lot with, say, Linux on the PS3, where you have less than 512MB RAM available.
I’m not sure what a low memory machine is in this case, but the improvement in paging heuristics would affect any machine doing lots of work.
Like Ray mentioned, I’m sure this will be good news for Linux on a PS3, though I see that as more of a proof-of-concept. The best use of Linux on a PS3 will probably be clusters for specific-use simulation or analysis.
I’ve got a few older machines that will probably benefit from the kernel upgrade. One in particular has 128MB of RAM (with a 566MHz Celeron) and serves as my home router. I have a couple other machines that I intend to set up with Linux for casual home use (web browser and desktop card games) by some acquaintances at work.
Shawn:
I’m certain that a number of readers of this blog have been doing computer work longer than I’ve been alive. That said, I remember how excited I was when I got a 30Mb hard drive for my XT. Oooooh, I had so much space I could copy Pool Of Radiance and the Ultima games to the hard disk instead of swapping floppies repeatedly! I fondly remember having to run Hercules graphics emulation to play CGA games… Oh, the interlacing!
Wearing my “old fart” cap, I remember burning a scar into my cheek while soldering the 2102 SRAM chips into my dad's SWTPC 6800's memory board 30 years ago. Thousands of dollars, and it still couldn't run Linux.
It only affects machines using swap. 😉
Russell: Are you sure? There are two kinds of paging going on in Linux, IIRC. There could be a “swap disk”, which is a partition dedicated for use by the system for this sort of thing. If, however, the system needs more memory and there is no swap partition, or the swap partition is used up, it can also page out to swap files that have been set up for the purpose.
I’m not sure if this new heuristic works only with swap partitions or with both.
I don’t catch things if you throw them at me, I just watch them fall to the ground before I react. It’s been that way for decades now.
As a teenager in the 70s, I built a digital clock from a magazine project. Parents seem to give budding hardware-fanatic youngsters the biggest, clumsiest, ugliest, most unbalanced soldering iron that also costs as little as possible, rather than a nice, fine, tiny, precise one. I remember dropping it at one stage, and instantly catching it. By the wrong end.
Linux runs just fine as a real memory system if it is given neither swap partitions nor swap files. Which is a good thing, since otherwise it would be useless for a variety of embedded applications that simply don't have a swap device.
I am at this moment downloading an alpha version of the next Kubuntu. It includes a release candidate of the new kernel. I expect the alpha version will work well enough on my hardware to give me a hint of the new kernel’s superpowers. I’ll start my test drive in a couple of hours. If you’d like to see what it can do, visit distrowatch.com and tell Firefox to find “6.31”.
This machine has two gigs of RAM, and I have two two-gig swap partitions set up. This is maybe a bit more machine than the kind of box that will benefit most from the kernel's new approach.
All this reminiscing about early computers' resources reminds me of my first “Real” computer (I'm not counting Vic 20 and Commodore 64). My Amiga 1000 had 124 kilobytes of RAM, no internal hard drive, and a floppy drive that said “grronk” with each slow rotation of the disk. That was 1985, and although the machine was made on the cheap, in some ways it outperformed the IBM PCs of that time.
Anything not resident in RAM.
In the “you had ones?” department, I started with 8K on an 8080 — after I moved up from the PDP-8 with 4K total. Which leads to Joe Bethancourt. Enjoy!
I started with a TRS-80 with 4K. Then I upgraded to 12K. Stuck with the cassette recorder for hard storage, though.
I had to dig through multiple links but I finally found the benchmarks for this new kernel.
For the low memory tests, the PC that got a 50% performance boost was an “nfsroot gnome desktop with 512M physical memory”.
For low-memory (128-256 MB) machines, I'd recommend Xubuntu if you need a desktop. It's based on Xfce, a lighter-weight desktop environment than GNOME or KDE.
If you don't need a GUI, you can run Ubuntu in even less memory. It used to be a 64 MB minimum; I'm not sure any more.
FWIW, I’ve got a file server in my basement that’s been chugging away for a decade or so now. The only time it’s ever down is during power outages. It’s running on a Pentium 166, not sure how much memory, but not much.
Linux is great for giving new life to old hardware.
Incidentally, without knowing all the details of this change, I'd expect it to improve performance in any situation where your memory requirements exceed available memory. So not just older machines, but heavily used newer ones, should work better.
Greg @ “I started with a TRS-80 with 4K. Then I upgraded to 12K. Stuck with the cassette recorder for hard storage, though.”
Luxury. We built our interface.
Even now I have a 50MHz 80486 Thinkpad with 12 MB of memory running the 1.2.13 kernel. There's no network connection, so I don't have to worry about security vulnerabilities, and the console-only mode I have is good enough to help me with calculating bowling results. My Apple IIe with 1 MB still works too, but I don't turn it on very much any more, since my Rescue Raiders floppy finally died.
Ah, the memories. Such as programming my old Timex Sinclair to run a graphical clock–in BASIC–on the Sinclair’s tiny membrane keyboard; then programming it to do the same thing again and again for lack of persistent storage of any kind. Every time I turned off the computer, which I eventually had to do, my laboriously typed programs were gone forever. My uncle had a cassette tape drive for his computer, of which I was insanely jealous.
I should mention I was seven or eight years old at the time. I wish I still had that Sinclair.
I do, however, still have a functioning homebuilt PIII 667mhz computer with 192 MB of memory. It enjoys life as a headless home server, sans any sort of graphical desktop. If this new kernel is as good as advertised, I may give it a shot and see how well it performs. Currently it uses approximately 75% of the RAM just sitting there, and pages to disk whenever it has to do more than serve a file or an html page.
DaveX[20] Look at this, everyone:
http://www.swtpc.com/mholley/BYTE/Dec1976/Byte_Dec_1976pg098.jpg
OK. Old Farts time. Walking backwards to school in the snow, no shoes, and uphill both ways time.
I got my hands on my first personal computer in 1967. That is, if you define “personal” as one where you have control over the power switch, and the only jobs it runs are the ones you loaded. For those who might know it, it was an Elliott Brothers 4130, and I had that much access because my first job was making them work at Elliott Bros. Our reference machine was only officially ‘on’ from 8 am – noon, then 1 pm – 5 pm. And nobody else in the commissioning dept was interested in learning how to use them.
They were hand-built out of about 200 PCBs, each of which was laid out as a generic matrix of 9 x 6 transistor and diode pads. The interconnecting logic was implemented by hand-soldered single-core 22-gauge wires on board, and hand-applied wire-wrapping on the backplane between boards. Memory was 48KW (24-bit words) of 50-thou magnetic cores, assembled into a wire matrix by an army of little Asian women who effectively knitted them together. By hand, again. Hard drive? Huh? The nearest they had back then was the CRAM, a finger-chopper if ever I saw one. And 9-track tapes. I have pics of both of those.
In two years I never saw a CPU pass by without any mis-wires, but then again the amazing thing was I never saw one go by with more than about 6 such. We had to test them using functional software tests, as actually buzzing out that many wires was prohibitively time-consuming. The speakers came into their own here – after sufficient time for mental osmosis (no other description for it, sorry, it was totally subconscious), one could identify which board failed by the sound the test program made. One guy up in the lab obviously had way too much time on his hands. He wrote a program which played “Daisy” on the speakers (ask your dad why “Daisy”, you young-uns reading this). Including the time-base slow-down.
So I was imprinted with large blinkenlichten panels and speakers hooked in to the microcode engine. I miss them. I cannot even find a picture of the light show now. Everything I've built since has them. Even if it's just a register with LEDs on its outputs, and an IDC header to connect to a logic analyzer.
Later on I built my own micros out of 74183s, 27C08s, and fusible link PALs. Then moved up to 4004, 8008, and the rest.
So I have a hypothesis supported only by personal anecdotal evidence: knowing how the logic in the big chips works can help one be a better programmer, and engenders much appreciation for O(1) and O(n) algorithms etc. I'm guessing the brains behind the LIU (Least Importantly Used :) vs LRU (Least Recently Used) swap optimization have this background? It certainly affects my hiring criteria.
(bitch: why does spell see ‘am’ as OK but wants ‘pm’ to be ‘p.m.’?)
Gray. I a.m. not sure about that. Hmm.
UNIVAC 1219
With the possible exception of D.C.Sessions, the rest of you punks can get off my lawn.
I revel in the fact, personally, that I never had to deal with vacuum tubes and punch cards. Sure, it means I’m not a pioneer, but that doesn’t make what I do with present-day hardware any less wizardly. I have a lot of respect for those of you that did work with UNIVAC, though, for paving the way for young punks like me. Thanks for carving out a niche in which I could make a living!
The punch cards did not make me feel like a pioneer.
my first “Real” computer (I’m not counting Vic 20 and Commodore 64).
You aren’t? Screw you then. The C64 was a monster for its day, not to mention the most robust hacking platform I’ve ever seen. You could change absolutely everything, but since the OS was in ROM you never had to worry about bricking it or corrupting the OS. And say what you will about half-assed disk support, but dumping the filesystem logic off on the drive as if it was NAS is a good way to get rid of overhead.
BruceH #23:
Actually, funny you should mention that. The Pentium III (specifically, the third-generation Tualatin core) is my favorite Intel processor of all time — it was the one that proved the Pentium 4 was a steaming pile of crap. IIRC Intel didn’t know what to do with it at the time, since clock for clock it was faster than the P4 but it was supposed to be an entry-level chip; I think the Pentium M/Centrino design started out more or less as a tweak of Tualatin.
Same here. When I went to college, the computer we used for our Computer Science 101 programming assignments was a Univac 1100 mainframe. We called it the Lemon Hundred.
Brian X,
Ain’t that the truth! Let’s count some of the ways it was better than the Apple II:
It had an actual sound chip with three voices, where the Apple had that motherfucking speaker that software could “click.”
It had up to 320×200 graphics capability, instead of 280×192.
It had up to 8 sprites on the same scanline, which could be displayed either on top of or below the underlying graphics. The Apple had… nothing.
You could change the location of the 16k of graphics memory with a simple softswitch. The Apple…nope.
It had working programmable interrupts, which could be used to increase the graphics capabilities by switching things around, or intelligently time sound playback. The Apple never got interrupts to work right.
The C64 drives were slow, but could be made reasonable with a simple cartridge, and that was because drive operations were off-loaded to the drive itself instead of being controlled with softswitches as on the Apple.
The Apple II – I think it was $1200.00 retail, sans drives, 80-column card, and monitor. C64 – I think it was $600.00 retail.
CPP: I went to a University HS so we had access to the 1108 (at SUNY Albany) which was then upgraded to the 1110. I remember my account being suspended at one point because I wrote a computer program that was intended to detect and classify jokes, and they caught me entering the lookup data for dirty jokes.
20, 24: I still have one of those (AC30). No idea if it still works, though. Battery compartment is clean.
26: yup, still only complains about pm, not am, in the comment editor on Leopard, using Safari.
27: Did you have exclusive access to the entire Univac machine? As in: I ran my actual first-ever program (a Sieve of Eratosthenes, written in Algol 60) on an Elliott 803, but I was batched in with a bunch of other folks, so I do not think it qualifies as “personal” use, unlike my 4130 access.
My first home machine? Atari 800. Did my M.Sc. work on it. With original VisiCalc goodness and 700K floppies. The games pack cost me a girlfriend. I got it Xmas eve; the next time I looked up, two days later, she was gone. I saw Asteroids falling down my inner eyelids when I slept for weeks afterwards. Not much has changed since then – I had a life before home computers blurred the distinction between work and play.
Q: talking about Algol 60, and bearing in mind that Hoare is still around, at Microsoft I believe, does anybody here remember the Algol “Universal Function”? It took advantage of Algol’s ability to pass function bodies as first class objects, combined with typed behavior. You could call it to evaluate anything you wanted (by a judicious choice of the four arguments), hence the “Universal” tag. But I have forgotten / lost the source for it.
Ramble:
IIRC, Algol 60 is the grandfather of the imperative language style most in use today, starting with C and its embellishments. Even now there are some language features not commonly implemented. And Hoare wrote a compiler for this language that ran on the 803! In 1960 (+/-) yet.
There is a Wikipedia page on this machine; search for “Elliott 803”. 8K of 40-bit words of core memory, but it omits what I think is one of the most unusual features of the machine: it implemented its CPU registers as magnetostrictive nickel delay-line loops; basically, each 40-bit register was implemented as acoustic waves traveling round a small strip of metal, with a 1-bit serial read/regenerate/write coil at its root. By a vast coincidence, it was the 803 architecture that got me the job at Elliott's, at 19 and without having completed my degree (yet): I had become fascinated by Boolean algebra, and had done a paper design for a 7.7 FPU with registers that used a 1-bit data path, so needed circulating registers. Used 800 gates. I thought nobody would ever build such a huge monster, at the time, but had fun with it as a gedankenexperiment. Then I saw the real thing, at my interview, and realized I had been thinking small, not large.
Dude! I never owned a mainframe. The closest I came was sneaking into the local university computer center at night and running decks on their (already ancient) IBM system. And since it was at night, the operator didn't bother to check whether we had accounts or not.
Boredom, probably.
My partner-in-computer-crime from those days is now on the CS faculty at Purdue. Go figure.
I just remembered: we called it the “Uniquack Lemon Hundred”!! AHAHAHAHAHA!!!!
Oh goody – I hope this will help my laboring server. Every time I post to the blog, I get an email from cPanel a few minutes later claiming excessive resource usage. (Well, WordPress is a bit of a resource hog.)