For my own posterity

I doubt very much that my descendants will ever look into what I did with my free time over the course of my existence. Chances are it will not interest them, because few people enjoy listening to elders talk, especially about programming. But for myself, for my own posterity, I thought it would be great to have a paper copy of all my projects. Not only does it add a fair amount of resiliency to my backup system, it also matters for history’s sake. Some will build murals with photos of the many vacations they went on. I printed hundreds of pages of uncommented code, organized them in nice binders with a CD-ROM copy and stored them on shelves. Now anyone can browse through and see for themselves that, yes, I really did write 35 pages on a communication protocol that will never see the light of day, just for the fun of it.

Most parents do keep some track of their children’s evolution, but the bookkeeping comes to an end once the children leave the family nest. After that, it is financial institutions that remain keen on maintaining a history of one’s existence, a financial history. Although most of us do fancy looking at the many drawings we made in pre-school, we have to admit that they all looked alike and are of relatively poor historical value (unless you grew up to be a graphic artist). What is most interesting is the intellectual work that comes later on, those painful essays in high school, when producing a 150-word text was a chore. If you abstract away the constant boredom of doing them, you can easily go back through time and get a glimpse of how you thought back then, of how different you were.

By printing my projects, I will be able to repeat the exercise a few years down the line, albeit with a much deeper analysis: I never pretended to be a writer, but I do pretend to be a programmer, and looking at how I programmed in the past can surely help me improve even more in the future. Even now, I go back to my previous creations, mostly to dig out a solution to a pattern I remember dealing with before. My work is evolutionary, and most of the ideas I am implementing as I write these lines were thought of a few years ago during the course of another project. Sadly, those ideas very often came with the realization that the work I had done up to that point was inherently flawed, which in most cases ended up causing the abortion of the project; I suppose it was a necessary step. After all, the theory of relativity came from Newton’s celestial mechanics, which in turn was built upon the work of the many obscure astronomers of ancient times. The human race thinks upon its own intellect and creates upon its previous creations. We are long past the times when seeing a boulder roll down a slope gave someone the spark of genius that was the wheel. Things are much too complicated nowadays, and history helps us avoid repeating the same mistakes, but it also helps us avoid reinventing the wheel every time.

How I came to virtualize my router

A year or two ago, I took it upon myself to reverse the trend of computer proliferation in my life. I offloaded the free support that I used to provide to my friends to other knowledgeable friends, I got rid of many machines that were just sitting there for the looks (they were running Folding@Home) and finally centralized all my activities on one laptop instead of having both a desktop for power and a laptop for mobility.

All was great, and the return on time invested was direct and fast. The amount of effort spent every month maintaining hardware and software went down dramatically. I once read that you need one technician per 50 machines, which makes sense if the people using the computers are not so savvy. But you sometimes still need a few machines, be it for testing, running platform-dependent software, or just fooling around with another OS. For that problem I found virtualization to be the solution. While it spawns a host of new difficulties, it does cut down on the hardware woes, and once you tame it, it shows lots of potential; so much so that in the process of making physical machines virtual, I had the idea of applying the same philosophy to that grey and blue Linksys box that breaks from time to time and requires its fair share of maintenance and configuration.

So there I am with a router that does not exist physically. Friends come to my place and look for the device to hook their laptops to (no wireless for now), but they get directed to my lab machine. They give me an “are you sure?” look, hook up the cable anyway, and then proceed to use the internet, charmed by the “magic black box” phenomenon.

Making a router virtual, while not so trivial, does have a few advantages:

  • One less computer, if you use a machine that is on anyway (the lab computer in my case).
  • You get to use better routers like Smoothwall, m0n0wall or Fresco without a standalone computer.
  • Less power consumption.
  • You can back up your router.
  • It is essentially free.
  • It is a fun project.

The setup is actually quite simple. I will not go through the whole process command by command (I’ll keep that for later), but here is an overview of what you will need:

  • Time.
  • Patience.
  • A computer with enough network interfaces.
  • An OS (I use Ubuntu 8.04).
  • A router distro (Smoothwall, m0n0wall, Fresco, etc.).
  • VirtualBox (or any other emulation/virtualization software you might prefer).
  • bridge-utils or any other bridging software.

And here is what you will do:

  1. Bridge all your network interfaces but leave one outside the bridge; it will be the outside interface. Make sure other programs cannot use it!
  2. Install VirtualBox.
  3. Map your bridge and outside network interface to virtual interfaces in VirtualBox.
  4. Install the router distro in a VirtualBox virtual machine.
  5. Configure the router like it was a real one.
  6. Unplug your old router, put it back in its box and keep it, just in case…
  7. Admire your work.
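To make the first step concrete, here is a minimal sketch of what the bridge could look like in /etc/network/interfaces on Ubuntu 8.04 with bridge-utils installed. The interface names (eth0 as the outside interface, eth1 and eth2 on the inside) and the addresses are assumptions that will differ on your machine.

```
# /etc/network/interfaces (sketch; interface names and addresses are assumptions)

# eth0 stays out of the bridge: it becomes the virtual router's outside interface.
# "manual" keeps the host from configuring it, so only the VM ends up using it.
auto eth0
iface eth0 inet manual

# br0 groups the remaining interfaces into the inside (LAN) segment.
auto br0
iface br0 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        bridge_ports eth1 eth2
```

The same bridge can be built by hand with `brctl addbr br0` followed by `brctl addif br0 eth1`, but declaring it in the interfaces file makes it survive reboots.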

As for wireless support, I do not yet possess a wireless adapter that is compatible with madwifi and can be used in master mode; one is on the way. However, I am unsure whether I will be able to bridge to it or not, as I have come across many forum posts of people trying without success. If it is a no-go, then maybe making the wireless network a different subnet and routing traffic between the two will do the trick…
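If bridging to the wireless card does turn out to be impossible, the separate-subnet idea could be sketched roughly as follows; wlan0, br0 and the address ranges are assumptions, and this is only an illustration of the routing idea, not a complete setup.

```
# Put the wireless network on its own subnet (names/addresses are assumptions)
ip addr add 192.168.2.1/24 dev wlan0

# Let the kernel route packets between the wireless subnet and the wired LAN
sysctl -w net.ipv4.ip_forward=1

# Masquerade wireless clients behind the bridge, so the virtual router
# only ever sees traffic coming from the wired segment
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o br0 -j MASQUERADE
```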

Cable management

I recently found a nifty solution for the crippling problem of cable management. It might not be suitable for supercomputers, but it does work for my desk.

It was built by screwing Ikea Antonius coat hangers to the underside of the desk using long enough wood screws. To give it a bit of stability, I used a hollow tube (like piping cut to length) for the hangers to sit on.

Here are a few pictures:

Simple and inexpensive…

Due to popular demand, here is a diagram I cooked up in Paint that better explains the mounting mechanism. I have no particular suggestion for the length of the pipe, except that it should be long enough for the hanger to clear the table (so you can slide the cables in). If my memory is correct, mine are 8 cm. As for the screws, just make sure they are long enough to drive a good 2 cm into the table.

Sorry for the lack of information, next time, I will be more thorough in my description and will take more pictures.

Modern screen savers are useless

I once owned a monitor with a Windows 3.1 desktop permanently burned into it. It was still usable, but when it was turned off, you could see the damage well enough to read the Windows menus on the normally dark screen. This monitor was a 13-inch black and white CRT, probably manufactured at the very beginning of the 1990s.

CRTs are just electron guns aimed at a phosphor-coated surface. When an electron hits that surface, an electron belonging to a phosphor atom gets excited, and when it drops back to its normal energy level, it emits a photon of a certain frequency. Repeat the same process many times per second, in an ordered fashion, and you can display graphics. When the image on a CRT is left standing still for a long time, the repeated bombardment of the inside phosphor layer degrades it, making it more transparent where the image is brightest. Screen savers were invented to prevent this phenomenon; to save screens that were left on.
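The frequency of that photon is fixed by the energy the excited electron sheds when it falls back, via Planck’s relation:

```latex
% Energy gap of the phosphor transition determines the emitted frequency
E_{\text{excited}} - E_{\text{ground}} = h\nu
```

where h is Planck’s constant and ν the frequency of the emitted light; the phosphor’s energy levels are what set the colour of each dot.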

With LCDs, there is no such phenomenon; using a screen saver is pointless and only wastes power. Instead, set your computer to put the screen in standby after a few minutes of inactivity, or discipline yourself and turn it off using the power button, which is better still, because the screen draws less power off than in standby.
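On a Linux desktop, for example, the standby delays can be set through the X server’s DPMS extension with xset; the timings below (in seconds) are just an illustration.

```
# Standby after 5 minutes, suspend after 10, power off after 15 (example values)
xset dpms 300 600 900

# Check what is currently configured
xset q
```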

Netbooks: the return of terminals

Ring a bell? Terminals are not so common nowadays and are often associated with vintage technology, except in libraries, where they very often serve the sole purpose of consulting the catalogue database. But a while ago, terminals were pretty much the only alternative to fiddling with punch cards on a mainframe. Back then, it was not possible to build a personal computer with enough resources to do anything other than add a bunch of numbers together, at least not at a decent price and size. Displaying a text document is a very tricky task when all you have is 256 bytes of RAM…

Terminals died as silicon shrank and became more affordable; so much so that by the 1980s, they had pretty much vanished as a sensible option. At the beginning of the 1990s, they made a limited comeback as X terminals: simple computers that only run an X server and rely on a bigger UNIX machine to take care of the actual work. It was rather ephemeral, although most Linux users still get to enjoy the venerable X as their windowing system.

A few days ago, I came across a quite interesting article by Wired. To make a long story short, it explains how netbook sales will very soon surpass notebook sales, mostly because of their relatively lower price tag, because they make perfect internet workstations, and because third-world citizens can almost afford them. It does make a lot of sense if you look at what people use their computers for: Facebook, surfing, Facebook, e-mails you get from Facebook, Facebook, chatting on Facebook, music, Facebook, movies, Facebook, pr0n, Facebook and the occasional word processing. Basically, they rely on the internet for most of what they do, and the programs they use, besides a browser, tend to be very lightweight. The story becomes different if you add games to the equation (although nowadays, most use consoles), but in short, you do not need a $2999 Alienware 20-inch, Quad Core, SLI, RAID 0 laptop, which you keep on a table anyway because it is too big, too heavy and overheats, to do that (Facebooking). I suppose that segment of the portable computer market will gradually become a niche as the trend reverses itself: where pure power used to push sales, it is now size and affordability, so much so that the crowds who used to thrive on multi-gigahertz monsters now find the concept of netbooks quite convenient for day-to-day computing.

It totally makes sense from an economic perspective for people to buy netbooks, but what about terminals? Well, if you think about what I said previously, netbooks fit the case of terminals perfectly, where the web is the mainframe, and your browser a window to it. There is very little you cannot do online: Google Docs, Facebook, chat, music, movies, everything is there, even image processing; the remaining activities concern only 5% of all users. You said games? Well, AMD has a rendering farm in the works. Imagine playing World of Warcraft on the bus, wow! The Wired article does bring up that point: that things are shifting. Shifting back to what they used to be: terminals. But this time, different terminals, and at times “terminal” will probably be used to describe a relationship rather than as a moniker for hardware, like client-server. Kevin Kelly entertained the TED crowd with a very insightful talk about the next 5000 days of the web that was all about the concept of that new kind of terminal. To describe the relationship between personal computers and the web, he stated that personal computers would only be windows to the “One”, terminals looking inside the immense machine that is the web. However, as parts of the web, they would also be looked into by other terminals, thus making “terminal” a relationship.

While I will not dig further into futurology, the perspective of that one big web really excites me. But what I am wondering is whether that oscillation between terminals and standalone machines is not periodic; whether software will not once again become so demanding that it will no longer be possible to distribute it until the network catches up. As with all things, time will tell.