Monday, March 4, 2013

Stop And Go

I've had a rather annoying problem lately, going on 18 months or so now: there is some 'logic' error in the 'video' subunit of my laptop. The screen will suddenly go black, seemingly at random -- sometimes many times in close succession, sometimes already during boot. There's nothing to do but hold down the power button to force a reboot.

Anyway, I can live with it. I should turn in the laptop; from what I hear, Apple is good with this kind of customer service. I just don't want to go without the machine for however long it takes to replace whatever part needs replacing.

Mostly by association with the concept of interruption, this annoyance has led me to some rumination in the field of computing philosophy -- let's pretend there is such a field of study... well, now there is. The thoughts concern incremental computing.

Now, all computing is incremental in some lesser sense. What one usually means by incremental, though, is the saving of results 'so far', so that if the task is interrupted, one can return to it and the earlier work is not lost -- making progress possible in the face of interruptions.
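
As a minimal sketch of the idea -- with a made-up scratch file, progress.txt, standing in for whatever the real saving mechanism would be:

    import os

    CHECKPOINT = "progress.txt"  # made-up scratch file holding the last completed item

    def load_checkpoint():
        # Resume point: 0 if we have never run (or never got anywhere).
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                return int(f.read())
        return 0

    def save_checkpoint(done):
        with open(CHECKPOINT, "w") as f:
            f.write(str(done))

    def run(total_items, do_work):
        # Pick up where the last (possibly interrupted) run left off.
        for i in range(load_checkpoint(), total_items):
            do_work(i)
            save_checkpoint(i + 1)  # commit progress after every item

Interrupt it anywhere and the next run starts from the last committed item instead of from zero.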

This is a very important concept, when you philosophize on it. Take threads: imagine if a thread's state were thrown away each time the CPU was taken away from it. In such a world, there is no progress until a thread is allowed to run uninterrupted to the end.

When there is unreliable hardware involved -- and when isn't that the case? -- the granularity of the incremental processing must follow suit. One can simply multiply reliability by increment size to get the expected rate of progress. If an incremental step covers a distance of 10, say, and we have a reliability of 50% (a 50% chance that the step makes it to the end and is committed), we get an average rate of progress of 5. Very simple. Mix in units of time if you like.
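
In code, that's a single multiplication (the numbers are the made-up ones from above):

    distance = 10       # size of one incremental step
    reliability = 0.5   # chance the step survives to its commit point
    print(distance * reliability)   # 5.0 -- expected progress per attempted step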

But another way to express the 50% reliability, if the distance is composed of two steps, is that each step has a sqrt(0.5) ≈ 0.71 chance of completing. So if we could 'save' twice as often, the rate of progress goes up by a factor of 0.71 / 0.50 ≈ 1.41 -- about 41%, nice.
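
A small sketch generalizing that to n save points, under the same assumption as above -- failures spread evenly over the work, so surviving one of n equal steps has probability reliability**(1/n) (the function name is made up for the example):

    def progress_rate(distance, reliability, steps):
        """Expected committed progress per attempted pass over `distance`,
        saving results after each of `steps` equal steps."""
        per_step = reliability ** (1.0 / steps)   # chance one step commits
        # Each step commits distance/steps units with probability per_step,
        # so the expectation over all steps is distance * per_step.
        return distance * per_step

    print(progress_rate(10, 0.5, 1))    # 5.0   -- save only at the end
    print(progress_rate(10, 0.5, 2))    # ~7.07 -- save twice: ~41% better
    print(progress_rate(10, 0.5, 10))   # ~9.33 -- save after every unit

In the limit of saving after every little unit of work, the rate approaches the full distance of 10: the failures still happen, they just stop costing you committed work.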

---

Now, what does this have to do with the lapsing toppler? Well, actually, it's mostly that it started this philosophizing -- some thoughts about progress being slower when you have this kind of interruption.

But the slowness in this concrete case is actually dominated by another factor: the time to restore state to where you were. Most of the important state does get saved -- by habitually pressing save often, or by relying on autosave.

What takes time on a restart is: booting (only about 7 seconds), restarting Eclipse (varies, 5 to 20 seconds), and sometimes a background reindexing of the Spotlight database (a serious slowdown). But the worst drag is that Eclipse almost always needs to rebuild everything (a minute or two).

The case may be even worse: you might have a project setup that requires much more on each reboot. Just some totally random examples... you might have a setup which requires you to start two instances of Eclipse, one for Python and one for Java. You might have to start an old JBoss 5.1 server (avg. 1m 40s, ISTR). And you might have to click a button to start a MySQL database server. And do a rebuild and deploy. Just a purely random example.

We're now wandering into the psychology department: what's the restart time of your mental state? How much of that state was saved? And how much does human frustration add, if you're human? Add those factors to the above-mentioned project restart times to determine how much time is lost every time your computer has to be rebooted.

So a crashing computer can be a good teacher, in that it pretty much forces you to fix a bad project setup even when you don't really feel you have the time. That feeling is of course short-sighted, in and of itself.

---

I had some more thoughts that tie into this incrementality theme, but I feel it's time to Publish, so maybe later.
