OCaml first hit my radar in November 2013. I had just learnt SML, a similar but older language, in the excellent Programming Languages Coursera course. Dan Grossman is one of the best lecturers I’ve ever seen; his explanations hit all the right notes and made learning easy. The simplicity of SML’s syntax, and the power of the language while still producing code that is readable with minimal training, appealed to me immediately.

Over the last 3 years I have tried, and failed, to learn Haskell. The combination of minimalist syntax, pure functional programming style and lazy evaluation is like a 3-hit sucker punch that is very hard to grasp all at once. Having now learnt SML and OCaml, which like Haskell are based on the ML language, that has changed. I have yet to put any more effort into learning Haskell, but it is now clear to me that its syntax is only a small leap from ML’s, and its pure functional style has a lot in common with SML.

I still don’t want to write production code in Haskell, but the fact that I find it less scary than I used to indicates I have made a significant jump in my knowledge and, arguably, career in the last 6 months.

Dynamic typing

Before I go any further, I need fans of dynamic typing to exit the room. My 12 years in the industry have set my camp firmly on the static typing side of the fence, and discussions about static vs dynamic will not be productive or welcome here.

So, why OCaml?

Smarter people than me have written about this, but I’ll give it a shot.

I have found OCaml to be a refreshing change of pace. Most of my favourite things are derived from the ML base language: variants, records, and pattern matching combine to create elegantly powerful code that is still easy to follow (unlike most Haskell code I’ve seen).
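As a taste, here’s a tiny sketch of my own (not from any real codebase) using all three together:

    (* A record type, a variant type, and a pattern match over the variant. *)
    type point = { x : float; y : float }

    type shape =
      | Circle of point * float        (* centre and radius *)
      | Rectangle of point * point     (* two opposite corners *)

    let area = function
      | Circle (_, r) -> 3.14159 *. r *. r
      | Rectangle (a, b) -> abs_float ((b.x -. a.x) *. (b.y -. a.y))

The compiler will even warn you if the match misses a case, which is half the magic.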

OCaml takes the expression-based ML style and incorporates enough imperative features to make it comfortable for someone learning Functional Programming. Don’t know how to use recursion to solve a problem? Drop into a for loop in the middle of your expression. Need some debug output? Add it right there with a semicolon to sequence expressions.
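For instance, here’s a deliberately unidiomatic snippet of mine that sums an array with a ref and a for loop, with a debug print sequenced in by a semicolon:

    (* Imperative OCaml: a mutable ref, a for loop, and a print
       sequenced into the function body with semicolons. *)
    let sum arr =
      let total = ref 0 in
      for i = 0 to Array.length arr - 1 do
        total := !total + arr.(i)
      done;
      Printf.printf "debug: total = %d\n" !total;
      !total

    let () = Printf.printf "sum = %d\n" (sum [| 1; 2; 3 |])

You’d normally reach for Array.fold_left here, but nothing forces you to on day one.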

Throw in almost perfect static type inference, a compiler that gives useful error messages, and immutable-by-default variables, and I just can’t get enough. I won’t sit here and list every feature of the language, but hopefully that piques your interest as much as it did mine ;)
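To show what I mean by inference and immutability, two toy lines of my own:

    (* No type annotations anywhere: the compiler infers double : int -> int.
       Plain let-bindings are immutable; you can only shadow them with a
       new binding, never change them in place. *)
    let double x = x * 2
    let doubled = List.map double [1; 2; 3]   (* [2; 4; 6] *)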

Industry acceptance

There is always an element of “I have a hammer, everything looks like a nail” when learning a new language but the evidence that OCaml is becoming more widely accepted is not hard to find.

In the middle of February, Thomas Leonard’s OCaml: what you gain post made waves; the Reddit and Hacker News discussions are fascinating. A lot of people using OCaml in industry came out of the woodwork for that one. I’m still working my way through the series of 11 posts Thomas made, dating back to June 2013, about his process of converting a large Python codebase to OCaml.

Facebook have a fairly extensive OCaml codebase (more details below).

It doesn’t take much googling to find presentations by Skydeck in 2010 (they wrote ocamljs, the first OCaml-to-JavaScript compiler) or a 2006 talk describing why OCaml is worth learning after Haskell.

OCamlPro appear to be seeing good business out of OCaml, and they have an excellent browser-based OCaml tutorial (developed using, of course, js_of_ocaml).

No list of OCaml developers would be complete without mentioning the immense amount of code at Jane Street.

There are plenty of other success stories.

The elephant in the room

The first question I usually get when I tell a Functional Programming guru that I’m learning OCaml is “Why not Haskell?”. It’s a fair question. Haskell can do a ton more than OCaml, and there are only one or two things OCaml can do that Haskell can’t (I don’t know the details exactly; I would have guessed zero). I see a lot of references to OCaml being a gateway drug for Haskell.

The answer is JavaScript. As much as I hate the language, JS is the only realistic way to write web apps. Among the many and varied AltJS languages, both OCaml and Haskell can be compiled to JavaScript, but the Haskell compilers aren’t mature enough yet (and I’m not convinced lazy evaluation in JavaScript will perform well).

In fact, a little research suggests OCaml may have the most mature AltJS compiler of all, by virtue of its support for existing OCaml libraries.

JavaScript

Late last year I started hearing about OCaml at Facebook. Their pfff tool, which is a serious OCaml codebase all by itself, is already open source – but there was talk of an even larger project using js_of_ocaml (the link seems to be offline, try the video). That presentation by Julien Verlaguet is almost identical to the one he gave at YOW! 2013 and it really grabbed my attention. (Hopefully the YOW! video is online soon, as it’ll be better quality).

To cut a long story short, Facebook created a new language (Hack, a statically typed PHP variant) and wrote the compiler in OCaml. They then use js_of_ocaml to compile their entire type checker into JavaScript as the basis of a web IDE (@19 minutes in the video), along the lines of Cloud9. Because everything is OCaml, the IDE has client-side code completion and error checking. It’s pretty amazing.

Maturity of tools and js_of_ocaml

The more I dive into OCaml, and specifically js_of_ocaml, the more it amazes me that the tools and documentation have matured to production quality just as I need them.

  • The package manager OPAM is now a little over 12 months old and every library I’ve looked at is available on it. Wide community acceptance of a good package manager is a huge plus.

  • The Real World OCaml book was released in November and is an excellent read. The book is so close to the cutting edge they had features added to September’s 4.01.0 compiler release for them :)

  • OCaml Labs has been around for 12 months, and they’re helping to move the OCaml community forward into practical applications (see the 2013 summary).

  • Ocsigen are investing heavily in js_of_ocaml (among other things) with the next release including an improved optimiser (I can attest to the fact that it’s awesome) and support for FRP through the React library.

Moving forward

Is it perfect? No. Software development is not a one-size-fits-all industry. There are as many articles cursing the limitations of OCaml as there are singing its praises. But in the current market, and with the size of JavaScript applications we are starting to generate, I believe OCaml has a bright future.

… or at least, a quote good enough to make me come out of hiding before I’ve explained why things have been so quiet around here.

John Gruber on the iPhone IDE debacle:

If you are constitutionally opposed to developing for a platform where you’re expected to follow the advice of the platform vendor, the iPhone OS is not the platform for you. It never was. It never will be.

To all the people whinging about this decision by Apple, go away. You can have your fun on Android or some other platform that supports your open development philosophy. If by some fluke Apple wind up with such a massive majority that you’re forced to come back because all the users are here, don’t expect any sympathy from us. It will have happened because Apple’s restrictions resulted in the most consistent mobile OS experience, and users decided that’s what they want.

iPhone is a closed system, and in my opinion the overall quality of the apps available is better for it. Not that the App Store is full of fantastic quality at the moment – you really need an iPod or iPhone to appreciate this, but the store already has an amazing amount of crap.

However, I can see the App Store really going down the toilet if they let “meta-platform” apps (as Gruber calls them) onto the store. Just look at what happens when people develop cross-platform apps for the PC: you either target one primary OS and optimise your UI for it at the expense of the others, or target a general use case and suffer for having a non-native UI. Yes, there are exceptions, but they are rare, and most of them spend stupid amounts of time implementing multiple native UIs in their cross-platform code.

Gruber has a specific example of this:

Consider, for one example, Amazon’s Kindle clients for iPhone OS and Mac OS X. The iPhone OS Kindle app is excellent, a worthy rival in terms of experience to Apple’s own iBooks. The Mac Kindle app is a turd that doesn’t look, feel, or behave like a real Mac app. The iPhone OS Kindle app is a native iPhone app, written in Cocoa Touch. The Mac Kindle app was produced using the cross-platform Qt toolkit.

Native apps are always better; I avoid OpenOffice more because the UI pisses me off than because iWork is cheap enough that I don’t mind paying for it. Windows is the same (I can’t stand Apple’s apps ported to Windows with Mac-style keyboard shortcuts). Once you allow cross-platform UIs into your computing world, life just isn’t as much fun anymore.

And I want my iPhone to be fun.

[update: A related article with an appropriate quote, this time from MacWorld].

… the develop-once-run-anywhere philosophy is something that makes more sense to bean counters and development-environment vendors than it does to platform owners and discriminating users. In the ’90s we were told that Java apps would be the future of software, because you could write them once and deploy them anywhere. As someone who used to use a Java-based Mac app on an almost daily basis, let me tell you: it was a disaster. Java apps didn’t behave like Mac apps.

Close to 18 months ago, when I first started seriously using that old mac laptop, I decided I needed a way to easily transfer my speakers between the desktop games machine and the mac I used for everything else. One of my mates at work had an Audigy 2 NX, and after borrowing it for a day to make sure it worked on macs, I decided to get one. It wasn’t until I had it that I realised the mac was only giving me 2 channels instead of 5.1 :(

I shrugged and chalked this up to the built-in Mac drivers; it was fine under Windows with the official Creative drivers.

And so it was that when I upgraded to the Mac mini, and again with this second mini, I was stuck with a sound card that wasn’t giving me surround. Most of the time this doesn’t concern me as I usually only listen to stereo sources, but I’d never even considered that it might work (the few references I could find to this device on the net reported it only working in stereo on the Mac).

Until tonight.

While doing some research for a friend who was interested in USB sound cards, I saw a product review stating that the Zalman USB card does work on Macs in full 5.1 surround mode. This piqued my interest, so I went searching and stumbled on a forum post listing working sound cards. Right there at the top is the Zalman card, but hang on, what’s that sitting at the bottom under supported 7.1 cards? Why, it’s my damn Audigy 2 NX! WTF!

I immediately (and stupidly) installed the package attached to that post, but thankfully I read a bit further down the thread before rebooting and realised I didn’t need it. That was lucky, because the package is from 10.4 somewhere and I would almost certainly have been left trying to do a restore from backup. I’ve reverted the kext files that the package installed; hopefully my Mac doesn’t die when I reboot it after posting this.

In any case, the answer is Audio MIDI Setup! A program that had always sat in the Utilities folder looking utterly useless, it turns out to be the hidden gem that Apple really needs to make more obvious. For those who will no doubt arrive here from Google one day, here’s how to enable 5.1 surround sound on a USB sound card:

  1. Select your sound card under the Properties For: dropdown
  2. Select the number of channels under the audio output format
  3. Click Configure Speakers
  4. Select Multichannel
  5. Select the correct number of speakers from the dropdown (only the valid one should be enabled)
  6. You can now assign channels to each speaker. I’m pretty sure the numbers I used are correct, although 3/4 and 5/6 might be in the wrong order

Here’s a couple of screenshots with number highlights to make it clear:
[Screenshots: Audio MIDI Setup; Audio MIDI Speaker Setup]

Maybe it’s just this sound card, but that’s a ridiculous process to get 5.1 surround sound working (and I haven’t actually tested whether DVDs will play correctly, only some 6 channel test wavs I found). Wish me luck! ;)

On the plus side, if this does work I will no longer have to worry about surround sound output from my media centre when I buy proper home theatre speakers (the Audigy has optical and S/PDIF out). I had been concerned that I would be stuck with stereo output from my Mac forever!

The new Google mobile ActiveSync is working great for my calendars. Syncing iCal to Google was pretty easy; I exported my 3 local calendars, cleared out the main Google calendar & created 2 new ones (naming my primary calendar “work” thanks to the stupid Outlook plugin), subscribed to all 3 via CalDAV with Calaboration, and then imported the data. No worries at all. I can create an event in iCal and 10 seconds later it appears on my phone :D

There was a bit of confusion and duplication after syncing my Outlook calendar at work to Google (Did I mention the plugin’s main-calendar-only restriction is REALLY annoying? How about its complete inability to detect duplicates?) but that was pretty easy to clear up.

What I haven’t done is turn on address book syncing with the phone. As I suspected and others have confirmed, turning on ActiveSync for contacts & calendar stops iTunes from doing any sync work with them for the iPhone. Which, since iTunes initiates the contact sync to Google, means that contacts are no longer synced to my desktop.

Both forum posts I’ve just linked to have suggested fixes (particularly if you expand them beyond the accepted answer), but personally I can see three options for syncing my contacts:

  • Resurrect the fancy iSync scheduling that I haven’t used since switching to the iPhone (I still use the scheduler, just for some Address Book hackery instead of activating the iSync menu)
  • Don’t drop Plaxo completely as I had planned, but use it to sync between google’s contacts and Address Book
  • Leave over-the-air contact sync disabled and continue with iTunes to Google contact sync

So far option 3 sounds the easiest to me. I don’t need to sync my contacts more than once a day (which is how often I sync with iTunes), over-the-air sync wouldn’t give me all of my contact numbers on the phone anyway, and this way I can completely disconnect from Plaxo.

Not that Plaxo is bad – I’ve really enjoyed the service, including a far better Outlook calendar sync platform than Google’s Outlook plugin provides – but ever since I switched my email to Google I have only used it for the Outlook sync (Hotmail contact sync is enabled but I don’t need it anymore). It just doesn’t make sense to continue using it in light of Google’s improved Mac/iPhone sync options.

I’m in the middle of another post and was distracted reading some RSS feeds when I came across this gem. Google now supports ActiveSync for contacts and calendar :D :D

I already sync my contacts and calendar to google but it’s only updated on the phone when I sync it with iTunes. Now I get to experience the joy that is push calendaring!

I’ve held out for a while, but last month I reached the point where I needed some kind of word processor & spreadsheet software. I use MS Office at work, but have always considered it too expensive for use at home as I don’t need much.

On my Windows box, I had been forcing myself to use OpenOffice. It worked, but that was about all; I hate the interface, and it doesn’t hold a candle to MS Office. It was actually one of the first things I tried when I started using this Mac – and it was an utter mess, barely even able to load.

Office for Mac 2008 didn’t fare much better. I’ve had some exposure to it from the Mac users at work, and what I’ve seen is not only yet another Microsoft interface that I would have to puzzle my way around, but also memory usage that, even on a decent machine, indicated it would absolutely kill this poor little 512mb laptop.

The A$650 price, and reviews like this one from MacInTouch (particularly the memory usage section), sealed that deal.

I decided to put up with nothing, but that didn’t last long. When the need hit last month, a few searches led me to iWork 08. I’d never looked into it beyond the initial reviews, but thankfully it has a 30 day trial.

I’m now hooked :)

It has taken me a few weeks, but my needs at home are so basic that I’ve quickly adapted to the iWork interface. The only thing that gave me grief is border styling in Numbers, but I think I’m getting the hang of it.

It’s a different and far more basic approach to Office application design, but one that focuses on making document creation easy rather than cramming itself full of features. Fairly typical of Apple software, really ;)

The biggest win, in my eyes, is the memory usage. I still haven’t lost the amazement of running Safari, iTunes and Mail at the same time on 512mb ram without any noticeable system lag. Apple have taken the same philosophy with their office apps, and I couldn’t be happier.

The only time it starts lagging is when I leave Safari + Mail open and run both Pages and Numbers with large documents. Even so, it wasn’t until I upgraded to 2gb ram that my Windows box was happy with that much running.

I just… I’m sold. All this for around 15% of the cost of MS Office.

I think 3 huge posts in the space of a week is enough. It’s not that I had them built up from a month of silence, I’ve just had a lot to say about topics that came up recently :)

And just because I don’t want to make a new post about it, some fun news.

I noticed as I wrote my Awaken post last night that the MacHeist front page had changed to “coming soon”, and this morning I woke up to two new software keys sitting in my inbox! It turns out that the fantastic MacHeist bundle I picked up in April has been re-released with three new apps, and because it’s the same price they managed to extend the deal for existing bundle purchasers. The new apps make it even more of a steal:

  • VectorDesigner (woot I don’t own any OSX graphic tools)
  • TextExpander (read about this on DaringFireball, didn’t want to pay full price)
  • SoundStudio (if it gets unlocked)

The bundle is packed full of useful stuff, particularly for new OS X users. I haven’t tried the new apps yet, but of the original apps I use them all except for:

  • The games (they’re old and were obviously released into the bundle as advertising; Enigmo was one of the games ported to iPhone for the WWDC 08 keynote)
  • iClip (it’s sluggish on 512mb ram)
  • DEVONthink (I’ve been looking for an excuse to use it though)

So by the time I buy a new Mac, I’ll be using every app except the games. Even if you’re not a new Mac user, for my money there are some very handy tools (CoverSutra, Awaken, XSlimmer, WriteRoom) that don’t have any good free equivalent and would cost more than the $50 bundle price just by themselves. VectorDesigner and SoundStudio are each worth more than the bundle price alone, so if you can make use of either one it becomes a no-brainer.

And to top it all off $12 of your money will go to charity. Pick it up now to help guarantee SoundStudio for everyone :D

I’ve had a MythTV box running for quite a few years now, but there are always new things to learn, right? :)

My flatmate’s TV is in for repairs so we haven’t been using the box in the last few days, but with a big NRL game on tonight it was supposed to be recording. I became concerned when the drive light wasn’t flashing as we ate dinner, and a quick check of the web interface set my alarm bells ringing. Nothing was recording; in fact nothing had been recorded for nearly 2 days.

The time of failure coincided with a machine lockup while we were trying to watch TV over the network (recordings are saved as MPG so this is easy to do). With no TV screen I hadn’t verified that MythTV was working after the reboot, only that the SMB shares were alive. Visions of dead hard drives floated into my head as much frantic searching and diagnosis ensued ;)

The problem turned out to be very subtle. This explanation may get a bit technical, but I couldn’t find any references to it on Google, so hopefully this post will be useful to someone else.

I only have one machine, but the log clearly said “Running as a slave backend”. Therein lies the problem: the Master server thought it was a Slave backend and sat there trying to connect to nothing. That means no scheduled recordings either, because scheduling is all handled by the Master server :(

MythTV is amazingly flexible. It handles multiple backend recording machines, each with multiple capture cards, as well as multiple frontends. Unfortunately a very flexible system easily leads to a lot of configuration complexity, a fact I know all too well from my time in ELJ support.

After browsing around various forums and mailing lists for nearly half an hour I ended up in the Configuring MythTV docs. If you scroll down to the general section, you’ll see some confusing paragraphs:

If you will be deploying multiple backends, or if your backend is on one system and you’re running the frontend on another machine then do not use the “127.0.0.1” IP address.

NOTE: If you modify the 127.0.0.1 address and use a “real” IP address, you must use real IP addresses in both fields, otherwise your frontend machines will generate “Unexpected response to MYTH_PROTO_VERSION” errors.

To understand that, you have to understand that everything in MythTV land is controlled by a single MySQL server. Both the Backend and Frontend software connect to the database, read the config settings for their hostname (they’re designed to netboot on diskless machines), and then read the global Master Server IP to connect to.

I don’t know why this option exists, but one of those per-host config settings is a Backend Server IP.

And suddenly it hit me.

A week ago, I downloaded MythFrontend for OS X. This poor little iBook is too slow to actually watch TV, but before I discovered that, this was the first time I had run a remote frontend, so I had to change a few things to make it work. One of those was the Master Server IP.

Since this had been set to 127.0.0.1, the frontend couldn’t connect to the Master Server, so I changed it to the Master Server’s network IP. The frontend worked, everything else seemed fine, so I thought nothing of it. Until the server rebooted.

It turns out that when the Backend loads up, it compares the Backend Server IP it has been assigned to the Master Server IP. If they match, it loads as the Master; otherwise it becomes a Slave. Apparently it doesn’t bother to figure out that the Master Server IP is the same machine as the 127.0.0.1 it has been told it owns. Isn’t this option redundant? Can’t the Backend just check whether it owns the Master Server IP, given it already listens on all interfaces?
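To make that concrete, here’s a toy OCaml sketch of the decision as I understand it. The names and the LAN address are my own inventions for illustration; the real backend is C++ and certainly more involved:

    (* My reading of the startup logic: compare this host's BackendServerIP
       setting against the global MasterServerIP, with no awareness that
       127.0.0.1 and the LAN address are the same machine. *)
    type role = Master | Slave

    let choose_role ~backend_ip ~master_ip =
      if backend_ip = master_ip then Master else Slave

    let () =
      (* My broken config after the frontend fix: *)
      match choose_role ~backend_ip:"127.0.0.1" ~master_ip:"192.168.0.10" with
      | Master -> print_endline "Running as master backend"
      | Slave -> print_endline "Running as a slave backend"   (* the log line I saw *)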

So, long story short: if you mess with the MasterServerIP in your database, make sure you also update the BackendServerIP listed in the settings for your Master Server’s hostname. Ugh.

On the plus side, I think half an hour to fix a problem this subtle in configuration settings that I had no clue existed is a new record for me :)

I’ve done it. I’m ditching Vista for OS X. You may think I’m a switcher; you may think I was a fool for using Vista at all; and if you’re like the rest of the world, you probably think I’m a fool for sticking with it for the 12 months I did. If you think any of those things, you’re wrong. I still love Vista, but I’ve decided it just doesn’t run well enough without a beefy dual or quad core cpu, and I have no intention of upgrading right now. So what am I using? Well, bear with me, there’s a bit of background to this one :)

I am by no means a switcher, or if I am, I switched years ago. Despite never actually owning a Mac, I’ve been saying ever since I left uni that if I ever bought a laptop it would be a Mac. Unfortunately, I’ve had absolutely no need to buy a laptop, and my desktop machine is for games, which means Macs are out. I did however convince my mum to get a Mac laptop in 2003, which she loved, and she has stuck with Macs through to what is about to be her second replacement laptop.

Which brings me to today’s story.

Over the Christmas break, the screen on my mum’s iBook G4 went dark. It didn’t take me long to figure out I could still see the screen if I used a torch, but a quick hunt around Google revealed it would be a monumental pain in the arse to fix – most likely a broken inverter cable (apparently a common problem in this model of laptop after 3 years of use). The worst part is that, in what I can only describe as a fit of stupidity, when I helped her buy the laptop in 2005 I neglected to make sure she bought the extended warranty, which would’ve covered this.

I knew it would work fine with an external monitor, but mum doesn’t travel with the adapter, nor did she want to buy a monitor and tether the laptop to a desk. So mum took the lappy to her local Apple shop; they refused to believe it was the inverter cable and charged her $100 to say “We will only fix this with a new screen, which will cost you nearly a thousand dollars. Buy a new laptop”.

Buying a new laptop is what’s going to happen, that’s for sure, but not from those idiots. Said laptop was sent down to me (including the external monitor adapter) via an uncle who was visiting Brisbane, and it now sits on my desk. I’m heading up to visit mum later this year, and my task is to buy a new MacBook and do all the data transfer legwork so the new Mac has all the photos, music etc. from the current one. In return, the iBook is mine to do whatever I want with.

So you can see why this was an opportunity I just couldn’t resist. I spent most of last week testing the water for using this laptop as my primary machine, and all I can say is it never ceases to amaze me how lightweight unix feels. I have a linux file server and MythTV box so this is nothing new to me, but we’re talking a 1.2ghz G4 with 512mb ram here. I have 6 apps running plus a temperature monitor, and this works because the only apps using more than 50mb ram are Safari and NetNewsWire. There’s no free ram, but it has 130mb inactive and there is no swap activity as I switch between apps. Things can get hairy if I go nuts with installers or other cpu-heavy tasks in the background, but I can live with that.

Not to mention the fact that it has a 3-year-old graphics card designed for a 1024×768 screen – but I’ve installed a hack to run the external monitor at 1920×1200 on my 24″ LCD. I figured it’d hate that for sure, but it’s surviving admirably. Most of the fancy graphical effects are horribly slow because the CPU has to render them, but they’re all either rare or avoidable.

Overall I’m amazed at how much this little machine can handle, which is why I’m migrating. I was already running my mail and documents off the file server and using NewsGator to sync my RSS feeds, so the only thing left to migrate is my iTunes library (which I doubt will be much fun, but I’m up for an adventure).

Having used a Mac plenty of times, the only real issue I’ve had so far is the keyboard shortcuts. I was willing to put up with the pain of re-learning the text navigation keys, but while looking for a way to fix the terminal keys I discovered a way to enable Windows-style keys (although making it work on Leopard requires info from one of the comments). Yay for Windows shortcuts and muscle memory! It even has old DOS favourites like ctrl+insert and shift+insert.

I also need to buy a new blogging client; I’m leaning towards ecto, but there are still a few options to consider. I noticed MarsEdit 2.1 making some waves in my RSS feeds, but I can’t stand editing HTML with tags anymore. I’ll rant about that later :)

As for my desktop machine? Well, it was already dual booting to XP for intensive games due to the Vista speed issues. XP will now be the only thing it runs.

I just found a Vista Death March article by John C. Dvorak talking about Vista’s inability to sell, and for the most part I agree with him. However, at one point he suggests they should go back and rebuild it on XP. Funnily enough, that’s exactly the reason Vista was delayed so much. What I’m about to say won’t be a surprise to my close friends, because I’ve been saying it to them for most of this year, but I think it’s time to try getting the word out there and at the very least give myself an easy link to point people at.

The huge delay in Vista’s release cycle, and the dropping of all those cool features, did not happen because Microsoft are stupid.  It happened because Microsoft have more balls than most software developers (although money helps too).

The details are buried in Paul Thurrott’s Road to Gold article from last year. It’s actually quite an interesting read if you have some spare time, but today we’re going to concentrate on page 3. It covers the 2004 part of the timeline, the point where Microsoft had been building hype about the OS for nearly three years and then suddenly went silent. The story is spread a bit thin through the page, but is mostly summed up in this quote about a third of the way down:

By that time [April 2004], Microsoft group vice president Jim Allchin had decided that Longhorn wasn’t going to work. He told Bill Gates that the company would have to start over again from scratch, using the more recent Windows Server 2003 (rather than XP) code base as a starting point.

It was reported at the time as the “Longhorn Reset”, and while I remember reading articles about Microsoft’s new development practices, I don’t think the implications of what had just happened really hit home for anyone (as evidenced by John’s article linked above). Microsoft had built Longhorn on top of XP. They’d probably finished most of the features. But the XP codebase was so old and fragile that the resulting system was horribly unstable. So they threw out three years of work, began again with the Windows Server 2003 codebase, and took another three years to finish.

I don’t think anyone other than Netscape has had the balls to do that, which I’ll get to in a minute.

Vista definitely has a few issues, and it will probably end up with below-average sales, even though I think it’s fine and can’t stand the old XP interface anymore. But Microsoft have deep enough pockets to afford this, and I think it had to be done. In my experience Vista is far more stable than XP, much like Windows 2000 was more stable than Windows 98. Hopefully a few years down the track we’ll have a vastly improved edition built on a rock-solid base that the masses will love.

There is actually a precedent for my faith in Vista’s rewrite. The classic example is Netscape, who in 1998 threw out their code and created Mozilla. Just about anyone who ever used it will tell you Mozilla 1.0 sucked (as did Netscape 6 and 7, which were basically just rebadged Mozilla releases). However, by 2004 that decision had produced Firefox, and we all know how that turned out.

I’m not advocating that everyone throw out their code; it killed Netscape, and it’s going to cost Microsoft more money than they’d care to admit. But if you have the resources to do it, I think it works.

Final thought – when describing Netscape 6, Joel Spolsky said the rewrite decision cost Netscape 3 years.  Is it ironic that that’s also how much time it cost Microsoft? ;)
