

There has been a lot of change in the OCaml -> JS landscape over the last couple of years, and I regularly see questions about what all the names mean. This is my attempt to sort out the world as I know it.

As you navigate the world of compiling OCaml to JS, most if not all of the following will come up at some point:

  • OCaml itself
  • js_of_ocaml (JSOO)
  • Bucklescript
  • Reason (ReasonML)

That’s a lot to remember, but each works in its own similar yet completely distinct way. They can also be mixed and matched as projects see fit. I’m going to use diagrams to attempt to explain all of this and hopefully provide some clarity.

With apologies to those who understand how over-simplified this is, here is a rough overview of how compilers work, specifically in the case of OCaml.

[Diagram: The OCaml compiler]

Code flows from top to bottom in three distinct phases:

  1. Syntax is read into an Abstract Syntax Tree
  2. Checks are done to confirm the AST has valid semantics in the OCaml language
    • For example, type checking
    • Optimisation is performed at this level too
  3. The AST is written out to a machine executable
    • OCaml supports both native output, like a C compiler, and platform-independent bytecode output, like a Java compiler (although not JVM bytecode – OCaml has its own VM). A minimal sketch follows this list.
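
To make step 3 concrete, here’s a minimal sketch. The file and output names are purely illustrative; the point is that the same source can be handed to either backend.

    (* greet.ml – the same source feeds either backend *)
    let () = print_endline "Hello from OCaml"

    (* Illustrative invocations:
         ocamlc greet.ml -o greet.byte    – platform-independent bytecode
         ocamlopt greet.ml -o greet       – a native executable
       Parsing and type checking are identical in both cases; only the
       final output phase differs. *)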

Are you with me so far? Good. Next we’re looking at the development that happened in 2011 with the creation of js_of_ocaml.

[Diagram: js_of_ocaml runs after the compiler]

This is very simple, on the surface of it. Leave the compiler alone and machine-translate the bytecode to JavaScript (the classic workflow is sketched just after the list below). This makes JSOO able to handle literally any pure OCaml code, including the diverse ecosystem of libraries, but there are two major downsides:

  • It can be a bit slow; JSOO is effectively a second compiler
  • The resulting JS is mostly unreadable machine-generated code (since compiled bytecode is more or less what JSOO had to start with). Source maps help here, but they aren’t a silver bullet.
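
To make that concrete, here’s a rough sketch of the classic workflow (file names are illustrative): compile to bytecode exactly as you normally would, then hand the bytecode to the js_of_ocaml tool.

    (* main.ml – ordinary, pure OCaml; nothing browser-specific *)
    let rec fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)
    let () = Printf.printf "fib 10 = %d\n" (fib 10)

    (* Illustrative workflow:
         ocamlc main.ml -o main.byte     – normal bytecode compilation
         js_of_ocaml main.byte           – translates the bytecode into main.js
       The resulting main.js runs fine, but it reads like compiled output
       rather than like the source above. *)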

In early 2016, Bloomberg open-sourced their answer to this problem. Instead of treating the compiler as a black box and working with the result, they dug in and replaced the output phase with Bucklescript.

[Diagram: Bucklescript replaces the compiler backend]

This no doubt took a lot of effort to achieve, but the result has some unquestionable benefits:

  • By working with the compiler internals, their output retains the structure of the original code
  • Raw JavaScript types are used (e.g. OCaml strings are JS strings), producing clean, readable JavaScript – at least until advanced types are used (here’s the mapping). A small sketch follows this list.
  • Most, if not all, of the compiler’s speed is retained
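
To give a feel for it, here’s a tiny OCaml module and, in a comment, the rough shape of the JavaScript Bucklescript aims to produce for it. The module name and the JS are illustrative only, not verbatim compiler output.

    (* demo.ml *)
    let greet name = "Hello, " ^ name ^ "!"
    let add x y = x + y

    (* Roughly the shape of the emitted JS (illustrative, not verbatim):

         function greet(name) {
           return "Hello, " + name + "!";
         }
         function add(x, y) {
           return x + y | 0;
         }

       Strings stay plain JS strings and ints stay JS numbers, so simple
       code keeps the structure of the source. *)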

Really the only downside I can see is that, by working so deep in the compiler, they have to chase new compiler releases (as I write this it is still based on the OCaml compiler released in July 2015). Not that this is a particularly bad thing, as the OCaml compiler is fairly stable – and the issue has since been further mitigated, as we’ll see in a moment.

A few months after Bloomberg opened their project to the world, Facebook came along and announced Reason. I watched on with great interest as OCaml seemed poised to win the hearts and minds of JavaScript developers.

[Diagram: Reason replaces the compiler frontend]

Things are moving a lot faster at this point. Because Reason is a 1:1 syntax mapping to OCaml, they’re able to keep up with compiler releases quite easily. This allows them to focus on what they really want to bring to the JavaScript ecosystem: reliable and easy-to-use tooling that leverages the power of OCaml to tempt JS programmers with mostly familiar syntax. And thanks to leaving the rest of the compiler untouched, the same familiar syntax can compile to native binaries that perform better than JS could ever dream of.
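
As an illustration of that 1:1 mapping, here’s the same small function in both syntaxes. The Reason version is shown in a comment and is approximate, not checked against any particular Reason release.

    (* OCaml syntax *)
    let rec factorial n =
      if n <= 1 then 1 else n * factorial (n - 1)

    (* Approximately the same function in Reason syntax:

         let rec factorial = n =>
           n <= 1 ? 1 : n * factorial(n - 1);

       Only the surface syntax differs; both parse to the same AST, so
       everything after the frontend is shared. *)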

But here is where the real fun begins. If you compare the Reason diagram to the earlier diagrams, you’ll begin to wonder whether it could be combined with either js_of_ocaml or Bucklescript. And that’s exactly what happened. When I first looked at this, efforts were under way to support both as ways to compile Reason source code to JS. That’s still possible, but like all good communities, this one is converging on a single recommended approach, and the tools are moving in that direction.

In the last few months the community has settled on a solution that is very fast and produces output JS that can be very similar to the input Reason code. Native compilation and tooling still use the Reason compiler, but Bucklescript has added first-class support for Reason syntax directly to its compiler.

[Diagram: Reason and Bucklescript combined]

By their powers combined…

Wow. Check out where we are now.

  • The syntax is familiar to JavaScript developers, and there is a groundswell of effort building to bring React developers in particular on board.
  • The full power of OCaml is available. I won’t sit here and argue it’s the best type system around – I freely admit it has flaws – but OCaml still has tangible benefits over JavaScript. The choice of a battle-tested compiler with a mature and diverse ecosystem is hard to pass up.
  • The output JavaScript is clean, readable, and easy enough to follow that you could check it into source control for team members who don’t want to learn Reason yet. I have literally seen recommendations to do this.

Three years ago, I announced my belief that OCaml was the next big thing for JavaScript. Thanks to Reason and Bucklescript, that future looks to be fast approaching. It’s an exciting time to be a JavaScript developer.

Come join the fun in the ReasonML Discord!


My post describing why I was interested in OCaml was originally intended as an internal document at my office arguing that we should invest in an AltJS project. The failure of that effort was unfortunate, but the post opened up a gold mine of support. Not just once, but twice (we’ll get to that in a moment).

I saw confusion from people who hadn’t heard of OCaml, and some detractors, but plenty of people defending it. It was a fairly normal programming language discussion, to be honest, which struck me as impressive for a language that wasn’t mainstream – and it vindicated my choice.

Writing up my thoughts on this would’ve been a good idea three years ago, and certainly two years ago after the second traffic bump, but despite the probable lack of value in these links now, I still want to get these thoughts on record.

So let’s start with the morning after the post went up (due to time zones most of the discussion happened while I was asleep). I saw WordPress notifications that my stats were booming, with most referrals pointing to this reddit thread. A coworker captured a screenshot of the Hacker News front page for me, with discussion happening in this Hacker News thread. More than 15,000 views on that single day.

[Screenshot: the Hacker News front page]

That was unbelievably cool. I found a discussion on Lobsters, Erik Meijer tweeted about it, and I enjoyed my 15 minutes of fame, participating in what remained of the discussion as best I could.

But almost a year later, in February 2015, it happened again:

[Screenshot: blog view stats]

Thanks to someone else posting it to Hacker News:

[Screenshot: the Hacker News front page]

The total stats are what I really want to highlight. 18,309 views in March 2014, plus another 15,101 in February 2015, gave this post great exposure. It has legs, too, continuing to drive 100–200 views a month.

And yet I’ve been silent since then. Looking back at the two Hacker News threads now, I don’t think I even read the 2015 comments in great detail. The failure of my efforts in 2014 pretty much led to burnout on all coding outside of work hours. I didn’t even celebrate the 10th anniversary of creating this blog (November 2015). It was all just… nothing.

I was very excited about all of this in 2014. Thanks to some prompting on Twitter from Jordan Walke, who gave me a really interesting gist link to the OCaml-based API that he was hoping to use for React, I embarked on creating a good-quality OCaml–JS wrapper – one that stayed pure OCaml as much as possible, unlike later efforts. I have kept it private all this time due to the embarrassing lack of progress since burning out, but it’s relevant now, so here we go.

https://bitbucket.org/TheSpyder/reactjs

Note the distinct lack of commits after about a month. I am very proud, however, that the MyComponent file makes no reference to js_of_ocaml, and the example code only uses it to pass a DOM element to the render function. This is how web coding should be, and I’m very happy to see that it’s the direction we’re heading.

So why am I surfacing now? In a parallel to last time, it’s because discussions at work have finally returned to AltJS. I have a new post brewing in my head about the current state of OCaml and JS, but I wanted to get this one out first. I’m doing it for my own record and for posterity, but feel free to get the discussions rolling again 😀

OCaml first hit my radar in November 2013. I had just learnt SML, a similar but older language, in the excellent Programming Languages Coursera course. Dan Grossman is one of the best lecturers I’ve ever seen; his explanations hit all the right notes and made learning easy. The simplicity of the SML syntax, and the power of the language to produce code that is readable with minimal training, immediately appealed to me.

Over the last 3 years I have tried, and failed, to learn Haskell. The combination of minimalist syntax, pure functional programming style and lazy evaluation is like a 3-hit sucker punch that is very hard to grasp all at once. Having now learnt SML and OCaml, which, like Haskell, have their roots in the ML family, that has changed. I have yet to put any more effort into learning Haskell, but it is now clear to me that the syntax is only a small leap from ML and the pure functional style has similarities to SML.

I still don’t want to write production code in Haskell, but the fact that I find it less scary than I used to indicates I have made a significant jump in my knowledge and, arguably, career in the last 6 months.

Dynamic typing

Before I go any further, I need fans of dynamic typing to exit the room. My 12 years in the industry have set my camp firmly on the static typing side of the fence, and discussions about static vs dynamic will not be productive or welcome here.

So, why OCaml?

Smarter people than me have written about this, but I’ll give it a shot.

I have found OCaml to be a refreshing change of pace. Most of my favourite things are derived from the ML base language: variants, records, and pattern matching combine to create elegantly powerful code that is still easy to follow (unlike most Haskell code I’ve seen).
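
A minimal sketch of what I mean (the type and function names are made up purely for illustration):

    (* A variant, a record, and a pattern match working together *)
    type shape =
      | Circle of float              (* radius *)
      | Rectangle of float * float   (* width, height *)

    type labelled = { name : string; shape : shape }

    let pi = 4.0 *. atan 1.0

    let area { shape; _ } =
      match shape with
      | Circle r -> pi *. r *. r
      | Rectangle (w, h) -> w *. h

    let describe l = Printf.sprintf "%s: %.2f" l.name (area l)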

OCaml takes the expression-based ML style and incorporates enough imperative features to make it comfortable for someone learning Functional Programming. Don’t know how to use recursion to solve a problem? Drop into a for loop in the middle of your expression. Need some debug output? Add it right there, with a semicolon to sequence the expressions.
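
For example (a hypothetical little function, just to show the style):

    (* Expression-oriented code with an imperative escape hatch *)
    let sum_of_squares n =
      let total = ref 0 in
      for i = 1 to n do
        total := !total + (i * i)
      done;
      Printf.printf "sum_of_squares %d = %d\n" n !total;  (* quick debug output *)
      !total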

Throw in almost-perfect static type inference, a compiler that gives useful error messages, and immutable-by-default variables, and I just can’t get enough. I won’t sit here and list every feature of the language, but hopefully that piques your interest as much as it did mine 😉

Industry acceptance

There is always an element of “I have a hammer, so everything looks like a nail” when learning a new language, but the evidence that OCaml is becoming more widely accepted is not hard to find.

In the middle of February, Thomas Leonard’s OCaml: what you gain post made waves; the reddit and Hacker News discussions are fascinating. A lot of people using OCaml in the industry came out of the woodwork for that one. I’m still working my way through the series of 11 posts Thomas made, dating back to June 2013, about his process of converting a large Python codebase to OCaml.

Facebook have a fairly extensive OCaml codebase (more details below).

It doesn’t take much googling to find presentations by Skydeck in 2010 (they wrote ocamljs, the first OCaml-to-JS compiler) or a 2006 talk describing why OCaml is worth learning after Haskell.

OCamlPro appear to be seeing good business out of OCaml, and they have an excellent browser-based OCaml tutorial (developed using, of course, js_of_ocaml).

No list of OCaml developers would be complete without mentioning the immense amount of code at Jane Street.

There are plenty of other success stories.

The elephant in the room

The first question I usually get when I tell a Functional Programming guru that I’m learning OCaml is “Why not Haskell?”. It’s a fair enough question. Haskell can do a ton more than OCaml can, and there are only one or two things OCaml can do that Haskell can’t (I don’t know the details exactly; I would have guessed zero). I see a lot of references to OCaml being a gateway drug for Haskell.

The answer is JavaScript. As much as I hate the language, JS is the only realistic way to write web apps. Among the many and varied AltJS languages, both OCaml and Haskell can be compiled to JavaScript, but the Haskell compilers aren’t mature enough yet (and I’m not convinced lazy evaluation in JavaScript will perform well).

In fact, a bit of study has suggested OCaml may have the most mature AltJS compiler of all, by virtue of its support for existing OCaml libraries.

JavaScript

Late last year I started hearing about OCaml at Facebook. Their pfff tool, which is a serious OCaml codebase all by itself, is already open source – but there was talk of an even larger project using js_of_ocaml (the link seems to be offline; try the video). That presentation by Julien Verlaguet is almost identical to the one he gave at YOW! 2013 and it really grabbed my attention. (Hopefully the YOW! video is online soon, as it’ll be better quality.)

To cut a long story short, Facebook created a new language (Hack, a statically typed PHP variant) and wrote the compiler in OCaml. They then used js_of_ocaml to compile their entire type checker into JavaScript, as the basis of a web IDE (at 19 minutes in the video) along the lines of Cloud9. Because OCaml is used for everything, this IDE has client-side code completion and error checking. It’s pretty amazing.

Maturity of tools and js_of_ocaml

The more I dive into OCaml, and specifically js_of_ocaml, the more it amazes me that the tools and documentation reached production-ready maturity just as I needed them.

  • The package manager OPAM is now a little over 12 months old, and every library I’ve looked at is available on it. Wide community acceptance of a good package manager is a huge plus.

  • The Real World OCaml book was released in November and is an excellent read. The book is so close to the cutting edge that the authors had features added to September’s 4.01.0 compiler release 🙂

  • OCaml Labs has been around for 12 months, and they’re helping to move the OCaml community forward into practical applications (see the 2013 summary).

  • Ocsigen are investing heavily in js_of_ocaml (among other things); the next release includes an improved optimiser (I can attest to the fact that it’s awesome) and support for FRP through the React library (the OCaml library of that name, not Facebook’s React).

Moving forward

Is it perfect? No. Software development is not a one-size-fits-all industry. There are as many articles cursing the limitations of OCaml as there are singing its praises. But in the current market, and with the size of JavaScript applications we are starting to generate, I believe OCaml has a bright future.

… or at least, a quote good enough to make me come out of hiding before I’ve explained why things have been so quiet around here.

John Gruber on the iPhone IDE debacle:

If you are constitutionally opposed to developing for a platform where you’re expected to follow the advice of the platform vendor, the iPhone OS is not the platform for you. It never was. It never will be.

To all the people whinging about this decision by Apple, go away. You can have your fun on Android or some other platform that supports your open development philosophy. If by some fluke Apple wind up with such a massive majority that you’re forced to come back because all the users are here, don’t expect any sympathy from us. It will have happened because Apple’s restrictions resulted in the most consistent mobile OS experience, and users decided that’s what they want.

iPhone is a closed system, and in my opinion the overall quality of the apps available is better for it. Not that the app store is full of fantastic quality at the moment – you really need an iPod or iPhone to appreciate this, but the store has an amazing amount of crap already.

However, I can see the app store really going down the toilet if they let “meta-platform” apps (as Gruber calls them) onto the store. Just look at what happens when people develop cross-platform apps for PC: you either target one primary OS and optimise your UI for it at the expense of the others, or target a general use case and suffer for having a non-native UI. Yes, there are exceptions, but they are rare, and most of them spend stupid amounts of time implementing multiple native UIs in their cross-platform code.

Gruber has a specific example of this:

Consider, for one example, Amazon’s Kindle clients for iPhone OS and Mac OS X. The iPhone OS Kindle app is excellent, a worthy rival in terms of experience to Apple’s own iBooks. The Mac Kindle app is a turd that doesn’t look, feel, or behave like a real Mac app. The iPhone OS Kindle app is a native iPhone app, written in Cocoa Touch. The Mac Kindle app was produced using the cross-platform Qt toolkit.

Native apps are always better; I don’t use OpenOffice, more because the UI pisses me off than because iWork is cheap enough that I don’t mind paying for it. Windows is the same (I can’t stand Apple’s apps ported to Windows with Mac-style keyboard shortcuts). Once you allow cross-platform UIs to enter your computing world, life just isn’t as much fun anymore.

And I want my iPhone to be fun.

[update: A related article with an appropriate quote, this time from MacWorld].

… the develop-once-run-anywhere philosophy is something that makes more sense to bean counters and development-environment vendors than it does to platform owners and discriminating users. In the ’90s we were told that Java apps would be the future of software, because you could write them once and deploy them anywhere. As someone who used to use a Java-based Mac app on an almost daily basis, let me tell you: it was a disaster. Java apps didn’t behave like Mac apps.

Close to 18 months ago, when I first started seriously using that old Mac laptop, I decided I needed a way to easily transfer my speakers between the desktop games machine and the Mac that I used for everything else. One of my mates at work had an Audigy 2 NX, and after borrowing it for a day to make sure it worked on Macs, I decided to get one. It wasn’t until I had it that I realised the Mac was only giving me 2 channels instead of 5.1 😦

I shrugged and chalked this up to the built-in Mac drivers; it was fine under Windows with the official Creative drivers.

And so it was that when I upgraded to the Mac mini, and again with this second mini, I was stuck with a sound card that wasn’t giving me surround. Most of the time this doesn’t concern me, as I usually only listen to stereo sources, but I’d never even considered that it might work (the few references I could find to this device on the net had it only working in stereo on the Mac).

Until tonight.

While doing some research for a friend who was interested in USB sound cards, I saw a product review stating that the Zalman USB card does work on Macs in full 5.1 surround mode. This piqued my interest, so I went searching and stumbled on a forum post listing working sound cards. Right there at the top is the Zalman card, but hang on, what’s that sitting at the bottom under supported 7.1 cards? Why, it’s my damn Audigy 2 NX! WTF!

I immediately (and stupidly) installed the package attached to that post, but thankfully I read a bit further down before rebooting and realised I didn’t need to. This was a good idea, because the package is from somewhere around 10.4 and I would almost certainly have been left trying to do a restore from backup. I’ve reverted the kext files that the package installed; hopefully my Mac doesn’t die when I reboot it after posting this.

In any case, the answer is Audio MIDI Setup! A program that had always sat in the Utilities folder looking summarily useless turns out to be a hidden gem that Apple really needs to make more obvious. For those who will no doubt arrive here from Google one day, here’s how to enable 5.1 surround sound on a USB sound card:

  1. Select your sound card under the Properties For: dropdown
  2. Select the number of channels under the audio output format
  3. Click Configure Speakers
  4. Select Multichannel
  5. Select the correct number of speakers from the dropdown (only the valid one should be enabled)
  6. You can now assign channels to each speaker; I’m pretty sure the numbers I used are correct, although 3/4 and 5/6 might be in the wrong order

Here’s a couple of screenshots with number highlights to make it clear:
[Screenshot: Audio MIDI Setup]
[Screenshot: Audio MIDI Speaker Setup]

Maybe it’s just this sound card, but that’s a ridiculous amount of setup to get 5.1 surround sound working (and I haven’t actually tested whether DVDs will play correctly, only some 6-channel test WAVs I found). Wish me luck! 😉

On the plus side, if this does work I will no longer have to worry about surround sound output from my media centre when I buy proper home theatre speakers (the Audigy has optical and SPDIF out). I had been concerned that I would be stuck with stereo output from my Mac forever!

The new Google mobile ActiveSync is working great for my calendars. Syncing iCal to Google was pretty easy: I exported my 3 local calendars, cleared out the main Google calendar & created 2 new ones (naming my primary calendar “work” thanks to the stupid Outlook plugin), subscribed to all 3 via CalDAV with Calaboration, and then imported the data. No worries at all. I can create an event in iCal and 10 seconds later it appears on my phone 😀

There was a bit of confusion and duplication after syncing my Outlook calendar at work to Google (did I mention the plugin’s main-calendar-only restriction is REALLY annoying? How about its complete inability to detect duplicates?), but that was pretty easy to clear up.

What I haven’t done is turn on address book syncing with the phone. As I suspected and others have confirmed, turning on ActiveSync for contacts & calendar stops iTunes from doing any sync work with them for the iPhone – which, since iTunes initiates the contact sync to Google, means that contacts are no longer synced to my desktop.

Both forum posts I’ve just linked to have suggested fixes (particularly if you expand them beyond the accepted answer), but personally I can see three options for syncing my contacts:

  • Resurrect the fancy iSync scheduling that I haven’t used since switching to the iPhone (I still use the scheduler, just for some Address Book hackery instead of activating the iSync menu)
  • Don’t drop Plaxo completely as I had planned, but use it to sync between Google’s contacts and Address Book
  • Leave over-the-air contact sync disabled and continue with iTunes to Google contact sync

So far option 3 sounds the easiest to me. I don’t need to sync my contacts more than once a day (which is how often I sync with iTunes), over-the-air sync wouldn’t give me all of my contact numbers on the phone anyway, and this way I can completely disconnect from Plaxo.

Not that Plaxo is bad – I’ve really enjoyed the service, including a far better Outlook calendar sync platform than Google’s Outlook plugin provides – but ever since I switched my email to Google I have only used it for the Outlook sync (hotmail contact sync is enabled but I don’t need it anymore). It just doesn’t make sense to continue using it in light of Google’s improved Mac/iPhone sync options.

I’m in the middle of another post and was distracted reading some RSS feeds when I came across this gem. Google now supports ActiveSync for contacts and calendar 😀 😀

I already sync my contacts and calendar to Google, but they’re only updated on the phone when I sync with iTunes. Now I get to experience the joy that is push calendaring!
