Ranting about Stack Overflow developer stories

I posted this on twitter in September last year, but while twitter still hobbles along, my feed is becoming a ghost town. I wanted somewhere a little more long-form to rant about this, and Mastodon doesn’t seem like the best place for it.

Almost exactly 12 months ago, Stack Overflow announced that both their Jobs and Developer Story features would be taken offline a couple of months later. I don’t have a problem with this decision; business is business, and under new management it’s no surprise things change.

What I find completely unacceptable is that they didn’t tell me. They have my email; I’ve used the site almost since its inception, I just haven’t used it very often in the last 5ish years. I certainly didn’t visit my developer story regularly enough to know, within a 2-month window, that I needed to rush to download my data or it would be deleted.

And so it was that 6 months after the deadline I went to make a small adjustment to my CV and found it was gone, with no recourse. The Wayback Machine had been prevented from making a copy of it, and since nobody would voluntarily crawl over 4 million developer CVs, the services that helped export the data only did so on demand. It’s just gone.

The lesson is obvious, I’m sure: always keep an offsite backup of any data you care about. And I do, in every area of my life other than this blog; I think I just assumed my CV would always be tied to my Stack Overflow account, and that if Stack Overflow was ever retired I’d certainly hear about it. I never expected them to delete one feature, and all the data associated with it, while leaving everything else intact.

My only saving grace is that, while I don’t generally need to look for work (I was just keeping my CV up to date as my career progressed), I did apply for a job in June 2021 and emailed them a PDF export of it. Or at least I thought I did; I didn’t check back in September, and now it looks like all I sent them was a link. Asking whether they still have it is going to be an embarrassing conversation.

I don’t know if I can ever be “done” with Stack Overflow, because it’s such a useful tool and we’re all a captive audience. But I certainly won’t be participating in questions and answers there anytime soon. And I should probably look into doing something I’ve been avoiding for over 17 years: buying a domain and self (or paid) hosting a website that includes this blog and my CV. It would certainly be a good way to get off the garbage Gutenberg editor that I’m forced to use here, which nearly lost this entire post due to some sort of idle login timeout issue.

[edit]
The person I applied to no longer works for the company, and their Operations Director couldn’t find my CV on file, but an hour later their Engineering Director found a print-to-PDF copy of the “story” view (which is not as nice to print as the “CV” view, but does include all the links etc).

Today has been a roller coaster. I need a nap. I’m just glad I don’t actually need to trawl through 20 years of memories to come up with something coherent.

Expanding a Tiny(mce) home

For an open source project as old and well-known as TinyMCE the primary repository is more than just a collection of code; it’s home base, the first place to look for help and the foundation for everyone who forks or contributes to the project.

So you can imagine that making perhaps the biggest change this repository has ever seen was quite daunting. What started as a simple proposal became an evolving background task that took 5 months before I was confident it was stable.

As TinyMCE transitioned to a modern codebase through 2017 and 2018, many external library dependencies were added from previously closed source projects. This multi-repository development style worked well enough in small teams, but as we grew the team on TinyMCE 5 we hit the pain points harder and harder, slowing the team down significantly. It’s a common story in “I switched to a monorepo” blog posts.

As TinyMCE 5 wound through beta and release candidate at the end of 2018, I decided enough was enough. In consultation with Spocke, the decision was made to bring our 22 library projects together alongside TinyMCE in a monorepo. I volunteered to take this on as a pet project, expecting the scope of changes in TinyMCE 5 to take priority and this to be a long, slow burn.

I don’t want to create another tedious “how to monorepo” article, but I do want to give a high-level overview of why and how we did it for posterity and conversation.

Based on my expectation of delays I started with the best decision of the entire project: I did not build the monorepo by hand. I used a script to do the heavy lifting and import both the existing code and templates for new files. This script could rebuild my monorepo fork, based on the master branch of TinyMCE and 22 source repositories, in just 2 minutes. This gave me the freedom to progress, experiment and iterate in my own little sandbox while also keeping it up to date.
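
To give a flavour, the driver looked something like this (the repository list and the import_repo helper are illustrative placeholders, not the real script):

  #!/bin/sh
  # Rebuild the monorepo fork from scratch: a fresh TinyMCE master,
  # then import each library repository into modules/.
  set -e
  rm -rf build
  git clone https://github.com/tinymce/tinymce.git build
  cd build
  for name in katamari sugar boulder alloy; do   # ...and the rest of the 22
    import_repo "$name"                          # the merge technique shown later
  done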

Early on we decided to convert the TinyMCE repository rather than start a new one. The popularity of the project meant we could not break the master branch; the whole monorepo update for our developers had to come down with a single git pull. Next, we settled on Lerna as the basis for our monorepo. It is fairly well known and seemed to be strongly recommended.

Side note: the decision to use modules instead of the Lerna default packages is a whole other tale, one I tried to cover in the TinyMCE readme. It’s quite possible we will eventually drop Lerna; Yarn workspaces give us most of the benefits I was looking for, and there are definitely stories of people outgrowing Lerna. But for now it’s working well.

I quickly abandoned Lerna’s low-level management functions and settled on Yarn workspaces instead, but Lerna’s help with publishing independently versioned modules is essential, as we wished to continue publishing the libraries even after their source code was merged into the monorepo.

The base setup of my monorepo script was fairly simple:

  • yarn config set workspaces-experimental true
  • mkdir -p modules/tinymce && mv .gitignore * modules/tinymce
  • create a new .gitignore
  • lerna init
  • update lerna.json:
    • "version": "independent",
      "npmClient": "yarn",
      "useWorkspaces": true
  • update the default package.json created by lerna:
    • "workspaces": ["modules/*"]

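Put together, the bootstrap phase of the script was roughly this (file contents abbreviated):

  #!/bin/sh
  # Bootstrap the workspace layout before any repositories are imported.
  yarn config set workspaces-experimental true
  mkdir -p modules/tinymce && mv .gitignore * modules/tinymce
  echo "node_modules" > .gitignore     # plus the other usual suspects
  lerna init
  # then edit lerna.json ("version": "independent", "npmClient": "yarn",
  # "useWorkspaces": true) and package.json ("workspaces": ["modules/*"])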

From there I needed to start adding the package repositories into the monorepo. There are many and varied resources to help with creating a monorepo, but the conversion I had planned was more complicated than most of the articles I found on the subject covered.

My initial attempt followed the simplest possible approach, git subtree:
git subtree add -P modules/name ../name master

This retained the commit history, but the individual commits no longer showed file diffs (I guess due to the file location changes?). I’m not sure what the use case for this command is.

My second attempt was with Lerna’s import command. This filters every commit, making it very, very slow (8867 commits across 22 repositories), and the resulting git history structure did not impress me.

After digging through more articles, and finding the right words to search for, I came across a detailed approach to merging git repositories without losing file history by Eric Lee (via his Stack Overflow post).

This technique checks out the source repository master, moves all files into a subfolder, then merges that into the monorepo master. SeemsGood. I only needed to make small adjustments to this process; I can post the full script if there are requests, but most of the details are specific to TinyMCE and our internal git server.
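
The shape of it, as a minimal sketch (repository and folder names are placeholders, and it assumes no file names contain spaces):

  #!/bin/sh
  # In the library repository: move everything into its future home, so git
  # records the renames and `git log --follow` keeps per-file history.
  cd ../name
  git checkout master
  mkdir -p modules/name
  for f in $(git ls-files | cut -d/ -f1 | sort -u); do
    git mv "$f" modules/name/
  done
  git commit -m "Move name into modules/name"

  # In the monorepo: merge the rewritten library in as unrelated history.
  cd ../monorepo
  git remote add -f name ../name
  git merge --allow-unrelated-histories name/master
  git remote remove name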

Once the repositories are imported my script uses sed and a few other tricks to adjust common tooling so it works inside the monorepo:

  • Move all devDependencies to the monorepo root to avoid diverging versions
  • Switch all packages to TypeScript project references and build mode (see the sketch after this list)
  • Switch all packages to LGPL, matching TinyMCE (most of the source projects were Apache 2.0 licensed)
  • ./node_modules/.bin/cmd and npx cmd no longer work
    • This was a simple fix, we now use yarn cmd
  • load-grunt-tasks found no tasks to load, because it required the task modules to be in a local node_modules but the monorepo moved all of those to the root. So I had to get creative:
    • require('load-grunt-tasks')(grunt, {
        requireResolution: true,        // resolve through Node's module resolution instead of scanning a local node_modules
        config: "../../package.json",   // read the task list from the monorepo root package.json
        pattern: ['grunt-*', 'rollup']
      });
  • grunt.loadNpmTasks() also found no tasks to load
    • This was deleted, replaced by pattern additions to the load-grunt-tasks config.
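
On the project references point above: builds are now driven by TypeScript’s build mode, which compiles dependent packages in order and incrementally. Roughly (assuming typescript lives in the root devDependencies):

  # Each package's tsconfig.json declares its dependencies in a "references"
  # array; tsc -b walks that graph and rebuilds only what changed.
  yarn tsc -b modules/tinymce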

From here I made constant tweaks to the import script and began finding issues I could fix in the source repositories instead of patching with my script:

  • Repositories with CRLF in their files (TinyMCE standardises on LF). The way the script configures git merge, it was performing renames plus line-ending conversions but only committing the rename. This did not happen often, so it was easy to fix when it came up.
  • In late 2018 we built a more modern API for our AJAX library, jax, but only deployed it to premium plugins. We decided that rather than standardise the monorepo on the old API, we would completely replace the open source jax with this new code.
  • I took the opportunity to use BFG Repo-Cleaner to strip binary history out of two projects before they were imported. This brought the monorepo git pull size down from 23MB to 11MB (see the sketch after this list).
  • We use webpack a lot in development, and for a long time have relied on awesome-typescript-loader. We are big fans of atl, and we still use it for the demo environment, but testing in the monorepo was just too slow without support for TypeScript project references. So we switched testing to ts-loader, which does support them, via a seemingly innocuous commit message 😉
  • The way our test framework imported tests, when applied to the whole monorepo, produced a 19MB JavaScript file and a fun TypeScript error:
    TS2563: The containing function or module body is too large for control flow analysis.
    This one turned out to be easy to fix and sped up all projects, so in it went to the open source project months ago.
  • I sneakily deployed a monorepo-specific change to resource loading in tests. The test framework added resource aliasing for yarn workspaces, with an otherwise pointless local alias; I then switched the TinyMCE tests to use that local alias, allowing all tests to run inside the monorepo without patching from the script.
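
For reference, the BFG run mentioned above was along these lines (the size threshold and repository path are assumptions):

  # Strip large blobs from the library's history, then expire and repack.
  java -jar bfg.jar --strip-blobs-bigger-than 1M ../name.git
  cd ../name.git
  git reflog expire --expire=now --all
  git gc --prune=now --aggressive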

I poked and prodded at this on and off for a couple of months until I had my monorepo dev environment working, the tests all passed, and I could start thinking about versioning / publishing. I’m the sort of person who doesn’t totally trust documentation; I want to mess around and explore the commands to see what they do.

This required extreme caution. I had imported live NPM module source code, so a rogue npm publish could have had dire consequences. After spending some time logged out of NPM to be safe, I hit upon Lerna’s support for the publishConfig setting, which let me constrain all publishing to a specific NPM registry.

I used a local Sonatype Nexus Docker container along with a fork of TinyMCE to give me complete freedom to publish builds and explore the publish styles Lerna offers while playing in my sandbox.
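
The sandbox boiled down to something like this (the image name and registry URL are assumptions):

  # A throwaway local registry to publish against.
  docker run -d --name nexus -p 8081:8081 sonatype/nexus3
  # With each package.json then carrying something like
  #   "publishConfig": { "registry": "http://localhost:8081/repository/npm-private/" }
  # a stray publish physically cannot reach the real npm registry.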

We publish to NPM from CI very regularly, so at first glance from-package seemed like the best path forward to match how we had been developing. After some discussion we decided to switch to automated lerna publish patch. Independent versioning on 22 packages will potentially create hundreds of version tags a year, but we love automation, and cleaning up a tag mess can be scripted. We do still use from-package to account for manually updated minor and major versions, but we are hoping to explore conventional commits to later automate the entire release process.
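
In practice, CI now runs something like the following (flags as per the Lerna documentation of the time):

  # Bump every changed package by a patch version, tag, and publish.
  lerna publish patch --yes
  # Publish anything whose package.json version is ahead of the registry
  # (this covers the manually bumped minor and major versions).
  lerna publish from-package --yes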

As a final step I leveraged lerna changed to create a new CI script that only runs tests on changed packages. This reduces CI build times and further improves the iteration speed of developing in the monorepo.
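
A stripped-down sketch of that CI script (the per-package test command is an assumption):

  #!/bin/sh
  # Run tests only in the packages that changed since the last release tag.
  set -e
  for pkg in $(lerna changed --all --parseable); do
    (cd "$pkg" && yarn test)
  done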

Five months in the planning and after a week of internal testing, nearly 9000 new commits and the TinyMCE monorepo are now live. I’ve had a lot of fun building it, and I hope it works as well for our contributors as it already has for our own development.

A big TinyMCE shakeup

Today marked the release of TinyMCE 5 and the launch of our shakeup of the project, with the hope of seeing it modernise and move forward into the future. This shakeup, from my perspective, comes in four parts.

People

The first big change is that version 5 was developed under the stewardship of the Editor Platform team in Brisbane, Australia, where I am the Technical Lead. Our developers in Sweden are still involved – TinyMCE wouldn’t be TinyMCE without them – but the team assigned to TinyMCE 5 was triple the size we had previously (and expanded to 4x at one point during the project). Everyone is excited about the things this makes possible.

FP

The second big change is the switch to a more rigorous Functional Programming style of development, not only in the code but also in the project structure. We have many small self-contained libraries that we now compose together, particularly in the new silver theme.

The team in Brisbane has been working in this way for a number of years while developing our Textbox.io editor technology; moving onto TinyMCE, we have begun to apply those techniques there as well. I could (and should) do an entire post on the details, but if you browse the codebase and see imports from weird-sounding libraries like alloy, sugar and boulder, that’s us.

The libraries we built had been making their way into TinyMCE in bits and pieces over the last year or two, but their use went into full effect with this release.

UI

That brings me to the third big change and headline feature of TinyMCE 5: the new silver theme and oxide skin. The entire src/ui folder has been deleted, along with the modern theme, and we started fresh. With the blessing and help of TinyMCE lead developer Johan “spocke” Sörlin, we built a new API designed to be modern, flexible, and most importantly abstracted away from the DOM. Taking inspiration from virtual DOM-style libraries, this structure serves two high-level goals:

  1. Custom dialogs fit into the new TinyMCE style without any effort, including the multitude of UI variants the new skin makes possible.
  2. Implementation details of components can be changed without breaking the API. There was a very clear goal not to replicate the “everything and the kitchen sink” approach of previous TinyMCE interfaces.

This breaking API change is the one that will be most obvious for developers migrating to TinyMCE 5, and also the most contentious. We have already received feedback during the pre-release cycle that the new API does not replicate v4 functionality in areas that are important to our developers. We hope to address these needs in the near future but we will be doing our best to not compromise either goal.

Project Structure

The final change I’d like to talk about is one that we are working on but isn’t yet ready. In the last few years it has become clear to the Editor Platform team that while our split library approach allowed for excellent separation of concerns, it was frustrating to develop with when changes were required across multiple libraries. This manifested in a significant way during TinyMCE 5 development, and it’s a common story as the number of libraries in a project expands; the common answer to these problems is a monorepo. It isn’t a simple change, although it appears to be easier than it used to be; I’m keeping notes and plan to do a detailed write-up of the process.

So the shakeup still in progress is that at some point soon the TinyMCE repository will most likely be converted to a monorepo – everything there today will move into a subfolder, and the entire 7000-commit history of the libraries used in TinyMCE 5 will be merged into the master branch under the same subfolder. How that looks and when it happens is yet to be determined, but initial signs are that it will ease these pain points for us.

Takeaways

I hope this change in direction for TinyMCE is well received by the community. We’re trying something new, initial feedback on version 5 has been positive, and I’m confident the changes are for the best. We are listening, however, so if you think we’ve done something wrong, or not well enough, feedback is greatly appreciated.

A fun morning on twitter

I’ve had a John Carmack quote at the top of my CV pretty much since I started taking it seriously (which was actually after I got my first job with Ephox, who I still work for).

This morning, I mentioned it to him in response to a speech he gave recently:

And well… the stats on my tweet tell most of the story, but a picture tells a thousand words:

[Image: John Carmack’s retweet]

Today is a good day.

OCaml to JavaScript buzzwords

There has been a lot of change in the OCaml -> JS landscape over the last couple of years, and I regularly see questions about what all the names mean. This is my attempt to sort out the world as I know it.

As you navigate the world of compiling OCaml to JS, most if not all of the following will come up at some point: OCaml itself, js_of_ocaml (JSOO), Bucklescript, and Reason.

That’s a lot to remember, but each works in its own similar yet completely distinct way. They can also be mixed and matched as projects see fit. I’m going to use diagrams to attempt to explain all of this and hopefully provide some clarity.

With apologies to those who understand how over-simplified this is, here is a rough overview of how compilers work, specifically in the case of OCaml.

[Figure: The OCaml compiler]

Code flows from top to bottom in three distinct phases:

  1. Syntax is read into an Abstract Syntax Tree
  2. Checks are done to confirm the AST has valid semantics in the OCaml language
    • For example, type checking
    • Optimisation is performed at this level too
  3. The AST is written out to a machine executable
    • OCaml supports both native output, like a C compiler, and platform-independent bytecode output, like a Java compiler (although not JVM bytecode; OCaml has its own unique VM).

Are you with me so far? Good. Next we’re looking at the development that happened in 2011 with the creation of js_of_ocaml.

[Figure: js_of_ocaml runs after the compiler]

This is very simple, on the surface of it. Leave the compiler alone and machine translate the bytecode to JavaScript. This makes JSOO able to handle literally any pure OCaml code, including the diverse ecosystem of libraries, but there are two major downsides:

  • It can be a bit slow; JSOO is effectively a second compiler
  • The resulting JS is mostly unreadable, machine-generated code (since bytecode is more or less what JSOO had to start with). Source maps help here, but they aren’t a silver bullet.
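
Concretely, the pipeline is two commands (file names assumed):

  # 1. Compile OCaml to bytecode as usual, linking the js_of_ocaml support library.
  ocamlfind ocamlc -package js_of_ocaml -linkpkg hello.ml -o hello.byte
  # 2. Translate the bytecode into JavaScript.
  js_of_ocaml hello.byte -o hello.js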

In early 2016, Bloomberg open sourced their answer to this process. Instead of treating the compiler as a black box and working with the result, they dug in and replaced the output phase with Bucklescript.

[Figure: Bucklescript replaces the compiler backend]

This no doubt took a lot of effort to achieve, but the result has some unquestionable benefits:

  • By working with the compiler internals, their output retains the structure of the original code
  • Raw JavaScript types are used (e.g. OCaml strings are JS strings), producing clean, readable JavaScript – at least until advanced types are used (here’s the mapping).
  • Most if not all of the compiler speed is retained
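
For a sense of the workflow, bootstrapping a Bucklescript project of that era looked roughly like this (the theme name is an assumption):

  npm install -g bs-platform        # the compiler plus the bsb build tool
  bsb -init my-app -theme basic     # scaffold a new project
  cd my-app
  bsb -make-world                   # build the project and its dependencies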

Really the only downside I can see is that by working so deep in the compiler they have to chase new compiler releases (as I write this it is still based on the OCaml compiler released in July 2015). Not that this is a particularly bad thing, as the OCaml compiler is fairly stable – and the lag has since been further mitigated, as we’ll see in a moment.

A few months after Bloomberg opened their project to the world, Facebook came along and announced Reason. I watched on with great interest as OCaml seemed poised to win the hearts and minds of JavaScript developers.

[Figure: Reason replaces the compiler frontend]

Things are moving a lot faster at this point. Because Reason is a 1:1 syntax mapping to OCaml, they’re able to keep up with compiler releases quite easily. This allows them to focus on what they really want to bring to the JavaScript ecosystem: reliable and easy-to-use tooling that leverages the power of OCaml to tempt JS programmers with mostly familiar syntax. And thanks to leaving the rest of the compiler untouched, the same familiar syntax can compile to native binaries that perform better than JS could ever dream of.

But here is where the real fun begins. If you compare the Reason diagram to the earlier diagrams, you’ll begin to wonder if it could be combined with either js_of_ocaml or Bucklescript. And that’s exactly what happened. When I first looked at this, efforts were being made to support both as ways to compile Reason source code to JS. That’s still possible, but like all good communities a single recommended approach is appearing and the tools are moving in that direction.

In the last few months the community has settled on a solution that is very fast and produces output JS that can be very similar to the input Reason code. Native compilation and tooling still uses the Reason compiler, but Bucklescript has added first-class support for Reason syntax directly into their compiler.

[Figure: By their powers combined…]

Wow. Check out where we are now.

  • The syntax is familiar to JavaScript developers, and there is a groundswell of effort building to bring React developers in particular on board.
  • The full power of OCaml is available. I won’t sit here and argue it’s the best type system around, I freely admit it has flaws, but OCaml still has tangible benefits over JavaScript. The choice of a battle-tested compiler with a mature and diverse ecosystem is hard to pass up.
  • The output JavaScript is clean, readable, and easy enough to follow that you could check it into source control for team members that don’t want to learn reason yet. I have literally seen recommendations to do this.

Three years ago, I announced my belief that OCaml was the next big thing for JavaScript. Thanks to reason and bucklescript, that future looks to be fast approaching. It’s an exciting time to be a JavaScript developer.

Come join the fun in the reasonml discord!

A look back at “why OCaml”

My post describing why I was interested in OCaml was originally intended as an internal document at my office arguing that we should invest in an AltJS project. The failure of that effort was unfortunate, but the post opened up a gold mine of support. Not just once, but twice (we’ll get to that in a moment).

I saw confusion from people who hadn’t heard of OCaml, and some detractors, but plenty of people defending it. To be honest it was a fairly normal programming language discussion, which struck me as impressive for a language that wasn’t mainstream, and it vindicated my choice.

Writing up my thoughts on this would’ve been a good idea three years ago, and certainly two years ago after the second traffic bump, but despite the probable lack of value in these links now, I still want to get these thoughts on record.

So let’s start with the morning after the post went up (due to time zones, most of the discussion happened while I was asleep). I saw WordPress notifications that my stats were booming, with most referrals pointing to this reddit thread. A coworker captured a screenshot of the Hacker News front page for me, with discussion happening in this Hacker News thread. More than 15000 views on that single day.

[Image: Hacker News front page, with the post at #5]

That was unbelievably cool. I found a discussion on lobsters, Erik Meijer tweeted about it, I enjoyed my 15 minutes of fame and participated in what remained of the discussion as best I could.

But almost a year later, in February 2015, it happened again:

[Image: blog view stats]

Thanks to someone else posting it to Hacker News:

[Image: Hacker News front page, with the post at #7]

The total stats are what I really want to highlight. 18309 views in March 2014, plus another 15101 in February 2015, gave this post a great deal of exposure. It has legs, too, continuing to drive 100-200 views a month.

And yet I’ve been silent since then. Looking back at the two Hacker News threads now, I don’t think I even read the 2015 comments in great detail. The failure of my efforts in 2014 pretty much led to burnout on all coding outside of work hours. I didn’t even celebrate the 10th anniversary of creating this blog (November 2015). It was all just… nothing.

I was very excited about all of this in 2014. Thanks to some prompting from Jordan Walke on twitter, who gave me a really interesting gist link to the OCaml-based API he was hoping to use for React, I embarked on creating a good quality OCaml-JS wrapper; one that maintained pure OCaml as much as possible, unlike later efforts. I have kept it private all this time due to the embarrassing lack of progress since burning out, but it’s relevant now, so here we go.

https://bitbucket.org/TheSpyder/reactjs

Note the distinct lack of commits after about a month. I am very proud, however, that the MyComponent file makes no reference to js_of_ocaml, and the example code only uses it to pass a DOM element to the render function. This is how web coding should be, and I’m very happy to see that it is the direction we’re headed.

So why am I surfacing now? In a parallel to last time, it’s because discussions at work have finally returned to AltJS. I have a new post brewing in my head about the current state of OCaml and JS, but I wanted to get this one out first. I’m doing it for my own record and for posterity, but feel free to get the discussions rolling again 😀

Well, yes

[Image: an old draft post]

yes I do. That draft was created just after I joined twitter, and in the details I had written that I had just passed 400 tweets. I’m now closing in on 13,000. I am, however, in the process of scaling back my twitter and Facebook usage – I realised last week that as I reduce my tweeting I might return to blogging for my creative output.

So. 15,500 hits and counting for my first real post in 3 years. That’s a pretty high bar to live up to 😉

Why OCaml, why now?

OCaml first hit my radar in November 2013. I had just learnt SML, a similar but older language, in the excellent Programming Languages Coursera course. Dan Grossman is one of the best lecturers I’ve ever seen; his explanations hit all the right notes and made learning easy. The simplicity of the SML syntax, and the power of the language while still producing code that is readable with minimal training, immediately appealed to me.

Over the last 3 years I have tried, and failed, to learn Haskell. The combination of minimalist syntax, pure functional programming style and lazy evaluation is like a 3-hit sucker punch that is very hard to grasp all at once. Now that I have learnt SML and OCaml, which like Haskell are based on the ML language, that has changed. I have yet to put any more effort into learning Haskell, but it is now clear to me that the syntax is only a small leap from ML, and the pure functional style has similarities to SML.

I still don’t want to write production code in Haskell, but the fact that I find it less scary than I used to indicates I have made a significant jump in my knowledge and, arguably, career in the last 6 months.

Dynamic typing

Before I go any further, I need fans of dynamic typing to exit the room. My 12 years in the industry have set my camp firmly on the static typing side of the fence, and discussions about static vs dynamic will not be productive or welcome here.

So, why OCaml?

Smarter people than me have written about this, but I’ll give it a shot.

I have found OCaml to be a refreshing change of pace. Most of my favourite things are derived from the ML base language; variants, records, and pattern matching combine to create elegantly powerful code that is still easy to follow (unlike most Haskell code I’ve seen).

OCaml takes the expression-based ML style and incorporates enough imperative features to make it comfortable for someone learning Functional Programming. Don’t know how to use recursion to solve a problem? Drop into a for loop in the middle of your expression. Need some debug output? Add it right there, with a semicolon to sequence the expressions.

Throw in almost perfect static type inference, a compiler that gives useful error messages, and immutable-by-default variables, and I just can’t get enough. I won’t sit here and list every feature of the language, but hopefully that piques your interest as much as it did mine 😉

Industry acceptance

There is always an element of “I have a hammer, everything looks like a nail” when learning a new language but the evidence that OCaml is becoming more widely accepted is not hard to find.

In the middle of February, Thomas Leonard’s OCaml: what you gain post made waves; the reddit and hackernews discussions are fascinating. A lot of people using OCaml in the industry came out of the woodwork for that one. I’m still working my way through the series of 11 posts Thomas made, dating back to June 2013, about his process of converting a large Python codebase to OCaml.

Facebook have a fairly extensive OCaml codebase (more details below).

It doesn’t take much googling to find presentations by Skydeck in 2010 (they wrote ocamljs, the first OCaml-to-JS compiler) or a 2006 talk describing why OCaml is worth learning after Haskell.

OCamlPro appear to be seeing good business out of OCaml, and they have an excellent browser-based OCaml tutorial (developed using, of course, js_of_ocaml).

No list of OCaml developers would be complete without mentioning the immense amount of code at Jane Street.

There are plenty of other success stories.

The elephant in the room

The first question I usually get when I tell a Functional Programming guru that I’m learning OCaml is “Why not Haskell?”. It’s a fair enough question. Haskell can do a ton more than OCaml, and there are only one or two things OCaml can do that Haskell can’t (I don’t know the details exactly; it may well be zero). I see a lot of references to OCaml being a gateway drug for Haskell.

The answer is JavaScript. As much as I hate the language, JS is the only realistic way to write web apps. Among the many and varied AltJS languages, both OCaml and Haskell can be compiled to JavaScript, but the Haskell compilers aren’t mature enough yet (and I’m not convinced lazy evaluation in JavaScript will have good performance).

In fact, some study has suggested OCaml may have the most mature AltJS compiler of all, by virtue of its support for existing OCaml libraries.

JavaScript

Late last year I started hearing about OCaml at Facebook. Their pfff tool, which is a serious OCaml codebase all by itself, is already open source – but there was talk of an even larger project using js_of_ocaml (the link seems to be offline, try the video). That presentation by Julien Verlaguet is almost identical to the one he gave at YOW! 2013 and it really grabbed my attention. (Hopefully the YOW! video is online soon, as it’ll be better quality).

To cut a long story short, Facebook created a new language (Hack, a statically typed PHP variant) and wrote the compiler in OCaml. They then use js_of_ocaml to compile their entire type checker into JavaScript, as the basis of a web IDE (@19 minutes in the video) along the lines of cloud9. Due to the use of OCaml for everything, this IDE has client-side code completion and error checking. It’s pretty amazing.

Maturity of tools and js_of_ocaml

The more I dive into OCaml, and specifically js_of_ocaml, the more it amazes me that the tools and information matured to production quality just as I needed them.

  • The package manager OPAM is now a little over 12 months old, and every library I’ve looked at is available on it. Wide community acceptance of a good package manager is a huge plus (see the snippet after this list).

  • The Real World OCaml book was released in November and is an excellent read. The book is so close to the cutting edge they had features added to September’s 4.01.0 compiler release for them 🙂

  • OCaml Labs has been around for 12 months, and they’re helping to move the OCaml community forward into practical applications (see the 2013 summary).

  • Ocsigen are investing heavily in js_of_ocaml (among other things), with the next release including an improved optimiser (I can attest that it’s awesome) and support for FRP through the React library (the OCaml FRP library, not Facebook’s React).
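
To tie a few of these together, and as flagged in the OPAM point above, getting a js_of_ocaml environment going is now as simple as:

  # One-time OPAM setup, then the OCaml-to-JS tooling itself.
  opam init
  opam install js_of_ocaml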

Moving forward

Is it perfect? No. Software development is not a one-size-fits-all industry. There are as many articles cursing the limitations of OCaml as there are singing its praises. But in the current market, and with the size of JavaScript applications we are starting to generate, I believe OCaml has a bright future.