A Lazy Sequence

Accessing Local Storage from a Web Worker

A curious detail of the Web Workers and Web Storage specs is that the storage objects are not accessible from within a worker context: only the main UI process can access the stores1.

This limitation may or may not be a problem for you. In an app I am building I wanted to consolidate all my data access and processing into a worker, so I’ve produced a (somewhat obvious once you are familiar with the Worker API) workaround:

  1. Create your worker as normal.
  2. Create a MessageChannel instance.
  3. In your main thread, produce a wrapper around the Local Storage API and storage events that communicates via one of the MessagePort instances on the MessageChannel instance.
  4. Send a message, along with the other MessagePort from the channel created in step 2, to the worker, telling it to use the provided port as a proxy for Local Storage.

There are two obvious downsides to this approach: it’s certainly going to be slightly slower, and storage access in the worker loses the convenient synchronous API.
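In practice, a read that is a single synchronous call on the main thread becomes a request/response exchange in the worker, roughly like the sketch below (the storagePort reference and the message shapes are set up later in this post):

// Main thread: a synchronous read.
var value = localStorage.getItem("foo");

// Worker: post a request; the value comes back later as a "storage.data" message.
storagePort.postMessage({kind: "storage.get", key: "foo"});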


Due to the nature of the Worker API it’s a little fussy to make a clean workable example of this code, but here is a pseudocode version (ripped from Manticore).

The storage wrapper

This function lives within your main thread.

function localStoragePort() {
    var chan = new MessageChannel();
    var localPort = chan.port1;

    // Relay storage events (fired when another window or tab changes storage)
    // through to the worker as "storage.data" messages.
    window.addEventListener('storage', function (e) {
        if (e.key) {
            localPort.postMessage({kind: "storage.data", key: e.key, value: e.newValue});
        }
    });

    // Answer requests arriving from the worker.
    localPort.onmessage = (ev) => {
        var message = ev.data;

        if (message.kind === "storage.get") {
            var val = localStorage.getItem(message.key);
            if (message.defaultValue !== undefined && val === null) {
                val = message.defaultValue;
            }
            localPort.postMessage({
                kind: "storage.data",
                key: message.key,
                value: val
            });
        }
        else if (message.kind === "storage.put") {
            localStorage.setItem(message.key, message.value);
            localPort.postMessage({
                kind: "storage.data",
                key: message.key,
                value: localStorage.getItem(message.key)
            });
        }
        else if (message.kind === "storage.delete") {
            localStorage.removeItem(message.key);
            localPort.postMessage({
                kind: "storage.data",
                key: message.key,
                value: null
            });
        }
    };

    return chan.port2;
}

You can then tell the worker to use the port returned from localStoragePort() with something like:

worker.postMessage({kind: "lifecycle.register-storage-port"}, [localStoragePort()]);

Note that the MessagePort is passed in as a transferable in the second argument to postMessage. If you try to pass it in as part of the message itself you will get an error telling you that the data cannot be cloned. Once you pass the transferable object through, you lose the ability to access it on the posting side of the interface.
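To make the difference concrete, here is a minimal sketch of the two cases (the variable name is mine):

var port = localStoragePort();

// Throws a DataCloneError: a MessagePort cannot be structured-cloned as part
// of the message data.
// worker.postMessage({kind: "lifecycle.register-storage-port", port: port});

// Works: the port is listed as a transferable, so ownership moves to the
// worker and port is no longer usable on this side.
worker.postMessage({kind: "lifecycle.register-storage-port"}, [port]);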

Worker port usage

Finally, you have to listen for this lifecycle message in the worker and, when you receive it, store a reference to the port. Ideally you would wrap this up in a cleaner object rather than working with the port directly, but for the purposes of explaining what is going on:

var storagePort;

function registerStoragePort(port) {
    if (storagePort) {
        return; // A port has already been registered; ignore any others.
    }

    storagePort = port;
    storagePort.onmessage = function (message) {
        var data = message.data;
        // This will be a "storage.data" message returned from the port.
        // Do something with data.value
    };
}

onmessage = function (ev) {
    var message = ev.data;
    if (message.kind === "lifecycle.register-storage-port") {
        // The transferred MessagePort arrives in ev.ports, not in the message data.
        registerStoragePort(ev.ports[0]);
    }
    else {
        // ... handle the worker's other messages.
    }
};
Sending messages to the port is the final step:

storagePort.postMessage({kind: "storage.put", key: "foo", value: "Bar"});
storagePort.postMessage({kind: "storage.get", key: "foo", defaultValue: "Quux"});

It is left as an exercise to the reader to use per-message MessagePort instances with the storagePort to ensure that the result of a request is handled in a specific location rather than globally.
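As a taste of what a cleaner wrapper might look like, here is a minimal Promise-based sketch for the worker side. The getItem name and shape are my own rather than Manticore's, and it matches replies by key against the handler shown above; concurrent requests for the same key would still benefit from the per-message port approach.

function getItem(key, defaultValue) {
    return new Promise(function (resolve) {
        function handler(ev) {
            var data = ev.data;
            if (data.kind === "storage.data" && data.key === key) {
                storagePort.removeEventListener("message", handler);
                resolve(data.value);
            }
        }
        storagePort.addEventListener("message", handler);
        storagePort.start(); // No-op if the port was already started by setting onmessage.
        storagePort.postMessage({kind: "storage.get", key: key, defaultValue: defaultValue});
    });
}

getItem("foo", "Quux").then(function (value) {
    // value is either the stored item or the default "Quux".
});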


  1. If anyone can explain the reasoning for this I would love to know.

23 October 2016

ANN: Manticore 0.5-beta1 released

It has been a long time between updates, but I finally have a new release of Manticore, my encounter generation app for the 13th Age role-playing game.

While the majority of the changes are developer niceties, there are a few features that should make it easier to use for everyone:

  • Grouped encounters. This is the biggest feature of this release. When multiple encounters produce the same monsters in different numbers, they are now grouped together and only the first is shown, with a variations link to reveal all the other generated breakdowns. This is a massive win for quickly skimming the results.
  • Paginated results. This one has been a common request from many people.
  • Party information is now set with sliders rather than fussing with tiny input boxes. This should be easier for those of you who use the app on your phones.
  • Party information is remembered between uses of the app.

There are also two changes to the generation algorithm:

  • Encounters now never include more than seven types of monster. In the future this may become a setting you can tweak for your own purposes.
  • Two types of monster, dragons and master vampires, are now considered 'territorial' by the algorithm. These monsters are apex predators and never appear in an encounter with another territorial monster (a rough sketch of both constraints appears below). I think this idea might have been swiped from D&D 5e.
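Purely as an illustration, and not Manticore's actual code, these two constraints amount to a validity check along these lines (the isAllowedEncounter name and allocation shape are hypothetical):

// Hypothetical sketch of the two generation constraints described above.
// An allocation is a list of {monster: {name, territorial}, count} entries.
function isAllowedEncounter(allocation) {
    // No more than seven distinct monster types.
    if (allocation.length > 7) {
        return false;
    }

    // At most one territorial monster type per encounter.
    var territorial = allocation.filter(function (entry) {
        return entry.monster.territorial;
    });

    return territorial.length <= 1;
}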

For those of you who care about the technical details, there are two big changes. The first is that I've ported the whole UI to ReactJS, which should make it easier to build new UI. The second is that it now uses web workers behind the scenes for data access and for generating the results, so the UI should no longer lock up briefly when you hit the generate button.

Questions, bug reports, comments, and feature requests are most welcome on the project's GitHub issue tracker.