
There are a lot of ways to break up long tasks in JavaScript.

It's very common to intentionally break up long, expensive tasks over multiple ticks of the event loop. But there sure are a lot of approaches to choose from. Let's explore them.

Alex MacArthur

It's not hard to bork your site's user experience by letting a long, expensive task hog the main thread. No matter how complex an application becomes, the event loop can still do only one thing at a time. If any of your code is squatting on it, everything else is on standby, and it usually doesn't take long for your users to notice.

Here's a contrived example: we have a button for incrementing a count on the screen, alongside a big ol' loop doing some hard work. It's just running a synchronous pause, but pretend this is something meaningful that you, for whatever reason, need to perform on the main thread – and in order.

<button id="button">count</button>
<div>Click count: <span id="clickCount">0</span></div>
<div>Loop count: <span id="loopCount">0</span></div>

<script>
  function waitSync(milliseconds) {
    const start = Date.now();
    while (Date.now() - start < milliseconds) {}
  }

  button.addEventListener("click", () => {
    clickCount.innerText = Number(clickCount.innerText) + 1;
  });

  const items = new Array(100).fill(null);

  for (const i of items) {
    loopCount.innerText = Number(loopCount.innerText) + 1;
    waitSync(50);
  }
</script>

When you run this, nothing visually updates – not even the loop count. That's because the browser never gets a chance to paint to the screen. This is all you get, no matter how furiously you click. Only when the looping is completely finished do you get any feedback.

The dev tools flame chart corroborates this. That single task in the event loop takes five seconds to complete. Horrrrrible.

flame chart showing long, expensive task

If you've been in a similar situation before, you know that the solution is to periodically break that big task up across multiple ticks of the event loop. This gives other parts of the browser a chance to use the main thread for other important things, like handling button clicks and repaints. We want to go from this:

long task illustration

To this:

shorter tasks illustration

There are actually a shocking number of ways to pull this off. We're gonna explore some of them, starting with the most classic: recursion.

#1: setTimeout() + Recursion

If you wrote JavaScript before native promises existed, you've undoubtedly seen something like this: a function recursively calling itself from the callback of a timeout.

function processItems(items, index) {
  index = index || 0;
  var currentItem = items[index];

  console.log("processing item:", currentItem);

  if (index + 1 < items.length) {
    setTimeout(function () {
      processItems(items, index + 1);
    }, 0);
  }
}

processItems(["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"]);

There's nothing wrong with it, even today. After all, the objective is accomplished – each item is processed on a different tick, spreading out the work. Look at this 400ms section of the flame chart. Rather than one big task, we get a bunch of smaller ones:

flame chart using setTimeout and recursion

And that leaves the UI nice and responsive. Click handlers can work, and the browser can paint updates to the screen:

But we're a decade past ES6 now, and the browser offers several ways to accomplish the same thing, all of them made a little more ergonomic with promises.

#2: Async/Await & a Timeout

This combination allows us to abandon recursion and streamline things a little:

<button id="button">count</button>
<div>Click count: <span id="clickCount">0</span></div>
<div>Loop count: <span id="loopCount">0</span></div>

<script>
  function waitSync(milliseconds) {
    const start = Date.now();
    while (Date.now() - start < milliseconds) {}
  }

  button.addEventListener("click", () => {
    clickCount.innerText = Number(clickCount.innerText) + 1;
  });

  (async () => {
    const items = new Array(100).fill(null);

    for (const i of items) {
      loopCount.innerText = Number(loopCount.innerText) + 1;

      await new Promise((resolve) => setTimeout(resolve, 0));

      waitSync(50);
    }
  })();
</script>

Much better. Just a simple for loop and awaiting a promise to resolve. The rhythm on the event loop is very similar, with one key change, outlined in red:

A promise's .then() callback is executed from the microtask queue, which is drained as soon as the current call stack empties, before the event loop moves on to its next task. It's almost always an inconsequential difference, but worth noting nonetheless.
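To make that difference concrete, here's a small standalone sketch showing that a promise continuation runs before a zero-delay timeout, even when the timeout was queued first:

```javascript
// Microtasks (promise continuations) run before macrotasks (timer
// callbacks), even when the timer was queued first.
const order = [];

setTimeout(() => order.push("timeout"), 0);
Promise.resolve().then(() => order.push("microtask"));

// Once both have run, `order` is ["microtask", "timeout"]: the microtask
// queue is fully drained before the event loop picks up the next task.
```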

#3: scheduler.postTask()

The Scheduler interface is relatively new to Chromium browsers, intended to be a first-class tool for scheduling tasks with a lot more control and efficiency. It's basically a better version of what we've been relying on setTimeout() to do for us for decades.

const items = new Array(100).fill(null);

for (const i of items) {
  loopCount.innerText = Number(loopCount.innerText) + 1;

  await new Promise((resolve) => scheduler.postTask(resolve));

  waitSync(50);
}

What's interesting about running our loop with postTask() is the amount of time between scheduled tasks. Here's a snippet of the flame chart over 400ms again. Notice how tightly each new task executes after the previous one.

The default priority of postTask() is "user-visible", which appears to be comparable to the priority of setTimeout(() => {}, 0). Output always seems to mirror the order they're run in code:

setTimeout(() => console.log("setTimeout"));
scheduler.postTask(() => console.log("postTask"));

// setTimeout
// postTask

scheduler.postTask(() => console.log("postTask"));
setTimeout(() => console.log("setTimeout"));

// postTask
// setTimeout

But unlike setTimeout(), postTask() was built for scheduling, and isn't subject to the same constraints as timeouts are. Everything scheduled by it is also placed at the front of the task queue, preventing other items from budging in front & delaying execution, especially when being queued in such a rapid fashion.

I can't say for certain, but I think that because postTask() is a well-oiled machine with one purpose, the flame chart reflects that. That said, it's possible to maximize the priority for tasks scheduled with postTask() even further:

scheduler.postTask(() => {
  console.log("postTask");
}, { priority: "user-blocking" });

The "user-blocking" priority is intended for tasks critical to the user's experience on the page (such as responding to user input). As such, it's probably not worth using for just breaking up big workloads. After all, we're trying to politely yield to the event loop so other work can get done. In fact, it may even be worth setting that priority even lower by using "background":

scheduler.postTask(() => {
  console.log("postTask - background");
}, { priority: "background" });

setTimeout(() => console.log("setTimeout"));

scheduler.postTask(() => console.log("postTask - default"));

// setTimeout
// postTask - default
// postTask - background

Unfortunately, the entire Scheduler interface comes with a bummer: it's not that well-supported across all browsers yet. But it is easy enough to polyfill with existing asynchronous APIs. So, at least a strong portion of users would benefit from it.
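As a rough illustration of what that polyfill could look like, here's a bare-bones sketch. It ignores priorities and abort signals entirely, so it's nowhere near a faithful stand-in for the real API, but it covers the "defer a callback and get a promise back" shape:

```javascript
// A minimal, illustrative polyfill sketch for scheduler.postTask().
// It just defers the callback with setTimeout() and resolves with its
// return value; the real API also handles priorities and abort signals.
globalThis.scheduler = globalThis.scheduler || {};
globalThis.scheduler.postTask =
  globalThis.scheduler.postTask ||
  ((callback, options = {}) =>
    new Promise((resolve, reject) => {
      setTimeout(() => {
        try {
          resolve(callback());
        } catch (error) {
          reject(error);
        }
      }, options.delay || 0);
    }));
```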

What about requestIdleCallback()?

If it's good to surrender priority like this, requestIdleCallback() might've come to mind. It's designed to execute its callback whenever there's an "idle" period. The problem is that there's no technical guarantee when, or if, it'll run. You could set a timeout when it's invoked, but even then, you'll need to reckon with the fact that Safari doesn't support the API at all.

On top of that, MDN encourages a timeout over requestIdleCallback() for required work, so I'd probably just steer clear of it for this purpose altogether.
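If you did want to experiment with it anyway, a hedged sketch might wrap it with the timeout option plus a setTimeout() fallback for browsers like Safari. The yieldWhenIdle name here is made up for the example:

```javascript
// Resolve a promise during an idle period, but guarantee it resolves
// within `timeout` milliseconds. Falls back to setTimeout() in
// environments without requestIdleCallback() (e.g. Safari).
function yieldWhenIdle(timeout = 100) {
  return new Promise((resolve) => {
    if (typeof requestIdleCallback === "function") {
      // The `timeout` option forces the callback to fire even if the
      // browser never reports an idle period.
      requestIdleCallback(resolve, { timeout });
    } else {
      setTimeout(resolve, 0);
    }
  });
}
```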

#4: scheduler.yield()

The yield() method on the Scheduler interface is a smidge more special than the other approaches we've covered because it was made for this exact sort of scenario. From MDN:

The yield() method of the Scheduler interface is used for yielding to the main thread during a task and continuing execution later, with the continuation scheduled as a prioritized task... This allows long-running work to be broken up so the browser stays responsive.

That becomes even more clear when you use it for the first time. There's no longer a need to return & resolve our own promise. Just wait for the one provided:

const items = new Array(100).fill(null);

for (const i of items) {
  loopCount.innerText = Number(loopCount.innerText) + 1;
  
  await scheduler.yield();
  
  waitSync(50);
}

It cleans up the flame chart a bit too. Notice how there's one less item in the stack that needs to be identified.

flame chart with one less row

The API for this is so nice that you can't help but start seeing opportunities to use it all over. Consider a checkbox that kicks off an expensive task on change:

document
  .querySelector('input[type="checkbox"]')
  .addEventListener("change", function (e) {
    waitSync(1000);
  });

As it is, clicking the checkbox causes the UI to freeze for a second.

But now, let's immediately yield control to the browser, giving it a chance to update that UI after the click.

document
  .querySelector('input[type="checkbox"]')
  .addEventListener("change", async function (e) {
    await scheduler.yield();

    waitSync(1000);
  });

Look at that. Nice & snappy.

As with the rest of the Scheduler interface, this one lacks solid browser support, but it's still simple to polyfill:

globalThis.scheduler = globalThis.scheduler || {};
globalThis.scheduler.yield = 
  globalThis.scheduler.yield || 
  (() => new Promise((r) => setTimeout(r, 0)));

#5: requestAnimationFrame()

The requestAnimationFrame() API is designed to schedule work around the browser's repaint cycle. Because of that, it's very precise in scheduling callbacks. It'll always be right before the next paint, which likely explains why this flame chart's tasks are seated so tightly together. Animation frame callbacks effectively have their own "queue" that runs at a very particular time in the rendering phase, meaning it's difficult for other tasks to get in the way to push them to the back of the line.
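Applied to our loop, the pattern looks much like the timeout version, just with the promise resolved by an animation frame callback. This is a sketch; the setTimeout() fallback exists only so it also runs outside a browser:

```javascript
// Resolve a promise on the next animation frame. The setTimeout()
// fallback (roughly one 60fps frame) is only for non-browser
// environments; in a page, requestAnimationFrame() is what matters.
const nextFrame = () =>
  new Promise((resolve) =>
    typeof requestAnimationFrame === "function"
      ? requestAnimationFrame(resolve)
      : setTimeout(resolve, 16)
  );

async function processItems(items) {
  for (const item of items) {
    await nextFrame(); // yield until just before the next paint
    // ...one chunk of expensive work per frame would go here...
  }
}
```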

However, doing expensive work around repaints also appears to compromise rendering. Look at the frames during that same time period. The yellow/lined sections indicate a "partially-presented frame":

partially-presented frames in the flame chart

This didn't occur with the other task-breaking tactics. Considering this and the fact that animation frame callbacks usually don't even execute unless the tab is active, I'd probably avoid this option too.

#6: MessageChannel()

You don't see this one used a whole lot in this way, but when you do, it's often chosen as a lighter alternative to a zero-delay timeout. Rather than asking the browser to queue a timer and schedule the callback, instantiate a channel and immediately post a message to it:

for (const i of items) {
  loopCount.innerText = Number(loopCount.innerText) + 1;

  await new Promise((resolve) => {
    const channel = new MessageChannel();
    channel.port1.onmessage = resolve;
    channel.port2.postMessage(null);
  });

  waitSync(50);
}

By the looks of the flame chart, there might be something to say for performance. There's not much delay between each scheduled task:

The (subjective) drawback to this approach, though, is how complicated it is to wire up. It's quite obvious this isn't what it was designed for.

#7: Web Workers

Our scenario has assumed the work must stay on the main thread, but if you can get away with performing it elsewhere, a web worker should undoubtedly be your first choice. You technically don't even need a separate file to house your worker code:

const items = new Array(100).fill(null);

const workerScript = `
  function waitSync(milliseconds) {
    const start = Date.now();
    while (Date.now() - start < milliseconds) {}
  }

  self.onmessage = function(e) {
    waitSync(50);
    self.postMessage('Process complete!');
  }
`;

const blob = new Blob([workerScript], { type: "text/javascript" });
const worker = new Worker(window.URL.createObjectURL(blob));

for (const i of items) {
  worker.postMessage(i);

  await new Promise((resolve) => {
    worker.onmessage = function (e) {
      loopCount.innerText = Number(loopCount.innerText) + 1;
      resolve();
    };
  });
}

Just look how clear the main thread is when the work for individual items is performed elsewhere. Instead, it's all pushed down below under the "Worker" section, leaving so much room for activities.

The scenario we've been using requires progress to be reflected in the UI, and so we're still passing individual items to the worker & waiting for a response. But if we could pass that entire list of items to the worker at once, we certainly should. That'd cut overhead even more.
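As a sketch of that batched version (assuming the UI only needs a single completion message rather than per-item progress), the worker could own the loop itself:

```javascript
// Sketch: hand the worker the entire list in one message and let it do
// the looping, posting a single message back when it's done.
const workerScript = `
  function waitSync(milliseconds) {
    const start = Date.now();
    while (Date.now() - start < milliseconds) {}
  }

  self.onmessage = function (e) {
    // One message in, one loop, one message out: far less
    // postMessage round-trip overhead.
    for (const item of e.data) {
      waitSync(50);
    }
    self.postMessage("All items processed!");
  };
`;

// Guarded so this sketch is inert outside a browser, where the web
// Worker global doesn't exist.
if (typeof Worker === "function") {
  const blob = new Blob([workerScript], { type: "text/javascript" });
  const worker = new Worker(URL.createObjectURL(blob));
  worker.onmessage = (e) => console.log(e.data);
  worker.postMessage(new Array(100).fill(null));
}
```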

How Do I Choose?

The approaches we've covered here are not exhaustive, but I think they do a good job at representing the various trade-offs you should consider when breaking up long tasks. Still, depending on the need, I'd probably only reach for a subset of these myself.

If I can do the work off the main thread, I'd choose a web worker, hands-down. They're very well supported across browsers, and their entire purpose is to offload work from the main thread. The only downside is their clunky API, but that's eased by tools like Workerize and Vite's built-in worker imports.

If I need a dead-simple way to break up tasks, I'd go for scheduler.yield(). I don't love how I'd also need to polyfill it for non-Chromium users, but the majority of people would benefit from it, so I'm up for that extra bit of baggage.

If I need very fine-grained control over how chunked work is prioritized, scheduler.postTask() would be my choice. It's impressive how deep you can go in tailoring that thing to your needs. Priority control, delays, cancelling tasks, and more are all included in this API, even if, like .yield(), it needs to be polyfilled for now.

If browser support and reliability are of the utmost importance, I'd just choose setTimeout(). It's a legend that's not going anywhere, even as flashy alternatives hit the scene.

What'd I Miss?

I'll admit I've never used a few of these in a real-life application, so it's very possible there are some blind spots in what you read here. If you can speak to the topic further, even if it's insight about one of the specific approaches, you're more than welcome to do so.
