I recently tracked down a surprisingly awkward Promise bug in a Node.js app.

At first it looked inconsistent. Sometimes a failed async operation was handled exactly as I expected by Promise.allSettled(). Other times Node would crash with an unhandled rejection.

The confusing part was that it only happened in one very specific shape of work. If a single async job was triggered, Promise.allSettled() caught the failure just fine. But if that same first job failed and there was also a second job to prepare, the process could crash before Promise.allSettled() was ever reached.

The code below is simplified, but it shows the pattern:

const jobs: Promise<void>[] = [];

if (shouldTriggerFirstJob) {
  jobs.push(triggerFirstJob());
}

if (shouldTriggerSecondJob) {
  await loadSomeData();

  jobs.push(triggerSecondJob());
}

await Promise.allSettled(jobs);

At a glance, this seems completely reasonable. Start some async work, collect the promises, then use Promise.allSettled() so that one failure does not take everything down.

The issue was the await inside the second branch.

If only the first job existed, the code skipped the second branch and reached Promise.allSettled(jobs) in the same synchronous turn, before execution yielded back to the event loop. That meant the rejection handler was attached before Node had a chance to flag the rejected Promise as unhandled.

If there was also a second job, though, the function hit this first:

await loadSomeData();

That was enough to change the behaviour completely.

The important detail is that await can yield control back to the event loop. Node does not wait for your whole async function to finish before deciding what to do with a rejected Promise. When execution pauses at an await that waits on real asynchronous work, such as a timer or I/O, the current turn of the event loop ends. Once the microtask queue has drained, Node checks for rejected Promises that still have no handler attached. If it finds one, it can treat it as unhandled. In my case, that meant the process crashed.

So the sequence looked like this:

  1. triggerFirstJob() creates a Promise
  2. that Promise rejects quickly
  3. the function continues into the second branch
  4. the code hits await loadSomeData()
  5. the async function pauses
  6. Node processes the already-rejected Promise
  7. there is still no handler attached to it
  8. Node treats it as unhandled

That was the whole bug.

Without the second branch, there was no asynchronous gap between creating the first Promise and passing it to Promise.allSettled(). With the second branch, there was. That gap is what let Node observe the rejection first.

That is also why Promise.allSettled() did not save me here. It only helps once you have actually passed the Promise into it. It cannot retroactively rescue a rejection that Node has already decided is unhandled.
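One way to see that the handler arrives too late is Node's companion event, 'rejectionHandled', which fires when a Promise that was already reported as unhandled gets a handler after the fact. A sketch, with illustrative names and a trailing timer to give the second event time to fire:

```typescript
const events: string[] = [];
process.on("unhandledRejection", () => events.push("unhandled"));
process.on("rejectionHandled", () => events.push("handled-late"));

async function lateHandler(): Promise<string[]> {
  const p = Promise.reject(new Error("boom"));

  // the gap: an event-loop turn with no handler attached to p
  await new Promise<void>((resolve) => setTimeout(resolve, 10));

  await Promise.allSettled([p]); // attaches a handler, but Node already reported it

  // give Node a moment to emit 'rejectionHandled'
  await new Promise<void>((resolve) => setTimeout(resolve, 10));
  return events;
}
```

The events arrive in the order "unhandled" then "handled-late": the report comes first, the rescue second.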

So this pattern is unsafe:

const jobs = [failingJob()];

await doOtherAsyncWork();

await Promise.allSettled(jobs);

If failingJob() rejects before Promise.allSettled() is attached, then doOtherAsyncWork() gives Node a chance to notice first.
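If you genuinely need the work to start early, one workaround is to attach the handler at creation time instead of later. Promise.allSettled() attaches its handlers synchronously, so calling it in the same turn as the job closes the gap. A sketch, with failingJob() and doOtherAsyncWork() as made-up stand-ins:

```typescript
let sawUnhandled = false;
process.on("unhandledRejection", () => {
  sawUnhandled = true;
});

const failingJob = (): Promise<void> => Promise.reject(new Error("boom"));
const doOtherAsyncWork = (): Promise<void> =>
  new Promise<void>((resolve) => setTimeout(resolve, 10));

async function safeEarlyStart(): Promise<boolean> {
  // Handlers are attached in the same synchronous turn as the rejection,
  // so Node never observes the Promise without a handler.
  const settled = Promise.allSettled([failingJob()]);

  await doOtherAsyncWork(); // this gap is now harmless

  await settled;
  return sawUnhandled; // stays false
}
```

The job still starts immediately; the difference is that its rejection is already observed before the function yields.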

Part of what made this harder for me to spot is that I come from more of a .NET background. In C#, if a Task fails, it generally just sits in a faulted state until you observe it with await, Task.WhenAll() or similar. It does not usually bring the whole process down simply because you have not observed it yet.

That made this JavaScript pattern feel fairly natural to me at first: start the async work now, collect it, and handle it together later.

But JavaScript Promises are not quite as forgiving. A rejected Promise can become a problem as soon as the runtime gets a turn and sees there is no handler attached. So I was carrying over a C# mental model that does not quite fit. A faulted Task usually waits for you to observe it. A rejected Promise might not.

The cleanest fix in my case was to stop storing already-started Promises and store functions instead:

const jobs: Array<() => Promise<void>> = [];

if (shouldTriggerFirstJob) {
  jobs.push(() => triggerFirstJob());
}

if (shouldTriggerSecondJob) {
  await loadSomeData();

  jobs.push(() => triggerSecondJob());
}

await Promise.allSettled(jobs.map((job) => job()));

That removes the gap entirely. The jobs are not started while the function is still off doing other awaited work. They are only started at the moment they are passed straight into Promise.allSettled().

Another simple way to think about it is this:

  • an array of Promises means the work has already started
  • an array of functions returning Promises means the work will start later

They can look almost identical in a code review, but they behave very differently once await gets involved.

This was a useful reminder that with JavaScript Promises, “I’ll handle it in a moment” is not always enough. If you yield back to the event loop before attaching a handler, that moment may already have passed.

So the rule I am trying to keep in my head now is simple: if I want async work to happen between collecting the promises, I store a function. If I create the Promise now, I need to handle it now.