Node.js Process Events
Make sure you are listening to them!
I was troubleshooting a bug recently when I noticed an unrelated issue with one of my microservices. We were experiencing a failure in a non-critical component responsible for notifying our internal staff of a minor issue in our business workflow.
This was an issue our development team, extremely thorough QA team, and Ops team missed for months. I noticed it by chance while running integration tests and eyeballing the logs. It was a simple warning produced on stderr that was not correctly parsed by our log aggregator:
(node:60803) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:60803) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
One of our components was failing and we weren't correctly responding to the error.
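For context, here is a hypothetical, minimal way to end up with that warning (not our actual component): an async function throws, and nothing awaits it or attaches a .catch().

// hypothetical reproduction — not our actual service code
async function notifyStaff() {
  // imagine an HTTP call to an internal notification service that fails
  throw new Error('notification service unavailable');
}

// Called without await or .catch(), so the rejection is never handled and
// Node prints the UnhandledPromiseRejectionWarning shown above.
notifyStaff();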
If you are developing a Node.js server application, you need to register handlers on the process object for the unhandledRejection and uncaughtException events. It’s unlikely that the application frameworks you use (e.g. Express or Hapi) will do this for you.
By default, Node.js outputs these warnings to stderr, and they may get swallowed by your logger or missed completely. The bigger problem is that not handling these conditions in code means you haven’t implemented error handling correctly.
Uncaught Exceptions are pretty easy to handle:
// `logger` and `exceptionReporter` stand in for whatever logging and
// error-reporting clients your service already uses (e.g. Winston)
process.on('uncaughtException', (error) => {
  logger.log('critical', {
    message: 'Process encountered an uncaughtException',
    error,
  });
  // If you use an exception reporting tool
  exceptionReporter.report(error);
});
Keep in mind that Node.js recommends that you do not use the uncaughtException event to “recover” from potential errors. It’s better to “let it crash” and have an external process restart the service (e.g. PM2 or Docker). This strategy keeps your code cleaner and ensures that external connections (which may have caused the error) are safely reestablished. The alternative is to end up with a “zombie” service that responds to health checks but fails to execute transactions against a database (we’ve experienced this before with Mongo).
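If you follow that advice, the handler’s job is simply to log (and report) synchronously and then exit with a non-zero code so the supervisor restarts the process. A minimal sketch, reusing the same assumed logger and exceptionReporter from above:

process.on('uncaughtException', (error) => {
  logger.log('critical', {
    message: 'Process encountered an uncaughtException',
    error,
  });
  exceptionReporter.report(error);
  // Exit and let PM2/Docker/systemd start a fresh process
  process.exit(1);
});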
Unhandled Rejection events are a little trickier to deal with because they may actually be handled asynchronously on a later turn of the event loop. For this reason, the Node.js documentation (https://nodejs.org/api/process.html#process_warning_using_uncaughtexception_correctly) recommends the following strategy:
const unhandledRejections = new Map();
process.on('unhandledRejection', (reason, p) => {
unhandledRejections.set(p, reason);
});
process.on('rejectionHandled', (p) => {
unhandledRejections.delete(p);
});
As the Node.js documentation points out, this strategy results in a map of rejections that grows and shrinks over time. You will probably want to add a polling strategy that checks the unhandledRejections map and logs old (and therefore unlikely to ever be handled) rejections:
const unhandledRejections = new Map();
process.on('unhandledRejection', (error, p) => {
unhandledRejections.set(p, { error, time: Date.now() });
});
process.on('rejectionHandled', (p) => {
unhandledRejections.delete(p);
});
// After maxTimeToHandle ms, assume a rejection will never be handled and log it
const maxTimeToHandle = 10000;
setInterval(() => {
const now = Date.now();
const notHandled = [];
unhandledRejections.forEach(({ error, time }, p) => {
if ((now - time) > maxTimeToHandle) {
logger.log('critical', {
message: 'Unhandled promise rejection',
error,
});
notHandled.push(p);
}
});
notHandled.forEach(p => unhandledRejections.delete(p));
}, maxTimeToHandle);
warning is another event you should look for, particularly during development, though I don’t think it’s too important to listen for in production. You will typically see this event when a dependent library wants to remind you that a particular method call or configuration option is deprecated (I see this most often with Mongoose’s useMongoClient: true and when some module uses os.tmpDir instead of os.tmpdir).
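Listening for it is straightforward: the handler receives an Error-like object with a name, message, and stack. A minimal sketch, assuming you just want the warnings in your development logs with the same logger as above:

process.on('warning', (warning) => {
  logger.log('warn', {
    name: warning.name, // e.g. 'DeprecationWarning'
    message: warning.message,
    stack: warning.stack,
  });
});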
Finally, I wanted to mention the exit event, because one might think it is a great way to interrupt the process shutting down, allowing you to perform some cleanup or notification action. Unfortunately, this is not the case. When the event is fired, Node will only execute synchronous operations within the body of the handler; any asynchronous work enqueued in the event loop will be ignored.
// process-exit.js
process.on('exit', () => {
  console.log('Exiting now'); // synchronous — runs
  setTimeout(() => console.log('Set Timeout'), 0); // never runs
  process.nextTick(() => console.log('Next Tick')); // never runs
});
When we run the code, we can see that only the first console.log executes:
node process-exit.js
> Exiting now
The event can be useful if you want to log something before the process terminates. If you do this, make sure your log utility (e.g. Winston, Good, etc.) will output the entry synchronously. I should mention that this is an excellent reason for having a simple log-to-console strategy and using an external tool like Fluentd or Logstash to aggregate the logs to your logging infrastructure.
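For example, a last-gasp log line can be written synchronously to stdout. This is only a sketch; fs.writeSync on file descriptor 1 is used here to avoid any asynchronous buffering a logging library might do:

const fs = require('fs');

process.on('exit', (code) => {
  // Only synchronous work runs here; async transports would never flush
  fs.writeSync(1, `Process exiting with code ${code}\n`);
});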
Resources
Valeri Karpov (Mongoose maintainer) wrote a similar article about this last year: http://thecodebarbarian.com/unhandled-promise-rejections-in-node.js.html
The Node.js documentation is excellent and definitive on this topic: https://nodejs.org/api/process.html