How to Prepare Node.js Applications for Production

Production is always hard.

There are databases, configurations, infrastructure, performance, security, observability…the list goes on.

On top of all that, the stakes are high: defects can lead to downtime, and downtime costs the company money.

For Node.js applications, there are several things to keep in mind when moving to production.

Let’s go over them.

Configure Node.js Properly

It’s common for a company to have multiple servers and environments. Each environment will typically have different configurations and environment variables.

For example, a common setup is the following: Development → Staging → Production.

For environment variables, Node.js exposes a global process object (also available as the node:process core module). Its env property contains key-value pairs of all environment variables set at startup.

One particular environment variable to pay attention to is NODE_ENV. While Node.js itself doesn’t treat it specially, it’s a widely adopted convention used by Express and many other NPM packages.

When targeting a production environment, NODE_ENV needs to be set to production. Doing so provides several benefits, such as trimming logs down to the essentials and enabling performance optimizations (Express, for example, caches view templates in production).

Otherwise, sensitive information used during development and stack traces may be logged unintentionally.
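Here’s a minimal sketch of what that looks like in practice (the file name, port, and logging behavior are illustrative, not a prescribed setup):

// server.js — a minimal sketch of environment-aware behavior
const http = require('node:http');

const isProduction = process.env.NODE_ENV === 'production';

const server = http.createServer((req, res) => {
  if (!isProduction) {
    // Verbose request logging is handy locally, but noisy (and potentially sensitive) in production
    console.log(`${req.method} ${req.url}`);
  }
  res.end('ok');
});

server.listen(process.env.PORT || 3000);

In production, you’d start it with NODE_ENV=production node server.js (or set the variable in your deployment environment).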

Audit NPM Packages

In the Node.js ecosystem, there were over 2.1 million NPM packages as of 2022. Not all of them are properly maintained, which opens the door to security vulnerabilities.

One recent incident involved a commit to the popular colors package. The commit added an infinite loop, creating a domino effect that broke many widely-used NPM packages such as Jest, Rollup, and Faker.

Other examples include the event-stream incident in 2018 and the left-pad debacle in 2016.

These cases highlight the importance of vetting packages in an application. NPM provides commands to address these issues:

# Scan for vulnerabilities
npm audit

# Fix issues automatically
npm audit fix

# Update and fix
npm update && npm audit fix

Scans can also be automated. At my company, we use Snyk.

We’ve integrated Snyk into our CI/CD pipelines, triggering scans whenever a pull request is opened.
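If you’re not using a third-party scanner, npm itself can gate the build. For example, a CI step like this (a sketch using npm’s documented --audit-level flag) fails when serious issues are found:

# Fail the step when a high-severity (or worse) vulnerability is found
npm audit --audit-level=high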

Use a Non-Root User in Docker

Nowadays, most Node.js applications are containerized with Docker.

Docker packages up the code and dependencies needed to run the application into a container image.

High-Level View Of How Docker Containers Are Used

The Docker Engine then creates the container from the image at runtime.

Pipelines and orchestration tools like Docker Compose and Kubernetes can be used to streamline the deployment process.

By default, Docker runs commands inside containers as root. Following the Principle of Least Privilege (PoLP), a user should only have access to the data and resources needed to perform its task.

The official Docker image for Node provides a non-root user:

FROM node:6.10.3
...
# At the end, set the user to use when running this image
USER node

The name, uid, and gid of the user can be changed as needed.
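Here’s a slightly fuller sketch of a production Dockerfile that drops privileges (the LTS tag, working directory, and entry point are assumptions for illustration):

FROM node:20-slim

WORKDIR /app

# Install only production dependencies, owned by the non-root node user
COPY --chown=node:node package*.json ./
RUN npm ci --omit=dev
COPY --chown=node:node . .

# Drop privileges before starting the application
USER node
CMD ["node", "server.js"]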

Implement Strong Authentication Policies

Node.js has a core module called crypto. It’s a general purpose library for doing things like password hashing, encryption, and decryption (and much more).

For password hashing, crypto provides scrypt, an algorithm that hashes a password together with a salt.
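A minimal sketch using only the built-in module (the salt and key lengths here are illustrative):

// Hash and verify a password with Node's built-in scrypt
const { randomBytes, scryptSync, timingSafeEqual } = require('node:crypto');

function hashPassword(password) {
  const salt = randomBytes(16).toString('hex');
  const hash = scryptSync(password, salt, 64).toString('hex');
  return `${salt}:${hash}`; // store the salt alongside the hash
}

function verifyPassword(password, stored) {
  const [salt, hash] = stored.split(':');
  const candidate = scryptSync(password, salt, 64);
  return timingSafeEqual(Buffer.from(hash, 'hex'), candidate);
}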

Other packages such as bcrypt (based on the Blowfish cipher) and node-argon2 (the Argon2 algorithm) are also popular choices, designed to make brute-force attacks expensive.

Multi-Factor Authentication (MFA) is also becoming a standard security practice for web applications and should be added as an additional layer of security.

Protect Against Vulnerabilities

Cross-site scripting (XSS), clickjacking, and injections are different types of attacks you may run into.

Adding the proper headers mitigates many of these vulnerabilities. If your application uses Express, you can use an NPM package called Helmet.

Helmet sets the following headers, each of which can be customized as needed (a minimal setup sketch follows the list):

  • Content-Security-Policy: A powerful allow-list of what can happen on your page which mitigates many attacks

  • Cross-Origin-Opener-Policy: Helps process-isolate your page

  • Cross-Origin-Resource-Policy: Blocks others from loading your resources cross-origin

  • Origin-Agent-Cluster: Changes process isolation to be origin-based

  • Referrer-Policy: Controls the Referer header

  • Strict-Transport-Security: Tells browsers to prefer HTTPS

  • X-Content-Type-Options: Avoids MIME sniffing

  • X-DNS-Prefetch-Control: Controls DNS prefetching

  • X-Download-Options: Forces downloads to be saved (Internet Explorer only)

  • X-Frame-Options: Legacy header that mitigates clickjacking attacks

  • X-Permitted-Cross-Domain-Policies: Controls cross-domain behavior for Adobe products, like Acrobat

  • X-Powered-By: Info about the web server. Removed because it could be used in simple attacks

  • X-XSS-Protection: Legacy header that tries to mitigate XSS attacks, but makes things worse, so Helmet disables it
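Wiring Helmet up is essentially a one-liner (a minimal sketch, assuming an Express app with helmet installed):

// Apply Helmet's default headers to every response
const express = require('express');
const helmet = require('helmet');

const app = express();
app.use(helmet());

app.listen(3000);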

For DDoS and brute-force attacks, rate-limiting policies can be added at the load balancer (e.g., NGINX).

During development, security rules can be enforced with a linter. The ESLint ecosystem provides a security plugin (eslint-plugin-security) for identifying potential hotspots.

Integrate Monitoring Tools

There are many different Application Performance Monitoring (APM) tools on the market.

Datadog, New Relic, and Dynatrace are some of the more popular ones, all with different price points.

If you’re using a cloud platform, there are native tools. On AWS, CloudWatch can be used for monitoring and observability (logs and metrics).

X-Ray can be used for request tracing. This is useful for debugging issues spanning multiple services (end-to-end).

Open source is also an option. A combination of Prometheus, Loki, and Grafana can be used to create dashboards and track metrics.

Practice Debugging

As long as there’s code, there’s going to be bugs. Bugs need to be fixed, and code needs to be maintained. It’s inevitable.

Debugging is a skill and takes practice.

Learn how to use your IDE’s debugger and how to track down memory leaks and performance issues.

For memory leaks, a combination of a heap profiler, heap snapshots, and garbage collection traces can be used.

For performance issues, Node.js has a built-in profiler (from V8) to sample the stack at certain intervals. A generated tick file shows which functions take the most CPU time.
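For example (app.js stands in for your entry point):

# Run the app with the V8 sampling profiler enabled; it writes an isolate-*-v8.log tick file
node --prof app.js

# Turn the tick file into a human-readable summary of where CPU time goes
node --prof-process isolate-0xnnnnnnnnnnnn-v8.log > processed.txt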

Linux perf is another tool that can be used for low-level CPU profiling. Its output can then be used to generate a flame graph.

Flame Graph For Visualizing Stack Traces

Netflix used flame graphs to debug a performance issue related to Express.

If you made it this far, thank you for reading! I hope you enjoyed it.

If I made a mistake, please let me know.

P.S. If you’re enjoying the content of this newsletter, please share it with your network and subscribe: https://www.fullstackexpress.io/subscribe

Resources

[1] “Node.js Documentation,” nodejs.org.
https://nodejs.org/docs/latest/api/.

[2] “User Journey,” nodejs.org.
https://nodejs.org/en/learn/diagnostics/user-journey.

[3] “Docker Documentation,” docs.docker.com.
https://docs.docker.com/.
