For consumers that aren't interested in *why* a `statSync` call failed,
allocating and throwing an exception is an unnecessary expense. This PR
adds an option that will cause it to return `undefined` in such cases
instead.
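For illustration, here is a small sketch of the pattern this enables, assuming the option is the `throwIfNoEntry` flag that `fs.statSync` ultimately shipped with; the path and the handling in each branch are placeholders:

```js
'use strict';
// Sketch of the no-throw stat pattern; the path is a placeholder.
const fs = require('fs');

// With `throwIfNoEntry: false`, statSync returns undefined instead of
// throwing when the entry does not exist.
const stats = fs.statSync('/maybe/missing/file.ts', { throwIfNoEntry: false });
if (stats === undefined) {
  // Entry doesn't exist: skip it, with no try/catch and no Error
  // allocation on this hot path.
} else if (stats.isDirectory()) {
  // Handle the directory case: the caller cares about file vs. directory,
  // so existsSync alone would not be enough.
}
```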
As a motivating example, the JavaScript & TypeScript language service
shared between Visual Studio and Visual Studio Code is stuck with
synchronous file I/O for architectural and backward-compatibility
reasons. It frequently needs to speculatively check for the existence
of files and directories that may not exist (and cares about file vs.
directory, so `existsSync` is insufficient), but ignores file system
entries it can't access, regardless of the reason.
Benchmarking the language service is difficult because it's so hard to
get good coverage of both code bases and user behaviors, but, as a
representative metric, we measured batch compilation of a few hundred
popular projects (by star count) from GitHub and found that, on average,
we saved about 1-2% of total compilation time. We speculate that the
savings could be even more significant in interactive (language service
or watch mode) scenarios, where the same (non-existent) files need to be
polled over and over again. It's not a huge improvement, but it's a
very small change and it will affect a lot of users (and CI runs).
For reference, our measurements were against `v12.x`.
# Node.js Core Benchmarks
This folder contains code and data used to measure performance of different Node.js implementations and different ways of writing JavaScript run by the built-in JavaScript engine.
For a detailed guide on how to write and run benchmarks in this directory, see the guide on benchmarks.
## Table of Contents

* [Benchmark Directories](#benchmark-directories)
* [Other Top-level Files](#other-top-level-files)
* [Common API](#common-api)

## Benchmark Directories
| Directory | Purpose |
| --- | --- |
| assert | Benchmarks for the `assert` subsystem. |
| buffers | Benchmarks for the `buffer` subsystem. |
| child_process | Benchmarks for the `child_process` subsystem. |
| crypto | Benchmarks for the `crypto` subsystem. |
| dgram | Benchmarks for the `dgram` subsystem. |
| domain | Benchmarks for the `domain` subsystem. |
| es | Benchmarks for various new ECMAScript features and their pre-ES2015 counterparts. |
| events | Benchmarks for the `events` subsystem. |
| fixtures | Benchmark fixtures used in various benchmarks throughout the benchmark suite. |
| fs | Benchmarks for the `fs` subsystem. |
| http | Benchmarks for the `http` subsystem. |
| http2 | Benchmarks for the `http2` subsystem. |
| misc | Miscellaneous benchmarks and benchmarks for shared internal modules. |
| module | Benchmarks for the `module` subsystem. |
| net | Benchmarks for the `net` subsystem. |
| path | Benchmarks for the `path` subsystem. |
| perf_hooks | Benchmarks for the `perf_hooks` subsystem. |
| process | Benchmarks for the `process` subsystem. |
| querystring | Benchmarks for the `querystring` subsystem. |
| streams | Benchmarks for the `streams` subsystem. |
| string_decoder | Benchmarks for the `string_decoder` subsystem. |
| timers | Benchmarks for the `timers` subsystem, including `setTimeout`, `setInterval`, etc. |
| tls | Benchmarks for the `tls` subsystem. |
| url | Benchmarks for the `url` subsystem, including the legacy `url` implementation and the WHATWG URL implementation. |
| util | Benchmarks for the `util` subsystem. |
| vm | Benchmarks for the `vm` subsystem. |
## Other Top-level Files
The top-level files include common dependencies of the benchmarks and the tools for launching benchmarks and visualizing their output. The actual benchmark scripts should be placed in their corresponding directories.
* `_benchmark_progress.js`: implements the progress bar displayed when
  running `compare.js`.
* `_cli.js`: parses the command line arguments passed to `compare.js`,
  `run.js` and `scatter.js`.
* `_cli.R`: parses the command line arguments passed to `compare.R`.
* `_http-benchmarkers.js`: selects and runs external tools for benchmarking
  the `http` subsystem.
* `common.js`: see [Common API](#common-api).
* `compare.js`: command line tool for comparing performance between
  different Node.js binaries.
* `compare.R`: R script for statistically analyzing the output of
  `compare.js`.
* `run.js`: command line tool for running individual benchmark suite(s).
* `scatter.js`: command line tool for comparing the performance between
  different parameters in benchmark configurations, for example to analyze
  the time complexity.
* `scatter.R`: R script for visualizing the output of `scatter.js` with
  scatter plots.
## Common API

The `common.js` module is used by benchmarks for consistency across repeated tasks. It has a number of helpful functions and properties to help with writing benchmarks.

### `createBenchmark(fn, configs[, options])`
See the guide on writing benchmarks.
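For orientation, a minimal sketch of the conventional shape of a benchmark file, following the `createBenchmark` / `bench.start()` / `bench.end(n)` pattern the guide describes; the configuration values and the work inside the loop are placeholders:

```js
'use strict';
// Minimal benchmark-file sketch (placeholder workload); lives inside
// one of the benchmark directories so that '../common.js' resolves.
const common = require('../common.js');

const bench = common.createBenchmark(main, {
  n: [1e6], // each value in the array becomes one benchmark run
});

function main({ n }) {
  bench.start();   // begin timing
  let sum = 0;
  for (let i = 0; i < n; i++)
    sum += i;      // placeholder work being measured
  bench.end(n);    // report n operations over the elapsed time
}
```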
### `default_http_benchmarker`

The default benchmarker used to run HTTP benchmarks. See the guide on writing HTTP benchmarks.

### `PORT`

The default port used to run HTTP benchmarks. See the guide on writing HTTP benchmarks.
### `sendResult(data)`

Used in special benchmarks that can't use `createBenchmark` and the object it returns to accomplish what they need. This function reports timing data to the parent process (usually created by running `compare.js`, `run.js` or `scatter.js`).
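As a hedged sketch of a direct call: the field names shown (`name`, `conf`, `rate`, `time`, `type`) are assumptions modeled on what `createBenchmark` itself reports, so verify them against `common.js` before relying on them.

```js
'use strict';
// Hedged sketch: reporting a result directly, bypassing createBenchmark.
// The exact shape of `data` is an assumption; check common.js before use.
const common = require('../common.js');

const elapsedSeconds = 1.23; // hypothetical measured duration
const operations = 1e6;      // hypothetical operation count

common.sendResult({
  name: 'misc/my-special-benchmark.js', // hypothetical benchmark name
  conf: { n: operations },              // configuration used for this run
  rate: operations / elapsedSeconds,    // operations per second
  time: elapsedSeconds,                 // elapsed time in seconds
  type: 'report',                       // message type (assumption)
});
```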