Node.js Core Benchmarks

This folder contains code and data used to measure performance of different Node.js implementations and different ways of writing JavaScript run by the built-in JavaScript engine.

For a detailed guide on how to write and run benchmarks in this directory, see the guide on benchmarks.

Table of Contents

  • File tree structure
  • Common API

File tree structure

Directories

Benchmarks testing the performance of a single node submodule are placed into a directory with the corresponding name, so that they can be executed per submodule or individually. Benchmarks that span multiple submodules may either be placed in the misc directory or in a directory named after the feature they benchmark. For example, benchmarks for various new ECMAScript features and their pre-ES2015 counterparts are placed in a directory named es. Fixtures that are not specific to a certain benchmark but can be reused throughout the benchmark suite should be placed in the fixtures directory.
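As an illustration of the kind of comparison an es benchmark makes, the following standalone sketch times an ES2015+ feature against its pre-ES2015 counterpart. This is only a sketch with an ad-hoc timer; real benchmarks in this folder use the common.js harness instead.

```javascript
// Ad-hoc timer: run fn() n times and return the elapsed wall time in
// seconds. Real benchmarks report rates through common.js instead.
function time(fn, n) {
  const t0 = process.hrtime.bigint();
  for (let i = 0; i < n; i++) fn();
  return Number(process.hrtime.bigint() - t0) / 1e9;
}

const arr = [1, 2, 3, 4];
const n = 1e5;

// ES2015+ spread syntax vs. its pre-ES2015 counterpart, concat().
const spreadTime = time(() => [...arr, 5], n);
const concatTime = time(() => arr.concat(5), n);

console.log({ spreadTime, concatTime });
```

The benchmarks in the es directory follow this pattern of pairing a newer language feature with the older idiom it replaces, so regressions in either path show up side by side.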

Other Top-level files

The top-level files include common dependencies of the benchmarks and the tools for launching benchmarks and visualizing their output. The actual benchmark scripts should be placed in their corresponding directories.

  • _benchmark_progress.js: implements the progress bar displayed when running compare.js.
  • _cli.js: parses the command line arguments passed to compare.js, run.js and scatter.js.
  • _cli.R: parses the command line arguments passed to compare.R.
  • _http-benchmarkers.js: selects and runs external tools for benchmarking the http subsystem.
  • bar.R: R script for visualizing the output of benchmarks with bar plots.
  • common.js: see Common API.
  • compare.js: command line tool for comparing performance between different Node.js binaries.
  • compare.R: R script for statistically analyzing the output of compare.js.
  • run.js: command line tool for running individual benchmark suite(s).
  • scatter.js: command line tool for comparing the performance between different parameters in benchmark configurations, for example to analyze the time complexity.
  • scatter.R: R script for visualizing the output of scatter.js with scatter plots.

Common API

The common.js module is used by benchmarks for consistency across repeated tasks. It provides a number of helpful functions and properties for writing benchmarks.

createBenchmark(fn, configs[, options])

See the guide on writing benchmarks.
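The general shape of a benchmark built on createBenchmark can be sketched as follows. This is a simplified standalone illustration of the start/end contract, not the actual benchmark/common.js implementation; in particular, the real harness runs the cartesian product of all configured values and exposes the bench object differently.

```javascript
// Standalone sketch of the createBenchmark contract (NOT the real
// benchmark/common.js): the harness invokes main() with a resolved
// configuration, and the benchmark brackets the measured work with
// bench.start() and bench.end(n).
function createBenchmark(main, configs) {
  const bench = {
    // Pick the first value of each config array; the real harness runs
    // every combination of the configured values.
    config: Object.fromEntries(
      Object.entries(configs).map(([key, values]) => [key, values[0]])),
    start() { this._t0 = process.hrtime.bigint(); },
    end(operations) {
      const seconds = Number(process.hrtime.bigint() - this._t0) / 1e9;
      this.rate = operations / seconds; // operations per second
    },
  };
  main(bench.config, bench);
  return bench;
}

// A typical benchmark body: set up, start the timer, perform n
// operations, then report how many operations were completed.
const bench = createBenchmark((conf, b) => {
  let sum = 0;
  b.start();
  for (let i = 0; i < conf.n; i++) sum += i;
  b.end(conf.n);
}, { n: [1e6] });

console.log(bench.rate); // operations per second
```

The important invariant is that only the work between start() and end(n) is measured, and that n reflects exactly the number of operations performed in that window.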

default_http_benchmarker

The default benchmarker used to run HTTP benchmarks. See the guide on writing HTTP benchmarks.

PORT

The default port used to run HTTP benchmarks. See the guide on writing HTTP benchmarks.

sendResult(data)

Used by special benchmarks for which createBenchmark and the object it returns cannot accomplish what they need. This function reports timing data to the parent process (usually created by running compare.js, run.js or scatter.js).
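The reporting pattern can be sketched as follows. This is a standalone sketch, not the real sendResult from common.js, and the field names in the result object are illustrative assumptions rather than a documented schema.

```javascript
// Sketch of a benchmark that measures time itself and reports the
// result, in the spirit of sendResult(). NOT the common.js
// implementation; the result object's field names are assumptions.
function sendResult(data) {
  // compare.js/run.js spawn benchmarks as child processes, so a result
  // can travel over the IPC channel when one exists; otherwise print it.
  if (process.send) {
    process.send(data);
  } else {
    console.log(JSON.stringify(data));
  }
}

// Measure some work manually instead of using createBenchmark.
const n = 1e6;
const t0 = process.hrtime.bigint();
let sum = 0;
for (let i = 0; i < n; i++) sum += i;
const seconds = Number(process.hrtime.bigint() - t0) / 1e9;

sendResult({ name: 'misc/manual-sum', conf: { n }, rate: n / seconds, time: seconds });
```

When run directly (without an IPC channel) the sketch falls back to printing the result, which mirrors how such benchmarks remain runnable on their own.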