feat(archive): `UntarStream` and `TarStream` (#4548)
* refactor(archive): An implementation of Tar as streams
* fmt(archive): Ran `deno fmt`
* fix(archive): fixed JSDoc examples in tar_streams.ts
* fix(archive): fixed JSDoc examples so `deno task test` doesn't complain
* fix(archive): lint license error
* fix(archive): lint error files not exported
* set(archive): Set current time as mtime for default
* resolve(archive): resolves comments made
* add(archive): `{ mode: 'byob' }` support for TarStream
* add(archive): `{ mode: 'byob' }` support for UnTarStream
* adjust(archive): The logical flow of a few if statements
* tests(archive): Updated Tests for Un/TarStream
* fix(archive): TarStream mtime wasn't an octal
* fix(archive): TarStream tests
* add(archive): Added parsePathname function
Added a parsePathname function, abstracting the logic out of TarStream so the developer can validate pathnames before providing them to TarStream, instead of hoping it doesn't throw an error and require the archive creation to start all over again.
* fix(archive): extra bytes incorrectly appending at the end of files
When the appended file was exactly divisible by 512 bytes, an extra 512 bytes was appended instead of zero to fill in the gap, causing the next header to be read at the wrong place.
* adjust(archive): to always return the amount of bytes requested
Instead of using enqueue, the leftover bytes are saved for the next buffer provided.
* tweaks
* fix
* docs(archive): Link to the spec that they're following
* docs(archive): fix spelling
* add(archive): function validTarSteamOptions
- To make sure that, if TarStreamOptions are provided, they are in the correct format, so as not to create bad tarballs.
* add(archive): more tests
* fix(archive): validTarStreamOptions
* add(archive): tests for validTarStreamOptions
* refactor(archive): code to copy the changes made in the @doctor/tar-stream version
* test(archive): added from @doctor/tar-stream
* chore: nit on anonymous function
* refactor(archive): UnTarStream that fixes unexplainable memory leak
- The second newest test introduced here, '... with invalid ending', seems to detect a memory leak due to an invalid tarball. I couldn't figure out why the memory leak was happening, but I know this restructure of the code doesn't have that same memory leak.
* chore: fmt
* tests(archive): added remaining tests to cover as many lines as possible
* adjust(archive): remove the pathname-simplifying code
* adjust(archive): remove checking for duplicate pathnames in taring process
* adjust(archive): A readable will exist on TarEntry unless the type is one of the string values 1-6
* tests(archive): added more tests for higher coverage
* adjust(archives): TarStream and UnTarStream to implement TransformStream
* docs(archive): moved TarStreamOptions docs into properties.
* adjust(archive): TarStreamFile to take a ReadableStream instead of an Iterable | AsyncIterable
* adjust(archive): to use FixedChunkStream instead of rolling its own implementation
* fix(archive): lint error
* adjust(archive): Error types and messages
* adjust(archive): more Error messages / improve tests
* refactor(archive): UnTarStream to return TarStreamChunk instead of TarStreamEntry
* fix(archive): JSDoc example
* adjust(archive): mode, uid, gid options to be provided as numbers instead of strings.
* adjust(archive): TarStream's pathname to be only of type string
* fix(archive): prefix/name to ignore everything past the first NULL
* adjust(archive): `checksum` and `pad` to not be exposed from UnTarStream
* adjust(archive): checksum calculation
* change(archive): `.slice` to `.subarray`
* doc(archive): "octal number" to "octal literal"
* adjust(archive): TarStreamOptions to be optional with defaults
* doc(archive): added more docs for the interfaces
* docs(archive): denoting defaults
* docs(archive): updated for new lint rules
* adjust(archive): Tests to use assertRejects where appropriate & add `validPathname` function
- The `validPathname` is meant to be a nicer exposed function for users of this lib to validate that their pathnames are valid before piping them through the TarStream, rather than exposing parsePathname, where the user may be confused about what to do with the result.
* adjust(archive): to use `Date.now()` instead of `new Date().getTime()`
Co-authored-by: ud2 <sjx233@qq.com>
* adjust(archive): mode, uid, and gid to be numbers instead of strings when Untaring
* tests(archive): adjust two tests to also validate the contents of the files are valid
* adjust(archive): linkname, uname, and gname to follow the same decoding rules as name and prefix
* rename(archive): UnTarStream to UntarStream
* fix(archive): type that was missed getting updated
* tests(archive): adjust check headers test to validate all header properties instead of just pathnames
* rename(archive): `pathname` properties to `path`
* docs(archive): updated to be more descriptive
* docs(archive): Updated error types
* adjust(archive): `validPath` to `assertValidPath`
* adjust(archive): `validTarStreamOptions` to `assertValidTarStreamOptions`
* revert(archive): UntarStream to produce TarStreamEntry instead of TarStreamChunk
* refactor: remove redundant `void` return types
* docs: cleanup assertion function docs
* docs: correct `TarStream` example
* docs: minor docs cleanups
* refactor: improve error class specificity
* docs: add `@experimental` JSDoc tags
* docs(archive): Updated examples for `assertValidPath` and `assertValidTarStreamOptions`
* fix(archive): problem with tests
- I suspect the problem is that a file found by `Deno.readDir` changed size between the `Deno.stat` call and when `Deno.open` finished pulling it all in.
* update error messages
* update error messages
* fix typos
* refactor: tweak error messages
* refactor: tweaks and add type field
---------
Co-authored-by: Asher Gomez <ashersaupingomez@gmail.com>
Co-authored-by: ud2 <sjx233@qq.com>
Co-authored-by: Yoshiya Hinosawa <stibium121@gmail.com>
2024-09-02 07:43:22 +00:00
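The padding fix above ("extra bytes incorrectly appending at the end of files") and the size assertions in the tests below both follow the same 512-byte block math: each entry gets a 512-byte header, file data is padded up to the next 512-byte boundary (zero padding when already aligned), and the archive ends with 1024 bytes of zero-filled end-of-archive blocks. A minimal sketch of that arithmetic (`paddingFor` and `expectedArchiveSize` are illustrative helpers, not part of the module):

```typescript
// Bytes of zero padding needed after a file's data to reach the next
// 512-byte boundary; 0 when the size is already block-aligned (the
// case the fix above addresses).
function paddingFor(size: number): number {
  return (512 - (size % 512)) % 512;
}

// Total archive size: one 512-byte header per entry, each file's data
// rounded up to a 512-byte multiple, plus 1024 bytes of end padding.
function expectedArchiveSize(headerCount: number, fileSizes: number[]): number {
  const data = fileSizes.reduce(
    (sum, size) => sum + Math.ceil(size / 512) * 512,
    0,
  );
  return headerCount * 512 + data + 1024;
}

console.log(paddingFor(512)); // 0 — previously a full extra block was appended here
console.log(expectedArchiveSize(2, [12])); // 512 + 512 + 512 + 1024 = 2560
```

The `expectedArchiveSize(2, [12])` result matches the `512 + 512 + Math.ceil(text.length / 512) * 512 + 1024` assertion used by the "Hello World!" tests below.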
// Copyright 2018-2024 the Deno authors. All rights reserved. MIT license.
import { assertEquals, assertRejects, assertThrows } from "@std/assert";
import { concat } from "@std/bytes";
import {
  assertValidTarStreamOptions,
  TarStream,
  type TarStreamInput,
} from "./tar_stream.ts";
import { UntarStream } from "./untar_stream.ts";

Deno.test("TarStream() with default stream", async () => {
  const text = new TextEncoder().encode("Hello World!");

  const reader = ReadableStream.from<TarStreamInput>([
    {
      type: "directory",
      path: "./potato",
    },
    {
      type: "file",
      path: "./text.txt",
      size: text.length,
      readable: ReadableStream.from([text.slice()]),
    },
  ])
    .pipeThrough(new TarStream())
    .getReader();

  let size = 0;
  const data: Uint8Array[] = [];
  while (true) {
    const { done, value } = await reader.read();
    if (done) {
      break;
    }
    size += value.length;
    data.push(value);
  }
  assertEquals(size, 512 + 512 + Math.ceil(text.length / 512) * 512 + 1024);
  assertEquals(
    text,
    concat(data).slice(
      512 + // Slicing off ./potato header
        512, // Slicing off ./text.txt header
      -1024, // Slicing off 1024 bytes of end padding
    )
      .slice(0, text.length), // Slice off padding added to text to make it divisible by 512
  );
});

Deno.test("TarStream() with byte stream", async () => {
  const text = new TextEncoder().encode("Hello World!");

  const reader = ReadableStream.from<TarStreamInput>([
    {
      type: "directory",
      path: "./potato",
    },
    {
      type: "file",
      path: "./text.txt",
      size: text.length,
      readable: ReadableStream.from([text.slice()]),
    },
  ])
    .pipeThrough(new TarStream())
    .getReader({ mode: "byob" });

  let size = 0;
  const data: Uint8Array[] = [];
  while (true) {
    const { done, value } = await reader.read(
      new Uint8Array(Math.ceil(Math.random() * 1024)),
    );
    if (done) {
      break;
    }
    size += value.length;
    data.push(value);
  }
  assertEquals(size, 512 + 512 + Math.ceil(text.length / 512) * 512 + 1024);
  assertEquals(
    text,
    concat(data).slice(
      512 + // Slicing off ./potato header
        512, // Slicing off ./text.txt header
      -1024, // Slicing off 1024 bytes of end padding
    )
      .slice(0, text.length), // Slice off padding added to text to make it divisible by 512
  );
});

Deno.test("TarStream() with negative size", async () => {
  const text = new TextEncoder().encode("Hello World");

  const readable = ReadableStream.from<TarStreamInput>([
    {
      type: "file",
      path: "name",
      size: -text.length,
      readable: ReadableStream.from([text.slice()]),
    },
  ])
    .pipeThrough(new TarStream());

  await assertRejects(
    () => Array.fromAsync(readable),
    RangeError,
    "Cannot add to the tar archive: The size cannot exceed 64 Gibs",
  );
});

Deno.test("TarStream() with 65 GiB size", async () => {
  const size = 1024 ** 3 * 65;
  const step = 1024; // Size must equally be divisible by step
  const iterable = function* () {
    for (let i = 0; i < size; i += step) {
      yield new Uint8Array(step).map(() => Math.floor(Math.random() * 256));
    }
  }();

  const readable = ReadableStream.from<TarStreamInput>([
    {
      type: "file",
      path: "name",
      size,
      readable: ReadableStream.from(iterable),
    },
  ])
    .pipeThrough(new TarStream());

  await assertRejects(
    () => Array.fromAsync(readable),
    RangeError,
    "Cannot add to the tar archive: The size cannot exceed 64 Gibs",
  );
});

Deno.test("TarStream() with NaN size", async () => {
  const size = NaN;
  const step = 1024; // Size must equally be divisible by step
  const iterable = function* () {
    for (let i = 0; i < size; i += step) {
      yield new Uint8Array(step).map(() => Math.floor(Math.random() * 256));
    }
  }();

  const readable = ReadableStream.from<TarStreamInput>([
    {
      type: "file",
      path: "name",
      size,
      readable: ReadableStream.from(iterable),
    },
  ])
    .pipeThrough(new TarStream());

  await assertRejects(
    () => Array.fromAsync(readable),
    RangeError,
    "Cannot add to the tar archive: The size cannot exceed 64 Gibs",
  );
});

Deno.test("parsePath()", async () => {
  const readable = ReadableStream.from<TarStreamInput>([
    {
      type: "directory",
      path:
        "./Veeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeery/LongPath",
    },
    {
      type: "directory",
      path:
        "./some random path/with/loooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong/path",
    },
    {
      type: "directory",
      path:
        "./some random path/with/loooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong/file",
    },
    {
      type: "directory",
      path:
        "./some random path/with/loooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong/file",
    },
  ])
    .pipeThrough(new TarStream())
    .pipeThrough(new UntarStream());

  const output = [
    "./Veeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeery/LongPath",
    "./some random path/with/loooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong/path",
    "./some random path/with/loooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong/file",
    "./some random path/with/loooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong/file",
  ];
  for await (const tarEntry of readable) {
    assertEquals(tarEntry.path, output.shift());
    tarEntry.readable?.cancel();
  }
});

Deno.test("validTarStreamOptions()", () => {
  assertValidTarStreamOptions({});

  assertValidTarStreamOptions({ mode: 0 });
  assertThrows(
    () => assertValidTarStreamOptions({ mode: 8 }),
    TypeError,
    "Invalid Mode provided",
  );
  assertThrows(
    () => assertValidTarStreamOptions({ mode: 1111111 }),
    TypeError,
    "Invalid Mode provided",
  );

  assertValidTarStreamOptions({ uid: 0 });
  assertThrows(
    () => assertValidTarStreamOptions({ uid: 8 }),
    TypeError,
    "Invalid UID provided",
  );
  assertThrows(
    () => assertValidTarStreamOptions({ uid: 1111111 }),
    TypeError,
    "Invalid UID provided",
  );

  assertValidTarStreamOptions({ gid: 0 });
  assertThrows(
    () => assertValidTarStreamOptions({ gid: 8 }),
    TypeError,
    "Invalid GID provided",
  );
  assertThrows(
    () => assertValidTarStreamOptions({ gid: 1111111 }),
    TypeError,
    "Invalid GID provided",
  );

  assertValidTarStreamOptions({ mtime: 0 });
  assertThrows(
    () => assertValidTarStreamOptions({ mtime: NaN }),
    TypeError,
    "Invalid MTime provided",
  );
  assertValidTarStreamOptions({
    mtime: Math.floor(new Date().getTime() / 1000),
  });
  assertThrows(
    () => assertValidTarStreamOptions({ mtime: new Date().getTime() }),
    TypeError,
    "Invalid MTime provided",
  );

  assertValidTarStreamOptions({ uname: "" });
  assertValidTarStreamOptions({ uname: "abcdef" });
  assertThrows(
    () => assertValidTarStreamOptions({ uname: "å-abcdef" }),
    TypeError,
    "Invalid UName provided",
  );
  assertThrows(
    () => assertValidTarStreamOptions({ uname: "a".repeat(100) }),
    TypeError,
    "Invalid UName provided",
  );

  assertValidTarStreamOptions({ gname: "" });
  assertValidTarStreamOptions({ gname: "abcdef" });
  assertThrows(
    () => assertValidTarStreamOptions({ gname: "å-abcdef" }),
    TypeError,
    "Invalid GName provided",
  );
  assertThrows(
    () => assertValidTarStreamOptions({ gname: "a".repeat(100) }),
    TypeError,
    "Invalid GName provided",
  );

  assertValidTarStreamOptions({ devmajor: "" });
  assertValidTarStreamOptions({ devmajor: "1234" });
  assertThrows(
    () => assertValidTarStreamOptions({ devmajor: "123456789" }),
    TypeError,
    "Invalid DevMajor provided",
  );

  assertValidTarStreamOptions({ devminor: "" });
  assertValidTarStreamOptions({ devminor: "1234" });
  assertThrows(
    () => assertValidTarStreamOptions({ devminor: "123456789" }),
    TypeError,
    "Invalid DevMinor provided",
  );
});

Deno.test("TarStream() with invalid options", async () => {
  const readable = ReadableStream.from<TarStreamInput>([
    { type: "directory", path: "potato", options: { mode: 9 } },
  ]).pipeThrough(new TarStream());

  await assertRejects(
    () => Array.fromAsync(readable),
    TypeError,
    "Invalid Mode provided",
  );
});

Deno.test("TarStream() with mismatching sizes", async () => {
  const text = new TextEncoder().encode("Hello World!");
  const readable = ReadableStream.from<TarStreamInput>([
    {
      type: "file",
      path: "potato",
      size: text.length + 1,
      readable: ReadableStream.from([text.slice()]),
    },
  ]).pipeThrough(new TarStream());

  await assertRejects(
    () => Array.fromAsync(readable),
    RangeError,
    "Cannot add to the tar archive: The provided size (13) did not match bytes read from provided readable (12)",
  );
});

Deno.test("parsePath() with too long path", async () => {
  const readable = ReadableStream.from<TarStreamInput>([{
    type: "directory",
    path: "0".repeat(300),
  }])
    .pipeThrough(new TarStream());

  await assertRejects(
    () => Array.fromAsync(readable),
    RangeError,
    "Cannot parse the path as the path length cannot exceed 256 bytes: The path length is 300",
  );
});

Deno.test("parsePath() with too long path", async () => {
  const readable = ReadableStream.from<TarStreamInput>([{
    type: "directory",
    path: "0".repeat(160) + "/",
  }])
    .pipeThrough(new TarStream());

  await assertRejects(
    () => Array.fromAsync(readable),
    TypeError,
    "Cannot parse the path as the path needs to be split-able on a forward slash separator into [155, 100] bytes respectively",
  );
});
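The final test above rejects a path that cannot be split on a forward slash into the ustar header's prefix and name fields of [155, 100] bytes respectively. A rough sketch of that split rule (`trySplitPath` is a hypothetical helper for illustration, not part of the module, and assumes the byte-limit checks described in the test's error message):

```typescript
const encoder = new TextEncoder();

// Find a "/" that splits the path into a ustar prefix (at most 155 bytes)
// and name (at most 100 bytes), or report that no valid split exists.
function trySplitPath(path: string): [prefix: string, name: string] | null {
  if (encoder.encode(path).length <= 100) return ["", path]; // fits in name alone
  // Scan separators from the right so the name field stays as short as possible.
  for (let i = path.length - 1; i > 0; --i) {
    if (path[i] !== "/") continue;
    const prefix = path.slice(0, i);
    const name = path.slice(i + 1);
    if (
      name.length > 0 &&
      encoder.encode(prefix).length <= 155 &&
      encoder.encode(name).length <= 100
    ) {
      return [prefix, name];
    }
  }
  return null; // e.g. 160 leading "0"s leave no prefix within 155 bytes
}
```

For example, `"0".repeat(160) + "/"` (the rejected input above) yields `null`, while a 120-byte directory followed by a 50-byte filename splits cleanly.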