
Releases: kamiazya/web-csv-toolbox

web-csv-toolbox@0.15.0

25 Dec 01:38
9a53c3c


Minor Changes

  • #614 25d49ee Thanks @kamiazya! - BREAKING CHANGE: Restrict columnCountStrategy options for object output to fill/strict only.

    Object format now rejects keep and truncate strategies at runtime, as these strategies are incompatible with object output semantics. Users relying on keep or truncate with object format must either:

    • Switch to outputFormat: 'array' to use these strategies, or
    • Use fill (default) or strict for object output

    This change improves API clarity by aligning strategy availability with format capabilities and documenting the purpose-driven strategy matrix (including sparse/header requirements).
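As an illustration, the documented strategy matrix could be enforced like this (the names and the thrown error type below are assumptions for the sketch, not the library's actual internals):

```typescript
// Hypothetical sketch of the documented strategy matrix; the real
// implementation in web-csv-toolbox may differ.
type ColumnCountStrategy = "fill" | "strict" | "keep" | "truncate";
type OutputFormat = "object" | "array";

const ALLOWED: Record<OutputFormat, ReadonlySet<ColumnCountStrategy>> = {
  // Object output must map every value to a header key, so short rows can
  // only be padded (fill) or rejected (strict).
  object: new Set(["fill", "strict"]),
  // Array output has no key/value pairing, so all strategies apply.
  array: new Set(["fill", "strict", "keep", "truncate"]),
};

function assertStrategyAllowed(
  format: OutputFormat,
  strategy: ColumnCountStrategy,
): void {
  if (!ALLOWED[format].has(strategy)) {
    throw new TypeError(
      `columnCountStrategy "${strategy}" is not supported with outputFormat "${format}"`,
    );
  }
}
```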

  • #616 8adf5d9 Thanks @kamiazya! - Add TypeScript 5.0 const type parameters to eliminate as const requirements

    New Features:

    • Add CSVOutputFormat type alias for "object" | "array" union type
    • Implement const type parameters in factory functions for automatic literal type inference
    • Add function overloads to factory functions for precise return type narrowing
    • Users no longer need to write as const when specifying headers, delimiters, or other options

    Improvements:

    • Replace import("@/...").XXX patterns with standard import statements at file top
    • Update factory function type signatures to use const type parameters:
      • createStringCSVParser - automatically infers header, delimiter, quotation, and output format types
      • createBinaryCSVParser - automatically infers header, delimiter, quotation, charset, and output format types
      • createCSVRecordAssembler - automatically infers header and output format types
    • Update type definitions to support const type parameters:
      • CSVRecordAssemblerCommonOptions - add OutputFormat and Strategy type parameters
      • CSVProcessingOptions - add OutputFormat type parameter
      • BinaryOptions - add Charset type parameter
    • Update JSDoc examples in factory functions to remove unnecessary as const annotations
    • Update README.md examples to demonstrate simplified usage without as const

    Before:

    const parser = createStringCSVParser({
      header: ["name", "age"] as const, // Required as const
      outputFormat: "object",
    });

    After:

    const parser = createStringCSVParser({
      header: ["name", "age"], // Automatically infers literal types
      outputFormat: "object", // Return type properly narrowed
    });

    Technical Details:

    • Leverages TypeScript 5.0's const type parameters feature
    • Uses function overloads to narrow return types based on outputFormat value:
      • outputFormat: "array" → returns array parser
      • outputFormat: "object" → returns object parser
      • omitted → defaults to object parser
      • dynamic value → returns union type
    • All changes are 100% backward compatible
    • Existing code using as const continues to work unchanged
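A minimal standalone sketch of the overload + const type parameter pattern described above (illustrative only, not the library's actual implementation):

```typescript
// Sketch: const type parameters preserve literal tuple types without
// `as const`, and overloads narrow the return type by outputFormat.
type CSVOutputFormat = "object" | "array";

interface ParserOptions<H extends readonly string[], F extends CSVOutputFormat> {
  header: H;
  outputFormat?: F;
}

function createParser<const H extends readonly string[]>(
  options: ParserOptions<H, "object">,
): (line: string) => Record<H[number], string>;
function createParser<const H extends readonly string[]>(
  options: ParserOptions<H, "array">,
): (line: string) => string[];
function createParser(
  options: ParserOptions<readonly string[], CSVOutputFormat>,
) {
  const { header, outputFormat = "object" } = options;
  return (line: string) => {
    const fields = line.split(",");
    if (outputFormat === "array") return fields;
    // Pair each header literal with the field at the same index.
    return Object.fromEntries(header.map((h, i) => [h, fields[i] ?? ""]));
  };
}

// No `as const` needed: `header` is inferred as readonly ["name", "age"],
// and the return type is narrowed by outputFormat.
const toObject = createParser({ header: ["name", "age"] });
const toArray = createParser({ header: ["name", "age"], outputFormat: "array" });
```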
  • #614 25d49ee Thanks @kamiazya! - ## Lexer API Changes

    This release includes low-level Lexer API changes for performance optimization.

    Breaking Changes (Low-level API only)

    These changes only affect users of the low-level Lexer API. High-level APIs (parseString, parseBinary, etc.) are unchanged.

    1. Token type constants: Changed from Symbol to numeric constants
    2. Location tracking: Now disabled by default. Add trackLocation: true to Lexer options if you need token location information. Note: Error messages still include position information even when trackLocation: false (computed lazily only when errors occur).
3. Token object structure: Changed to improve performance. Token properties have changed, and the total token count is reduced by combining delimiter and newline information into a single field.

    Who is affected?

    Most users are NOT affected. Only users who directly use FlexibleStringCSVLexer and rely on token.location or Symbol-based token type comparison need to update their code.
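For affected low-level users, the migration from Symbol-based comparison can be pictured as below. The constant names and numeric values are assumptions for illustration; check the library's exported token type constants for the real ones.

```typescript
// Before (v0.14, illustrative): token types were Symbols, compared by identity.
// const FieldToken = Symbol("Field");
// if (token.type === FieldToken) { ... }

// After (v0.15, illustrative): numeric constants allow cheap equality
// checks and switch statements.
const TokenType = {
  Field: 0,
  FieldDelimiter: 1,
  RecordDelimiter: 2,
} as const;
type TokenType = (typeof TokenType)[keyof typeof TokenType];

interface Token {
  type: TokenType;
  value: string;
}

function isField(token: Token): boolean {
  return token.type === TokenType.Field; // numeric equality, no Symbol identity
}
```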

  • #613 2a7b22e Thanks @kamiazya! - Add factory functions for stream-based CSV parsing APIs

    New Features:

    • Add createStringCSVParserStream() factory function for Mid-level string stream parsing
    • Add createBinaryCSVParserStream() factory function for Mid-level binary stream parsing
    • Add createStringCSVLexerTransformer() factory function for creating StringCSVLexerTransformer instances
    • Add createCSVRecordAssemblerTransformer() factory function for creating CSVRecordAssemblerTransformer instances
    • Add StringCSVLexerOptions type for lexer factory function options
    • Add StringCSVLexerTransformerStreamOptions type for stream behavior options
    • Add CSVRecordAssemblerFactoryOptions type for assembler factory function options
    • Add StringCSVParserFactoryOptions type for string parser factory function options
    • Add BinaryCSVParserFactoryOptions type for binary parser factory function options
    • Update model factory functions (createStringCSVLexer, createCSVRecordAssembler, createStringCSVParser, createBinaryCSVParser) to accept engine options for future optimization support
    • Update documentation with API level classification (High-level, Mid-level, Low-level)

    Breaking Changes:

    • Rename CSVLexerTransformer class to StringCSVLexerTransformer to clarify input type (string)
    • Rename createCSVLexerTransformer() to createStringCSVLexerTransformer() for consistency
    • Rename CSVLexerTransformerStreamOptions type to StringCSVLexerTransformerStreamOptions for naming consistency
    • Remove unused CSVLexerTransformerOptions type

    Migration:

    // Before
    import {
      CSVLexerTransformer,
      createCSVLexerTransformer,
      type CSVLexerTransformerStreamOptions,
    } from "web-csv-toolbox";
    new CSVLexerTransformer(lexer);
    createCSVLexerTransformer({ delimiter: "," });
    
    // After
    import {
      StringCSVLexerTransformer,
      createStringCSVLexerTransformer,
      type StringCSVLexerTransformerStreamOptions,
    } from "web-csv-toolbox";
    new StringCSVLexerTransformer(lexer);
    createStringCSVLexerTransformer({ delimiter: "," });

    These factory functions simplify the API by handling internal parser/lexer creation, reducing the impact of future internal changes on user code. This addresses the issue where CSVLexerTransformer constructor signature changed in v0.14.0 (#612).
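The rationale in the last paragraph can be sketched generically: callers that go through a factory are insulated from constructor churn. All names below are illustrative, not the library's actual classes.

```typescript
// Sketch of the factory pattern: if the Lexer constructor signature
// changes between releases (as CSVLexerTransformer's did in v0.14.0),
// only the factory needs updating, not every call site.
interface LexerOptions {
  delimiter?: string;
}

class Lexer {
  constructor(readonly delimiter: string) {}
  lex(input: string): string[] {
    return input.split(this.delimiter);
  }
}

// Callers depend only on this function's stable options object.
function createLexer(options: LexerOptions = {}): Lexer {
  return new Lexer(options.delimiter ?? ",");
}
```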

Patch Changes

  • #614 25d49ee Thanks @kamiazya! - ## JavaScript Parser Performance Improvements

    This release includes significant internal optimizations that improve JavaScript-based CSV parsing performance.

    Before / After Comparison

    Metric                    Before (v0.14)   After       Improvement
    1,000 rows parsing        3.57 ms          1.42 ms     60% faster
    5,000 rows parsing        19.47 ms         7.03 ms     64% faster
    Throughput (1,000 rows)   24.3 MB/s        61.2 MB/s   2.51x
    Throughput (5,000 rows)   24.5 MB/s        67.9 MB/s   2.77x

    Optimization Summary

    Optimization                             Target      Improvement
    Array copy method improvement            Assembler   -8.7%
    Quoted field parsing optimization        Lexer       Overhead eliminated
    Object assembler loop optimization       Assembler   -5.4%
    Regex removal for unquoted fields        Lexer       -14.8%
    String comparison optimization           Lexer       ~10%
    Object creation optimization             Lexer       ~20%
    Non-destructive buffer reading           GC          -46%
    Token type numeric conversion            Lexer/GC    -7% / -13%
    Location tracking made optional          Lexer       -19% to -31%
    Object.create(null) for records          Assembler   -31%
    Empty-row template cache                 Assembler   ~4% faster on sparse CSV
    Row buffer reuse (no per-record slice)   Assembler   ~6% faster array format
    Header-length builder preallocation      Assembler   Capacity stays steady on wide CSV
    Object assembler row buffer pooling      Assembler   Lower GC spikes on object output
    Lexer segment-buffer pooling             Lexer       Smoother GC for quoted-heavy input

...


web-csv-toolbox@0.14.0

27 Nov 14:48
01dbd68


Minor Changes

  • #608 24f04d7 Thanks @kamiazya! - feat!: rename binary stream APIs for consistency and add BufferSource support

    Summary

    This release standardizes the naming of binary stream parsing APIs to match the existing parseBinary* family, and extends support to accept any BufferSource type (ArrayBuffer, Uint8Array, and other TypedArray views).

    Breaking Changes

    API Renaming for Consistency

    All parseUint8Array* functions have been renamed to parseBinary* to maintain consistency with existing binary parsing APIs:

    Function Names:

    • parseUint8ArrayStream() → parseBinaryStream()
    • parseUint8ArrayStreamToStream() → parseBinaryStreamToStream()

    Type Names:

    • ParseUint8ArrayStreamOptions → ParseBinaryStreamOptions

    Internal Functions (for reference):

    • parseUint8ArrayStreamInMain() → parseBinaryStreamInMain()
    • parseUint8ArrayStreamInWorker() → parseBinaryStreamInWorker()
    • parseUint8ArrayStreamInWorkerWASM() → parseBinaryStreamInWorkerWASM()

    Rationale:
    The previous naming was inconsistent with the rest of the binary API family (parseBinary, parseBinaryToArraySync, parseBinaryToIterableIterator, parseBinaryToStream). The new naming provides:

    • Perfect consistency across all binary parsing APIs
    • Clear indication that these functions accept any binary data format
    • Better predictability for API discovery

    BufferSource Support

    FlexibleBinaryCSVParser and BinaryCSVParserStream now accept BufferSource (= ArrayBuffer | ArrayBufferView) instead of just Uint8Array:

    Before:

    const parser = new FlexibleBinaryCSVParser({ header: ['name', 'age'] });
    const data = new Uint8Array([...]); // Only Uint8Array
    const records = parser.parse(data);

    After:

    const parser = new FlexibleBinaryCSVParser({ header: ['name', 'age'] });
    
    // Uint8Array still works
    const uint8Data = new Uint8Array([...]);
    const records1 = parser.parse(uint8Data);
    
    // ArrayBuffer now works directly
    const buffer = await fetch('data.csv').then(r => r.arrayBuffer());
    const records2 = parser.parse(buffer);
    
    // Other TypedArray views also work
    const int8Data = new Int8Array([...]);
    const records3 = parser.parse(int8Data);

    Benefits:

    • Direct use of fetch().then(r => r.arrayBuffer()) without conversion
    • Flexibility to work with any TypedArray view
    • Alignment with Web API standards (BufferSource is widely used)

    Migration Guide

    Automatic Migration

    Use find-and-replace in your codebase:

    # Function calls
    parseUint8ArrayStream → parseBinaryStream
    parseUint8ArrayStreamToStream → parseBinaryStreamToStream
    
    # Type references
    ParseUint8ArrayStreamOptions → ParseBinaryStreamOptions

    TypeScript Users

    If you were explicitly typing with Uint8Array, you can now use the more general BufferSource:

    // Before
    function processCSV(data: Uint8Array) {
      return parseBinaryStream(data);
    }
    
    // After (more flexible)
    function processCSV(data: BufferSource) {
      return parseBinaryStream(data);
    }

    Updated API Consistency

    All binary parsing APIs now follow a consistent naming pattern:

    // Single-value binary data
    parseBinary(); // Binary → AsyncIterableIterator<Record>
    parseBinaryToArraySync(); // Binary → Array<Record> (sync)
    parseBinaryToIterableIterator(); // Binary → IterableIterator<Record>
    parseBinaryToStream(); // Binary → ReadableStream<Record>
    
    // Streaming binary data
    parseBinaryStream(); // ReadableStream<Uint8Array> → AsyncIterableIterator<Record>
    parseBinaryStreamToStream(); // ReadableStream<Uint8Array> → ReadableStream<Record>

    Note: While the stream input type remains ReadableStream<Uint8Array> (Web Streams API standard), the internal parsers now accept BufferSource for individual chunks.

    Documentation Updates

    README.md

    • Updated Low-level APIs section to reflect parseBinaryStream* naming
    • Added flush procedure documentation for streaming mode
    • Added BufferSource examples

    API Reference (docs/reference/package-exports.md)

    • Added comprehensive Low-level API Reference section
    • Documented all Parser Models (Tier 1) and Lexer + Assembler (Tier 2)
    • Included usage examples and code snippets

    Architecture Guide (docs/explanation/parsing-architecture.md)

    • Updated Binary CSV Parser section to document BufferSource support
    • Added detailed streaming mode examples with flush procedures
    • Clarified multi-byte character handling across chunk boundaries

    Flush Procedure Clarification

    Documentation now explicitly covers the requirement to call parse() without arguments when using streaming mode:

    const parser = createBinaryCSVParser({ header: ["name", "age"] });
    const encoder = new TextEncoder();
    
    // Process chunks
    const records1 = parser.parse(encoder.encode("Alice,30\nBob,"), {
      stream: true,
    });
    const records2 = parser.parse(encoder.encode("25\n"), { stream: true });
    
    // IMPORTANT: Flush remaining data (required!)
    const records3 = parser.parse();

    This prevents data loss from incomplete records or multi-byte character buffers.

    Type Safety

    All changes maintain full TypeScript strict mode compliance with proper type inference and generic constraints.

  • #608 24f04d7 Thanks @kamiazya! - Add arrayBufferThreshold option to Engine configuration for automatic Blob reading strategy selection

    New Feature

    Added engine.arrayBufferThreshold option that automatically selects the optimal Blob reading strategy based on file size:

    • Files smaller than threshold: Use blob.arrayBuffer() + parseBinary() (6-8x faster, confirmed by benchmarks)
    • Files equal to or larger than threshold: Use blob.stream() + parseBinaryStream() (memory-efficient)

    Default: 1MB (1,048,576 bytes), determined by comprehensive benchmarks

    Applies to: parseBlob() and parseFile() only

    Benchmark Results

    File Size   Binary (ops/sec)   Stream (ops/sec)   Performance Gain
    1KB         21,691             2,685              8.08x faster
    10KB        2,187              311                7.03x faster
    100KB       219                32                 6.84x faster
    1MB         20                 3                  6.67x faster

    Usage

    import { parseBlob, EnginePresets } from "web-csv-toolbox";
    
    // Use default (1MB threshold)
    for await (const record of parseBlob(file)) {
      console.log(record);
    }
    
    // Always use streaming (memory-efficient)
    for await (const record of parseBlob(largeFile, {
      engine: { arrayBufferThreshold: 0 },
    })) {
      console.log(record);
    }
    
    // Custom threshold (512KB)
    for await (const record of parseBlob(file, {
      engine: { arrayBufferThreshold: 512 * 1024 },
    })) {
      console.log(record);
    }
    
    // With preset
    for await (const record of parseBlob(file, {
      engine: EnginePresets.fastest({
        arrayBufferThreshold: 2 * 1024 * 1024, // 2MB
      }),
    })) {
      console.log(record);
    }

    Special Values

    • 0 - Always use streaming (maximum memory efficiency)
    • Infinity - Always use arrayBuffer (maximum performance for small files)

    Security Note

    When using arrayBufferThreshold > 0, files must stay below maxBufferSize (default 10MB) to prevent excessive memory allocation. Files exceeding this limit will throw a RangeError.

    Design Philosophy

    This option belongs to engine configuration because it affects performance and behavior only, not the parsing result specification. This follows the design principle:

    • Top-level options: Affect specification (result changes)
    • Engine options: Affect performance/behavior (same result, different execution)
  • #608 24f04d7 Thanks @kamiazya! - Add support for Blob, File, and Request objects

    This release adds native support for parsing CSV data from Web Standard Blob, File, and Request objects, making the library more versatile across different environments.

    New Functions:

    • parseBlob(blob, options) - Parse CSV from Blob or File objects

      • Automatic charset detection from blob.type property
      • Supports compression via decompression option
      • Returns AsyncIterableIterator<CSVRecord>
      • Includes .toArray() and .toStream() namespace methods
    • parseFile(file, options) - Enhanced File parsing with automatic error source tracking

      • Built on top of parseBlob with additional functionality
      • Automatically sets file.name as error source for better error reporting
      • Provides clearer intent when working specifically with File objects
      • Useful for file inputs and drag-and-drop scenarios...

web-csv-toolbox@0.13.0

04 Nov 00:45
efe97fa


Minor Changes

  • #545 43a6812 Thanks @kamiazya! - Add comprehensive memory protection to prevent memory exhaustion attacks

    This release introduces new security features to prevent unbounded memory growth during CSV parsing. The parser now enforces configurable limits on both buffer size and field count to protect against denial-of-service attacks via malformed or malicious CSV data.

    New Features:

    • Added maxBufferSize option to CommonOptions (default: 10 * 1024 * 1024 characters)
    • Added maxFieldCount option to RecordAssemblerOptions (default: 100,000 fields)
    • Throws RangeError when buffer exceeds size limit
    • Throws RangeError when field count exceeds limit
    • Comprehensive memory safety protection against DoS attacks

    Note: maxBufferSize is measured in UTF-16 code units (JavaScript string length), not bytes. This is approximately 10MB for ASCII text, but may vary for non-ASCII characters.

    Breaking Changes:
    None. This is a backward-compatible enhancement with sensible defaults.

    Security:
    This change addresses three potential security vulnerabilities:

    1. Unbounded buffer growth via streaming input: Attackers could exhaust system memory by streaming large amounts of malformed CSV data that cannot be tokenized. The maxBufferSize limit prevents this by throwing RangeError when the internal buffer exceeds 10 * 1024 * 1024 characters (approximately 10MB for ASCII).

    2. Quoted field parsing memory exhaustion: Attackers could exploit the quoted field parsing logic by sending strategically crafted CSV with unclosed quotes or excessive escaped quotes, causing the parser to accumulate unbounded data in the buffer. The maxBufferSize limit protects against this attack vector.

    3. Excessive column count attacks: Attackers could send CSV files with an enormous number of columns to exhaust memory during header parsing and record assembly. The maxFieldCount limit (default: 100,000 fields per record) prevents this by throwing RangeError when exceeded.

    Users processing untrusted CSV input are encouraged to use the default limits or configure appropriate maxBufferSize and maxFieldCount values for their use case.
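A minimal sketch of how such guards typically work, assuming only the documented defaults and RangeError behavior (nothing about the library's actual internals):

```typescript
// Documented defaults: maxBufferSize is measured in UTF-16 code units
// (JavaScript string length), not bytes; maxFieldCount is fields per record.
const DEFAULT_MAX_BUFFER_SIZE = 10 * 1024 * 1024; // ~10MB for ASCII text
const DEFAULT_MAX_FIELD_COUNT = 100_000;

function checkBufferSize(
  buffer: string,
  maxBufferSize = DEFAULT_MAX_BUFFER_SIZE,
): void {
  if (buffer.length > maxBufferSize) {
    throw new RangeError(
      `Buffer size ${buffer.length} exceeds maxBufferSize ${maxBufferSize}`,
    );
  }
}

function checkFieldCount(
  fieldCount: number,
  maxFieldCount = DEFAULT_MAX_FIELD_COUNT,
): void {
  if (fieldCount > maxFieldCount) {
    throw new RangeError(
      `Field count ${fieldCount} exceeds maxFieldCount ${maxFieldCount}`,
    );
  }
}
```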

  • #546 76eec90 Thanks @kamiazya! - BREAKING CHANGE: Change error types from RangeError to TypeError for consistency with Web Standards

    • Change all RangeError to TypeError for consistency
    • This affects error handling in:
      • getOptionsFromResponse(): Invalid MIME type, unsupported/multiple content-encodings
      • parseResponse(): Null response body
      • parseResponseToStream(): Null response body
    • Aligns with Web Standard APIs behavior (DecompressionStream throws TypeError)
    • Improves consistency for error handling with catch (error instanceof TypeError)

    Migration guide:
    If you were catching RangeError from getOptionsFromResponse(), update to catch TypeError instead:

    - } catch (error) {
    -   if (error instanceof RangeError) {
    + } catch (error) {
    +   if (error instanceof TypeError) {
          // Handle invalid content type or encoding
        }
      }

    New feature: Experimental compression format support

    • Add allowExperimentalCompressions option to enable experimental/non-standard compression formats
    • Browsers: By default, only gzip and deflate are supported (cross-browser compatible)
    • Node.js: By default, gzip, deflate, and br (Brotli) are supported
    • When enabled, allows platform-specific formats like deflate-raw (Chrome/Edge only)
    • Provides flexibility for environment-specific compression formats
    • See documentation for browser compatibility details and usage examples

    Other improvements in this release:

    • Add Content-Encoding header validation with RFC 7231 compliance
    • Normalize Content-Encoding header: convert to lowercase, trim whitespace
    • Ignore empty or whitespace-only Content-Encoding headers
    • Add comprehensive tests for Content-Encoding validation (23 tests)
    • Add security documentation with TransformStream size limit example
    • Error messages now guide users to allowExperimentalCompressions option when needed
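Whether to set allowExperimentalCompressions can also be decided by feature detection rather than user-agent checks. This sketch relies only on the standard behavior that the DecompressionStream constructor throws for unsupported formats:

```typescript
// Feature-detect whether the current runtime's DecompressionStream
// accepts a given Content-Encoding format.
function supportsCompressionFormat(format: string): boolean {
  if (typeof DecompressionStream === "undefined") return false;
  try {
    // The constructor throws a TypeError for unsupported formats.
    // Cast because `format` is a runtime string, not the typed union.
    new DecompressionStream(format as any);
    return true;
  } catch {
    return false;
  }
}
```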
  • #551 b21b6d8 Thanks @kamiazya! - Add comprehensive documentation for supported environments and versioning policy

    This release adds two new reference documents to clarify the library's support policies and version management strategy.

    New Documentation:

  • #551 b21b6d8 Thanks @kamiazya! - Add environment-specific compression format support for better cross-browser and Node.js compatibility

    This release adjusts the supported compression formats based on the runtime environment to ensure reliability and prevent errors across different browsers and Node.js versions.

    Changes:

    • Browser environments: Support gzip and deflate only (universal cross-browser support)
    • Node.js 20+ environments: Support gzip, deflate, and br (Brotli)

    Rationale:

    Previously, browser builds included deflate-raw in the default supported formats. However, deflate-raw is only supported in Chromium-based browsers (Chrome, Edge) and not in Firefox or Safari. To ensure the library works reliably across all modern browsers by default, we now only include universally supported formats.

    Browser Compatibility:

    Format        Chrome/Edge   Firefox   Safari   Included by Default
    gzip          ✅            ✅        ✅       Yes
    deflate       ✅            ✅        ✅       Yes
    deflate-raw   ✅            ❌        ❌       No (experimental)

    Using Experimental Compressions:

    If you need to use deflate-raw or other non-standard compression formats in Chromium-based browsers, you can enable them with the allowExperimentalCompressions option:

    // Use deflate-raw in Chrome/Edge (may fail in Firefox/Safari)
    const response = await fetch("data.csv"); // Content-Encoding: deflate-raw
    await parseResponse(response, {
      allowExperimentalCompressions: true,
    });

    You can also detect browser support at runtime:

    // Browser-aware usage
    const isChromium = navigator.userAgent.includes("Chrome");
    await parseResponse(response, {
      allowExperimentalCompressions: isChromium,
    });

    Migration Guide:

    For users who were relying on deflate-raw in browser environments:

    1. Option 1: Use gzip or deflate compression instead (recommended for cross-browser compatibility)

      // Server-side: Use gzip instead of deflate-raw
      response.headers.set("content-encoding", "gzip");
    2. Option 2: Enable experimental compressions for Chromium-only deployments

      await parseResponse(response, {
        allowExperimentalCompressions: true,
      });
      // Works in Chrome/Edge, may fail in Firefox/Safari
    3. Option 3: Detect browser support and handle fallbacks

      try {
        await parseResponse(response, {
          allowExperimentalCompressions: true,
        });
      } catch (error) {
        // Fallback for browsers that don't support the format
        console.warn("Compression format not supported, using uncompressed");
      }

    Implementation:

    The supported compressions are now determined at build time using package.json imports field:

    • Browser/Web builds use getOptionsFromResponse.constants.web.js
    • Node.js builds use getOptionsFromResponse.constants.node.js

    This ensures type-safe, environment-appropriate compression support.

    No changes required for users already using gzip or deflate compression in browsers, or gzip, deflate, or br in Node.js.

  • #563 7d51d52 Thanks @kamiazya! - Optimize streaming API design for better performance and consistency

    Breaking Changes

    Token Stream Output Changed from Batch to Individual

    CSVLexerTransformer and CSVRecordAssemblerTransformer now emit/accept individual tokens instead of token arrays for improved streaming performance and API consistency.

    Before:

    CSVLexerTransformer: TransformStream<string, Token[]>;
    CSVRecordAssemblerTransformer: TransformStream<Token[], CSVRecord>;

    After:

    C...

web-csv-toolbox@0.12.0

24 Oct 00:54
a167ad8


Minor Changes

  • #533 b221fc7 Thanks @kamiazya! - Migrate to ESM-only distribution

    This release removes CommonJS (CJS) and UMD build outputs, distributing only ES modules (ESM). All build artifacts are now placed directly in the dist/ directory for a simpler and cleaner structure.

    Breaking Changes

    • Removed CommonJS support: The package no longer provides .cjs files. Node.js projects must use ES modules.
    • Removed UMD bundle: The UMD build (dist/web-csv-toolbox.umd.js) has been removed. For CDN usage, use ESM via <script type="module">.
    • Changed distribution structure: Build outputs moved from dist/es/, dist/cjs/, and dist/types/ to dist/ root directory.
    • Removed build:browser command: The separate UMD build step is no longer needed.

    Migration Guide

    For Node.js users:

    • Ensure your project uses "type": "module" in package.json, or use .mjs file extensions
    • Update any CommonJS require() calls to ESM import statements
    • Node.js 20.x or later is required (already the minimum supported version)

    For CDN users:
    Before:

    <script src="https://unpkg.com/web-csv-toolbox"></script>

    After:

    <script type="module">
      import { parse } from "https://unpkg.com/web-csv-toolbox";
    </script>

    For bundler users:
    No changes required - modern bundlers handle ESM correctly.

    Benefits

    • Simpler build configuration and faster build times
    • Smaller package size
    • Cleaner distribution structure
    • Alignment with modern JavaScript ecosystem standards
  • #476 ae54611 Thanks @kamiazya! - Drop support Node.js v18 and Add test on Node.js v24


web-csv-toolbox@0.11.2

15 Jun 07:49
0b450ff



web-csv-toolbox@0.11.1

15 Jun 04:43
c4f47f8


Patch Changes

  • #471 ff5534e Thanks @kamiazya! - build(deps): bump serde_json from 1.0.125 to 1.0.140 in /web-csv-toolbox-wasm

  • #471 ff5534e Thanks @kamiazya! - build(deps): bump csv from 1.3.0 to 1.3.1 in /web-csv-toolbox-wasm

  • #472 96582d0 Thanks @kamiazya! - Upgrade dev dependencies

    • Updated wasm-pack to 0.13
    • Updated biome to 1.9
    • Updated typedoc to 0.28
    • Updated TypeScript to 5.8
    • Updated Vite to 6.3
    • Updated vite-plugin-dts to 4.5
    • Updated vitest to 3.2
    • Updated webdriverio to 9.15

    Summary of Changes

    • Added hexa function for generating hexadecimal strings.
    • Introduced unicode and unicodeMapper functions for better Unicode string handling.
    • Updated text function to utilize new string generation methods for "hexa", "unicode", and "string16bits".
    • Cleaned up snapshot tests in parseResponse.spec.ts and parseResponseToStream.spec.ts by removing unnecessary comments.
    • Created a new declaration file for the web-csv-toolbox-wasm module to improve type safety.
    • Modified tsconfig.json to exclude all test files from compilation, improving build performance.
  • #471 ff5534e Thanks @kamiazya! - build(deps): bump compiler_builtins from 0.1.119 to 0.1.158 in /web-csv-toolbox-wasm

  • #471 ff5534e Thanks @kamiazya! - build(deps-dev): bump typedoc-plugin-mdn-links from 3.2.4 to 4.0.15

  • #471 ff5534e Thanks @kamiazya! - build(deps-dev): bump @changesets/cli from 2.27.6 to 2.29.3

  • #471 ff5534e Thanks @kamiazya! - Use fast-check instead of @fast-check/vitest in test files

  • #471 ff5534e Thanks @kamiazya! - build(deps): bump the cargo group in /web-csv-toolbox-wasm with 2 updates

web-csv-toolbox@0.11.0

21 Aug 14:23
ae0d4f2



web-csv-toolbox@0.10.2

08 Jul 07:52
a6eea4f



web-csv-toolbox@0.10.1

14 Jun 06:55
f0fc034

Choose a tag to compare


web-csv-toolbox@0.10.0

10 Jun 09:18
21d6132

