Releases: kamiazya/web-csv-toolbox
web-csv-toolbox@0.15.0
Minor Changes
- #614 `25d49ee` Thanks @kamiazya! - BREAKING CHANGE: Restrict `columnCountStrategy` options for object output to `fill`/`strict` only. Object format now rejects the `keep` and `truncate` strategies at runtime, as these strategies are incompatible with object output semantics. Users relying on `keep` or `truncate` with object format must either:

  - Switch to `outputFormat: 'array'` to use these strategies, or
  - Use `fill` (default) or `strict` for object output

  This change improves API clarity by aligning strategy availability with format capabilities and documenting the purpose-driven strategy matrix (including sparse/header requirements).
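The intent of the two object-output strategies can be pictured with a small sketch. This is a hypothetical re-implementation of the semantics for illustration only, not the library's code; the `assembleRecord` name is made up here:

```typescript
// Illustrative sketch (NOT the library's implementation):
// `fill` pads short rows, `strict` throws on any column-count mismatch.
type ObjectStrategy = "fill" | "strict";

function assembleRecord(
  header: readonly string[],
  row: readonly string[],
  strategy: ObjectStrategy,
): Record<string, string | undefined> {
  if (strategy === "strict" && row.length !== header.length) {
    // strict: any column-count mismatch is an error
    throw new RangeError(
      `Expected ${header.length} fields but got ${row.length}`,
    );
  }
  // fill: missing trailing fields simply stay undefined
  const record: Record<string, string | undefined> = {};
  header.forEach((name, i) => {
    record[name] = row[i];
  });
  return record;
}
```

With `fill`, a short row like `["Alice"]` under header `["name", "age"]` yields `{ name: "Alice", age: undefined }`; with `strict`, the same input throws. `keep` and `truncate` only make sense for positional array rows (extra columns have no header key to live under), which is presumably why object output rejects them.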
- #616 `8adf5d9` Thanks @kamiazya! - Add TypeScript 5.0 const type parameters to eliminate `as const` requirements

  New Features:

  - Add `CSVOutputFormat` type alias for the `"object" | "array"` union type
  - Implement const type parameters in factory functions for automatic literal type inference
  - Add function overloads to factory functions for precise return type narrowing
  - Users no longer need to write `as const` when specifying headers, delimiters, or other options

  Improvements:

  - Replace `import("@/...").XXX` patterns with standard import statements at the top of each file
  - Update factory function type signatures to use const type parameters:
    - `createStringCSVParser` - automatically infers header, delimiter, quotation, and output format types
    - `createBinaryCSVParser` - automatically infers header, delimiter, quotation, charset, and output format types
    - `createCSVRecordAssembler` - automatically infers header and output format types
  - Update type definitions to support const type parameters:
    - `CSVRecordAssemblerCommonOptions` - add `OutputFormat` and `Strategy` type parameters
    - `CSVProcessingOptions` - add `OutputFormat` type parameter
    - `BinaryOptions` - add `Charset` type parameter
  - Update JSDoc examples in factory functions to remove unnecessary `as const` annotations
  - Update README.md examples to demonstrate simplified usage without `as const`

  Before:

  ```ts
  const parser = createStringCSVParser({
    header: ["name", "age"] as const, // Required as const
    outputFormat: "object",
  });
  ```

  After:

  ```ts
  const parser = createStringCSVParser({
    header: ["name", "age"], // Automatically infers literal types
    outputFormat: "object", // Return type properly narrowed
  });
  ```

  Technical Details:

  - Leverages TypeScript 5.0's const type parameters feature
  - Uses function overloads to narrow return types based on the `outputFormat` value:
    - `outputFormat: "array"` → returns an array parser
    - `outputFormat: "object"` → returns an object parser
    - omitted → defaults to an object parser
    - dynamic value → returns a union type
  - All changes are 100% backward compatible
  - Existing code using `as const` continues to work unchanged
- #614 `25d49ee` Thanks @kamiazya! - ## Lexer API Changes

  This release includes low-level Lexer API changes for performance optimization.

  Breaking Changes (Low-level API only)

  These changes only affect users of the low-level Lexer API. High-level APIs (`parseString`, `parseBinary`, etc.) are unchanged.

  - Token type constants: Changed from `Symbol` to numeric constants
  - Location tracking: Now disabled by default. Add `trackLocation: true` to Lexer options if you need token location information. Note: Error messages still include position information even when `trackLocation: false` (computed lazily, only when errors occur).
  - Token object structure: Changed to improve performance. Token properties changed, and the token count is reduced by combining delimiter and newline information into a single field.

  Who is affected?

  Most users are NOT affected. Only users who directly use `FlexibleStringCSVLexer` and rely on `token.location` or `Symbol`-based token type comparison need to update their code.
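For low-level consumers, the practical difference is in how token kinds are compared. A hypothetical sketch of the idea follows; the constant names are illustrative, not the library's actual exports:

```typescript
// Hypothetical numeric token-type constants (illustrative names only).
// Numbers compare with ===, serialize cheaply across worker boundaries,
// and avoid per-token Symbol allocation, unlike Symbol-based constants.
const TokenType = {
  Field: 0,
  FieldDelimiter: 1,
  RecordDelimiter: 2,
} as const;

interface Token {
  type: number;
  value: string;
}

function isFieldToken(token: Token): boolean {
  return token.type === TokenType.Field;
}
```

Code that previously compared against exported `Symbol` values would switch to comparing against the numeric constants instead.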
- #613 `2a7b22e` Thanks @kamiazya! - Add factory functions for stream-based CSV parsing APIs

  New Features:

  - Add `createStringCSVParserStream()` factory function for Mid-level string stream parsing
  - Add `createBinaryCSVParserStream()` factory function for Mid-level binary stream parsing
  - Add `createStringCSVLexerTransformer()` factory function for creating StringCSVLexerTransformer instances
  - Add `createCSVRecordAssemblerTransformer()` factory function for creating CSVRecordAssemblerTransformer instances
  - Add `StringCSVLexerOptions` type for lexer factory function options
  - Add `StringCSVLexerTransformerStreamOptions` type for stream behavior options
  - Add `CSVRecordAssemblerFactoryOptions` type for assembler factory function options
  - Add `StringCSVParserFactoryOptions` type for string parser factory function options
  - Add `BinaryCSVParserFactoryOptions` type for binary parser factory function options
  - Update model factory functions (`createStringCSVLexer`, `createCSVRecordAssembler`, `createStringCSVParser`, `createBinaryCSVParser`) to accept engine options for future optimization support
  - Update documentation with API level classification (High-level, Mid-level, Low-level)

  Breaking Changes:

  - Rename `CSVLexerTransformer` class to `StringCSVLexerTransformer` to clarify the input type (string)
  - Rename `createCSVLexerTransformer()` to `createStringCSVLexerTransformer()` for consistency
  - Rename `CSVLexerTransformerStreamOptions` type to `StringCSVLexerTransformerStreamOptions` for naming consistency
  - Remove unused `CSVLexerTransformerOptions` type

  Migration:

  ```ts
  // Before
  import {
    CSVLexerTransformer,
    createCSVLexerTransformer,
    type CSVLexerTransformerStreamOptions,
  } from "web-csv-toolbox";

  new CSVLexerTransformer(lexer);
  createCSVLexerTransformer({ delimiter: "," });

  // After
  import {
    StringCSVLexerTransformer,
    createStringCSVLexerTransformer,
    type StringCSVLexerTransformerStreamOptions,
  } from "web-csv-toolbox";

  new StringCSVLexerTransformer(lexer);
  createStringCSVLexerTransformer({ delimiter: "," });
  ```

  These factory functions simplify the API by handling internal parser/lexer creation, reducing the impact of future internal changes on user code. This addresses the issue where the CSVLexerTransformer constructor signature changed in v0.14.0 (#612).
Patch Changes
- #614 `25d49ee` Thanks @kamiazya! - ## JavaScript Parser Performance Improvements

  This release includes significant internal optimizations that improve JavaScript-based CSV parsing performance.

  Before / After Comparison

  | Metric | Before (v0.14) | After | Improvement |
  | --- | --- | --- | --- |
  | 1,000 rows parsing | 3.57 ms | 1.42 ms | 60% faster |
  | 5,000 rows parsing | 19.47 ms | 7.03 ms | 64% faster |
  | Throughput (1,000 rows) | 24.3 MB/s | 61.2 MB/s | 2.51x |
  | Throughput (5,000 rows) | 24.5 MB/s | 67.9 MB/s | 2.77x |

  Optimization Summary

  | Optimization | Target | Improvement |
  | --- | --- | --- |
  | Array copy method improvement | Assembler | -8.7% |
  | Quoted field parsing optimization | Lexer | Overhead eliminated |
  | Object assembler loop optimization | Assembler | -5.4% |
  | Regex removal for unquoted fields | Lexer | -14.8% |
  | String comparison optimization | Lexer | ~10% |
  | Object creation optimization | Lexer | ~20% |
  | Non-destructive buffer reading | GC | -46% |
  | Token type numeric conversion | Lexer/GC | -7% / -13% |
  | Location tracking made optional | Lexer | -19% to -31% |
  | Object.create(null) for records | Assembler | -31% |
  | Empty-row template cache | Assembler | ~4% faster on sparse CSV |
  | Row buffer reuse (no per-record slice) | Assembler | ~6% faster array format |
  | Header-length builder preallocation | Assembler | Capacity stays steady on wide CSV |
  | Object assembler row buffer pooling | Assembler | Lower GC spikes on object output |
  | Lexer segment-buffer pooling | Lexer | Smoother GC for quoted-heavy input |
...
web-csv-toolbox@0.14.0
Minor Changes
- #608 `24f04d7` Thanks @kamiazya! - feat!: rename binary stream APIs for consistency and add BufferSource support

  Summary

  This release standardizes the naming of binary stream parsing APIs to match the existing `parseBinary*` family, and extends support to accept any BufferSource type (ArrayBuffer, Uint8Array, and other TypedArray views).

  Breaking Changes

  API Renaming for Consistency

  All `parseUint8Array*` functions have been renamed to `parseBinary*` to maintain consistency with the existing binary parsing APIs.

  Function Names:

  - `parseUint8ArrayStream()` → `parseBinaryStream()`
  - `parseUint8ArrayStreamToStream()` → `parseBinaryStreamToStream()`

  Type Names:

  - `ParseUint8ArrayStreamOptions` → `ParseBinaryStreamOptions`

  Internal Functions (for reference):

  - `parseUint8ArrayStreamInMain()` → `parseBinaryStreamInMain()`
  - `parseUint8ArrayStreamInWorker()` → `parseBinaryStreamInWorker()`
  - `parseUint8ArrayStreamInWorkerWASM()` → `parseBinaryStreamInWorkerWASM()`
  Rationale:

  The previous naming was inconsistent with the rest of the binary API family (`parseBinary`, `parseBinaryToArraySync`, `parseBinaryToIterableIterator`, `parseBinaryToStream`). The new naming provides:

  - Perfect consistency across all binary parsing APIs
  - A clear indication that these functions accept any binary data format
  - Better predictability for API discovery

  BufferSource Support

  `FlexibleBinaryCSVParser` and `BinaryCSVParserStream` now accept `BufferSource` (= `ArrayBuffer | ArrayBufferView`) instead of just `Uint8Array`.

  Before:

  ```ts
  const parser = new FlexibleBinaryCSVParser({ header: ['name', 'age'] });
  const data = new Uint8Array([...]); // Only Uint8Array
  const records = parser.parse(data);
  ```

  After:

  ```ts
  const parser = new FlexibleBinaryCSVParser({ header: ['name', 'age'] });

  // Uint8Array still works
  const uint8Data = new Uint8Array([...]);
  const records1 = parser.parse(uint8Data);

  // ArrayBuffer now works directly
  const buffer = await fetch('data.csv').then(r => r.arrayBuffer());
  const records2 = parser.parse(buffer);

  // Other TypedArray views also work
  const int8Data = new Int8Array([...]);
  const records3 = parser.parse(int8Data);
  ```

  Benefits:

  - Direct use of `fetch().then(r => r.arrayBuffer())` without conversion
  - Flexibility to work with any TypedArray view
  - Alignment with Web API standards (BufferSource is widely used)
  Migration Guide

  Automatic Migration

  Use find-and-replace in your codebase:

  ```
  # Function calls
  parseUint8ArrayStream → parseBinaryStream
  parseUint8ArrayStreamToStream → parseBinaryStreamToStream

  # Type references
  ParseUint8ArrayStreamOptions → ParseBinaryStreamOptions
  ```

  TypeScript Users

  If you were explicitly typing with `Uint8Array`, you can now use the more general `BufferSource`:

  ```ts
  // Before
  function processCSV(data: Uint8Array) {
    return parseBinaryStream(data);
  }

  // After (more flexible)
  function processCSV(data: BufferSource) {
    return parseBinaryStream(data);
  }
  ```

  Updated API Consistency

  All binary parsing APIs now follow a consistent naming pattern:

  ```ts
  // Single-value binary data
  parseBinary(); // Binary → AsyncIterableIterator<Record>
  parseBinaryToArraySync(); // Binary → Array<Record> (sync)
  parseBinaryToIterableIterator(); // Binary → IterableIterator<Record>
  parseBinaryToStream(); // Binary → ReadableStream<Record>

  // Streaming binary data
  parseBinaryStream(); // ReadableStream<Uint8Array> → AsyncIterableIterator<Record>
  parseBinaryStreamToStream(); // ReadableStream<Uint8Array> → ReadableStream<Record>
  ```
  Note: While the stream input type remains `ReadableStream<Uint8Array>` (the Web Streams API standard), the internal parsers now accept `BufferSource` for individual chunks.

  Documentation Updates

  README.md

  - Updated the Low-level APIs section to reflect the `parseBinaryStream*` naming
  - Added flush procedure documentation for streaming mode
  - Added BufferSource examples

  API Reference (docs/reference/package-exports.md)

  - Added a comprehensive Low-level API Reference section
  - Documented all Parser Models (Tier 1) and Lexer + Assembler (Tier 2)
  - Included usage examples and code snippets

  Architecture Guide (docs/explanation/parsing-architecture.md)

  - Updated the Binary CSV Parser section to document BufferSource support
  - Added detailed streaming-mode examples with flush procedures
  - Clarified multi-byte character handling across chunk boundaries

  Flush Procedure Clarification

  The documentation now explicitly covers the requirement to call `parse()` without arguments when using streaming mode:

  ```ts
  const parser = createBinaryCSVParser({ header: ["name", "age"] });
  const encoder = new TextEncoder();

  // Process chunks
  const records1 = parser.parse(encoder.encode("Alice,30\nBob,"), {
    stream: true,
  });
  const records2 = parser.parse(encoder.encode("25\n"), { stream: true });

  // IMPORTANT: Flush remaining data (required!)
  const records3 = parser.parse();
  ```

  This prevents data loss from incomplete records or multi-byte character buffers.

  Type Safety

  All changes maintain full TypeScript strict mode compliance with proper type inference and generic constraints.
- #608 `24f04d7` Thanks @kamiazya! - Add `arrayBufferThreshold` option to Engine configuration for automatic Blob reading strategy selection

  New Feature

  Added an `engine.arrayBufferThreshold` option that automatically selects the optimal Blob reading strategy based on file size:

  - Files smaller than the threshold: Use `blob.arrayBuffer()` + `parseBinary()` (6-8x faster, confirmed by benchmarks)
  - Files equal to or larger than the threshold: Use `blob.stream()` + `parseBinaryStream()` (memory-efficient)

  Default: 1MB (1,048,576 bytes), determined by comprehensive benchmarks

  Applies to: `parseBlob()` and `parseFile()` only

  Benchmark Results

  | File Size | Binary (ops/sec) | Stream (ops/sec) | Performance Gain |
  | --- | --- | --- | --- |
  | 1KB | 21,691 | 2,685 | 8.08x faster |
  | 10KB | 2,187 | 311 | 7.03x faster |
  | 100KB | 219 | 32 | 6.84x faster |
  | 1MB | 20 | 3 | 6.67x faster |

  Usage

  ```ts
  import { parseBlob, EnginePresets } from "web-csv-toolbox";

  // Use default (1MB threshold)
  for await (const record of parseBlob(file)) {
    console.log(record);
  }

  // Always use streaming (memory-efficient)
  for await (const record of parseBlob(largeFile, {
    engine: { arrayBufferThreshold: 0 },
  })) {
    console.log(record);
  }

  // Custom threshold (512KB)
  for await (const record of parseBlob(file, {
    engine: { arrayBufferThreshold: 512 * 1024 },
  })) {
    console.log(record);
  }

  // With preset
  for await (const record of parseBlob(file, {
    engine: EnginePresets.fastest({
      arrayBufferThreshold: 2 * 1024 * 1024, // 2MB
    }),
  })) {
    console.log(record);
  }
  ```

  Special Values

  - `0` - Always use streaming (maximum memory efficiency)
  - `Infinity` - Always use arrayBuffer (maximum performance for small files)

  Security Note

  When using `arrayBufferThreshold > 0`, files must stay below `maxBufferSize` (default 10MB) to prevent excessive memory allocation. Files exceeding this limit will throw a `RangeError`.

  Design Philosophy

  This option belongs to the `engine` configuration because it affects performance and behavior only, not the parsing result specification. This follows the design principle:

  - Top-level options: Affect the specification (the result changes)
  - Engine options: Affect performance/behavior (same result, different execution)
- #608 `24f04d7` Thanks @kamiazya! - Add support for Blob, File, and Request objects

  This release adds native support for parsing CSV data from Web Standard `Blob`, `File`, and `Request` objects, making the library more versatile across different environments.

  New Functions:

  - `parseBlob(blob, options)` - Parse CSV from Blob or File objects
    - Automatic charset detection from the `blob.type` property
    - Supports compression via the `decompression` option
    - Returns `AsyncIterableIterator<CSVRecord>`
    - Includes `.toArray()` and `.toStream()` namespace methods
  - `parseFile(file, options)` - Enhanced File parsing with automatic error source tracking
    - Built on top of `parseBlob` with additional functionality
    - Automatically sets `file.name` as the error source for better error reporting
    - Provides clearer intent when working specifically with File objects
    - Useful for file inputs and drag-and-drop scenarios...
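The automatic charset detection from `blob.type` boils down to reading the MIME type's `charset` parameter. A simplified, self-contained sketch of that idea (not the library's actual implementation; `detectCharset` is an illustrative name):

```typescript
// Simplified sketch: extract the charset parameter from a MIME type
// string such as Blob#type, e.g. "text/csv;charset=shift_jis".
function detectCharset(mimeType: string, fallback = "utf-8"): string {
  const match = /;\s*charset\s*=\s*"?([^";]+)"?/i.exec(mimeType);
  return match ? match[1].trim().toLowerCase() : fallback;
}

console.log(detectCharset("text/csv;charset=Shift_JIS")); // "shift_jis"
console.log(detectCharset("text/csv")); // "utf-8"
```

When no `charset` parameter is present, falling back to UTF-8 matches the WHATWG encoding default for web content.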
web-csv-toolbox@0.13.0
Minor Changes
- #545 `43a6812` Thanks @kamiazya! - Add comprehensive memory protection to prevent memory exhaustion attacks

  This release introduces new security features to prevent unbounded memory growth during CSV parsing. The parser now enforces configurable limits on both buffer size and field count to protect against denial-of-service attacks via malformed or malicious CSV data.

  New Features:

  - Added `maxBufferSize` option to `CommonOptions` (default: `10 * 1024 * 1024` characters)
  - Added `maxFieldCount` option to `RecordAssemblerOptions` (default: 100,000 fields)
  - Throws `RangeError` when the buffer exceeds the size limit
  - Throws `RangeError` when the field count exceeds the limit
  - Comprehensive memory safety protection against DoS attacks

  Note: `maxBufferSize` is measured in UTF-16 code units (JavaScript string length), not bytes. This is approximately 10MB for ASCII text, but may vary for non-ASCII characters.

  Breaking Changes:

  None. This is a backward-compatible enhancement with sensible defaults.

  Security:

  This change addresses three potential security vulnerabilities:

  - Unbounded buffer growth via streaming input: Attackers could exhaust system memory by streaming large amounts of malformed CSV data that cannot be tokenized. The `maxBufferSize` limit prevents this by throwing `RangeError` when the internal buffer exceeds `10 * 1024 * 1024` characters (approximately 10MB for ASCII).
  - Quoted field parsing memory exhaustion: Attackers could exploit the quoted field parsing logic by sending strategically crafted CSV with unclosed quotes or excessive escaped quotes, causing the parser to accumulate unbounded data in the buffer. The `maxBufferSize` limit protects against this attack vector.
  - Excessive column count attacks: Attackers could send CSV files with an enormous number of columns to exhaust memory during header parsing and record assembly. The `maxFieldCount` limit (default: 100,000 fields per record) prevents this by throwing `RangeError` when exceeded.

  Users processing untrusted CSV input are encouraged to use the default limits or configure appropriate `maxBufferSize` and `maxFieldCount` values for their use case.
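The buffer guard can be pictured as follows. This is a minimal sketch of the concept, assuming a simple string accumulator; it is not the library's internal lexer buffer, and `BoundedBuffer` is a name invented for illustration:

```typescript
// Minimal sketch of a maxBufferSize guard (not the library's code).
// The limit counts UTF-16 code units (string length), not bytes.
const DEFAULT_MAX_BUFFER_SIZE = 10 * 1024 * 1024;

class BoundedBuffer {
  #buffer = "";

  constructor(
    private readonly maxBufferSize: number = DEFAULT_MAX_BUFFER_SIZE,
  ) {}

  append(chunk: string): void {
    if (this.#buffer.length + chunk.length > this.maxBufferSize) {
      // Fail fast instead of letting hostile input grow the heap.
      throw new RangeError(
        `Buffer would exceed maxBufferSize (${this.maxBufferSize} code units)`,
      );
    }
    this.#buffer += chunk;
  }

  get length(): number {
    return this.#buffer.length;
  }
}
```

Checking before the concatenation means a malicious stream with an unclosed quote is rejected as soon as the limit would be crossed, rather than after memory has already been allocated.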
- #546 `76eec90` Thanks @kamiazya! - BREAKING CHANGE: Change error types from RangeError to TypeError for consistency with Web Standards

  - Change all `RangeError` to `TypeError` for consistency
  - This affects error handling in:
    - `getOptionsFromResponse()`: Invalid MIME type, unsupported/multiple content-encodings
    - `parseResponse()`: Null response body
    - `parseResponseToStream()`: Null response body
  - Aligns with Web Standard API behavior (DecompressionStream throws TypeError)
  - Improves consistency for error handling with `error instanceof TypeError` checks

  Migration guide:

  If you were catching `RangeError` from `getOptionsFromResponse()`, update to catch `TypeError` instead:

  ```diff
  - } catch (error) {
  -   if (error instanceof RangeError) {
  + } catch (error) {
  +   if (error instanceof TypeError) {
        // Handle invalid content type or encoding
      }
    }
  ```

  New feature: Experimental compression format support

  - Add `allowExperimentalCompressions` option to enable experimental/non-standard compression formats
  - Browsers: By default, only `gzip` and `deflate` are supported (cross-browser compatible)
  - Node.js: By default, `gzip`, `deflate`, and `br` (Brotli) are supported
  - When enabled, allows platform-specific formats like `deflate-raw` (Chrome/Edge only)
  - Provides flexibility for environment-specific compression formats
  - See the documentation for browser compatibility details and usage examples

  Other improvements in this release:

  - Add Content-Encoding header validation with RFC 7231 compliance
  - Normalize the Content-Encoding header: convert to lowercase, trim whitespace
  - Ignore empty or whitespace-only Content-Encoding headers
  - Add comprehensive tests for Content-Encoding validation (23 tests)
  - Add security documentation with a TransformStream size limit example
  - Error messages now guide users to the `allowExperimentalCompressions` option when needed
- #551 `b21b6d8` Thanks @kamiazya! - Add comprehensive documentation for supported environments and versioning policy

  This release adds two new reference documents to clarify the library's support policies and version management strategy.

  New Documentation:

  - Supported Environments: Comprehensive documentation of runtime environment support tiers
  - Versioning Policy: Detailed versioning strategy and semantic versioning rules
- #551 `b21b6d8` Thanks @kamiazya! - Add environment-specific compression format support for better cross-browser and Node.js compatibility

  This release adjusts the supported compression formats based on the runtime environment to ensure reliability and prevent errors across different browsers and Node.js versions.

  Changes:

  - Browser environments: Support `gzip` and `deflate` only (universal cross-browser support)
  - Node.js 20+ environments: Support `gzip`, `deflate`, and `br` (Brotli)

  Rationale:

  Previously, browser builds included `deflate-raw` in the default supported formats. However, `deflate-raw` is only supported in Chromium-based browsers (Chrome, Edge) and not in Firefox or Safari. To ensure the library works reliably across all modern browsers by default, we now only include universally supported formats.

  Browser Compatibility:

  | Format | Chrome/Edge | Firefox | Safari | Included by Default |
  | --- | --- | --- | --- | --- |
  | `gzip` | ✅ | ✅ | ✅ | ✅ Yes |
  | `deflate` | ✅ | ✅ | ✅ | ✅ Yes |
  | `deflate-raw` | ✅ | ❌ | ❌ | ❌ No (experimental) |

  Using Experimental Compressions:

  If you need to use `deflate-raw` or other non-standard compression formats in Chromium-based browsers, you can enable them with the `allowExperimentalCompressions` option:

  ```ts
  // Use deflate-raw in Chrome/Edge (may fail in Firefox/Safari)
  const response = await fetch("data.csv"); // Content-Encoding: deflate-raw
  await parseResponse(response, {
    allowExperimentalCompressions: true,
  });
  ```

  You can also detect browser support at runtime:

  ```ts
  // Browser-aware usage
  const isChromium = navigator.userAgent.includes("Chrome");
  await parseResponse(response, {
    allowExperimentalCompressions: isChromium,
  });
  ```

  Migration Guide:

  For users who were relying on `deflate-raw` in browser environments:

  - Option 1: Use `gzip` or `deflate` compression instead (recommended for cross-browser compatibility)

    ```ts
    // Server-side: Use gzip instead of deflate-raw
    response.headers.set("content-encoding", "gzip");
    ```

  - Option 2: Enable experimental compressions for Chromium-only deployments

    ```ts
    await parseResponse(response, {
      allowExperimentalCompressions: true,
    }); // Works in Chrome/Edge, may fail in Firefox/Safari
    ```

  - Option 3: Detect browser support and handle fallbacks

    ```ts
    try {
      await parseResponse(response, {
        allowExperimentalCompressions: true,
      });
    } catch (error) {
      // Fallback for browsers that don't support the format
      console.warn("Compression format not supported, using uncompressed");
    }
    ```

  Implementation:

  The supported compressions are now determined at build time using the package.json `imports` field:

  - Browser/Web builds use `getOptionsFromResponse.constants.web.js`
  - Node.js builds use `getOptionsFromResponse.constants.node.js`

  This ensures type-safe, environment-appropriate compression support.

  No changes required for users already using `gzip` or `deflate` compression in browsers, or `gzip`, `deflate`, or `br` in Node.js.
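The build-time switch described above can be pictured with conditional subpath imports in package.json. The key and paths below are illustrative, not copied from the package:

```json
{
  "imports": {
    "#getOptionsFromResponse.constants": {
      "browser": "./dist/getOptionsFromResponse.constants.web.js",
      "default": "./dist/getOptionsFromResponse.constants.node.js"
    }
  }
}
```

Node.js and bundlers resolve `#`-prefixed specifiers through this field, so each build picks up the matching constants module at resolution time rather than branching at runtime.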
- #563 `7d51d52` Thanks @kamiazya! - Optimize streaming API design for better performance and consistency

  Breaking Changes

  Token Stream Output Changed from Batch to Individual

  `CSVLexerTransformer` and `CSVRecordAssemblerTransformer` now emit/accept individual tokens instead of token arrays for improved streaming performance and API consistency.

  Before:

  ```ts
  CSVLexerTransformer: TransformStream<string, Token[]>;
  CSVRecordAssemblerTransformer: TransformStream<Token[], CSVRecord>;
  ```

  After:

  C...
web-csv-toolbox@0.12.0
Minor Changes
- #533 `b221fc7` Thanks @kamiazya! - Migrate to ESM-only distribution

  This release removes CommonJS (CJS) and UMD build outputs, distributing only ES modules (ESM). All build artifacts are now placed directly in the `dist/` directory for a simpler and cleaner structure.

  Breaking Changes

  - Removed CommonJS support: The package no longer provides `.cjs` files. Node.js projects must use ES modules.
  - Removed UMD bundle: The UMD build (`dist/web-csv-toolbox.umd.js`) has been removed. For CDN usage, use ESM via `<script type="module">`.
  - Changed distribution structure: Build outputs moved from `dist/es/`, `dist/cjs/`, and `dist/types/` to the `dist/` root directory.
  - Removed the `build:browser` command: The separate UMD build step is no longer needed.

  Migration Guide

  For Node.js users:

  - Ensure your project uses `"type": "module"` in `package.json`, or use `.mjs` file extensions
  - Update any CommonJS `require()` calls to ESM `import` statements
  - Node.js 20.x or later is required (already the minimum supported version)

  For CDN users:

  Before:

  ```html
  <script src="https://unpkg.com/web-csv-toolbox"></script>
  ```

  After:

  ```html
  <script type="module">
    import { parse } from "https://unpkg.com/web-csv-toolbox";
  </script>
  ```

  For bundler users:

  No changes required - modern bundlers handle ESM correctly.

  Benefits

  - Simpler build configuration and faster build times
  - Smaller package size
  - Cleaner distribution structure
  - Alignment with modern JavaScript ecosystem standards
- #476 `ae54611` Thanks @kamiazya! - Drop support for Node.js v18 and add tests on Node.js v24
Patch Changes
- #535 `009c762` Thanks @egoitz-ehu! - Close issue #524
- #532 `fc4fc57` Thanks @sshekhar563! - fix(docs): correct typo 'Lexter' → 'Lexer' in Lexer.ts JSDoc
- #529 `76df785` Thanks @kamiazya! - Migrate npm package publishing to OIDC trusted publishing for enhanced security
- #531 `a273b9d` Thanks @VaishnaviOnPC! - fix: correct typo in escapeField.ts comment ('ASSTPTED' → 'ASSERTED')
web-csv-toolbox@0.11.2
web-csv-toolbox@0.11.1
Patch Changes
- #471 `ff5534e` Thanks @kamiazya! - build(deps): bump serde_json from 1.0.125 to 1.0.140 in /web-csv-toolbox-wasm
- #471 `ff5534e` Thanks @kamiazya! - build(deps): bump csv from 1.3.0 to 1.3.1 in /web-csv-toolbox-wasm
- #472 `96582d0` Thanks @kamiazya! - Upgrade dev dependencies

  - Updated wasm-pack to 0.13
  - Updated biome to 1.9
  - Updated typedoc to 0.28
  - Updated TypeScript to 5.8
  - Updated Vite to 6.3
  - Updated vite-plugin-dts to 4.5
  - Updated vitest to 3.2
  - Updated webdriverio to 9.15

  Summary of Changes

  - Added a `hexa` function for generating hexadecimal strings.
  - Introduced `unicode` and `unicodeMapper` functions for better Unicode string handling.
  - Updated the `text` function to utilize new string generation methods for "hexa", "unicode", and "string16bits".
  - Cleaned up snapshot tests in `parseResponse.spec.ts` and `parseResponseToStream.spec.ts` by removing unnecessary comments.
  - Created a new declaration file for the `web-csv-toolbox-wasm` module to improve type safety.
  - Modified `tsconfig.json` to exclude all test files from compilation, improving build performance.

- #471 `ff5534e` Thanks @kamiazya! - build(deps): bump compiler_builtins from 0.1.119 to 0.1.158 in /web-csv-toolbox-wasm
- #471 `ff5534e` Thanks @kamiazya! - build(deps-dev): bump typedoc-plugin-mdn-links from 3.2.4 to 4.0.15
- #471 `ff5534e` Thanks @kamiazya! - build(deps-dev): bump @changesets/cli from 2.27.6 to 2.29.3
- #471 `ff5534e` Thanks @kamiazya! - Use fast-check instead of @fast-check/vitest in test files
- #471 `ff5534e` Thanks @kamiazya! - build(deps): bump the cargo group in /web-csv-toolbox-wasm with 2 updates
web-csv-toolbox@0.11.0
Minor Changes
- #343 `139f3c2` Thanks @nagasawaryoya! - Dynamic Type Inference and User-Defined Types from CSV Headers
- #343 `139f3c2` Thanks @nagasawaryoya! - Remove InvalidOptionError class
- #343 `139f3c2` Thanks @nagasawaryoya! - Support AbortSignal
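AbortSignal support lets callers cancel long-running parses cooperatively. A generic, self-contained sketch of that pattern follows; it is illustrative only, not the library's internals, and `withAbort` is an invented name:

```typescript
// Generic sketch: an async iterator that honors an AbortSignal by
// checking it before yielding each item (illustrative only).
async function* withAbort<T>(
  source: Iterable<T>,
  signal?: AbortSignal,
): AsyncIterableIterator<T> {
  for (const item of source) {
    // Throws an "AbortError" DOMException once the signal is aborted.
    signal?.throwIfAborted();
    yield item;
  }
}

async function demo(): Promise<void> {
  const controller = new AbortController();
  const records = withAbort(["a", "b", "c"], controller.signal);
  console.log((await records.next()).value); // "a"
  controller.abort();
  await records.next().catch((e) => console.log(e.name)); // "AbortError"
}

demo();
```

Because the signal is checked between items, cancellation takes effect at the next record boundary rather than interrupting mid-field.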
Patch Changes
- #343 `139f3c2` Thanks @nagasawaryoya! - build(deps-dev): bump typedoc from 0.25.13 to 0.26.6
- #343 `139f3c2` Thanks @nagasawaryoya! - build(deps-dev): bump fast-check from 3.19.0 to 3.21.0
- #343 `139f3c2` Thanks @nagasawaryoya! - build(deps): bump wasm-pack from 0.12.1 to 0.13.0 in /web-csv-toolbox-wasm
- #343 `139f3c2` Thanks @nagasawaryoya! - Remove unnecessary processing from the convertIterableIteratorToAsync function
- #343 `139f3c2` Thanks @nagasawaryoya! - build(deps): bump serde_json from 1.0.117 to 1.0.125 in /web-csv-toolbox-wasm
- #343 `139f3c2` Thanks @nagasawaryoya! - build(deps): bump serde from 1.0.203 to 1.0.208 in /web-csv-toolbox-wasm
- #343 `139f3c2` Thanks @nagasawaryoya! - build(deps-dev): bump typedoc-plugin-mdn-links from 3.2.1 to 3.2.4
- #343 `139f3c2` Thanks @nagasawaryoya! - Update concurrency configuration in main Workflow
- #343 `139f3c2` Thanks @nagasawaryoya! - build(deps): bump cxx-build from 1.0.124 to 1.0.126 in /web-csv-toolbox-wasm
- #343 `139f3c2` Thanks @nagasawaryoya! - build(deps): bump compiler_builtins from 0.1.112 to 0.1.119 in /web-csv-toolbox-wasm
- #343 `139f3c2` Thanks @nagasawaryoya! - build(deps-dev): bump vite from 5.3.1 to 5.4.2
- #343 `139f3c2` Thanks @nagasawaryoya! - Refactor CI/CD workflow
- #343 `139f3c2` Thanks @nagasawaryoya! - build(deps-dev): bump @biomejs/biome from 1.8.2 to 1.8.3
web-csv-toolbox@0.10.2
Patch Changes
- #269 `7b84c8c` Thanks @dependabot! - build(deps): bump cxx-build from 1.0.123 to 1.0.124 in /web-csv-toolbox-wasm
- #272 `574bee2` Thanks @kamiazya! - Update Snapshot release configuration
- #274 `a163f35` Thanks @dependabot! - build(deps-dev): bump typedoc-plugin-mdn-links from 3.1.29 to 3.2.1
- #276 `5daa58b` Thanks @dependabot! - build(deps-dev): bump @biomejs/biome from 1.7.3 to 1.8.2
- #266 `2c1e872` Thanks @dependabot! - build(deps-dev): bump terser from 5.31.0 to 5.31.1
- #275 `2aa667c` Thanks @dependabot! - build(deps-dev): bump @changesets/cli from 2.27.1 to 2.27.6
- #267 `b6db634` Thanks @dependabot! - build(deps-dev): bump vite from 5.2.13 to 5.3.1
web-csv-toolbox@0.10.1
Patch Changes
- #253 `044b0e6` Thanks @dependabot! - build(deps-dev): bump typedoc from 0.25.12 to 0.25.13
- #257 `926244a` Thanks @kamiazya! - Remove lefthook configuration file
- #259 `f4dd3d8` Thanks @kamiazya! - Add .node-version file and update Node.js setup in GitHub workflows
- #250 `cbdb5cb` Thanks @dependabot! - build(deps-dev): bump vite-plugin-dts from 3.7.3 to 3.9.1
- #255 `49af679` Thanks @kamiazya! - Refactor ParseError class to extend SyntaxError
- #251 `65db459` Thanks @dependabot! - build(deps-dev): bump @fast-check/vitest from 0.1.0 to 0.1.1
- #258 `824ef20` Thanks @kamiazya! - Update package manager to pnpm@9.3.0
- #252 `1ebbdb4` Thanks @dependabot! - build(deps-dev): bump typedoc-plugin-mdn-links from 3.1.18 to 3.1.29
- #239 `88fbef6` Thanks @dependabot! - build(deps-dev): bump webdriverio from 8.34.1 to 8.38.2
web-csv-toolbox@0.10.0
Minor Changes
Patch Changes
- #249 `d05beb2` Thanks @kamiazya! - build(deps): bump cxx-build from 1.0.119 to 1.0.123 in /web-csv-toolbox-wasm
- #249 `d05beb2` Thanks @kamiazya! - build(deps): bump moonrepo/setup-rust from 1.1.0 to 1.2.0
- #249 `d05beb2` Thanks @kamiazya! - build(deps-dev): bump vite from 5.1.7 to 5.2.13
- #249 `d05beb2` Thanks @kamiazya! - build(deps): bump compiler_builtins from 0.1.108 to 0.1.112 in /web-csv-toolbox-wasm
- #249 `d05beb2` Thanks @kamiazya! - build(deps): bump serde_json from 1.0.114 to 1.0.117 in /web-csv-toolbox-wasm
- #185 `2b4aa28` Thanks @dependabot! - build(deps): bump wasm-opt from 0.116.0 to 0.116.1 in /web-csv-toolbox-wasm
- #249 `d05beb2` Thanks @kamiazya! - build(deps-dev): bump fast-check from 3.15.1 to 3.19.0
- #249 `d05beb2` Thanks @kamiazya! - build(deps): bump serde from 1.0.197 to 1.0.203 in /web-csv-toolbox-wasm
- #249 `d05beb2` Thanks @kamiazya! - Disable macOS Firefox browser testing on CI
- #249 `d05beb2` Thanks @kamiazya! - Add type check script and update CI workflow