Convert Hashnode blog exports to framework-agnostic Markdown with YAML frontmatter. This TypeScript package transforms your Hashnode content into portable Markdown files with proper frontmatter, localized images, and cleaned formatting—ready for any static site generator or blog platform.
Status: Production-ready with 99.36% test coverage. All core components, CLI, and programmatic API are complete.
- Metadata Extraction: Parse Hashnode exports and extract essential post metadata (title, slug, dates, tags, cover image)
- Markdown Transformation: Clean Hashnode-specific formatting quirks (align attributes, trailing whitespace)
- Image Localization: Download CDN images and replace URLs with local paths
- Intelligent Retry: Marker-based strategy to skip already-downloaded images and permanent failures
- YAML Frontmatter: Generate framework-agnostic frontmatter from post metadata
- Atomic File Operations: Safe, atomic writes with directory traversal protection
- Comprehensive Logging: Dual-channel output (console + file) with detailed error tracking
- Type-Safe: Full TypeScript with strict mode and comprehensive test coverage (98%+)
```bash
npm install @alvincrespo/hashnode-content-converter
```

Requirements: Node.js >= 18.0.0 (Unix-like systems only: macOS, Linux)
The CLI provides a simple interface for converting Hashnode exports:
```bash
# Basic usage
npx @alvincrespo/hashnode-content-converter convert \
  --export ./hashnode/export-articles.json \
  --output ./blog

# With all options
npx @alvincrespo/hashnode-content-converter convert \
  --export ./hashnode/export-articles.json \
  --output ./blog \
  --log-file ./conversion.log \
  --verbose

# Overwrite existing posts (default is to skip)
npx @alvincrespo/hashnode-content-converter convert \
  --export ./export.json \
  --output ./blog \
  --no-skip-existing
```

Options:
| Option | Short | Description | Default |
|---|---|---|---|
| `--export <path>` | `-e` | Path to Hashnode export JSON file | Required |
| `--output <path>` | `-o` | Output directory for converted posts | Required |
| `--log-file <path>` | `-l` | Path to log file | Optional |
| `--skip-existing` | | Skip posts that already exist | `true` |
| `--no-skip-existing` | | Overwrite existing posts | |
| `--verbose` | `-v` | Show detailed output including image downloads | `false` |
| `--quiet` | `-q` | Suppress all output except errors | `false` |
Exit Codes:
- `0` - Conversion completed successfully
- `1` - Conversion completed with errors, or validation failed
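If you drive the CLI from a script, the exit code is the signal to branch on. Here is a minimal sketch using Node's built-in `child_process`; the flags and paths are simply the examples from above.

```typescript
// Minimal sketch: run the CLI from a Node script and branch on its exit code.
// Uses only the documented flags; paths are placeholders from the examples above.
import { spawnSync } from 'node:child_process';

const run = spawnSync('npx', [
  '@alvincrespo/hashnode-content-converter', 'convert',
  '--export', './hashnode/export-articles.json',
  '--output', './blog',
], { stdio: 'inherit' });

if (run.status === 0) {
  console.log('Conversion completed successfully');
} else {
  console.error('Conversion finished with errors or failed validation');
  process.exitCode = run.status ?? 1;
}
```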
The simplest way to convert a Hashnode export:
```typescript
import { Converter } from '@alvincrespo/hashnode-content-converter';

// One-liner conversion
const result = await Converter.fromExportFile('./export.json', './blog');

console.log(`Converted ${result.converted} posts in ${result.duration}`);
```

Track conversion progress with a simple callback:
```typescript
import { Converter } from '@alvincrespo/hashnode-content-converter';

const converter = Converter.withProgress((current, total, title) => {
  console.log(`[${current}/${total}] Converting: ${title}`);
});

const result = await converter.convertAllPosts('./export.json', './blog');
```

For complete control, use the event-driven API:
```typescript
import { Converter } from '@alvincrespo/hashnode-content-converter';

const converter = new Converter();

// Progress tracking
converter.on('conversion-starting', ({ index, total, post }) => {
  console.log(`[${index}/${total}] Starting: ${post.title}`);
});

converter.on('conversion-completed', ({ result, durationMs }) => {
  console.log(`Completed in ${durationMs}ms: ${result.title}`);
});

// Error handling
converter.on('conversion-error', ({ type, slug, message }) => {
  console.error(`[${type}] ${slug}: ${message}`);
});

// Image tracking
converter.on('image-downloaded', ({ filename, success, is403 }) => {
  if (!success) console.warn(`Failed to download: ${filename}`);
});

const result = await converter.convertAllPosts('./export.json', './blog', {
  skipExisting: true,
  downloadOptions: { downloadDelayMs: 100 }
});
```

For custom pipelines, use individual processors:
```typescript
import {
  PostParser,
  MarkdownTransformer,
  ImageProcessor,
  FrontmatterGenerator,
  FileWriter
} from '@alvincrespo/hashnode-content-converter';

// Parse metadata
const parser = new PostParser();
const metadata = parser.parse(hashnodePost);

// Transform markdown
const transformer = new MarkdownTransformer({ trimTrailingWhitespace: true });
const cleanedMarkdown = transformer.transform(metadata.contentMarkdown);

// Process images
const imageProcessor = new ImageProcessor({ downloadDelayMs: 100 });
const imageResult = await imageProcessor.process(cleanedMarkdown, './blog/my-post');

// Generate frontmatter
const generator = new FrontmatterGenerator();
const frontmatter = generator.generate(metadata);

// Write file
const writer = new FileWriter();
await writer.writePost('./blog', metadata.slug, frontmatter, imageResult.markdown);
```

All components are feature-complete with 99.36% test coverage (363 tests):
| Component | Description | Coverage |
|---|---|---|
| Converter | Main orchestrator with event-driven progress tracking | 99.27% |
| PostParser | Extract metadata from Hashnode posts | 100% |
| MarkdownTransformer | Clean Hashnode-specific formatting | 100% |
| ImageProcessor | Download and localize images with marker-based retry | 98%+ |
| FrontmatterGenerator | Generate YAML frontmatter from metadata | 100% |
| ImageDownloader | HTTP downloads with retry logic and 403 tracking | 98.36% |
| FileWriter | Atomic file operations with path validation | 97.77% |
| Logger | Dual-channel logging with error tracking | 98.85% |
| CLI | Command-line interface with progress display | 98%+ |
See docs/TRANSITION.md for the complete implementation history.
The package uses a modular, service-oriented design with clear separation of concerns:
```
Hashnode Export JSON
        ↓
PostParser (extract metadata)
        ↓
MarkdownTransformer (fix formatting)
        ↓
ImageProcessor (download & localize images)
        ↓
FrontmatterGenerator (create YAML frontmatter)
        ↓
FileWriter (persist to disk)
        ↓
Logger (track results & errors)
```
Key Directories:
- src/types/ - TypeScript interfaces and type definitions
- src/processors/ - Content transformation classes
- src/services/ - Infrastructure services (HTTP, filesystem, logging)
- src/cli/ - Command-line interface
- tests/ - Unit and integration tests (363 tests, 99.36% coverage)
This project uses nvm for Node.js version management:
```bash
# Set correct Node version
nvm use $(cat .node-version)

# Install dependencies
npm install
```

```bash
# Build TypeScript to dist/
npm run build

# Watch mode (auto-rebuild on changes)
npm run dev

# Run tests with coverage
npm test

# Watch tests
npm run test:watch

# Interactive test dashboard
npm run test:ui

# Type-check without emitting
npm run type-check

# Lint code
npm run lint

# Full pre-publication checks
npm run prepublishOnly
```

The project uses Vitest with comprehensive test coverage:
- 363 tests with 99.36% code coverage
- Test patterns: AAA (Arrange-Act-Assert), mocked dependencies, comprehensive edge cases
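As an illustration of the AAA style (not a test copied from the suite), a sketch like the following exercises the trailing-whitespace cleanup described earlier; the expected behavior is assumed from the `trimTrailingWhitespace` option shown in the processors example above.

```typescript
// Illustrative AAA-style test (not from the actual suite); assumes the
// trimTrailingWhitespace option removes trailing spaces from each line.
import { describe, it, expect } from 'vitest';
import { MarkdownTransformer } from '@alvincrespo/hashnode-content-converter';

describe('MarkdownTransformer', () => {
  it('removes trailing whitespace when the option is enabled', () => {
    // Arrange
    const transformer = new MarkdownTransformer({ trimTrailingWhitespace: true });
    const input = 'Hello Hashnode   \nSecond line  \n';

    // Act
    const output = transformer.transform(input);

    // Assert
    expect(output).not.toMatch(/[ \t]+$/m);
  });
});
```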
| Test Suite | Tests |
|---|---|
| Unit Tests | 305 |
| Integration Tests | 58 |
```bash
npm run test:coverage  # Generate detailed coverage report
```

This package uses a PR-based release workflow with GitHub Actions for automated npm publishing.
Releases follow a PR-based workflow to ensure CI passes before publishing:
/release patch → Create PR → CI passes → Merge → Auto-tag → npm publish
Steps:

- Create release PR: Run the release command (or manually create a `release/v*` branch):

  ```bash
  npm version patch --no-git-tag-version  # or minor, major
  git checkout -b release/v0.2.4
  git add package.json package-lock.json
  git commit -m "chore: bump version to 0.2.4"
  git push -u origin release/v0.2.4
  # Create PR via GitHub
  ```

- Wait for CI: Tests, linting, and type-checking must pass
- Merge PR: After approval and CI passes
- Automatic tagging: The `auto-tag-release.yml` workflow automatically:
  - Detects the merged `release/v*` branch
  - Creates and pushes the `v<version>` tag
- Automatic publishing: The `release.yml` workflow then:
  - Runs lint, type-check, and tests
  - Builds the package
  - Publishes to npm
  - Creates a GitHub Release with auto-generated notes
To merge a release PR without publishing (e.g., for testing):
- Include `[SKIP RELEASE]` in the PR title
- Example: `[SKIP RELEASE] chore: bump version to 0.2.4`
You can manually create and push the tag later when ready.
For manual publishing without the automated workflow:
```bash
# Build and test
npm run prepublishOnly

# Login to npm (first time only)
npm login

# Publish
npm publish --access public
```

Before publishing, confirm:

- All tests passing (`npm test`)
- Version bumped in package.json
- No uncommitted changes
- Release PR created from a `release/v*` branch
If you're migrating from the original convert-hashnode.js script, here are the key differences:
| Original Script | This Package |
|---|---|
| Environment variables (`EXPORT_DIR`, `READ_DIR`) | CLI arguments (`--export`, `--output`) |
| Hardcoded paths | User-specified paths |
| Single output format | Same output format, more control |
- Install the package:

  ```bash
  npm install @alvincrespo/hashnode-content-converter
  ```

- Replace script invocation:

  ```bash
  # Old way (convert-hashnode.js)
  EXPORT_DIR=blog READ_DIR=blog node convert-hashnode.js

  # New way
  npx @alvincrespo/hashnode-content-converter convert \
    --export ./hashnode/export-articles.json \
    --output ./blog
  ```

- Output format: The generated Markdown files maintain the same structure (see the sketch after this list):
  - YAML frontmatter with title, date, description, cover image
  - Cleaned markdown content (align attributes removed)
  - Downloaded images in post directories
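As a rough illustration of consuming that output downstream (not part of this package), a static site generator or script could read a converted post with a frontmatter parser such as `gray-matter`; the file path and frontmatter key names below are assumptions, not something the converter guarantees.

```typescript
// Illustrative only: parse a converted post with gray-matter (a third-party
// library, not a dependency of this package). Adjust the path to wherever the
// converter wrote your post; frontmatter key names are assumptions.
import { readFileSync } from 'node:fs';
import matter from 'gray-matter';

const raw = readFileSync('./blog/my-post.md', 'utf8');
const { data, content } = matter(raw);

console.log(data.title);      // frontmatter title (exact key name may differ)
console.log(content.length);  // length of the cleaned markdown body
```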
If you were importing functions from the original JavaScript script, you can now use the new typed API:
```javascript
// Old: CommonJS JavaScript (no type information)
const { processPost, downloadImage } = require('./convert-hashnode');
```

```typescript
// New: ESM TypeScript with full type support
import { Converter, PostParser, ImageProcessor } from '@alvincrespo/hashnode-content-converter';
```

Note: This package uses ESM (ECMAScript Modules). If your project uses CommonJS, you'll need to use dynamic imports:

```typescript
const { Converter } = await import('@alvincrespo/hashnode-content-converter');
```
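Because CommonJS has no top-level `await`, the dynamic import needs to live inside an async function. A minimal sketch (paths are placeholders):

```typescript
// Minimal CommonJS-friendly sketch: load the ESM-only package via dynamic import
// inside an async function, then run a one-shot conversion.
async function main() {
  const { Converter } = await import('@alvincrespo/hashnode-content-converter');
  const result = await Converter.fromExportFile('./export.json', './blog');
  console.log(`Converted ${result.converted} posts in ${result.duration}`);
}

main().catch((error) => {
  console.error(error);
  process.exitCode = 1;
});
```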
API Reference: alvincrespo.github.io/hashnode-content-converter
Additional documentation:
- Getting Started Guide - Installation and basic usage
- CLI Reference - Command-line options
- Programmatic API - Using the converter in code
- Advanced Usage - Custom processors and events
Internal documentation:
- docs/TRANSITION.md - Architecture and implementation history
- CLAUDE.md - Project guidelines for development
- docs/phases/ - Phase-by-phase implementation plans
This project follows strict TypeScript and testing standards:
- TypeScript: Strict mode, no `any` types in critical paths
- Testing: 90%+ coverage required for new implementations
- Documentation: JSDoc on all public APIs
- Code Style: ESLint enforced
See CLAUDE.md for detailed development guidelines.
MIT