This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
@alvincrespo/hashnode-content-converter is a TypeScript npm package that converts Hashnode blog exports into framework-agnostic Markdown with YAML frontmatter. It was refactored from a monolithic Node.js script (convert-hashnode.js) into a modular, type-safe, reusable package.
Current Status: Feature-complete and production-ready. All core processors, services, and CLI are fully implemented with 99.36% test coverage (363 tests). See TRANSITION.md for the implementation history.
Platform Support: This package is designed for Unix-like systems (macOS, Linux). Windows is not supported.
Reference: TRANSITION.md contains the full implementation roadmap and architectural design.
- Runtime: Node.js >=18.0.0 (using nvm for version management)
- Language: TypeScript 5.0+ (target: ES2022, module: NodeNext)
- Build: TypeScript compiler with incremental builds
- Testing: Vitest with @vitest/ui dashboard
- Linting: ESLint + @typescript-eslint
- CLI: commander.js for argument parsing
- Package Manager: npm (ESM with `"type": "module"`, published to npm registry)
This project uses nvm (Node Version Manager) to manage Node.js versions. Before running any npm or node commands, you MUST set the correct Node version:
# Set Node version from .node-version file
nvm use $(cat .node-version)
# Verify the correct version is active
node --version # Should show v24.4.0 (or the version in .node-version)
npm --version # Should show 11.6.0 or compatible

For interactive terminal work (zsh recommended):
- Your zsh shell should already have nvm properly initialized via your .zshrc
- Simply run npm commands directly - nvm will automatically use the version from .node-version
- This is the preferred way to work locally
For Claude Code automated tasks:
- Claude's bash environment doesn't persist nvm initialization across commands
- Always chain commands with `&&` to keep nvm in the shell context
- Pattern: `nvm use $(cat .node-version) && npm run build`

Example:

# Good - nvm stays active for the npm command
nvm use $(cat .node-version) && npm run type-check && npm run build

# Avoid - nvm state is lost between separate commands
nvm use $(cat .node-version)
npm run build # ❌ npm may not be found
# Build TypeScript to dist/
npm run build
# Watch mode (auto-rebuild on file changes)
npm run dev
# Run tests once
npm test
# Run tests in watch mode
npm run test:watch
# Open interactive test dashboard (useful for debugging)
npm run test:ui
# Generate coverage reports (includes html report)
npm run test:coverage
# Type-check without emitting (fast feedback loop)
npm run type-check
# Run full pre-publication checks (build + test)
npm run prepublishOnly

Quick Feedback Loop for Development:
# Terminal 1: Watch TypeScript compilation
npm run dev
# Terminal 2: Watch tests in background
npm run test:watch
# Terminal 3: Type checking before commit
npm run type-check

The application uses a modular, service-oriented design with clear separation of concerns. The conversion pipeline flows through distinct phases:
Hashnode Export JSON
↓
PostParser (extract metadata)
↓
MarkdownTransformer (fix Hashnode-specific issues)
↓
ImageProcessor (download & localize images)
↓
FrontmatterGenerator (create YAML frontmatter)
↓
FileWriter (persist to disk)
↓
Logger (track results & errors)
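The phases above can be sketched end-to-end as follows. The helper function names are illustrative stand-ins for the processor classes, and the YAML escaping is deliberately simplified; see src/converter.ts for the real orchestration.

```typescript
// Simplified sketch of the conversion pipeline. Function names here are
// stand-ins for the real processor classes (PostParser, MarkdownTransformer,
// FrontmatterGenerator); FileWriter and Logger are omitted for brevity.
interface PostMetadata {
  title: string;
  slug: string;
}

// PostParser: extract metadata from the raw export entry
function parseMetadata(post: { title: string; slug: string }): PostMetadata {
  return { title: post.title, slug: post.slug };
}

// MarkdownTransformer: fix Hashnode-specific issues (e.g. align attributes)
function fixMarkdown(markdown: string): string {
  return markdown.replace(/\s*align="[^"]*"/g, '');
}

// FrontmatterGenerator: emit YAML frontmatter (real escaping is more careful)
function toFrontmatter(meta: PostMetadata): string {
  return `---\ntitle: "${meta.title}"\nslug: "${meta.slug}"\n---`;
}

function convertPost(post: {
  title: string;
  slug: string;
  contentMarkdown: string;
}): string {
  const meta = parseMetadata(post);
  const body = fixMarkdown(post.contentMarkdown);
  return `${toFrontmatter(meta)}\n\n${body}`;
}
```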
- src/types/ - TypeScript interfaces defining the Hashnode export schema, conversion options, and result types
- src/processors/ - Single-responsibility classes that transform content (parsing, markdown fixing, image processing, frontmatter generation)
- src/services/ - Infrastructure services (HTTP downloads, filesystem I/O, logging)
- src/cli/ - Command-line interface using commander.js
- src/converter.ts - Main orchestrator that coordinates the pipeline
- tests/ - Unit and integration tests with Vitest (363 tests, 99.36% coverage)
- tests/fixtures/ - Sample Hashnode export JSON for testing
| Class | Location | Status | Purpose |
|---|---|---|---|
| Converter | src/converter.ts | Complete | Orchestrates the entire conversion pipeline with an event system |
| PostParser | src/processors/post-parser.ts | Complete | Extracts and validates metadata from Hashnode posts |
| MarkdownTransformer | src/processors/markdown-transformer.ts | Partial | Align attribute removal and whitespace trimming work; callout conversion stubbed |
| ImageProcessor | src/processors/image-processor.ts | Complete | Downloads images with a marker-based retry strategy |
| FrontmatterGenerator | src/processors/frontmatter-generator.ts | Complete | Generates YAML frontmatter with proper escaping |
| FileWriter | src/services/file-writer.ts | Complete | Atomic file writes with path sanitization |
| ImageDownloader | src/services/image-downloader.ts | Complete | HTTPS downloads with retry logic and 403 handling |
| Logger | src/services/logger.ts | Complete | Dual logging (console + file) with HTTP 403 tracking |
Reference Implementation: convert-hashnode.js contains the original 343-line monolithic script that this package was refactored from.
Processors handle content transformation with single responsibility. When implementing a new processor:
- Define input/output types in src/types/
- Create processor class in src/processors/ with a single `process()` or `transform()` method
- Add unit tests in tests/unit/
- Integrate into the pipeline in src/converter.ts
Example pattern:
class MyProcessor {
process(input: Input): Output {
// single responsibility transformation
return output;
}
}

Services handle infrastructure concerns (I/O, networking). Create in src/services/ following the dependency injection pattern:
class MyService {
constructor(private config?: ServiceConfig) {}
async doWork(): Promise<Result> {
// ...
}
}

Tests use Vitest. Follow this pattern:
import { describe, it, expect, vi } from 'vitest';
import { ClassToTest } from '../src/path/to/class';
describe('ClassToTest', () => {
it('should do something', async () => {
// Arrange
const instance = new ClassToTest();
// Act
const result = await instance.method();
// Assert
expect(result).toEqual(expected);
});
});

Use tests/fixtures/ sample data for realistic test cases. Mock external dependencies (HTTP, filesystem) for unit tests.
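Because services take dependencies through their constructors, a unit test can inject a fake HTTP client instead of touching the network. The `ImageFetcher` and `HttpClient` names below are hypothetical, chosen for this sketch; the real mock factories live in tests/mocks/mocks.ts.

```typescript
// Illustrative sketch: injecting a fake HTTP client into a service.
// ImageFetcher and HttpClient are hypothetical names for this example.
type HttpClient = (url: string) => Promise<{ status: number }>;

class ImageFetcher {
  constructor(private http: HttpClient) {}

  async fetch(url: string): Promise<{ ok: boolean; is403: boolean }> {
    const res = await this.http(url);
    return { ok: res.status === 200, is403: res.status === 403 };
  }
}

// A Vitest test would typically wrap the fake in vi.fn() to assert call
// counts; the essential move is the constructor injection itself:
const fake: HttpClient = async () => ({ status: 403 });
const fetcher = new ImageFetcher(fake);
```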
The Hashnode schema is defined in src/types/hashnode-schema.ts. Key types:
- `HashnodePost` - Full post schema from the export
- `PostMetadata` - Subset of fields extracted by PostParser
- `ConversionOptions` - Configuration (skipExisting, downloadDelayMs)
- `ConversionResult` - Result with stats (converted, skipped, errors, duration)
Always use these types when working with the data pipeline to catch errors early.
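As a minimal illustration of using these types at a call site (the field names follow the descriptions above, but src/types/ remains the authoritative source and may differ in detail):

```typescript
// Hypothetical shapes mirroring the descriptions above; the real
// definitions live in src/types/ and may differ in detail.
interface ConversionOptions {
  skipExisting?: boolean;
  downloadDelayMs?: number;
}

interface ConversionResult {
  converted: number;
  skipped: number;
  errors: number;
  duration: number; // milliseconds
}

const options: ConversionOptions = { skipExisting: true, downloadDelayMs: 250 };

function summarize(result: ConversionResult): string {
  return `${result.converted} converted, ${result.skipped} skipped, ` +
    `${result.errors} errors in ${result.duration}ms`;
}
```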
The CLI is defined in src/cli/convert.ts and registered in package.json as the hashnode-converter binary. It provides:
Options:
- `--export <path>` (required) - Path to Hashnode export JSON file
- `--output <path>` (required) - Output directory for converted markdown files
- `--log-file <path>` (optional) - Path for conversion log file
- `--skip-existing` / `--no-skip-existing` - Skip already converted posts (default: true)
- `--verbose` - Show detailed output including image downloads
- `--quiet` - Suppress all output except errors
Features:
- Comprehensive path validation with helpful error messages
- Progress bar with ASCII visualization during conversion
- Proper exit codes (0 for success, 1 for errors)
Usage:
npx @alvincrespo/hashnode-content-converter convert --export ./export.json --output ./blog
npx @alvincrespo/hashnode-content-converter convert --export ./export.json --output ./blog --verbose
npx @alvincrespo/hashnode-content-converter convert --export ./export.json --output ./blog --no-skip-existing

- Main entry: dist/index.js (compiled from src/index.ts)
- Types entry: dist/index.d.ts (auto-generated)
- CLI entry: dist/cli/convert.js
- Output format: ESM (ECMAScript Modules) for Node.js >=18
- Module settings: `"type": "module"` in package.json, `verbatimModuleSyntax: true` in tsconfig
Build configuration excludes tests and uses tsconfig.build.json.
Comments should explain the "why" and "what's non-obvious", not restate the code.
ADD comments when:
- Explaining non-obvious algorithmic choices
- Clarifying why a certain error handling strategy was chosen
- Documenting gotchas, edge cases, or side effects that aren't obvious
- Adding business logic or domain knowledge needed to understand intent
- Explaining performance considerations or tradeoffs
SKIP comments for:
- Simple boolean checks where variable names are self-explanatory
- Standard control flow (if/else, loops)
- What standard library functions do
- Code that clearly states what it does
Example:
// Bad: Just restates what the code does
if (result.is403) {
return result;
}
// Good: Explains why we don't retry
// Don't retry on 403 - indicates the URL is permanently inaccessible
// rather than a transient network failure, so further attempts are wasteful
if (result.is403) {
return result;
}

Use JSDoc for:
- Public methods, functions, and interfaces
- Complex configuration options
- Return types and error conditions
- Usage examples for non-obvious behavior
Keep JSDoc concise but complete.
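A short sketch of that JSDoc style (the function and its contract are hypothetical; the real download logic lives in src/services/image-downloader.ts):

```typescript
/**
 * Downloads an image with retry logic.
 *
 * @param url - Absolute HTTPS URL of the image
 * @returns The image bytes, or null when the server responds with 403
 * @throws After exhausting retries on transient network errors
 */
async function downloadImage(url: string): Promise<Uint8Array | null> {
  // Hypothetical stub for illustrating the JSDoc shape only
  return null;
}
```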
- Service Oriented: Each service has a single purpose
- Dependency Injection: Services accept configuration via constructor
- Pipeline Pattern: Converter orchestrates sequential processors
- Error Tracking: Logger tracks errors separately for reporting
- Markdown includes `align="..."` attributes that need removal
- Images reference CDN URLs that should be downloaded locally
- Metadata fields may be null/undefined (handle with defaults)
- Posts have both `contentMarkdown` (raw) and `content` (HTML) - use `contentMarkdown`
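A sketch of defensively handling those last two quirks (the `RawPost` shape here is illustrative; the real schema is in src/types/hashnode-schema.ts):

```typescript
// Illustrative subset of a Hashnode export entry.
interface RawPost {
  title?: string | null;
  contentMarkdown?: string | null;
  content?: string | null; // rendered HTML - do not use
}

function normalize(post: RawPost): { title: string; markdown: string } {
  return {
    title: post.title ?? 'Untitled',      // default null/undefined metadata
    markdown: post.contentMarkdown ?? '', // prefer raw markdown over HTML
  };
}
```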
The ImageProcessor needs to:
- Extract all image URLs from markdown
- Download each image using ImageDownloader (with retry logic)
- Replace CDN URLs with local relative paths
- Handle download failures gracefully (track in Logger)
- Skip already-downloaded images
HTTP 403 errors should be tracked separately as they indicate permission issues rather than transient failures.
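The extract-and-rewrite steps can be sketched as below. The markdown image regex and the `./images/` output path are assumptions for illustration, not the package's actual layout.

```typescript
// Matches markdown images with absolute http(s) URLs: ![alt](https://...)
const IMAGE_RE = /!\[([^\]]*)\]\((https?:\/\/[^)\s]+)\)/g;

// Step 1: collect all remote image URLs from the markdown
function extractImageUrls(markdown: string): string[] {
  return [...markdown.matchAll(IMAGE_RE)].map((m) => m[2]);
}

// Step 3: swap CDN URLs for local relative paths, using a caller-supplied
// naming function (download and failure tracking happen between steps)
function rewriteToLocal(
  markdown: string,
  localName: (url: string) => string,
): string {
  return markdown.replace(
    IMAGE_RE,
    (_m, alt, url) => `![${alt}](./images/${localName(url)})`,
  );
}
```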
- Unit Tests: 305 tests across 8 test files covering all processors and services
- Integration Tests: 58 tests for full pipeline in tests/integration/converter.test.ts
- Fixtures: tests/fixtures/sample-hashnode-export.json contains real-world example data
- Mocks: tests/mocks/mocks.ts provides factory functions for HTTP responses, file streams, and console output
The project currently has 363 tests with 99.36% code coverage:
| Component | Tests | Coverage |
|---|---|---|
| PostParser | 51 | 100% |
| MarkdownTransformer | 41 | 100% |
| ImageProcessor | 51 | 98%+ |
| FrontmatterGenerator | 9 | 100% |
| ImageDownloader | 28 | 98.36% |
| FileWriter | 32 | 97.77% |
| Logger | 48 | 98.85% |
| CLI | 45 | 98%+ |
| Converter (integration) | 58 | 99.27% |
When implementing or modifying a service or processor, verify completeness with:
- Tests Pass: Run `npm test` - all tests must pass without errors
- Code Coverage: Run `npm run test:coverage` - target 90%+ coverage for the code being tested
  - Statements: ≥90%
  - Branches: ≥90% (critical for error handling paths)
  - Functions: ≥90%
  - Lines: ≥90%
Coverage Goal: 80%+ overall project coverage, 90%+ for new implementations. Current project coverage (99.36%) exceeds all targets.
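Those thresholds can also be enforced at test time. This is a hypothetical vitest.config.ts excerpt, not the project's actual config; the exact coverage keys vary across Vitest versions, so verify against your installed version's documentation.

```typescript
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      reporter: ['text', 'html'],
      // Fails the coverage run if any metric drops below 90%
      thresholds: {
        statements: 90,
        branches: 90,
        functions: 90,
        lines: 90,
      },
    },
  },
});
```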
This project uses an automated release workflow with GitHub Actions. Releases go through a PR-based process to ensure CI passes before publishing.
Use the /release Claude command to cut a new release:
/release patch # Bug fixes: 0.2.2 → 0.2.3
/release minor # New features: 0.2.2 → 0.3.0
/release major # Breaking changes: 0.2.2 → 1.0.0

The /release command automates the following process:
- Create release branch: `release/v<new-version>` from main
- Bump version: Updates package.json (without creating a tag)
- Push and create PR: Opens a PR for review
- CI runs: Tests, linting, and type-checking must pass
- Merge PR: After approval and CI passes
- Auto-tag (GitHub Action): The auto-tag-release.yml workflow automatically creates and pushes the `v<version>` tag
- Publish (GitHub Action): The release.yml workflow publishes to npm and creates a GitHub Release
To merge a release PR without triggering the npm publish (e.g., for testing):
- Include `[SKIP RELEASE]` in the PR title
- Example: `[SKIP RELEASE] chore: bump version to 0.2.3`
After merging, you can manually tag later using /release tag.
If auto-tagging was skipped, use:
/release tag

This will:
- Pull the latest main branch
- Read the version from package.json
- Create and push the `v<version>` tag
- Trigger the release workflow
- .claude/commands/release.md - The `/release` command definition
- .github/workflows/auto-tag-release.yml - Auto-tags merged release PRs
- .github/workflows/release.yml - Publishes to npm on tag push
- TRANSITION.md - Implementation history and architectural decisions
- src/converter.ts - Main orchestrator with event system (shows how pieces fit together)
- src/types/hashnode-schema.ts - Data shapes throughout the pipeline
- src/index.ts - Public API exports with JSDoc documentation
- tests/integration/converter.test.ts - Full pipeline integration tests (58 tests)