Core Classes
TranscriptCompressor
Main compression engine for Claude Code transcripts.

```typescript
class TranscriptCompressor {
  constructor(options: CompressionOptions = {})
  compress(transcriptPath: string, sessionId?: string, originalProjectName?: string): Promise<string>
  showFilteredOutput(transcriptPath: string, enableChunking?: boolean): void
}
```
Constructor Options
```typescript
interface CompressionOptions {
  output?: string;   // Output directory (optional)
  dryRun?: boolean;  // Preview mode without saving
  verbose?: boolean; // Enable verbose logging
}
```
Basic Usage
```typescript
import { TranscriptCompressor } from 'claude-mem';

// Initialize compressor
const compressor = new TranscriptCompressor({
  verbose: true
});

// Compress a transcript
const archivePath = await compressor.compress(
  '/path/to/transcript.jsonl'
);

console.log(`Archive created: ${archivePath}`);
```
Compression Process
Compression stages:
- Reading - Parse JSONL transcript file
- Analyzing - Extract conversation structure and content
- Compressing - Generate semantic memories using LLM
- Writing - Store memories and create archive
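The first two stages can be sketched roughly as follows. This is an illustrative outline only, not claude-mem's actual internals; `readTranscript` and `analyze` are hypothetical helpers, and the compressing/writing stages are omitted since they involve an LLM call and disk I/O:

```typescript
// Hypothetical sketch of the Reading and Analyzing stages.
interface ParsedMessage { type: string; content: string }

// Reading: parse a JSONL transcript, tolerating malformed lines
function readTranscript(jsonl: string): ParsedMessage[] {
  return jsonl
    .split("\n")
    .filter(line => line.trim().length > 0)
    .flatMap(line => {
      try {
        return [JSON.parse(line) as ParsedMessage];
      } catch {
        return []; // skip parse errors rather than aborting
      }
    });
}

// Analyzing: flatten the conversation into text for downstream LLM analysis
function analyze(messages: ParsedMessage[]): string {
  return messages.map(m => `${m.type}: ${m.content}`).join("\n");
}
```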
ChunkManager
Handles large transcripts that exceed token limits by splitting them into chunks.

```typescript
class ChunkManager {
  needsChunking(content: string): boolean
  chunkTranscript(content: string, options?: ChunkingOptions): Chunk[]
  getChunkingStats(chunks: Chunk[]): string
  createChunkHeader(metadata: ChunkMetadata): string
}
```
Chunking Options
```typescript
interface ChunkingOptions {
  maxTokensPerChunk?: number;          // Default: 40000
  overlapMessages?: number;            // Default: 2 (context continuity)
  preserveMessageBoundaries?: boolean; // Default: true
}
```
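A naive chunker honoring these options might look like the following. This is a sketch under the assumption of a ~4-characters-per-token estimate; the real `ChunkManager` heuristics may differ:

```typescript
// Assumed heuristic: roughly 4 characters per token.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Sketch of message-boundary chunking with overlap for context continuity.
function chunkMessages(
  messages: string[],
  maxTokensPerChunk = 40000,
  overlapMessages = 2
): string[][] {
  const chunks: string[][] = [];
  let current: string[] = [];
  let tokens = 0;
  for (const msg of messages) {
    const msgTokens = estimateTokens(msg);
    if (current.length > 0 && tokens + msgTokens > maxTokensPerChunk) {
      chunks.push(current);
      // Carry the last few messages into the next chunk
      current = current.slice(-overlapMessages);
      tokens = current.reduce((sum, m) => sum + estimateTokens(m), 0);
    }
    current.push(msg);
    tokens += msgTokens;
  }
  if (current.length > 0) chunks.push(current);
  return chunks;
}
```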
Chunk Structure
```typescript
interface Chunk {
  content: string;
  metadata: ChunkMetadata;
}

interface ChunkMetadata {
  chunkIndex: number;
  totalChunks: number;
  startIndex: number;
  endIndex: number;
  estimatedTokens: number;
  hasOverlap: boolean;
  overlapMessages?: number;
  firstTimestamp?: string;
  lastTimestamp?: string;
}
```
```typescript
import { ChunkManager } from 'claude-mem';

const chunkManager = new ChunkManager();
const content = readLargeTranscript();

// Check if chunking is needed
if (chunkManager.needsChunking(content)) {
  console.log("Large transcript detected, chunking...");

  const chunks = chunkManager.chunkTranscript(content, {
    maxTokensPerChunk: 35000,
    overlapMessages: 3
  });

  console.log(chunkManager.getChunkingStats(chunks));

  // Process each chunk
  for (const chunk of chunks) {
    console.log(`Processing chunk ${chunk.metadata.chunkIndex + 1}/${chunk.metadata.totalChunks}`);
    // Each chunk maintains context from the previous chunk
  }
}
```
PromptOrchestrator
Manages prompt generation for LLM analysis.

```typescript
class PromptOrchestrator {
  constructor(projectName?: string)
  createAnalysisPrompt(context: AnalysisContext): AnalysisPrompt
  createSessionStartPrompt(context: SessionContext): SessionPrompt
  createHookResponse(context: HookContext): HookResponse
}
```
Analysis Context
```typescript
interface AnalysisContext {
  transcriptContent: string;       // The conversation to analyze
  sessionId: string;               // Session identifier
  projectName?: string;            // Project context
  customInstructions?: string;     // Additional analysis instructions
  trigger?: 'manual' | 'auto';     // How compression was triggered
  originalTokens?: number;         // Original token count
  targetCompressionRatio?: number; // Desired compression ratio
}
```
```typescript
import {
  PromptOrchestrator,
  createAnalysisContext
} from 'claude-mem';

const orchestrator = new PromptOrchestrator('my-project');

// Create analysis context
const context = createAnalysisContext(
  transcriptContent,
  'session_123',
  {
    projectName: 'web-app',
    trigger: 'manual',
    originalTokens: 45000
  }
);

// Generate analysis prompt
const analysisPrompt = orchestrator.createAnalysisPrompt(context);
console.log('Prompt type:', analysisPrompt.type);      // 'analysis'
console.log('Generated at:', analysisPrompt.timestamp);
```
Message Processing
Transcript Message Structure
Process the various message types found in transcripts:

```typescript
interface TranscriptMessage {
  type: string; // 'user', 'assistant', 'system', 'tool_result'
  message?: {
    content?: string | ContentItem[];
    role?: string;
    timestamp?: string;
  };
  content?: string | ContentItem[];
  role?: string;
  uuid?: string;
  session_id?: string;
  timestamp?: string;
  subtype?: string;
  result?: string;
  model?: string;
  tools?: unknown[];
  mcp_servers?: unknown[];
  toolUseResult?: ToolUseResult;
}

interface ContentItem {
  type: 'text' | 'tool_use' | 'tool_result' | 'thinking';
  text?: string;
  thinking?: string;
  name?: string;     // For tool_use
  id?: string;       // For tool_use
  content?: unknown; // For tool_result
}

interface ToolUseResult {
  stdout?: string;
  stderr?: string;
  interrupted?: boolean;
  isImage?: boolean;
}
```
Content Extraction
Extract content from different message types:

```typescript
// User and assistant messages
if (message.type === 'user' || message.type === 'assistant') {
  const content = message.message?.content;
  if (Array.isArray(content)) {
    // Handle mixed content (text, tool_use, tool_result)
    const extractedText = content
      .map(item => extractContentItem(item))
      .filter(Boolean)
      .join(' ');
  }
}

// Tool results with large content filtering
if (message.type === 'tool_result') {
  const contentSize = calculateContentSize(message.content);
  if (contentSize > 1024 * 1024) { // 1MB threshold
    const sizeMB = Math.round(contentSize / (1024 * 1024) * 10) / 10;
    return `[FILTERED: Large tool result ~${sizeMB}MB]`;
  }
}
```
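The snippet above calls `extractContentItem` and `calculateContentSize`, which are not shown. A plausible sketch of both, under stated assumptions (these are not the library's actual implementations):

```typescript
// Mirrors the documented ContentItem shape for a self-contained example.
interface ContentItem {
  type: "text" | "tool_use" | "tool_result" | "thinking";
  text?: string;
  thinking?: string;
  name?: string;
  content?: unknown;
}

// Sketch: reduce a ContentItem to plain text, or "" if nothing is extractable.
// The "[tool: name]" placeholder format is an assumption for illustration.
function extractContentItem(item: ContentItem): string {
  switch (item.type) {
    case "text":
      return item.text ?? "";
    case "thinking":
      return item.thinking ?? "";
    case "tool_use":
      return item.name ? `[tool: ${item.name}]` : "";
    case "tool_result":
      return typeof item.content === "string" ? item.content : "";
  }
}

// Sketch: UTF-8 byte size of arbitrary content via JSON serialization.
function calculateContentSize(content: unknown): number {
  if (typeof content === "string") return Buffer.byteLength(content, "utf8");
  return Buffer.byteLength(JSON.stringify(content ?? ""), "utf8");
}
```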
Compression Results
CompressionResult Interface
```typescript
interface CompressionResult {
  compressedLines: string[]; // Generated memory summaries
  originalTokens: number;    // Input token count
  compressedTokens: number;  // Output token count
  compressionRatio: number;  // Achieved compression ratio
  memoryNodes: string[];     // Created memory document IDs
}
```
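The interface does not specify how `compressionRatio` is computed; one reasonable definition (an assumption, not confirmed by the source) is output tokens over input tokens, where smaller is better:

```typescript
// Assumed definition: compressedTokens / originalTokens (smaller is better).
function compressionRatio(originalTokens: number, compressedTokens: number): number {
  if (originalTokens === 0) return 0; // guard against empty input
  return compressedTokens / originalTokens;
}
```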
Memory Document Format
Memory JSON structure:

```typescript
interface ExtractedMemory {
  text: string;        // The memory content
  document_id: string; // Unique identifier
  keywords: string;    // Comma-separated keywords
  timestamp: string;   // ISO timestamp
  archive: string;     // Archive filename reference
}
```
Archive Structure
Compression artifacts:

```text
~/.claude-mem/
├── archives/
│   └── {project}/
│       └── session_123.jsonl.archive   # Original transcript
├── index.jsonl                         # Memory index (JSONL format)
└── logs/
    ├── claude-mem-{timestamp}.log      # Debug logs
    ├── claude-prompt-{timestamp}.txt   # LLM prompts
    └── claude-response-{timestamp}.txt # LLM responses
```
Example `index.jsonl` entry:

```json
{
  "type": "memory",
  "text": "Implemented JWT authentication with refresh token rotation",
  "document_id": "auth_implementation_jwt",
  "keywords": "authentication, JWT, security, tokens",
  "session_id": "session_123",
  "project": "web_app",
  "timestamp": "2024-01-15T10:30:00Z",
  "archive": "session_123.jsonl.archive"
}
```
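Entries in this format can be read back line by line. A sketch of loading the index (the `loadMemories` helper is hypothetical; the entry shape follows the structures documented above):

```typescript
// Subset of index fields relevant to this sketch.
interface MemoryEntry {
  type: string;
  text: string;
  document_id: string;
  keywords: string;
  project?: string;
}

// Sketch: parse index.jsonl content, keeping only "memory" entries,
// optionally filtered to one project.
function loadMemories(indexJsonl: string, project?: string): MemoryEntry[] {
  return indexJsonl
    .split("\n")
    .filter(line => line.trim().length > 0)
    .flatMap(line => {
      try {
        const entry = JSON.parse(line) as MemoryEntry;
        return entry.type === "memory" && (!project || entry.project === project)
          ? [entry]
          : [];
      } catch {
        return []; // skip malformed lines rather than failing the whole load
      }
    });
}
```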
Command-Line Integration
compress Command
```typescript
import { compress } from 'claude-mem/commands';

// Programmatic usage
await compress('/path/to/transcript.jsonl', {
  sessionId: 'custom_session',
  verbose: true
});
```
CLI Options
```shell
# Basic compression
claude-mem compress /path/to/transcript.jsonl

# With custom session ID
claude-mem compress /path/to/transcript.jsonl --session-id=my_session

# Verbose output
claude-mem compress /path/to/transcript.jsonl --verbose

# Show filtered output without compression
claude-mem compress --show-filtered /path/to/transcript.jsonl
```
Error Handling
CompressionError
```typescript
import { CompressionError } from 'claude-mem';

try {
  await compressor.compress(transcriptPath);
} catch (error) {
  if (error instanceof CompressionError) {
    console.error(`Compression failed at stage: ${error.stage}`);
    console.error(`Transcript: ${error.transcriptPath}`);

    switch (error.stage) {
      case 'reading':
        console.error('Check file permissions and format');
        break;
      case 'analyzing':
        console.error('Transcript content may be corrupted');
        break;
      case 'compressing':
        console.error('LLM analysis failed');
        break;
      case 'writing':
        console.error('Check disk space and permissions');
        break;
    }
  }
}
```
Common Error Scenarios
```typescript
// File not found
if (!fs.existsSync(transcriptPath)) {
  throw new CompressionError(
    'Transcript file not found',
    transcriptPath,
    'reading'
  );
}

// Invalid JSON format
try {
  JSON.parse(line);
} catch (e) {
  console.warn(`Parse error on line ${i}: ${e.message}`);
  // Processing continues; parse errors are logged
}
```
Performance Optimization
Token Management
```typescript
// Automatic chunking for large transcripts
const needsChunking = chunkManager.needsChunking(conversationText);

if (needsChunking) {
  // Process in chunks with context overlap
  const chunks = chunkManager.chunkTranscript(conversationText);
  for (const chunk of chunks) {
    // Each chunk maintains context continuity
    await processChunk(chunk);
  }
} else {
  // Single-pass processing for smaller transcripts
  await processSingleTranscript(conversationText);
}
```
Memory Efficiency
```typescript
// Large content filtering
const LARGE_CONTENT_THRESHOLD = 1024 * 1024; // 1MB

if (contentSize > LARGE_CONTENT_THRESHOLD) {
  const sizeMB = Math.round(contentSize / (1024 * 1024) * 10) / 10;
  return `[FILTERED: Large content ~${sizeMB}MB]`;
}

// Stream processing for memory efficiency
for await (const message of response) {
  processStreamChunk(message);
}
```
Best Practices
1. Project Organization
```typescript
// Use consistent project naming
const projectName = PathResolver.getCurrentProjectPrefix();
const compressor = new TranscriptCompressor();

// This ensures memories are project-scoped
await compressor.compress(transcriptPath, sessionId, projectName);
```
2. Session Management
```typescript
// Generate meaningful session IDs
const sessionId = `${projectName}_${new Date().toISOString().split('T')[0]}_${Math.random().toString(36).slice(2, 11)}`;

// Or use timestamp-based IDs
const timestampSessionId = `session_${Date.now()}`;
```
3. Error Recovery
```typescript
// Implement retry logic for transient failures
async function compressWithRetry(transcriptPath: string, maxRetries = 3): Promise<string> {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await compressor.compress(transcriptPath);
    } catch (error) {
      if (attempt === maxRetries) throw error;
      if (error instanceof CompressionError && error.stage === 'compressing') {
        console.warn(`Attempt ${attempt} failed, retrying in ${attempt * 1000}ms...`);
        await new Promise(resolve => setTimeout(resolve, attempt * 1000));
      } else {
        throw error; // Don't retry non-transient errors
      }
    }
  }
  throw new Error('unreachable'); // loop always returns or throws; satisfies the type checker
}
```
Next Steps
- Memory API - Learn about vector database operations
- Hooks API - Integrate with Claude Code events
- API Overview - General API concepts