tryError Documentation
Performance & Best Practices
Optimize tryError for performance and follow best practices for robust error handling
Performance Overview
📊 Real-World Performance Metrics
✅ Success Path
- • <3% overhead compared to native try/catch
- • Direct value return with no wrapper objects
- • No performance penalty for successful operations
- • Suitable for hot paths and performance-critical code
Benchmark: 1M successful JSON.parse operations
Native: 100ms | tryError: 103ms (3% overhead)
⚠️ Error Path
- • 20% to 120% overhead (configurable)
- • Default config: ~100-120% overhead (rich debugging)
- • Production config: ~40% overhead (no stack traces)
- • Minimal config: ~20% overhead (bare essentials)
Benchmark: 1M failed JSON.parse operations
Native: 4,708ms | Default: 85,734ms | Minimal: 7,062ms
💡 Understanding the Error Overhead
- • Stack trace capture: ~60% of total overhead
- • Context deep cloning: ~25% of total overhead
- • Source location parsing: ~10% of total overhead
- • Timestamp generation: ~5% of total overhead
- • Errors should be exceptional (rare)
- • Rich debugging saves developer time
- • Configurable for high-error scenarios
- • Still faster than many logging libraries
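These shares compose roughly additively, so you can sketch the residual overhead of a custom configuration. The model below is a back-of-envelope illustration (the numbers come from the list above, not from measuring the library); always benchmark your own workload.

```typescript
// Rough additive model of error-path overhead, using the shares listed above.
// Treat the default config's total overhead as 100 units.
const overheadShares = {
  stackTrace: 60,     // stack trace capture
  contextClone: 25,   // context deep cloning
  sourceLocation: 10, // source location parsing
  timestamp: 5        // timestamp generation
};

function estimatedOverhead(disabled: Array<keyof typeof overheadShares>): number {
  return Object.entries(overheadShares)
    .filter(([k]) => !disabled.includes(k as keyof typeof overheadShares))
    .reduce((sum, [, v]) => sum + v, 0);
}

console.log(estimatedOverhead([]));                          // 100 (default config)
console.log(estimatedOverhead(['stackTrace']));              // 40 (matches the production preset)
console.log(estimatedOverhead(['stackTrace', 'timestamp'])); // 35
```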
🚀 Did you know? With minimal configuration, tryError can actually be 15% FASTER than native try/catch for error handling!
tryError is designed to be lightweight and performant, but there are several ways to optimize it further for your specific use case.
Stack Trace Optimization
Stack trace capture can be expensive. Optimize based on your environment.
1// Production: Disable stack traces for performance
2configureTryError({
3 captureStackTrace: process.env.NODE_ENV !== 'production',
4 stackTraceLimit: process.env.NODE_ENV === 'production' ? 0 : 10
5});
6
7// Alternative: Lazy stack trace capture
8configureTryError({
9 lazyStackTrace: true, // Only capture when accessed
10 stackTraceLimit: 5 // Limit depth in production
11});
Performance Impact
Disabling stack traces can improve error creation performance by 60-80% in production.
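To see where that saving comes from, compare an error object that skips stack capture with one that pays for it. This is an illustration of the mechanism only, not tryError's actual internals:

```typescript
// Illustration: why skipping stack capture is cheaper. Not tryError's real code.

// An error shaped like a TryError, but without a stack trace — just an object literal.
function cheapError(type: string, message: string) {
  return { type, message, timestamp: Date.now() };
}

// An error that pays for stack capture, as `new Error()` does:
// the engine must walk and format the call stack at construction time.
function richError(type: string, message: string) {
  const err = new Error(message) as Error & { type: string };
  err.type = type;
  return err;
}

const cheap = cheapError('ParseError', 'bad input');
const rich = richError('ParseError', 'bad input');

console.log('stack' in cheap);  // false — nothing was captured
console.log(typeof rich.stack); // "string" — the engine walked the call stack
```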
Context Size Management
Large error contexts can impact memory usage and serialization performance.
1// Limit context size
2configurePerformance({
3 contextCapture: {
4 maxContextSize: 1024 * 5, // 5KB limit
5 deepClone: false, // Avoid deep cloning large objects
6 timeout: 50 // Timeout async context capture
7 }
8});
9
10// Smart context filtering
11function createOptimizedError(type: string, message: string, context: any) {
12 // Filter out large or unnecessary context
13 const filteredContext = {
14 ...context,
15 // Remove large arrays/objects
16 largeData: context.largeData ? '[TRUNCATED]' : undefined,
17 // Keep only essential fields
18 userId: context.userId,
19 requestId: context.requestId,
20 timestamp: Date.now()
21 };
22
23 return createTryError(type, message, filteredContext);
24}
Error Object Pooling
For high-frequency error scenarios, consider object pooling to reduce GC pressure.
1// Enable experimental object pooling
2configurePerformance({
3 errorCreation: {
4 objectPooling: true,
5 poolSize: 100
6 }
7});
8
9// Manual pooling for critical paths
10class ErrorPool {
11 private pool: TryError[] = [];
12
13 get(type: string, message: string, context?: any): TryError {
14 const error = this.pool.pop() || this.createNew();
15 this.reset(error, type, message, context);
16 return error;
17 }
18
19 release(error: TryError): void {
20 if (this.pool.length < 50) { // Max pool size
21 this.pool.push(error);
22 }
23 }
24
25 private createNew(): TryError {
26 return createTryError('', '');
27 }
28
29 private reset(error: TryError, type: string, message: string, context?: any): void {
30 (error as any).type = type;
31 (error as any).message = message;
32 (error as any).context = context;
33 (error as any).timestamp = Date.now();
34 }
35}
36
37const errorPool = new ErrorPool();
Performance Optimization Strategies
Configuration Presets
Use built-in presets to quickly optimize for different scenarios.
1import { configure, ConfigPresets } from '@try-error/core';
2
3// Development: Full debugging (100-120% error overhead)
4configure(ConfigPresets.development());
5// ✅ Stack traces, source location, detailed logging
6
7// Production: Balanced (40% error overhead)
8configure(ConfigPresets.production());
9// ✅ No stack traces, minimal logging, better performance
10
11// Performance: Optimized (30% error overhead)
12configure(ConfigPresets.performance());
13// ✅ Caching, lazy evaluation, object pooling
14
15// Minimal: Ultra-light (20% error overhead)
16configure(ConfigPresets.minimal());
17// ✅ Bare minimum, no stack traces, no timestamps, no context
18
19// Custom configuration for specific needs
20configure({
21 captureStackTrace: false, // Removes ~60% of error overhead
22 skipTimestamp: true, // Removes ~5% of error overhead
23 skipContext: true, // Removes ~25% of error overhead
24 minimalErrors: true // Enable all optimizations
25});
When to Use Each Preset
- • Development: Local development, debugging
- • Production: Standard production apps
- • Performance: High-throughput services
- • Minimal: Parsing, validation, expected errors
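A common pattern is to pick the preset once at startup based on environment and expected error rate. The sketch below models only the decision; the preset names match the list above, but the helper itself is hypothetical:

```typescript
// Hypothetical helper: choose a preset name once at startup.
// Preset names correspond to ConfigPresets.development/production/minimal.
type PresetName = 'development' | 'production' | 'performance' | 'minimal';

function choosePreset(
  env: string | undefined,
  expectedErrorRate: 'low' | 'high'
): PresetName {
  if (env !== 'production') return 'development'; // full debugging locally
  // In production, high-frequency expected errors justify the minimal preset.
  return expectedErrorRate === 'high' ? 'minimal' : 'production';
}

console.log(choosePreset('development', 'low')); // "development"
console.log(choosePreset('production', 'low'));  // "production"
console.log(choosePreset('production', 'high')); // "minimal"
```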
Scenario-Based Optimization
Different parts of your application may need different error handling strategies.
1// High error rate scenario (e.g., user input validation)
2function validateUserInput(data: unknown[]) {
3 // Use minimal config for expected errors
4 configure(ConfigPresets.minimal());
5
6 return data.map(item => {
7 const result = trySync(() => validateSchema(item));
8 if (isTryError(result)) {
9 return { valid: false, error: result.message };
10 }
11 return { valid: true, data: result };
12 });
13}
14
15// Low error rate scenario (e.g., internal APIs)
16async function fetchCriticalData(id: string) {
17 // Use full config for unexpected errors
18 configure(ConfigPresets.development());
19
20 const result = await tryAsync(() => fetchFromAPI(id));
21 if (isTryError(result)) {
22 // Rich error info helps debugging
23 logger.error('Critical API failure', {
24 error: result,
25 stack: result.stack,
26 context: result.context
27 });
28 throw result;
29 }
30 return result;
31}
32
33// Mixed scenario with scoped configs
34import { createScope } from '@try-error/core';
35
36const validationScope = createScope({
37 captureStackTrace: false,
38 minimalErrors: true
39});
40
41const apiScope = createScope({
42 captureStackTrace: true,
43 includeSource: true
44});
45
46async function processRequest(request: Request) {
47 // Validation with minimal overhead
48 const { createError: createValidationError } = validationScope;
49 const validationResult = trySync(() => validateRequest(request));
50
51 if (isTryError(validationResult)) {
52 return { status: 400, error: validationResult };
53 }
54
55 // API call with full debugging
56 const { createError: createAPIError } = apiScope;
57 const apiResult = await tryAsync(() => callAPI(validationResult));
58
59 if (isTryError(apiResult)) {
60 return { status: 500, error: apiResult };
61 }
62
63 return { status: 200, data: apiResult };
64}
Async Performance & Edge Cases
⚡ The Tight Loop Edge Case
In very specific scenarios, tryAsync can show higher overhead compared to native try/catch. This only occurs in tight loops with minimal async work - a pattern that's rare in real applications.
📊 When overhead is noticeable:
- • Tight loops with thousands of iterations
- • No real async work (empty promises)
- • Micro-benchmarks measuring nanoseconds
- • Creating functions inside loops
Example: 10,000 empty async calls
Native: 0.63ms | tryAsync: 1.54ms (145% overhead)
✅ When overhead is negligible:
- • Real async operations (API calls, DB queries)
- • Any I/O operations (file, network)
- • CPU-intensive work between awaits
- • Normal application code patterns
Example: 100 API calls (1ms each)
Native: 112ms | tryAsync: 113ms (1.2% overhead)
Understanding the Overhead
The overhead in tight loops comes from JavaScript engine optimizations, not tryError itself.
1// ❌ Pathological case: tight loop with no real work
2async function processEmptyPromises() {
3 for (let i = 0; i < 10000; i++) {
4 // This creates overhead due to function creation + microtask scheduling
5 await tryAsync(() => Promise.resolve(i));
6 }
7}
8
9// ✅ Real-world pattern: actual async work
10async function processApiRequests(urls: string[]) {
11 for (const url of urls) {
12 // Network latency (10-1000ms) completely dominates any overhead
13 const result = await tryAsync(() => fetch(url));
14 if (isTryError(result)) {
15 console.error('Failed to fetch ' + url + ':', result.message);
16 continue;
17 }
18 // Process response...
19 }
20}
21
22// ✅ Alternative: batch processing to minimize overhead
23async function batchProcess<T>(items: T[], processor: (item: T) => Promise<any>) {
24 // Process all items in parallel
25 const results = await Promise.allSettled(
26 items.map(item => tryAsync(() => processor(item)))
27 );
28
29 // Handle results
30 return results.map((result, index) => {
31 if (result.status === 'rejected') {
32 return createTryError('ProcessingError', 'Failed to process item', {
33 index,
34 item: items[index]
35 });
36 }
37 return result.value;
38 });
39}
Why This Happens
V8 (JavaScript engine) optimizes async/await differently when functions are created dynamically. In tight loops, the overhead of function creation and microtask scheduling becomes measurable. With real async work, this overhead is insignificant (< 0.001ms per call).
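You can see the pattern without the library. The sketch below uses a simplified stand-in for tryAsync (the real implementation does more, including richer error objects) and a processor function created once, outside the loop:

```typescript
// Simplified stand-in for tryAsync: resolves to the value on success,
// or to a plain error object on failure (never rejects).
type TryResultModel<T> = T | { type: string; message: string };

async function tryAsyncModel<T>(fn: () => Promise<T>): Promise<TryResultModel<T>> {
  try {
    return await fn();
  } catch (e) {
    return { type: 'Error', message: e instanceof Error ? e.message : String(e) };
  }
}

// Pre-created once, outside any loop, so the engine optimizes a single closure.
const double = async (n: number) => n * 2;

async function run() {
  const out: number[] = [];
  for (const n of [1, 2, 3]) {
    // Only a thin wrapper closure is created per iteration.
    const result = await tryAsyncModel(() => double(n));
    if (typeof result === 'number') out.push(result);
  }
  return out;
}

run().then(r => console.log(r)); // [2, 4, 6]
```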
Performance Optimization for Edge Cases
If you genuinely need maximum performance in tight async loops, consider these patterns.
1// Option 1: Use tryPromise (coming in v2) for existing promises
2// 64% overhead instead of 145%
3const results = await Promise.all(
4 items.map(item =>
5 tryPromise(processItem(item)) // Pass promise directly
6 )
7);
8
9// Option 2: Pre-create functions outside loops
10const processFunction = async (item: any) => {
11 // Your processing logic
12 return await someAsyncOperation(item);
13};
14
15// Reuse the same function reference
16for (const item of items) {
17 const result = await tryAsync(() => processFunction(item));
18 // Only 3.6% overhead with pre-created functions
19}
20
21// Option 3: Use native try/catch for truly performance-critical paths
22async function criticalPath(items: any[]) {
23 const results = [];
24 for (const item of items) {
25 try {
26 results.push(await processItem(item));
27 } catch (error) {
28 // Handle error manually
29      results.push(createTryError('ProcessingError', error instanceof Error ? error.message : String(error)));
30 }
31 }
32 return results;
33}
34
35// Option 4: Batch operations to amortize overhead
36async function batchedOperation<T>(
37 items: T[],
38 batchSize: number,
39 processor: (batch: T[]) => Promise<any[]>
40) {
41 const results: any[] = [];
42
43 for (let i = 0; i < items.length; i += batchSize) {
44 const batch = items.slice(i, i + batchSize);
45 const batchResult = await tryAsync(() => processor(batch));
46
47 if (isTryError(batchResult)) {
48 // Handle batch error
49 results.push(...batch.map(() => batchResult));
50 } else {
51 results.push(...batchResult);
52 }
53 }
54
55 return results;
56}
Real-World Performance Comparison
Here's how tryError performs in realistic scenarios.
1// Benchmark results from actual use cases
2
3// 1. API Call (average 50ms network latency)
4// Native try/catch: 50.2ms
5// tryAsync: 50.3ms
6// Overhead: 0.2% ✅
7
8// 2. Database Query (average 5ms)
9// Native try/catch: 5.02ms
10// tryAsync: 5.04ms
11// Overhead: 0.4% ✅
12
13// 3. File Operation (average 2ms)
14// Native try/catch: 2.01ms
15// tryAsync: 2.02ms
16// Overhead: 0.5% ✅
17
18// 4. CPU-intensive task with async checkpoints
19async function processLargeDataset(data: any[]) {
20 const chunkSize = 1000;
21 const results = [];
22
23 for (let i = 0; i < data.length; i += chunkSize) {
24 // CPU work dominates (10ms per chunk)
25 const chunk = data.slice(i, i + chunkSize);
26 const processed = await tryAsync(async () => {
27 // Simulate CPU work
28 const result = chunk.map(item => complexCalculation(item));
29
30 // Yield to event loop
31 await new Promise(resolve => setImmediate(resolve));
32
33 return result;
34 });
35
36 if (isTryError(processed)) {
37 console.error('Failed at chunk ' + (i / chunkSize));
38 continue;
39 }
40
41 results.push(...processed);
42 }
43
44 return results;
45}
46// Native: 1,025ms | tryAsync: 1,027ms | Overhead: 0.2% ✅
The Bottom Line
In 99.9% of real-world use cases, tryAsync overhead is negligible (< 1%). The convenience, type safety, and consistent error handling far outweigh the microseconds of overhead. Only optimize for tight loops if you've measured and identified it as an actual bottleneck.
Implementation Guide
Here's exactly where to put these configurations in different types of projects.
🤔 When is initialization necessary?
✅ Initialization IS needed when:
- • You want to customize error behavior (stack traces, logging, etc.)
- • You need performance optimizations for production
- • You want to set up error monitoring/reporting
- • You need environment-specific configurations
- • You want to use the setup utilities for convenience
❌ Initialization is NOT needed when:
- • You're just using basic trySync(), tryAsync(), and createTryError()
- • You're happy with the default behavior
- • You're prototyping or in early development
TL;DR: tryError works out of the box without any setup. Initialization is only needed when you want to customize its behavior or optimize for production.
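To illustrate the out-of-the-box behavior, here is a simplified stand-in for trySync showing the return convention — a value on success, an error object (never a throw) on failure. The real export produces a richer error shape:

```typescript
// Simplified stand-in for tryError's trySync, to show the return convention.
// No configuration, no setup call — it just works.
function trySyncModel<T>(fn: () => T): T | { type: string; message: string } {
  try {
    return fn();
  } catch (e) {
    return { type: 'Error', message: e instanceof Error ? e.message : String(e) };
  }
}

const ok = trySyncModel(() => JSON.parse('{"a":1}'));
console.log(ok); // { a: 1 } — the parsed value, no wrapper object

const bad = trySyncModel(() => JSON.parse('not json'));
console.log((bad as { type: string }).type); // "Error" — returned, not thrown
```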
⚡ Quick Setup (Recommended)
Use our one-liner setup utilities for instant optimization with sensible defaults:
1// Node.js/Express - Automatic environment detection
2import { setupNode } from '@try-error/core/setup';
3setupNode(); // ✨ That's it! Optimized for dev/prod automatically
4
5// React/Vite - Browser-optimized configuration
6import { setupReact } from '@try-error/core/setup';
7setupReact(); // ✨ Perfect for client-side apps
8
9// Next.js - Handles both SSR and client-side
10import { setupNextJs } from '@try-error/core/setup';
11setupNextJs(); // ✨ Works for both server and client
12
13// Auto-detect environment (works everywhere)
14import { autoSetup } from '@try-error/core/setup';
15autoSetup(); // ✨ Detects Node.js, React, Next.js, etc.
16
17// High-performance (for critical applications)
18import { setupPerformance } from '@try-error/core/setup';
19setupPerformance(); // ✨ Maximum performance, minimal overhead
20
21// With custom options (still easy!)
22setupNode({
23 onError: (error) => sendToSentry(error) // Add your monitoring
24});
🎯 Benefits of Quick Setup
- • Zero boilerplate: One line replaces 20+ lines of configuration
- • Environment-aware: Automatically optimizes for dev/prod/test
- • Best practices: Includes performance optimizations by default
- • Extensible: Easy to customize with your own options
- • Tree-shakeable: Only includes what you use
Node.js/Express Applications
Configure tryError early in your application startup, before importing other modules.
1// Project structure:
2// src/
3// config/
4// tryError.config.ts
5// app.ts
6// server.ts
7
8// src/config/tryError.config.ts
9import { configureTryError, configurePerformance } from '@try-error/core';
10
11export function initializeTryError() {
12 // Configure based on environment
13 configureTryError({
14 captureStackTrace: process.env.NODE_ENV !== 'production',
15 stackTraceLimit: process.env.NODE_ENV === 'production' ? 5 : 50,
16 developmentMode: process.env.NODE_ENV === 'development',
17
18 onError: (error) => {
19 if (process.env.NODE_ENV === 'production') {
20 // Send to monitoring service
21 console.error(`[ERROR] ${error.type}: ${error.message}`);
22 } else {
23 // Detailed logging in development
24 console.error('TryError Details:', {
25 type: error.type,
26 message: error.message,
27 context: error.context,
28 stack: error.stack
29 });
30 }
31 return error;
32 }
33 });
34
35 // Performance optimizations
36 configurePerformance({
37 contextCapture: {
38 maxContextSize: 1024 * 10, // 10KB
39 deepClone: false
40 },
41 memory: {
42 maxErrorHistory: 50,
43 useWeakRefs: true
44 }
45 });
46}
47
48// src/app.ts
49import express from 'express';
50import { initializeTryError } from './config/tryError.config';
51
52// Initialize tryError FIRST, before other imports
53initializeTryError();
54
55// Now import your application modules
56import { userRoutes } from './routes/users';
57import { errorHandler } from './middleware/error-handler';
58
59const app = express();
60
61// Your middleware and routes
62app.use('/api/users', userRoutes);
63app.use(errorHandler);
64
65export default app;
66
67// src/server.ts
68import app from './app';
69
70const PORT = process.env.PORT || 3000;
71app.listen(PORT, () => {
72 console.log(`Server running on port ${PORT}`);
73});
Next.js Applications
Configure tryError in your Next.js app using the app directory structure and the instrumentation.ts file for server startup initialization.
1// Project structure:
2// src/
3// app/
4// layout.tsx
5// globals.css
6// lib/
7// tryError.config.ts
8// instrumentation.ts (for server initialization)
9
10// src/lib/tryError.config.ts
11import { configureTryError } from '@try-error/core';
12
13export function initializeTryError() {
14 configureTryError({
15 captureStackTrace: process.env.NODE_ENV !== 'production',
16 stackTraceLimit: process.env.NODE_ENV === 'production' ? 3 : 20,
17 developmentMode: process.env.NODE_ENV === 'development',
18
19 onError: (error) => {
20 // Client-side error reporting
21 if (typeof window !== 'undefined') {
22 // Send to analytics or error reporting service
23 if (process.env.NODE_ENV === 'production') {
24 fetch('/api/errors', {
25 method: 'POST',
26 headers: { 'Content-Type': 'application/json' },
27 body: JSON.stringify({
28 type: error.type,
29 message: error.message,
30 url: window.location.href,
31 userAgent: navigator.userAgent
32 })
33 }).catch(() => {}); // Silent fail
34 }
35 }
36 return error;
37 }
38 });
39}
40
41// instrumentation.ts (in project root, for server-side initialization)
42export async function register() {
43 if (process.env.NEXT_RUNTIME === 'nodejs') {
44 // Server-side initialization - runs once on server startup
45 const { initializeTryError } = await import('./src/lib/tryError.config');
46 initializeTryError();
47 }
48}
49
50// src/app/layout.tsx
51import { initializeTryError } from '@/lib/tryError.config';
52import './globals.css';
53
54// Initialize tryError on client-side
55if (typeof window !== 'undefined') {
56 initializeTryError();
57}
58
59export default function RootLayout({
60 children,
61}: {
62 children: React.ReactNode;
63}) {
64 return (
65 <html lang="en">
66 <body>{children}</body>
67 </html>
68 );
69}
70
71// src/app/api/users/route.ts
72import { tryAsync, isTryError } from '@try-error/core';
73import { NextRequest, NextResponse } from 'next/server';
74
75export async function GET(request: NextRequest) {
76 const result = await tryAsync(async () => {
77 // Your API logic here
78 const users = await fetchUsers();
79 return users;
80 });
81
82 if (isTryError(result)) {
83 return NextResponse.json(
84 { error: result.message },
85 { status: 500 }
86 );
87 }
88
89 return NextResponse.json(result);
90}
📝 About instrumentation.ts
The instrumentation.ts file runs once on server startup and is the correct place for server-side initialization. Don't use middleware.ts for initialization - it runs on every request!
React Applications (Vite/CRA)
Configure tryError in your React app's entry point.
1// Project structure:
2// src/
3// config/
4// tryError.config.ts
5// components/
6// hooks/
7// main.tsx (Vite) or index.tsx (CRA)
8// App.tsx
9
10// src/config/tryError.config.ts
11import { configureTryError } from '@try-error/core';
12
13export function initializeTryError() {
14 configureTryError({
15 captureStackTrace: import.meta.env.DEV, // Vite
16 // captureStackTrace: process.env.NODE_ENV === 'development', // CRA
17 stackTraceLimit: import.meta.env.PROD ? 3 : 20,
18 developmentMode: import.meta.env.DEV,
19
20 onError: (error) => {
21 // Client-side error tracking
22 if (import.meta.env.PROD) {
23 // Send to error tracking service
24 window.gtag?.('event', 'exception', {
25 description: `${error.type}: ${error.message}`,
26 fatal: false
27 });
28 } else {
29 // Development logging
30 console.group(`🚨 TryError: ${error.type}`);
31 console.error('Message:', error.message);
32 console.error('Context:', error.context);
33 console.groupEnd();
34 }
35 return error;
36 }
37 });
38}
39
40// src/main.tsx (Vite) or src/index.tsx (CRA)
41import React from 'react';
42import ReactDOM from 'react-dom/client';
43import { initializeTryError } from './config/tryError.config';
44import App from './App';
45
46// Initialize tryError BEFORE rendering
47initializeTryError();
48
49ReactDOM.createRoot(document.getElementById('root')!).render(
50 <React.StrictMode>
51 <App />
52 </React.StrictMode>
53);
54
55// src/App.tsx
56import { TryErrorBoundary } from '@try-error/react';
57import { UserProfile } from './components/UserProfile';
58
59function App() {
60 return (
61 <TryErrorBoundary
62 fallback={({ error, retry }) => (
63 <div className="error-fallback">
64 <h2>Something went wrong</h2>
65 <p>{error.message}</p>
66 <button onClick={retry}>Try again</button>
67 </div>
68 )}
69 >
70 <UserProfile />
71 </TryErrorBoundary>
72 );
73}
74
75export default App;
Environment-Specific Configuration Files
Use separate configuration files for different environments.
1// config/tryError/
2// index.ts
3// development.ts
4// production.ts
5// test.ts
6
7// config/tryError/index.ts
8import { TryErrorConfig } from '@try-error/core';
9
10const env = process.env.NODE_ENV || 'development';
11
12let config: TryErrorConfig;
13
14switch (env) {
15 case 'production':
16 config = require('./production').default;
17 break;
18 case 'test':
19 config = require('./test').default;
20 break;
21 default:
22 config = require('./development').default;
23}
24
25export default config;
26
27// config/tryError/development.ts
28import { TryErrorConfig } from '@try-error/core';
29
30const config: TryErrorConfig = {
31 captureStackTrace: true,
32 stackTraceLimit: 50,
33 developmentMode: true,
34
35 onError: (error) => {
36 console.group(`🚨 TryError: ${error.type}`);
37 console.error('Message:', error.message);
38 console.error('Context:', error.context);
39 console.error('Stack:', error.stack);
40 console.groupEnd();
41 return error;
42 }
43};
44
45export default config;
46
47// config/tryError/production.ts
48import { TryErrorConfig } from '@try-error/core';
49
50const config: TryErrorConfig = {
51 captureStackTrace: false,
52 stackTraceLimit: 0,
53 developmentMode: false,
54
55 onError: (error) => {
56 // Send to monitoring service
57 sendToSentry(error);
58 sendToDatadog(error);
59
60 // Minimal logging
61 console.error(`Error: ${error.type} - ${error.message}`);
62 return error;
63 }
64};
65
66export default config;
67
68// Usage in your app:
69// src/app.ts
70import { configureTryError } from '@try-error/core';
71import tryErrorConfig from '../config/tryError';
72
73// Apply configuration
74configureTryError(tryErrorConfig);
Package.json Scripts Integration
Set up npm scripts to handle different environments automatically.
1{
2 "scripts": {
3 "dev": "NODE_ENV=development tsx watch src/server.ts",
4 "build": "NODE_ENV=production tsc",
5 "start": "NODE_ENV=production node dist/server.js",
6 "test": "NODE_ENV=test jest",
7 "start:staging": "NODE_ENV=staging node dist/server.js"
8 },
9 "dependencies": {
10    "@try-error/core": "^1.0.0"
11 },
12 "devDependencies": {
13 "tsx": "^4.0.0",
14 "typescript": "^5.0.0"
15 }
16}
🔑 Key Implementation Tips
- • Use Quick Setup: Start with setup utilities (setupNode, setupReact, etc.) for instant optimization
- • Initialize Early: Configure tryError before importing other modules
- • Environment Variables: Use NODE_ENV to switch between configurations
- • Separate Configs: Keep environment-specific settings in separate files for complex setups
- • Error Boundaries: Wrap React components with TryErrorBoundary
- • API Integration: Configure error reporting endpoints for production
- • Performance: Disable stack traces and limit context size in production
Best Practices
Error Type Consistency
Use consistent error types across your application for better error handling.
1// Define error types as constants
2export const ErrorTypes = {
3 VALIDATION: 'ValidationError',
4 NETWORK: 'NetworkError',
5 AUTH: 'AuthenticationError',
6 PERMISSION: 'AuthorizationError',
7 NOT_FOUND: 'NotFoundError',
8 CONFLICT: 'ConflictError',
9 RATE_LIMIT: 'RateLimitError'
10} as const;
11
12// Use type-safe error creation
13function createValidationError(field: string, value: unknown, rule: string) {
14 return createTryError(ErrorTypes.VALIDATION, `Validation failed for ${field}`, {
15 field,
16 value,
17 rule
18 });
19}
20
21// Centralized error handling
22function handleApiError(error: TryError): ApiResponse {
23 switch (error.type) {
24 case ErrorTypes.VALIDATION:
25 return { status: 400, message: error.message, details: error.context };
26 case ErrorTypes.AUTH:
27 return { status: 401, message: 'Authentication required' };
28 case ErrorTypes.PERMISSION:
29 return { status: 403, message: 'Insufficient permissions' };
30 case ErrorTypes.NOT_FOUND:
31 return { status: 404, message: 'Resource not found' };
32 default:
33 return { status: 500, message: 'Internal server error' };
34 }
35}
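A small TypeScript addition this pattern enables (not shown in the example above): derive a union type from the constants so the compiler catches misspelled error types at build time. Using a trimmed copy of the constants:

```typescript
// Trimmed copy of the ErrorTypes constants from the example above.
const ErrorTypes = {
  VALIDATION: 'ValidationError',
  NETWORK: 'NetworkError',
  AUTH: 'AuthenticationError'
} as const;

// Union of the literal values:
// 'ValidationError' | 'NetworkError' | 'AuthenticationError'
type KnownErrorType = (typeof ErrorTypes)[keyof typeof ErrorTypes];

// The compiler now rejects typos like 'ValidatonError' wherever
// KnownErrorType is expected.
const t: KnownErrorType = ErrorTypes.VALIDATION;
console.log(t); // "ValidationError"
```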
Context Best Practices
Include relevant context while avoiding sensitive or excessive data.
1// Good: Relevant, structured context
2const result = await tryAsync(async () => {
3 const user = await fetchUser(userId);
4 if (!user) {
5 throw createTryError('NotFoundError', 'User not found', {
6 userId,
7 requestId: getRequestId(),
8 timestamp: Date.now(),
9 searchCriteria: { id: userId }
10 });
11 }
12 return user;
13});
14
15// Bad: Sensitive or excessive context
16const badResult = await tryAsync(async () => {
17 const user = await fetchUser(userId);
18 if (!user) {
19 throw createTryError('NotFoundError', 'User not found', {
20 password: user?.password, // ❌ Sensitive data
21 entireDatabase: database, // ❌ Too much data
22 randomData: Math.random() // ❌ Irrelevant data
23 });
24 }
25 return user;
26});
27
28// Context sanitization helper
29function sanitizeContext(context: Record<string, any>): Record<string, any> {
30 const sanitized = { ...context };
31
32 // Remove sensitive fields
33 const sensitiveFields = ['password', 'token', 'secret', 'key', 'auth'];
34 sensitiveFields.forEach(field => delete sanitized[field]);
35
36 // Truncate large strings
37 Object.keys(sanitized).forEach(key => {
38 if (typeof sanitized[key] === 'string' && sanitized[key].length > 1000) {
39 sanitized[key] = sanitized[key].substring(0, 1000) + '...[truncated]';
40 }
41 });
42
43 return sanitized;
44}
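Here is the sanitization helper in action (reproduced compactly so the snippet is self-contained):

```typescript
// Compact copy of the sanitizeContext helper from the example above.
function sanitizeContext(context: Record<string, any>): Record<string, any> {
  const sanitized = { ...context };

  // Remove sensitive fields
  const sensitiveFields = ['password', 'token', 'secret', 'key', 'auth'];
  sensitiveFields.forEach(field => delete sanitized[field]);

  // Truncate large strings
  Object.keys(sanitized).forEach(key => {
    if (typeof sanitized[key] === 'string' && sanitized[key].length > 1000) {
      sanitized[key] = sanitized[key].substring(0, 1000) + '...[truncated]';
    }
  });

  return sanitized;
}

const safe = sanitizeContext({
  userId: 'u1',
  password: 'hunter2',          // stripped
  note: 'x'.repeat(2000)        // truncated
});

console.log('password' in safe); // false
console.log(safe.note.length);   // 1014 (1000 chars + '...[truncated]')
console.log(safe.userId);        // "u1"
```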
Error Propagation
Handle errors at the appropriate level and avoid swallowing important errors.
1// Good: Handle errors at the right level
2async function processUserData(userId: string): Promise<TryResult<ProcessedData, TryError>> {
3 const userResult = await fetchUser(userId);
4 if (isTryError(userResult)) {
5 // Log but don't handle - let caller decide
6 console.error('Failed to fetch user:', userResult);
7 return userResult;
8 }
9
10 const validationResult = validateUser(userResult);
11 if (isTryError(validationResult)) {
12 // Handle validation errors here - they're specific to this function
13 return createTryError('ProcessingError', 'User data validation failed', {
14 userId,
15 validationError: validationResult,
16 step: 'validation'
17 });
18 }
19
20 return processData(validationResult);
21}
22
23// Bad: Swallowing errors
24async function badProcessUserData(userId: string): Promise<ProcessedData | null> {
25 try {
26 const user = await fetchUser(userId);
27 return processData(user);
28 } catch (error) {
29 console.log('Something went wrong'); // ❌ Lost error information
30 return null; // ❌ Caller can't distinguish between "no data" and "error"
31 }
32}
33
34// Error enrichment pattern
35function enrichError(originalError: TryError, additionalContext: Record<string, any>): TryError {
36 return createTryError(
37 originalError.type,
38 originalError.message,
39 {
40 ...originalError.context,
41 ...additionalContext,
42 originalError: {
43 type: originalError.type,
44 message: originalError.message,
45 source: originalError.source
46 }
47 },
48 originalError
49 );
50}
Memory Management
Proper memory management is crucial for long-running applications.
1// Configure memory limits
2configurePerformance({
3 memory: {
4 maxErrorHistory: 50, // Limit error history
5 useWeakRefs: true, // Use weak references for large contexts
6 gcHints: true, // Provide GC hints
7 autoCleanup: true, // Automatic cleanup of old errors
8 cleanupInterval: 60000 // Cleanup every minute
9 }
10});
11
12// Manual memory management
13class ErrorManager {
14 private errorHistory: TryError[] = [];
15 private readonly maxHistory = 100;
16
17 addError(error: TryError): void {
18 this.errorHistory.push(error);
19
20 // Cleanup old errors
21 if (this.errorHistory.length > this.maxHistory) {
22 const removed = this.errorHistory.splice(0, this.errorHistory.length - this.maxHistory);
23 // Clear references to help GC
24 removed.forEach(err => {
25 (err as any).context = null;
26 (err as any).cause = null;
27 });
28 }
29 }
30
31 getRecentErrors(count: number = 10): TryError[] {
32 return this.errorHistory.slice(-count);
33 }
34
35 clearHistory(): void {
36 this.errorHistory.forEach(err => {
37 (err as any).context = null;
38 (err as any).cause = null;
39 });
40 this.errorHistory.length = 0;
41 }
42}
43
44// Avoid memory leaks in long-running processes
45setInterval(() => {
46 if (global.gc) {
47 global.gc(); // Force garbage collection if available
48 }
49}, 300000); // Every 5 minutes
Monitoring and Observability
Set up proper monitoring to track error patterns and performance.
1// Error metrics collection
2class ErrorMetrics {
3 private metrics = new Map<string, number>();
4 private performanceMetrics = new Map<string, number[]>();
5
6 recordError(error: TryError): void {
7 // Count by error type
8 const count = this.metrics.get(error.type) || 0;
9 this.metrics.set(error.type, count + 1);
10
11 // Track performance impact
12 const duration = Date.now() - error.timestamp;
13 const durations = this.performanceMetrics.get(error.type) || [];
14 durations.push(duration);
15 this.performanceMetrics.set(error.type, durations.slice(-100)); // Keep last 100
16 }
17
18 getErrorCounts(): Record<string, number> {
19 return Object.fromEntries(this.metrics);
20 }
21
22 getAverageErrorDuration(type: string): number {
23 const durations = this.performanceMetrics.get(type) || [];
24 return durations.length > 0
25 ? durations.reduce((a, b) => a + b, 0) / durations.length
26 : 0;
27 }
28
29 exportMetrics(): any {
30 return {
31 errorCounts: this.getErrorCounts(),
32 averageDurations: Object.fromEntries(
33 Array.from(this.performanceMetrics.keys()).map(type => [
34 type,
35 this.getAverageErrorDuration(type)
36 ])
37 ),
38 timestamp: Date.now()
39 };
40 }
41}
42
43const errorMetrics = new ErrorMetrics();
44
45// Configure monitoring
46configureTryError({
47 onError: (error) => {
48 errorMetrics.recordError(error);
49
50 // Send to monitoring service
51 if (process.env.NODE_ENV === 'production') {
52 sendToDatadog(error);
53 sendToSentry(error);
54 }
55
56 return error;
57 }
58});
59
60// Health check endpoint
61app.get('/health/errors', (req, res) => {
62 res.json(errorMetrics.exportMetrics());
63});
Testing Best Practices
Comprehensive testing strategies for error handling code.
1// Test error scenarios explicitly
2describe('User Service', () => {
3 it('should handle user not found', async () => {
4 const result = await UserService.findById('non-existent');
5
6 expect(result).toBeTryError();
7 expect(result).toHaveErrorType('NotFoundError');
8 expect(result.context).toEqual({
9 userId: 'non-existent',
10 searchCriteria: { id: 'non-existent' }
11 });
12 });
13
14 it('should handle network errors', async () => {
15 // Mock network failure
16    jest.spyOn(globalThis, 'fetch').mockRejectedValue(new Error('Network error'));
17
18 const result = await UserService.fetchFromApi('123');
19
20 expect(result).toBeTryError();
21 expect(result).toHaveErrorType('NetworkError');
22 });
23
24 it('should preserve error context through transformations', async () => {
25 const originalError = createTryError('ValidationError', 'Invalid email', {
26 field: 'email',
27 value: 'invalid'
28 });
29
30 const enrichedError = enrichError(originalError, { userId: '123' });
31
32 expect(enrichedError.context.field).toBe('email');
33 expect(enrichedError.context.userId).toBe('123');
34 expect(enrichedError.cause).toBe(originalError);
35 });
36});
37
38// Performance testing
39describe('Error Performance', () => {
40 it('should create errors efficiently', () => {
41 const start = performance.now();
42
43 for (let i = 0; i < 1000; i++) {
44 createTryError('TestError', 'Test message', { iteration: i });
45 }
46
47 const duration = performance.now() - start;
48 expect(duration).toBeLessThan(100); // Should complete in under 100ms
49 });
50
51 it('should not leak memory', () => {
52 const initialMemory = process.memoryUsage().heapUsed;
53
54 // Create many errors
55 for (let i = 0; i < 10000; i++) {
56 const error = createTryError('TestError', 'Test', { data: new Array(100).fill(i) });
57 // Simulate error being processed and discarded
58 }
59
60 // Force garbage collection
61 if (global.gc) global.gc();
62
63 const finalMemory = process.memoryUsage().heapUsed;
64 const memoryIncrease = finalMemory - initialMemory;
65
66 // Memory increase should be reasonable (less than 10MB)
67 expect(memoryIncrease).toBeLessThan(10 * 1024 * 1024);
68 });
69});