What Tokenlens is, why it exists, and core concepts.
⚠️ Experimental docs: These docs cover Tokenlens v2. For early access, install the alpha:

```sh
npm i tokenlens@alpha
```
Tokenlens provides provider-aware model metadata and usage utilities for AI applications. It resolves canonical model ids across multiple data sources, normalizes usage payloads, and estimates token costs and context limits through a consistent TypeScript API.
Source loaders are async functions with the signature `SourceLoader = (fetchImpl) => Promise<SourceProviders>`. Tokenlens merges the returned provider maps in the order the sources are specified; a custom loader is sketched below.
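As a minimal sketch, a loader can resolve a static fixture instead of fetching over the network. The concrete `SourceProviders` schema is not shown in this section, so the field names below (`models`, `contextWindow`) and the `typeof fetch` typing of `fetchImpl` are assumptions for illustration:

```ts
// Hypothetical fixture loader matching the SourceLoader signature
// (fetchImpl) => Promise<SourceProviders>. The provider/model field
// names below are assumed, not the documented schema.
const fixtureLoader = async (_fetchImpl: typeof fetch) =>
  ({
    "example-provider": {
      models: {
        "example-model": { contextWindow: 128_000 },
      },
    },
  }) as const;
```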
The root module also exports `computeCostUSD`, `describeModel`, and `getContextLimits` helpers. The first call to any of these lazily creates a shared Tokenlens instance with the default options (OpenRouter source, in-memory cache). Use them when the defaults match your needs.
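For example, a sketch of calling the helpers directly. Only the helper names are documented here, so the model id format and the usage-payload fields passed to `computeCostUSD` (`inputTokens`, `outputTokens`) are assumptions, as is the helpers being async:

```ts
import { computeCostUSD, describeModel, getContextLimits } from "tokenlens";

// The first helper call lazily creates the shared instance with the
// defaults (OpenRouter source, in-memory cache).
const model = await describeModel("openai/gpt-4o");
const limits = await getContextLimits("openai/gpt-4o");

// Usage-payload field names are assumed for illustration.
const cost = await computeCostUSD("openai/gpt-4o", {
  inputTokens: 1_200,
  outputTokens: 300,
});

console.log({ model, limits, cost });
```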
When you need custom configuration (multiple sources, a custom cache, fixture loaders), instantiate an explicit Tokenlens client via `createTokenlens` and reuse it across your application.
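A sketch of an explicit client, assuming a `sources` option and a synchronous `createTokenlens`; only the function name and the loader merge order are documented above, so the options shape and the client method in the trailing comment are illustrative:

```ts
import { createTokenlens } from "tokenlens";

// Hypothetical loaders; see the fixture loader sketch above.
const fixtures = async (_fetchImpl: typeof fetch) =>
  ({ "example-provider": { models: { "example-model": {} } } }) as const;

// The `sources` option name is an assumption. Provider maps from the
// listed sources are merged in this order.
const tokenlens = createTokenlens({
  sources: [fixtures],
});

// Reuse the one client across the app, e.g. (method name assumed):
// const limits = await tokenlens.getContextLimits("openai/gpt-4o");
```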