Glossary
Canonical terminology used across the Tokenlens v2 codebase and docs.
provider id
- The vendor namespace for a model (e.g., `openai`, `anthropic`, `xai`).
- Appears as the prefix in canonical ids such as `provider/model`.
model id (canonical)
- Fully qualified identifier in the form `provider/model`.
- Examples: `openai/gpt-4o-mini`, `anthropic/claude-3-5-sonnet-20241022`.
source
- An upstream dataset Tokenlens can load (`openrouter`, `models.dev`, or `package`).
- Configured via `TokenlensOptions.sources` or `createTokenlens({ sources })`.
source loader
- A function that fetches and normalizes a given source into `SourceProviders`.
- Defaults live in `packages/tokenlens/src/default-loaders.ts`; custom loaders can be supplied via `TokenlensOptions.loaders`.
SourceProvider
- Raw provider metadata emitted by a loader.
- Fields include `id`, `name?`, `api?`, `doc?`, `env?`, `models`.
- Represents source-specific data before Tokenlens composes higher-level DTOs.
SourceModel
- Raw per-model metadata entry delivered by a loader.
- Fields include `id`, `name`, `limit`, `cost`, `modalities`, capability flags, and source-specific `extras`.
ModelDetails
- Alias for the `SourceModel` returned by `describeModel` (or `undefined` when not found).
Usage
- Union type capturing usage counters from common SDKs (e.g., `input_tokens`, `outputTokens`, `reasoning_tokens`).
- Passed to `computeCostUSD` or `describeModel` to compute cost breakdowns.
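The counter names above mix snake_case and camelCase across SDKs. A minimal sketch of how such a union could be flattened into one shape — the `AnyUsage` type and `normalizeUsage` helper here are illustrative, not the actual Tokenlens `Usage` type or normalization code:

```typescript
// Hypothetical union over the counter spellings named in this entry.
type AnyUsage = {
  input_tokens?: number;
  inputTokens?: number;
  output_tokens?: number;
  outputTokens?: number;
  reasoning_tokens?: number;
  reasoningTokens?: number;
};

// Pick whichever spelling is present, defaulting to 0.
function normalizeUsage(u: AnyUsage) {
  return {
    input: u.input_tokens ?? u.inputTokens ?? 0,
    output: u.output_tokens ?? u.outputTokens ?? 0,
    reasoning: u.reasoning_tokens ?? u.reasoningTokens ?? 0,
  };
}

console.log(normalizeUsage({ input_tokens: 120, outputTokens: 45 }));
// { input: 120, output: 45, reasoning: 0 }
```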
TokenCosts
- Normalized USD cost breakdown calculated by `computeCostUSD` / `computeTokenCostsForModel`.
- Fields: `inputTokenCostUSD`, `outputTokenCostUSD`, optional `reasoningTokenCostUSD`, caching costs, and `totalTokenCostUSD`, plus the `ratesUsed`.
DEFAULT_SOURCE
- The source used when none is specified (`openrouter`).
- Ensures stand-alone helpers work without configuration.
provider id normalization
- Tokenlens trims suffixes like `.responses` on slash-prefixed ids, so `openai.responses/gpt-4o` resolves to the canonical `openai/gpt-4o` entry.
- Needed for Vercel AI SDK integration, where provider ids include namespaces.
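The trimming described above can be sketched in a few lines. This is an illustrative reimplementation (the function name `normalizeModelId` is invented here); the real Tokenlens normalizer may cover additional cases:

```typescript
// Resolve a namespaced id like "openai.responses/gpt-4o" to the
// canonical "provider/model" form by trimming the provider suffix.
function normalizeModelId(id: string): string {
  const [provider, ...rest] = id.split("/");
  // "openai.responses" -> "openai"; plain "openai" is left unchanged.
  const canonicalProvider = provider.split(".")[0];
  return [canonicalProvider, ...rest].join("/");
}

console.log(normalizeModelId("openai.responses/gpt-4o")); // "openai/gpt-4o"
console.log(normalizeModelId("openai/gpt-4o")); // "openai/gpt-4o" (no-op)
```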
LanguageModelV2
- AI SDK model abstraction (`@ai-sdk/*`) whose `modelId` and `provider`/`providerId` can be passed directly to Tokenlens helpers.
- Tests covering this live in `packages/provider-tests/tests/ai-sdk-usage.spec.ts`.
models.dev
- External dataset consumed via `https://models.dev/api.json`.
- In v2 we ingest it into `packages/models/src/modelsdev/`.
OpenRouter
- External dataset consumed via `https://openrouter.ai/api/v1/models`.
- In v2 we ingest it into `packages/models/src/openrouter/`.
package source
- In-memory catalog provided by Tokenlens for tests/examples.
- Typically wired via `createTokenlens({ sources: ["package"], loaders: { package: async () => ... } })`.
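A fuller configuration sketch of the wiring above. The `createTokenlens`, `sources`, and `loaders` names come from this glossary; the `"tokenlens"` import specifier and the catalog shape returned by the loader are assumptions for illustration, not the exact `SourceProviders` type:

```typescript
import { createTokenlens } from "tokenlens";

// Hypothetical in-memory catalog for tests/examples; field names
// (limit, cost) follow the SourceModel entry above.
const tokenlens = createTokenlens({
  sources: ["package"],
  loaders: {
    package: async () => ({
      openai: {
        id: "openai",
        models: {
          "gpt-4o-mini": {
            id: "gpt-4o-mini",
            limit: { context: 128_000 },
            cost: { input: 0.15, output: 0.6 }, // USD per 1M tokens
          },
        },
      },
    }),
  },
});
```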
context limits
- Token budget derived from provider metadata.
- `limit.context` represents combined tokens; `limit.input` / `limit.output` provide per-direction caps when available.
pricing
- Approximate USD cost per 1M tokens as supplied by the source catalog (`cost.input`, `cost.output`, etc.).
- Tokenlens converts these to per-request USD using `computeCostUSD`.
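The per-1M-token to per-request conversion is plain arithmetic. A minimal sketch, assuming rates in the `cost.input`/`cost.output` shape above — the `requestCostUSD` helper is invented here and is not the actual `computeCostUSD` implementation:

```typescript
// Catalog rates in USD per 1M tokens, as in the pricing entry above.
const rates = { input: 0.15, output: 0.6 };

// Scale each counter down to millions of tokens, then apply the rate.
function requestCostUSD(inputTokens: number, outputTokens: number) {
  const inputTokenCostUSD = (inputTokens / 1_000_000) * rates.input;
  const outputTokenCostUSD = (outputTokens / 1_000_000) * rates.output;
  return {
    inputTokenCostUSD,
    outputTokenCostUSD,
    totalTokenCostUSD: inputTokenCostUSD + outputTokenCostUSD,
  };
}

// 10k input + 2k output tokens ≈ $0.0015 + $0.0012 ≈ $0.0027
console.log(requestCostUSD(10_000, 2_000).totalTokenCostUSD);
```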
modalities
- Supported input/output modalities of a model (`modalities.input`, `modalities.output`).
- Used to derive hints returned from `describeModel`.
computeCostUSD
- Stand-alone helper (and class method) that resolves a model, normalizes usage, and returns `TokenCosts`.
describeModel
- Stand-alone helper (and class method) returning `ModelDetails` for a model.
- Accepts optional `usage` to embed `TokenCosts` in the result.
getContextLimits
- Stand-alone helper returning `{ context?, input?, output? }` for a resolved model.
Tokenlens (class)
- Configurable client responsible for loading sources, caching provider catalogs, and exposing helpers (`computeCostUSD`, `describeModel`, `getContextLimits`).
- Instances share a cache unless `cacheKey` is overridden.
createTokenlens
- Convenience factory that instantiates `Tokenlens` with sensible defaults (fallback loaders for each source, `DEFAULT_SOURCE` when none provided).
MemoryCache
- Default cache adapter for Tokenlens.
- Stores provider catalogs in-memory with TTL jitter to prevent thundering herds.
Conventions
- Prefer `provider/model` in code and docs.
- Reference terms by anchor, e.g., "See docs/glossary.md#model".
- New terms must be added here in the same PR.