# Glossary
This file defines canonical terminology used across the Tokenlens v2 codebase and docs.
## provider id
- The vendor namespace for a model (e.g., `openai`, `anthropic`, `xai`).
- Appears as the prefix in canonical ids such as `provider/model`.
## model id (canonical)
- Fully qualified identifier in the form `provider/model`.
- Examples: `openai/gpt-4o-mini`, `anthropic/claude-3-5-sonnet-20241022`.
## catalog
- Identifier for the dataset Tokenlens loads (`auto`, `openrouter`, `models.dev`, or `vercel`).
- Configured via `TokenlensOptions.catalog` or per-call helper options such as `gateway`.
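A minimal configuration sketch. The `tokenlens` import path and the exact constructor/option shape are assumptions; only the `TokenlensOptions.catalog` name and the catalog identifiers come from this entry.

```ts
import { Tokenlens, type TokenlensOptions } from "tokenlens";

// Catalog identifiers per this glossary: "auto" | "openrouter" | "models.dev" | "vercel".
const options: TokenlensOptions = { catalog: "models.dev" };
const lens = new Tokenlens(options);
```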
## SourceProvider
- Raw provider metadata emitted by a loader.
- Fields include `id`, `name?`, `api?`, `doc?`, `env?`, `models`.
- Represents source-specific data before Tokenlens composes higher-level DTOs.
## SourceModel
- Raw per-model metadata entry delivered by a loader.
- Fields include `id`, `name`, `limit`, `cost`, `modalities`, capability flags, and source-specific `extras`.
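A rough TypeScript sketch of the two loader DTOs above, reconstructed only from the field lists in this glossary; the property types (and the exact shape of `models`) are assumptions, not the real definitions in `packages/models`.

```ts
// Approximate shapes for illustration; field names follow the glossary, types are assumed.
interface SourceModel {
  id: string;
  name: string;
  limit?: { context?: number; input?: number; output?: number };
  cost?: { input?: number; output?: number }; // USD per 1M tokens
  modalities?: { input?: string[]; output?: string[] };
  reasoning?: boolean; // example capability flag
  extras?: Record<string, unknown>; // source-specific passthrough
}

interface SourceProvider {
  id: string;
  name?: string;
  api?: string;
  doc?: string;
  env?: string[];
  models: Record<string, SourceModel>; // could equally be SourceModel[]; a keyed map is assumed
}
```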
## ModelDetails
- Alias for the `SourceModel` returned by `getModelData` (or `undefined` when not found).
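A hedged lookup sketch. The stand-alone export and the async call are assumptions; the return contract (a `SourceModel` or `undefined`) is as described above.

```ts
import { getModelData } from "tokenlens";

// Returns the catalog's SourceModel (aliased as ModelDetails) or undefined for unknown ids.
const details = await getModelData("openai/gpt-4o-mini");
console.log(details?.limit?.context, details?.cost?.input);
```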
## Usage
- Union type capturing usage counters from common SDKs (e.g., `input_tokens`, `outputTokens`, `reasoning_tokens`).
- Passed to `computeCostUSD`, `estimateCostUSD`, or `getContextHealth` to compute derived metrics.
## TokenCosts
- Normalized USD cost breakdown calculated by `computeCostUSD` / `computeTokenCostsForModel`.
- Fields: `inputTokenCostUSD`, `outputTokenCostUSD`, optional `reasoningTokenCostUSD`, caching costs, and `totalTokenCostUSD`, plus the `ratesUsed`.
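A hedged sketch tying `Usage` to `TokenCosts` through `computeCostUSD`. The argument order and the async call are assumptions; the two usage spellings and the result fields are taken from the entries above.

```ts
import { computeCostUSD } from "tokenlens";

// Either spelling should satisfy the Usage union:
const usageSnake = { input_tokens: 1_200, output_tokens: 350, reasoning_tokens: 80 };
const usageCamel = { inputTokens: 1_200, outputTokens: 350 };

const costs = await computeCostUSD("openai/gpt-4o-mini", usageSnake);
const costsCamel = await computeCostUSD("openai/gpt-4o-mini", usageCamel);

// TokenCosts fields from this glossary:
console.log(costs.inputTokenCostUSD, costs.outputTokenCostUSD, costs.totalTokenCostUSD);
console.log(costsCamel.ratesUsed); // the per-1M rates that were applied
```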
## provider id normalization
- Tokenlens trims suffixes like `.responses` or slash-prefixed ids so `openai.responses/gpt-4o` resolves to the canonical `openai/gpt-4o` entry.
- Needed for Vercel AI SDK integration where provider ids include namespaces.
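A toy illustration of the trimming described above, not the library's actual resolver logic:

```ts
// Illustration only; Tokenlens's real normalization handles more cases.
function normalizeProviderId(provider: string): string {
  return provider.split(".")[0]; // "openai.responses" -> "openai"
}

function toCanonicalId(provider: string, modelId: string): string {
  // Already-prefixed ids like "openai/gpt-4o" pass through untouched.
  if (modelId.includes("/")) return modelId;
  return `${normalizeProviderId(provider)}/${modelId}`;
}

console.log(toCanonicalId("openai.responses", "gpt-4o")); // "openai/gpt-4o"
```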
## LanguageModelV2
- AI SDK model abstraction (`@ai-sdk/*`) whose `modelId` and `provider`/`providerId` can be passed directly to Tokenlens helpers.
- Tests covering this live in `packages/provider-tests/tests/ai-sdk-usage.spec.ts`.
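A hedged interop sketch, assuming `@ai-sdk/openai` is installed; whether helpers take the composed canonical id (shown here) or the model object itself depends on overloads this glossary does not pin down.

```ts
import { openai } from "@ai-sdk/openai";

const model = openai("gpt-4o-mini"); // a LanguageModelV2
// model.provider can be namespaced (e.g. "openai.responses"); Tokenlens normalizes it (see above).
const canonical = `${model.provider}/${model.modelId}`;
// `canonical` (or, per this entry, the model's fields directly) can be handed to
// helpers such as getModelData or computeCostUSD.
```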
## models.dev
- External dataset consumed via `https://models.dev/api.json`.
- In v2 we ingest it into `packages/models/src/modelsdev/`.
## OpenRouter
- External dataset consumed via `https://openrouter.ai/api/v1/models`.
- In v2 we ingest it into `packages/models/src/openrouter/`.
## Vercel AI Gateway
- External dataset consumed via `https://ai-gateway.vercel.sh/v1/models`.
- Accessed live via `fetchVercel` or the `"vercel"` source.
## context limits
- Token budget derived from provider metadata.
- `limit.context` represents combined tokens; `limit.input`/`limit.output` provide per-direction caps when available.
## pricing
- Approximate USD cost per 1M tokens as supplied by the source catalog (`cost.input`, `cost.output`, etc.).
- Tokenlens converts these to per-request USD using `computeCostUSD`.
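A worked conversion from per-1M catalog rates to per-request USD; the rates below are made-up illustrations, not real pricing.

```ts
// Catalog-style rates: USD per 1M tokens (illustrative values).
const cost = { input: 0.15, output: 0.6 };
const usage = { inputTokens: 12_000, outputTokens: 1_500 };

const inputTokenCostUSD = (usage.inputTokens / 1_000_000) * cost.input;    // 0.0018
const outputTokenCostUSD = (usage.outputTokens / 1_000_000) * cost.output; // 0.0009
const totalTokenCostUSD = inputTokenCostUSD + outputTokenCostUSD;          // 0.0027

console.log(totalTokenCostUSD.toFixed(4)); // "0.0027"
```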
## modalities
- Supported input/output modalities of a model (`modalities.input`, `modalities.output`).
- Used to derive hints returned from `getModelData`.
## computeCostUSD
- Stand-alone helper (and class method) that resolves a model, normalizes usage, and returns `TokenCosts`.
## getContextLimits
- Stand-alone helper returning `{ context?, input?, output? }` for a resolved model.
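A hedged usage sketch; the async stand-alone call is an assumption, and the example values are placeholders rather than catalog data.

```ts
import { getContextLimits } from "tokenlens";

const limits = await getContextLimits("anthropic/claude-3-5-sonnet-20241022");
// e.g. { context: 200_000, input: 200_000, output: 8_192 } (actual values come from the catalog)
if (limits?.context !== undefined) {
  console.log(`combined budget: ${limits.context} tokens`);
}
```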
## Tokenlens (class)
- Configurable client responsible for loading catalogs, caching provider metadata, and exposing helpers (`getModelData`, `computeCostUSD`, `getContextLimits`, `getContextHealth`, `estimateCostUSD`, `countTokens`).
- Instances share a cache unless `cacheKey` is overridden.
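A hedged end-to-end sketch of the client. The helper names come from this entry; the constructor options and method signatures are assumptions.

```ts
import { Tokenlens } from "tokenlens";

const lens = new Tokenlens({ catalog: "openrouter" });

const usage = { inputTokens: 2_000, outputTokens: 500 };
const costs = await lens.computeCostUSD("openai/gpt-4o-mini", usage);
const limits = await lens.getContextLimits("openai/gpt-4o-mini");

console.log(costs.totalTokenCostUSD, limits?.context);
```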
## createTokenlens
- Convenience factory that instantiates `Tokenlens` with sensible defaults (OpenRouter catalog when none is provided; `{ catalog: "auto" }` is equivalent to the same OpenRouter gateway).
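A short sketch of the defaults described above; the call shapes are assumptions.

```ts
import { createTokenlens } from "tokenlens";

// Both forms should resolve to the same OpenRouter-backed catalog.
const defaultLens = createTokenlens();
const autoLens = createTokenlens({ catalog: "auto" });
```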
## MemoryCache
- Default cache adapter for Tokenlens.
- Stores provider catalogs in memory with TTL jitter to prevent thundering herds.
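A minimal sketch of the TTL-jitter idea (not the actual `MemoryCache`): expirations are spread over a random window so many instances do not refetch the catalog at the same moment.

```ts
// Illustration only; the real MemoryCache lives in the Tokenlens packages.
class JitteredMemoryCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs: number, private jitterRatio = 0.1) {}

  set(key: string, value: T): void {
    // Up to ±10% random jitter so entries written together do not all expire together.
    const jitter = this.ttlMs * this.jitterRatio * (Math.random() * 2 - 1);
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs + jitter });
  }

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry || entry.expiresAt <= Date.now()) {
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
}
```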
## Conventions
- Prefer `provider/model` in code and docs.
- Reference terms by anchor, e.g., “See docs/glossary.md#model”.
- New terms must be added here in the same PR.