
Getting Started with Tokenlens

Install Tokenlens, use quick helpers, and configure a custom instance.

1. Installation

npm install tokenlens
# or
yarn add tokenlens
# or
pnpm add tokenlens

Tokenlens ships TypeScript types out of the box. Node.js 18+ (or any Fetch-compatible runtime) is required to fetch the hosted catalogs (the default OpenRouter catalog, models.dev, Vercel, and so on).
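If you deploy somewhere you do not fully control, a fail-fast check is cheap insurance. The guard below is illustrative, not part of the Tokenlens API:

// Illustrative runtime check (not part of the Tokenlens API): hosted
// catalogs are fetched via the global fetch, so fail fast if it is missing.
if (typeof fetch !== "function") {
  throw new Error("Tokenlens needs Node.js 18+ or another Fetch-compatible runtime.");
}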

2. Quick usage (standalone helpers)

If you just need metadata, cost, or context helpers, import the top-level functions. They reuse a lazily created Tokenlens instance configured with the default setting catalog: "auto" (an alias for the OpenRouter gateway). To use a different dataset (for example, models.dev), pass { catalog: "models.dev" } to the helper or instantiate your own client with createTokenlens({ catalog: "models.dev" }).

import {
  getModelData,
  computeCostUSD,
  getContextLimits,
} from "tokenlens";

const usage = { input_tokens: 2_000, output_tokens: 500 };

const details = await getModelData({ modelId: "openai/gpt-4o-mini" });
const costs = await computeCostUSD({ modelId: "openai/gpt-4o-mini", usage });
const limits = await getContextLimits({ modelId: "openai/gpt-4o-mini" });

console.log(details?.id);             // "openai/gpt-4o-mini"
console.log(details?.limit?.context); // total context window, in tokens
console.log(costs.totalTokenCostUSD); // USD estimate for the usage above
console.log(limits?.context);         // context window in tokens
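Switching datasets per call, as mentioned above, looks like this (a sketch, assuming the helpers accept the same catalog values as createTokenlens):

// Per-call catalog override (sketch): resolve the same model against the
// models.dev dataset instead of the default OpenRouter gateway.
const fromModelsDev = await getModelData({
  modelId: "openai/gpt-4o-mini",
  catalog: "models.dev",
});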

3. Create a configured instance

Use createTokenlens when you need to pin a specific catalog, adjust caching, or supply fixture data:

import {
  createTokenlens,
  type TokenlensOptions,
  type SourceProviders,
} from "tokenlens";

const options: TokenlensOptions = {
  catalog: "openrouter",             // choose a built-in gateway
  ttlMs: 10 * 60 * 1000,             // refresh provider data every 10 minutes
  cacheKey: "tokenlens:openrouter",  // optional custom cache key
};

const tokenlens = createTokenlens(options);

const cost = await tokenlens.computeCostUSD({
  modelId: "demo/chat",
  usage: { input_tokens: 400, output_tokens: 200 },
});

console.log(cost.totalTokenCostUSD);

// Provide your own catalog data (useful during tests)
const fixtureCatalog: SourceProviders = {
  demo: {
    id: "demo",
    models: {
      "demo/chat": {
        id: "demo/chat",
        name: "Chat Demo",
        cost: { input: 1, output: 1 },
        limit: { context: 128_000, output: 4_096 },
      },
    },
  },
};

const fixtureTokenlens = createTokenlens({
  catalog: fixtureCatalog,
  ttlMs: 0, // always reload
});
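The fixture-backed instance behaves like any other, which keeps tests deterministic:

// Costs come entirely from the fixture entry above; no network fetch happens.
const fixtureCost = await fixtureTokenlens.computeCostUSD({
  modelId: "demo/chat",
  usage: { input_tokens: 1_000, output_tokens: 250 },
});

console.log(fixtureCost.totalTokenCostUSD);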

Controlling caching

  • TTL: use ttlMs to set how long provider catalogs live in the cache.
  • Cache adapters: supply cache with your own { get, set, delete } implementation to back the cache with Redis or another store (see the sketch after this list).
  • No caching: set ttlMs: 0 to force a reload on every call (useful for tests).
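A minimal sketch of the cache-adapter shape; the async signatures and unknown value type are assumptions, and a Redis client would slot in the same way:

import { createTokenlens } from "tokenlens";

// In-memory stand-in for Redis or another store. Only the { get, set, delete }
// shape matters here; the async signatures and value type are assumptions.
const store = new Map<string, unknown>();

const cachedTokenlens = createTokenlens({
  catalog: "openrouter",
  cache: {
    get: async (key: string) => store.get(key),
    set: async (key: string, value: unknown) => {
      store.set(key, value);
    },
    delete: async (key: string) => {
      store.delete(key);
    },
  },
});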

4. Standalone vs. custom instance

For each use case, the recommended approach:

  • Default catalog and minimal setup: call the top-level helpers (computeCostUSD, getModelData, getContextLimits).
  • Specific gateway or custom TTL/cache: create your own instance via createTokenlens and reuse it within your app (see the sketch after this list).
  • Fixture or offline catalogs: pass a SourceProviders object via catalog when creating the instance.
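For the second case, a common pattern is a small module that exports one shared instance; the file layout below is illustrative:

// tokenlens.ts (illustrative file): export a single shared instance so the
// whole app reuses one catalog cache instead of refetching per call site.
import { createTokenlens } from "tokenlens";

export const tokenlens = createTokenlens({
  catalog: "openrouter",
  ttlMs: 10 * 60 * 1000,
});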

5. TypeScript types

Tokenlens re-exports the primary DTOs so you can import them from the root package:

import { computeCostUSD, getModelData } from "tokenlens";
import type { Usage, ModelDetails, TokenCosts, TokenlensOptions } from "tokenlens";

const details: ModelDetails | undefined = await getModelData({
  modelId: "openai/gpt-4o-mini",
});

const costs: TokenCosts = await computeCostUSD({
  modelId: "openai/gpt-4o-mini",
  usage: { input_tokens: 1_000, output_tokens: 200 } satisfies Usage,
});

ModelDetails is simply the resolved SourceModel entry (or undefined when the model is missing). Use computeCostUSD and getContextLimits when you need derived values such as USD cost or context limits.
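Because the lookup can resolve to undefined, guard before dereferencing; the model id below is hypothetical:

// "acme/chat-mini" is a hypothetical id used to show the miss path.
const maybeModel = await getModelData({ modelId: "acme/chat-mini" });

if (!maybeModel) {
  console.warn("Model not found in the catalog; skipping cost estimate.");
} else {
  console.log(maybeModel.limit?.context);
}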

6. Next steps