Getting Started

Install pickai:

```sh
pnpm add pickai
```

Then, in quick-start.ts:

```typescript
import { fromModelsDev, find, recommend, Purpose } from "pickai";

const models = await fromModelsDev();

// Latest 5 OpenAI models
const openai = find(models, { filter: { providers: ["openai"] }, limit: 5 });

// Top-scored for coding from that list
const [top] = recommend(openai, Purpose.Coding);
```

find() filters and sorts the catalog. It answers: “what’s available that meets these requirements?”

```typescript
import { fromModelsDev, find, sortByCost } from "pickai";

const models = await fromModelsDev();

// Models with image input under $5/1M input tokens
const vision = find(models, {
  filter: { inputModalities: ["image"], maxCostInput: 5 },
  limit: 10,
});

// Tool-calling models, sorted by cost (cheapest first)
const tools = find(models, {
  filter: { toolCall: true },
  sort: sortByCost("asc"),
  limit: 10,
});
```

See Filtering for declarative filters and predicate functions, and Data Sources for where the model catalog comes from.
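A predicate filter is just a function from a model to a boolean, so anything you can express in code can act as a filter. The sketch below illustrates the idea with plain array filtering; the Model fields shown here are assumed for the example, not pickai's exact shape:

```typescript
// Illustrative sketch only: these Model fields are assumed for the example.
interface Model {
  id: string;
  provider: string;
  contextWindow: number;
}

// A predicate filter: keep models with at least a 200k-token context window.
const bigContext = (m: Model) => m.contextWindow >= 200_000;

const catalog: Model[] = [
  { id: "small-model", provider: "a", contextWindow: 128_000 },
  { id: "big-model", provider: "b", contextWindow: 200_000 },
];

const filtered = catalog.filter(bigContext);
console.log(filtered.map((m) => m.id)); // ["big-model"]
```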

recommend() goes further — it scores each model across multiple weighted dimensions (cost, recency, context size, knowledge freshness) and recommends the best fits for a given use case. This is what makes pickai more than a filter.

```typescript
import { fromModelsDev, recommend, Purpose } from "pickai";

const models = await fromModelsDev();

// Top-scored model for coding (default limit: 1)
const [top] = recommend(models, Purpose.Coding);

// Top 5 reasoning models from Anthropic
const reasoning = recommend(models, Purpose.Reasoning, {
  filter: { providers: ["anthropic"] },
  limit: 5,
});
```

See Scoring & Ranking for how criteria and weights produce scores, Purpose Profiles for the built-in profiles, and Constraints for diversity controls like perProvider and perFamily.
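The diversity constraints are easiest to picture as a cap applied while walking the ranked list. Here is a minimal sketch of the behavior a perProvider cap of 1 produces; it is illustrative only, not pickai's internals:

```typescript
interface ScoredModel {
  id: string;
  provider: string;
  score: number;
}

// Walk the ranked list in order, keeping at most `perProvider`
// models from any one provider.
function capPerProvider(ranked: ScoredModel[], perProvider: number): ScoredModel[] {
  const counts = new Map<string, number>();
  const kept: ScoredModel[] = [];
  for (const m of ranked) {
    const seen = counts.get(m.provider) ?? 0;
    if (seen < perProvider) {
      counts.set(m.provider, seen + 1);
      kept.push(m);
    }
  }
  return kept;
}

const ranked: ScoredModel[] = [
  { id: "a1", provider: "openai", score: 0.9 },
  { id: "a2", provider: "openai", score: 0.8 },
  { id: "b1", provider: "anthropic", score: 0.7 },
];
console.log(capPerProvider(ranked, 1).map((m) => m.id)); // ["a1", "b1"]
```

Without the cap, the two highest-scored models would both come from the same provider; the constraint trades a little score for variety.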

After recommend() picks a model, you need to call it. The Model object includes everything you need to route to the right provider: model.provider identifies the provider, model.id is the direct API identifier, and model.openRouterId is the OpenRouter slug.

Here are some quick examples to scan. See the Examples section for full working examples.

With the Vercel AI SDK:

```typescript
import { createAnthropic } from "@ai-sdk/anthropic";
import { generateText } from "ai";

// Use find() or recommend() to select a model, stored in `model`
const anthropic = createAnthropic({ apiKey: ANTHROPIC_API_KEY });
const { text } = await generateText({
  model: anthropic(model.id), // e.g. "claude-sonnet-4-6"
  prompt: "Explain how promises work in JavaScript",
});
```

With a raw HTTP request:

```typescript
// Use find() or recommend() to select a model, stored in `model`
const response = await fetch("https://api.anthropic.com/v1/messages", {
  method: "POST",
  headers: {
    "x-api-key": ANTHROPIC_API_KEY,
    "content-type": "application/json",
    // ...other required headers
  },
  body: JSON.stringify({
    model: model.id, // e.g. "claude-sonnet-4-6"
    max_tokens: 1024,
    messages: [
      { role: "user", content: "Explain how promises work in JavaScript" },
    ],
  }),
});
```
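If you route through OpenRouter instead of calling a provider directly, use model.openRouterId as the model slug. A sketch of building the request body (the slug below is a placeholder, and OPENROUTER_API_KEY is assumed to be set in your environment):

```typescript
// Placeholder: in practice `model` comes from find() or recommend().
const model = { openRouterId: "provider/model-slug" };

// Build the body for OpenRouter's chat completions endpoint;
// the model field takes the provider-prefixed slug.
function openRouterBody(openRouterId: string, prompt: string) {
  return {
    model: openRouterId,
    messages: [{ role: "user" as const, content: prompt }],
  };
}

const body = openRouterBody(model.openRouterId, "Explain how promises work in JavaScript");

// Send it (assumes OPENROUTER_API_KEY is set):
// await fetch("https://openrouter.ai/api/v1/chat/completions", {
//   method: "POST",
//   headers: {
//     Authorization: `Bearer ${OPENROUTER_API_KEY}`,
//     "content-type": "application/json",
//   },
//   body: JSON.stringify(body),
// });
```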

pickai ships six built-in profiles (Cheap, Balanced, Quality, Coding, Creative, Reasoning), but you can create your own with custom filters and scoring weights. See Purpose Profiles.
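A profile's weights decide how much each criterion contributes to the final score. The exact internals are pickai's, but the idea can be sketched as a weight-normalized sum of per-criterion scores in [0, 1]:

```typescript
interface Model {
  id: string;
}

// Sketch of the scoring shape, not pickai's exact internals:
// a criterion maps a model to a score in [0, 1], and weights set
// each criterion's relative importance.
type Criterion = (m: Model) => number;
interface WeightedCriterion {
  criterion: Criterion;
  weight: number;
}

function combinedScore(model: Model, criteria: WeightedCriterion[]): number {
  const totalWeight = criteria.reduce((sum, c) => sum + c.weight, 0);
  return criteria.reduce(
    (sum, c) => sum + (c.weight / totalWeight) * c.criterion(model),
    0,
  );
}

// A model scoring 1.0 on cost (weight 3) and 0.5 on recency (weight 1):
const score = combinedScore({ id: "m" }, [
  { criterion: () => 1.0, weight: 3 },
  { criterion: () => 0.5, weight: 1 },
]);
console.log(score); // 0.875
```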

Built-in profiles score on metadata: cost, context size, recency, and knowledge freshness. These are useful proxies, but models with aggressive specs can rank highly without being the best performers. Adding benchmark data as a custom criterion lets you score on actual model quality:

```typescript
import { minMaxCriterion, matchesModel, recommend } from "pickai";

// `benchmarks` is your own dataset of { modelId, score } entries;
// models without a matching entry return undefined.
const arenaScore = minMaxCriterion((model) => {
  const match = benchmarks.find((b) => matchesModel(b.modelId, model.id));
  return match?.score;
});

const [best] = recommend(models, {
  criteria: [
    { criterion: arenaScore, weight: 5 },
    { criterion: costEfficiency, weight: 2 },
  ],
});
```

See the Benchmark Scoring example for a full working implementation.

pickai also exposes lower-level building blocks, such as scoreModels, applyFilter, and matchesModel, for composing your own pipelines. See the Utilities reference.