Auspexi

Build the Right Model: 8 Starters and How They Work

TL;DR: We shipped a Model Starters gallery with 8 presets—LLM, SLM, LAM, MoE, VLM, MLM, LCM, SAM—each pre‑wired with routing (on‑device/hybrid), Context Engineering, pre‑generation Risk Guard, default SLOs, and evidence hooks. Start fast, ship with proof.

Why we built it

Not all models are built for the same thing. AI in 2025 is a toolbox: generation, retrieval, planning, and perception. Picking the right model, then wrapping it with context, policy, and evidence, is the shortest path to reliable outcomes. Starters are small, opinionated scaffolds, so you can move from idea → audited prototype in hours.

The 8 presets

LLM

Text generation with Context Engine (hybrid retrieval, signals, budget packing) + Risk Guard. Good for chat, copy, code.
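A minimal sketch of the budget-packing step, assuming snippets arrive already scored by a retrieval signal; the names (`Snippet`, `pack_context`) and the word-count token estimate are ours, not the starter's API:

```python
# Rank retrieved snippets by relevance and greedily pack them into a
# fixed token budget before handing the context to the model.
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    score: float  # relevance signal from hybrid retrieval

def pack_context(snippets: list[Snippet], token_budget: int) -> str:
    """Greedily keep the highest-scoring snippets that fit the budget."""
    packed, used = [], 0
    for s in sorted(snippets, key=lambda s: s.score, reverse=True):
        cost = len(s.text.split())  # crude token estimate, sketch only
        if used + cost <= token_budget:
            packed.append(s.text)
            used += cost
    return "\n\n".join(packed)

context = pack_context(
    [Snippet("Refunds accepted within 30 days.", 0.92),
     Snippet("Shipping takes 3-5 business days.", 0.41)],
    token_budget=50,
)
```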

SLM (On‑Device)

Small model with on‑device routing by default, cloud fallback within SLOs. Good for private, low‑latency assistants.
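The routing rule, reduced to a sketch: prefer the device, and fall back to cloud only when the local path cannot meet the latency SLO. The real router weighs more signals than this one number.

```python
# Illustrative on-device-first routing against a latency SLO.
# A single p95 measurement stands in for the starter's richer signals.
def route(local_p95_ms: float, slo_ms: float = 300.0) -> str:
    """Stay on-device while the local model meets the SLO."""
    return "on-device" if local_p95_ms <= slo_ms else "cloud-fallback"

assert route(local_p95_ms=120.0) == "on-device"
assert route(local_p95_ms=900.0) == "cloud-fallback"
```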

LAM (Agents)

Plan/act loop with typed tool wrappers, memory stub, and error normalization. Good for workflows and RPA‑like tasks.
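A toy version of the plan/act loop, with a wrapper that normalizes tool errors into a predictable string. Every name here is hypothetical; the starter ships its own typed stubs.

```python
# Plan/act sketch: execute (tool_name, argument) steps, normalizing errors.
from typing import Callable

Tool = Callable[[str], str]

def safe_tool(tool: Tool) -> Tool:
    """Wrap a tool so any failure surfaces as a normalized error string."""
    def wrapped(arg: str) -> str:
        try:
            return tool(arg)
        except Exception as exc:
            return f"TOOL_ERROR: {type(exc).__name__}: {exc}"
    return wrapped

tools: dict[str, Tool] = {"search": safe_tool(lambda q: f"results for {q!r}")}

def act(plan: list[tuple[str, str]]) -> list[str]:
    """Run each planned step through its (wrapped) tool."""
    return [tools[name](arg) for name, arg in plan]

print(act([("search", "open invoices")]))
```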

MoE

Lightweight router sends inputs to specialized experts; per‑expert thresholds. Good for heterogeneous workloads at scale.
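Per-expert thresholds mean a high-stakes expert can demand more confidence than a cheap generalist. A sketch of that rule (scores and thresholds are made up):

```python
# Route to the best-scoring expert that clears its own confidence bar.
EXPERT_THRESHOLDS = {"code": 0.6, "legal": 0.8}  # stricter bar for legal

def route(scores: dict[str, float], default: str = "generalist") -> str:
    eligible = [(score, name) for name, score in scores.items()
                if score >= EXPERT_THRESHOLDS[name]]
    return max(eligible)[1] if eligible else default

# legal scores higher but misses its bar, so the input goes to code
print(route({"code": 0.72, "legal": 0.75}))  # -> "code"
```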

VLM

Image+text understanding; citation‑friendly context packing. Good for search, robotics, and inspection.
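One way to read “citation-friendly packing”: every chunk, image or text, carries a source tag the model can echo back as a citation. A sketch with invented field names:

```python
# Interleave image references and text chunks, each tagged with a source id
# so the model's answer can cite [img:2] or [doc:7].
def pack_multimodal(chunks: list[dict]) -> str:
    parts = []
    for c in chunks:
        tag = f"[{c['kind']}:{c['id']}]"
        body = c["text"] if c["kind"] == "doc" else f"<image {c['uri']}>"
        parts.append(f"{tag} {body}")
    return "\n".join(parts)

print(pack_multimodal([
    {"kind": "img", "id": 2, "uri": "s3://plant/cam2.jpg"},
    {"kind": "doc", "id": 7, "text": "Weld seam tolerance is 0.2 mm."},
]))
```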

MLM (Embeddings)

Retrieval pipeline with P@k/nDCG harness. Good foundation for RAG and classification.
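The two metrics the harness reports are standard; for reference, here they are in a few lines (the harness wires them to your labeled eval set):

```python
# Precision@k and nDCG@k over a ranked list of document ids.
import math

def precision_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
    return sum(1 for d in ranked[:k] if d in relevant) / k

def ndcg_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
    dcg = sum(1 / math.log2(i + 2)
              for i, d in enumerate(ranked[:k]) if d in relevant)
    ideal = sum(1 / math.log2(i + 2) for i in range(min(k, len(relevant))))
    return dcg / ideal if ideal else 0.0

ranked = ["d3", "d1", "d9", "d4"]
print(precision_at_k(ranked, {"d1", "d4"}, k=3))  # 0.33
print(ndcg_at_k(ranked, {"d1", "d4"}, k=3))       # ~0.39
```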

LCM (Images)

Fast image generation placeholder with device guardrails. Good for efficient creative tooling.
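“Device guardrails” here means capping generation settings to what the hardware can absorb. A toy rule with made-up thresholds:

```python
# Downshift resolution and steps when reported VRAM is below an assumed floor.
def guard(settings: dict, device_vram_gb: float) -> dict:
    if device_vram_gb < 8.0:  # assumed floor for the full-size path
        return {**settings, "resolution": 512,
                "steps": min(settings["steps"], 4)}
    return settings

print(guard({"resolution": 1024, "steps": 8}, device_vram_gb=6.0))
```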

SAM (Segmentation)

Pixel‑level masks; mask viewer and export. Good for medical/industrial segmentation and AR.
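For the export side, a toy run-length encoding of a flat binary mask, similar in spirit to COCO-style RLE; the starter's viewer and exporter are richer than this:

```python
# Encode a flat 0/1 mask as counts of alternating runs, starting with 0s.
def rle_encode(mask: list[int]) -> list[int]:
    runs, current, count = [], 0, 0
    for px in mask:
        if px == current:
            count += 1
        else:
            runs.append(count)
            current, count = px, 1
    runs.append(count)
    return runs

print(rle_encode([0, 0, 1, 1, 1, 0]))  # -> [2, 3, 1]
```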

What’s inside each starter

Every preset ships the same scaffold, pre-wired with the pieces from the TL;DR:

  - Routing defaults (on-device, hybrid, or cloud) with SLO-aware fallback.
  - A Context Engineering config (retrieval, signals, budget packing).
  - A pre-generation Risk Guard policy stub.
  - Default SLOs and acceptance checks.
  - Evidence hooks, so every run can export an auditable trail.
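To show how those pieces fit together, here is a hypothetical shape for a starter's config stub; none of these field names are the shipped schema:

```python
# Invented config shape tying routing, context, guardrails, SLOs, and
# evidence together; replace with the schema in your downloaded scaffold.
STARTER_CONFIG = {
    "model": {"preset": "slm", "weights": None},              # plug yours in
    "routing": {"default": "on-device", "fallback": "cloud"},
    "context": {"retrieval": "hybrid", "token_budget": 2048},
    "risk_guard": {"mode": "pre-generation"},
    "slo": {"p95_latency_ms": 300},
    "evidence": {"export": True, "format": "jsonl"},
}
```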

How to use it

  1. Open Build a Model and select a starter.
  2. Click “Download Starter” to get a scaffold ZIP with configs and stubs.
  3. Plug in your model/tooling, run acceptance checks, then export evidence.
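To show the shape of step 3, a toy acceptance check that runs your wired-in model over labeled cases and emits one evidence record per case; none of these names are the shipped API:

```python
# Run labeled cases through a predict function and collect evidence records.
import json

def run_acceptance(cases: list[dict], predict) -> list[dict]:
    evidence = []
    for case in cases:
        output = predict(case["input"])
        evidence.append({
            "input": case["input"],
            "output": output,
            "passed": case["expect"] in output,
        })
    return evidence

records = run_acceptance(
    [{"input": "2+2?", "expect": "4"}],
    predict=lambda prompt: "The answer is 4.",
)
print(json.dumps(records, indent=2))
```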

Roadmap

Get started: /build

Related: Context Engineering · Whitepaper