RunpodLabs
Updated 2026-04-24
wandler.ai

transformers.js inference server with an OpenAI-compatible API for macOS, Linux, and Windows.

local-ai · transformers.js · openai-api · inference · typescript

wandler.ai is a transformers.js inference server that lets you run open-weight models through an OpenAI-compatible API. It is built in TypeScript, runs locally on macOS, Linux, and Windows, and drops into existing apps and agents with minimal changes.

Features

  • OpenAI-Compatible API — Point existing SDKs, apps, and agents at a local base URL
  • Cross-Platform Local Inference — Run on macOS, Linux, and Windows
  • transformers.js Powered — Built in TypeScript on top of transformers.js
  • Model Registry — Discover and filter supported LLM, embedding, and STT models
  • Agent-Friendly Setup — Works with custom OpenAI endpoints and local workflows
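Because the API is OpenAI-compatible, an existing client only needs a base-URL change. A minimal TypeScript sketch of what such a request looks like — the port, path prefix, and model name below are placeholder assumptions, not documented wandler defaults; substitute whatever the server prints when it starts:

```typescript
// Build an OpenAI-style chat-completions request against a local server.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(
  baseUrl: string,
  model: string,
  messages: ChatMessage[],
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    // Trim a trailing slash so the joined path stays clean.
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, messages }),
    },
  };
}

// Usage (assumed local endpoint and model name):
const req = buildChatRequest(
  "http://localhost:3000/v1",
  "onnx-community/Llama-3.2-1B-Instruct",
  [{ role: "user", content: "Hello!" }],
);
// const res = await fetch(req.url, req.init);
// const data = await res.json();
// console.log(data.choices[0].message.content);
```

The same shape works with any OpenAI SDK that accepts a custom base URL, which is what makes the drop-in claim above hold for existing apps and agents.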

How It Works

  1. Install wandler globally or run it with npx
  2. Start the local server with the model you want to run
  3. Point your app or agent at the local OpenAI-compatible endpoint
  4. Swap models or inspect the registry as needed
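Steps 3 and 4 amount to querying the local endpoint and filtering what it reports. A hedged sketch, assuming the server's model listing follows the standard OpenAI `GET /v1/models` response shape (the URL is a placeholder, not a documented default):

```typescript
// Filter model IDs from an OpenAI-style /v1/models response.
interface ModelEntry {
  id: string;
  object?: string;
}

function filterModels(list: { data: ModelEntry[] }, needle: string): string[] {
  return list.data
    .map((m) => m.id)
    .filter((id) => id.toLowerCase().includes(needle.toLowerCase()));
}

// Usage against a running server (base URL is an assumption):
// const res = await fetch("http://localhost:3000/v1/models");
// const list = await res.json();
// console.log(filterModels(list, "llama"));

// Offline demo with a mocked response:
const mockList = {
  data: [
    { id: "onnx-community/Llama-3.2-1B-Instruct" },
    { id: "Xenova/all-MiniLM-L6-v2" },
  ],
};
const llamaModels = filterModels(mockList, "llama");
```

Swapping models is then just a matter of passing a different `id` from this list in the next request.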

Stack

  • TypeScript
  • transformers.js
  • OpenAI-compatible HTTP API
  • Local inference runtime