Updated 2026-04-24
wandler.ai
transformers.js inference server with an OpenAI-compatible API for Mac, Linux, and Windows.
local-ai · transformers.js · openai-api · inference · typescript

wandler.ai is a transformers.js inference server that lets you run open-weight models through an OpenAI-compatible API. It is built in TypeScript, runs locally on Mac, Linux, and Windows, and drops into existing apps and agents with minimal changes.
Features
- OpenAI-Compatible API — Point existing SDKs, apps, and agents at a local base URL
- Cross-Platform Local Inference — Run on macOS, Linux, and Windows
- transformers.js Powered — Built in TypeScript on top of transformers.js
- Model Registry — Discover and filter supported LLM, embedding, and STT models
- Agent-Friendly Setup — Works with custom OpenAI endpoints and local workflows
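
Because the server speaks the standard OpenAI HTTP API, an existing client only needs its base URL changed. The sketch below shows a plain `fetch` call against a local server; the port (8080) and the model id are assumptions for illustration, not documented defaults.

```typescript
// Sketch: a chat completion request against a local OpenAI-compatible server.
// Port 8080 and the model id below are assumptions, not documented defaults.
const baseURL = "http://localhost:8080/v1";

const request = {
  model: "HuggingFaceTB/SmolLM2-360M-Instruct", // hypothetical model id
  messages: [{ role: "user", content: "Say hello in one sentence." }],
};

// Standard OpenAI-style route: POST {baseURL}/chat/completions
async function chat(): Promise<string> {
  const res = await fetch(`${baseURL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(request),
  });
  const data = await res.json();
  // OpenAI-style responses put the reply under choices[0].message.content
  return data.choices[0].message.content;
}
```

The same request shape works through any OpenAI SDK configured with a custom `baseURL`, which is what makes the drop-in claim above work.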
How It Works
- Install `wandler` globally or run it with `npx`
- Start the local server with the model you want to run
- Point your app or agent at the local OpenAI-compatible endpoint
- Swap models or inspect the registry as needed
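
For the last step, an OpenAI-compatible server conventionally exposes a list-models route, which is one way to inspect what is available locally. A minimal sketch, assuming the server serves the standard `/v1/models` route on port 8080:

```typescript
// Sketch: inspecting available models via the standard OpenAI list-models
// route. The port (8080) is an assumption for illustration.
const modelsURL = "http://localhost:8080/v1/models";

async function listModels(): Promise<string[]> {
  const res = await fetch(modelsURL);
  const data = await res.json();
  // OpenAI-style responses wrap the list in a `data` array of { id, ... }
  return data.data.map((m: { id: string }) => m.id);
}
```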
Stack
- TypeScript
- transformers.js
- OpenAI-compatible HTTP API
- Local inference runtime