Aether-SLM Hub: production model delivery at https://vibercoderofek.uk

Private browser AI, ready to test.

Aether-SLM lets developers add local inference, local RAG, smart model defaults, and AI that needs no API keys to their web apps. This Hub is both the public demo surface and the stable origin for model delivery.

Local Chat

Run small language models directly in the browser with WebGPU/WASM fallback and OPFS caching.
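The WebGPU-with-WASM-fallback choice can be sketched as a small capability check. This is an illustrative sketch, not the actual Aether-SLM API: the navigator-like object is injected as a parameter so the logic can run outside a browser, and in a real page you would pass the global navigator.

```typescript
// Pick an inference backend: WebGPU when the browser exposes it,
// otherwise fall back to the WASM path. (Illustrative names only.)
type Backend = "webgpu" | "wasm";

function pickBackend(nav: { gpu?: unknown }): Backend {
  // Supporting browsers expose WebGPU as navigator.gpu;
  // anything else gets the slower but universal WASM runtime.
  return nav.gpu !== undefined ? "webgpu" : "wasm";
}

// In a real page: const backend = pickBackend(navigator);
// Downloaded weights would then be cached via OPFS
// (navigator.storage.getDirectory()) so reloads skip the network.
```

Injecting the navigator object also makes the fallback path straightforward to unit-test without a GPU.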

Local RAG

Index private text in the browser and retrieve relevant context without sending documents to a server.
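The core of the local-RAG idea is that scoring happens entirely in page memory, so no document leaves the browser. The sketch below uses bag-of-words cosine similarity as a stand-in for whatever embedding model the Hub actually ships; all function names are illustrative.

```typescript
// Split text into lowercase alphanumeric tokens.
function tokenize(text: string): string[] {
  return text.toLowerCase().match(/[a-z0-9]+/g) ?? [];
}

// Term-frequency vector as a sparse map.
function vectorize(tokens: string[]): Map<string, number> {
  const v = new Map<string, number>();
  for (const t of tokens) v.set(t, (v.get(t) ?? 0) + 1);
  return v;
}

// Cosine similarity between two sparse vectors.
function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, na = 0, nb = 0;
  for (const [k, x] of a) { dot += x * (b.get(k) ?? 0); na += x * x; }
  for (const [, y] of b) nb += y * y;
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// Return the top-k snippets most similar to the query.
function retrieve(docs: string[], query: string, k = 2): string[] {
  const q = vectorize(tokenize(query));
  return docs
    .map(d => ({ d, score: cosine(vectorize(tokenize(d)), q) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map(x => x.d);
}
```

The retrieved snippets would then be prepended to the chat prompt before local inference runs.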

Shared Delivery

Use one production Hub origin for model provenance, range requests, CORS, and cache coordination.
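Chunked weight downloads rest on HTTP Range requests. The helpers below are a sketch (names are illustrative): one builds the request-side Range header, the other parses the "bytes start-end/total" Content-Range form defined by RFC 9110, which the Hub's CORS policy must expose to cross-origin callers.

```typescript
// Build a Range request header for an inclusive byte span.
function buildRangeHeader(start: number, endInclusive: number): string {
  return `bytes=${start}-${endInclusive}`;
}

// Parse a "bytes start-end/total" Content-Range response header.
function parseContentRange(header: string): { start: number; end: number; total: number } {
  const m = /^bytes (\d+)-(\d+)\/(\d+)$/.exec(header);
  if (!m) throw new Error(`unexpected Content-Range: ${header}`);
  return { start: Number(m[1]), end: Number(m[2]), total: Number(m[3]) };
}

// Usage against the Hub (the server must answer 206 Partial Content and
// list Content-Range in Access-Control-Expose-Headers):
//   const res = await fetch(url, { headers: { Range: buildRangeHeader(0, 1023) } });
//   const { total } = parseContentRange(res.headers.get("Content-Range")!);
```

Knowing the total up front lets a loader schedule the remaining chunks and resume interrupted downloads from the byte where the cache stops.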

Hub model manifest

The Hub serves its model manifest at /models.json.
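A client consuming /models.json would want to validate it before downloading anything. The path comes from the page itself, but the field names below (id, url, sizeBytes) are a hypothetical schema, not the Hub's documented one; this is a sketch of the validation step only.

```typescript
// Hypothetical manifest entry shape (not the Hub's documented schema).
interface ModelEntry {
  id: string;
  url: string;
  sizeBytes: number;
}

// Validate parsed JSON into a typed list, rejecting malformed entries.
function parseManifest(json: unknown): ModelEntry[] {
  if (!Array.isArray(json)) throw new Error("manifest must be an array");
  return json.map((raw, i) => {
    const e = raw as Partial<Record<"id" | "url" | "sizeBytes", unknown>>;
    if (typeof e.id !== "string" || typeof e.url !== "string" || typeof e.sizeBytes !== "number") {
      throw new Error(`bad manifest entry at index ${i}`);
    }
    return { id: e.id, url: e.url, sizeBytes: e.sizeBytes };
  });
}

// In the browser:
//   const models = parseManifest(await (await fetch("/models.json")).json());
```

Validating first means a typo in the manifest fails loudly instead of surfacing later as a half-downloaded model in the OPFS cache.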