Local LLM inference via Ollama. Run Llama, Mistral, and other open models on your own GPU.
Requires a Basic ($0.99/mo) membership.
Become a member, then run: nself plugin install ollama
1. Get a membership
Join at nself.org/pricing and set your key:
nself license set nself_pro_<your-key>
2. Install the plugin
With your license key set, install the plugin:
nself plugin install ollama