Installation and Usage
Ollama command usage

```shell
ollama --help
```
Run a model

```shell
ollama run llama3.2
>>> Send a message (/? for help)
```
Show the in-chat help with /?

```shell
>>> /?
```
API calls

Ollama serves a REST API on http://localhost:11434; make sure the server is running:

```shell
ollama serve
```
generate
Streaming response

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?"
}'
```
Non-streaming response

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```
chat
Streaming response

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    { "role": "user", "content": "Why is the sky blue?" }
  ]
}'
```
Non-streaming response

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    { "role": "user", "content": "Why is the sky blue?" }
  ],
  "stream": false
}'
```
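With streaming enabled (the default), these endpoints return newline-delimited JSON chunks rather than one object. A minimal sketch of assembling chat chunks into a full reply; the sample lines below are illustrative, and the `message.content`/`done` fields follow the documented chat response shape:

```python
import json

def assemble_stream(lines):
    """Concatenate message content from NDJSON chat chunks until done=true."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        if not chunk.get("done"):
            parts.append(chunk["message"]["content"])
    return "".join(parts)

# Illustrative chunks in the shape the /api/chat stream uses:
sample = [
    '{"message": {"role": "assistant", "content": "The sky "}, "done": false}',
    '{"message": {"role": "assistant", "content": "is blue."}, "done": false}',
    '{"message": {"role": "assistant", "content": ""}, "done": true}',
]
print(assemble_stream(sample))  # The sky is blue.
```

The final chunk with `"done": true` also carries timing and token-count metadata, so a real client would typically keep it rather than discard it.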
Web UI
Besides the terminal and API approaches above, there are many open-source web UIs that let you run a local, visual chat page, for example:
- open-webui: https://github.com/open-webui/open-webui
- lollms-webui: https://github.com/ParisNeo/lollms-webui
Ollama
Get up and running with large language models.
macOS
Windows
Linux
```shell
curl -fsSL https://ollama.com/install.sh | sh
```
Docker
The official Ollama Docker image ollama/ollama is available on Docker Hub.
Quickstart
To run and chat with Llama 3.2:
```shell
ollama run llama3.2
```
Model library
Ollama supports a list of models available at ollama.com/library.
Here are some example models that can be downloaded:
Model | Parameters | Size | Download |
---|---|---|---|
Llama 3.3 | 70B | 43GB | ollama run llama3.3 |
Llama 3.2 | 3B | 2.0GB | ollama run llama3.2 |
Llama 3.2 | 1B | 1.3GB | ollama run llama3.2:1b |
Llama 3.2 Vision | 11B | 7.9GB | ollama run llama3.2-vision |
Llama 3.2 Vision | 90B | 55GB | ollama run llama3.2-vision:90b |
Llama 3.1 | 8B | 4.7GB | ollama run llama3.1 |
Llama 3.1 | 405B | 231GB | ollama run llama3.1:405b |
Phi 4 | 14B | 9.1GB | ollama run phi4 |
Phi 3 Mini | 3.8B | 2.3GB | ollama run phi3 |
Gemma 2 | 2B | 1.6GB | ollama run gemma2:2b |
Gemma 2 | 9B | 5.5GB | ollama run gemma2 |
Gemma 2 | 27B | 16GB | ollama run gemma2:27b |
Mistral | 7B | 4.1GB | ollama run mistral |
Moondream 2 | 1.4B | 829MB | ollama run moondream |
Neural Chat | 7B | 4.1GB | ollama run neural-chat |
Starling | 7B | 4.1GB | ollama run starling-lm |
Code Llama | 7B | 3.8GB | ollama run codellama |
Llama 2 Uncensored | 7B | 3.8GB | ollama run llama2-uncensored |
LLaVA | 7B | 4.5GB | ollama run llava |
Solar | 10.7B | 6.1GB | ollama run solar |
> [!NOTE]
> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
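The download sizes in the table roughly track parameter count times bits per weight at about 4-bit quantization. A back-of-envelope check (a rough approximation using an assumed 4.5 bits/weight to cover metadata overhead, not Ollama's actual packaging math):

```python
def approx_size_gb(params_billion: float, bits_per_weight: float = 4.5) -> float:
    """Rough on-disk size: parameters x quantized bits per weight, in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(round(approx_size_gb(7), 1))   # 3.9 -- close to the ~4.1GB 7B entries above
print(round(approx_size_gb(70), 1))  # 39.4 -- close to Llama 3.3 70B at 43GB
```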
Customize a model
Import from GGUF
Ollama supports importing GGUF models in the Modelfile:
1. Create a file named `Modelfile`, with a `FROM` instruction pointing at the local filepath of the model you want to import:

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama:

   ```shell
   ollama create example -f Modelfile
   ```

3. Run the model:

   ```shell
   ollama run example
   ```
Import from Safetensors
See the guide on importing models for more information.
Customize a prompt
Models from the Ollama library can be customized with a prompt. For example, to customize the llama3.2 model:

```shell
ollama pull llama3.2
```
Create a `Modelfile`:

```
FROM llama3.2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```
Next, create and run the model:
```shell
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```
For more information on working with a Modelfile, see the Modelfile documentation.
CLI Reference
Create a model
`ollama create` is used to create a model from a Modelfile.

```shell
ollama create mymodel -f ./Modelfile
```
Pull a model
```shell
ollama pull llama3.2
```
This command can also be used to update a local model. Only the diff will be pulled.
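Only the diff is pulled because models are stored as content-addressed layers: layers whose digest is already present locally are skipped. A toy illustration of that idea (hypothetical digests and store; not Ollama's actual storage code):

```python
import hashlib

def digest(data: bytes) -> str:
    """Content address of a layer: the SHA-256 of its bytes."""
    return hashlib.sha256(data).hexdigest()

def layers_to_fetch(remote_layers, local_store):
    """Return only the layer digests missing from the local store."""
    return [d for d in remote_layers if d not in local_store]

local = {digest(b"base-weights"), digest(b"old-template")}
remote = [digest(b"base-weights"), digest(b"new-template")]
print(len(layers_to_fetch(remote, local)))  # 1 -- only the changed layer
```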
Remove a model
```shell
ollama rm llama3.2
```
Copy a model
```shell
ollama cp llama3.2 my-model
```
Multiline input
For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```
Multimodal models
```shell
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
```
Pass the prompt as an argument
```shell
ollama run llama3.2 "Summarize this file: $(cat README.md)"
```
Show model information
```shell
ollama show llama3.2
```
List models on your computer
```shell
ollama list
```
List which models are currently loaded
```shell
ollama ps
```
Stop a model which is currently running
```shell
ollama stop llama3.2
```
Start Ollama
`ollama serve` is used when you want to start Ollama without running the desktop application.
Building
See the developer guide.
Running local builds
Next, start the server:
```shell
./ollama serve
```
Finally, in a separate shell, run a model:
```shell
./ollama run llama3.2
```
REST API
Ollama has a REST API for running and managing models.
Generate a response
```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?"
}'
```
Chat with a model
```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```
See the API documentation for all endpoints.
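For scripting, the same chat endpoint can be called from Python with only the standard library. A minimal sketch assuming a server on localhost:11434; `build_chat_payload` and `chat` are illustrative helpers, not an official client:

```python
import json
import urllib.request

def build_chat_payload(model: str, prompt: str) -> dict:
    """Request body for a non-streaming /api/chat call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(model: str, prompt: str, host: str = "http://localhost:11434") -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{host}/api/chat",
        data=json.dumps(build_chat_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# chat("llama3.2", "Why is the sky blue?")  # requires a running Ollama server
```

Ollama also publishes official Python and JavaScript client libraries, which handle streaming and errors for you; the sketch above is only to show the request shape.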
Community Integrations
Web & Desktop
- Open WebUI
- Enchanted (macOS native)
- Hollama
- Lollms-Webui
- LibreChat
- Bionic GPT
- HTML UI
- Saddle
- Chatbot UI
- Chatbot UI v2
- Typescript UI
- Minimalistic React UI for Ollama Models
- Ollamac
- big-AGI
- Cheshire Cat assistant framework
- Amica
- chatd
- Ollama-SwiftUI
- Dify.AI
- MindMac
- NextJS Web Interface for Ollama
- Msty
- Chatbox
- WinForm Ollama Copilot
- NextChat with Get Started Doc
- Alpaca WebUI
- OllamaGUI
- OpenAOE
- Odin Runes
- LLM-X (Progressive Web App)
- AnythingLLM (Docker + MacOs/Windows/Linux native app)
- Ollama Basic Chat: Uses HyperDiv Reactive UI
- Ollama-chats RPG
- IntelliBar (AI-powered assistant for macOS)
- QA-Pilot (Interactive chat tool that can leverage Ollama models for rapid understanding and navigation of GitHub code repositories)
- ChatOllama (Open Source Chatbot based on Ollama with Knowledge Bases)
- CRAG Ollama Chat (Simple Web Search with Corrective RAG)
- RAGFlow (Open-source Retrieval-Augmented Generation engine based on deep document understanding)
- StreamDeploy (LLM Application Scaffold)
- chat (chat web app for teams)
- Lobe Chat with Integrating Doc
- Ollama RAG Chatbot (Local Chat with multiple PDFs using Ollama and RAG)
- BrainSoup (Flexible native client with RAG & multi-agent automation)
- macai (macOS client for Ollama, ChatGPT, and other compatible API back-ends)
- RWKV-Runner (RWKV offline LLM deployment tool, also usable as a client for ChatGPT and Ollama)
- Ollama Grid Search (app to evaluate and compare models)
- Olpaka (User-friendly Flutter Web App for Ollama)
- OllamaSpring (Ollama Client for macOS)
- LLocal.in (Easy to use Electron Desktop Client for Ollama)
- Shinkai Desktop (Two click install Local AI using Ollama + Files + RAG)
- AiLama (A Discord User App that allows you to interact with Ollama anywhere in Discord)
- Ollama with Google Mesop (Mesop Chat Client implementation with Ollama)
- R2R (Open-source RAG engine)
- Ollama-Kis (A simple easy to use GUI with sample custom LLM for Drivers Education)
- OpenGPA (Open-source offline-first Enterprise Agentic Application)
- Painting Droid (Painting app with AI integrations)
- Kerlig AI (AI writing assistant for macOS)
- AI Studio
- Sidellama (browser-based LLM client)
- LLMStack (No-code multi-agent framework to build LLM agents and workflows)
- BoltAI for Mac (AI Chat Client for Mac)
- Harbor (Containerized LLM Toolkit with Ollama as default backend)
- PyGPT (AI desktop assistant for Linux, Windows and Mac)
- Alpaca (An Ollama client application for linux and macos made with GTK4 and Adwaita)
- AutoGPT (AutoGPT Ollama integration)
- Go-CREW (Powerful Offline RAG in Golang)
- PartCAD (CAD model generation with OpenSCAD and CadQuery)
- Ollama4j Web UI - Java-based Web UI for Ollama built with Vaadin, Spring Boot and Ollama4j
- PyOllaMx - macOS application capable of chatting with both Ollama and Apple MLX models.
- Claude Dev - VSCode extension for multi-file/whole-repo coding
- Cherry Studio (Desktop client with Ollama support)
- ConfiChat (Lightweight, standalone, multi-platform, and privacy focused LLM chat interface with optional encryption)
- Archyve (RAG-enabling document library)
- crewAI with Mesop (Mesop Web Interface to run crewAI with Ollama)
- Tkinter-based client (Python tkinter-based Client for Ollama)
- LLMChat (Privacy focused, 100% local, intuitive all-in-one chat interface)
- Local Multimodal AI Chat (Ollama-based LLM Chat with support for multiple features, including PDF RAG, voice chat, image-based interactions, and integration with OpenAI.)
- ARGO (Locally download and run Ollama and Huggingface models with RAG on Mac/Windows/Linux)
- OrionChat - OrionChat is a web interface for chatting with different AI providers
- G1 (Prototype of using prompting strategies to improve the LLM’s reasoning through o1-like reasoning chains.)
- Web management (Web management page)
- Promptery (desktop client for Ollama.)
- Ollama App (Modern and easy-to-use multi-platform client for Ollama)
- SpaceLlama (Firefox and Chrome extension to quickly summarize web pages with ollama in a sidebar)
- YouLama (Webapp to quickly summarize any YouTube video, supporting Invidious as well)
- DualMind (Experimental app allowing two models to talk to each other in the terminal or in a web interface)
- ollamarama-matrix (Ollama chatbot for the Matrix chat protocol)
- ollama-chat-app (Flutter-based chat app)
- Perfect Memory AI (Productivity AI assists personalized by what you have seen on your screen, heard and said in the meetings)
- Hexabot (A conversational AI builder)
- Reddit Rate (Search and Rate Reddit topics with a weighted summation)
- OpenTalkGpt (Chrome Extension to manage open-source models supported by Ollama, create custom models, and chat with models from a user-friendly UI)
- VT (A minimal multimodal AI chat app, with dynamic conversation routing. Supports local models via Ollama)
- Nosia (Easy to install and use RAG platform based on Ollama)
- Witsy (An AI Desktop application available for Mac/Windows/Linux)
- Abbey (A configurable AI interface server with notebooks, document storage, and YouTube support)
- Minima (RAG with on-premises or fully local workflow)
- aidful-ollama-model-delete (User interface for simplified model cleanup)
- Perplexica (An AI-powered search engine & an open-source alternative to Perplexity AI)
- AI Toolkit for Visual Studio Code (Microsoft-official VSCode extension to chat, test, evaluate models with Ollama support, and use them in your AI applications.)
- MinimalNextOllamaChat (Minimal Web UI for Chat and Model Control)
- Chipper AI interface for tinkerers (Ollama, Haystack RAG, Python)
Cloud
Terminal
- oterm
- Ellama Emacs client
- Emacs client
- neollama UI client for interacting with models from within Neovim
- gen.nvim
- ollama.nvim
- ollero.nvim
- ollama-chat.nvim
- ogpt.nvim
- gptel Emacs client
- Oatmeal
- cmdh
- ooo
- shell-pilot (Interact with models via pure shell scripts on Linux or macOS)
- tenere
- llm-ollama for Datasette’s LLM CLI.
- typechat-cli
- ShellOracle
- tlm
- podman-ollama
- gollama
- ParLlama
- Ollama eBook Summary
- Ollama Mixture of Experts (MOE) in 50 lines of code
- vim-intelligence-bridge Simple interaction of “Ollama” with the Vim editor
- x-cmd ollama
- bb7
- SwollamaCLI bundled with the Swollama Swift package. Demo
- aichat All-in-one LLM CLI tool featuring Shell Assistant, Chat-REPL, RAG, AI tools & agents, with access to OpenAI, Claude, Gemini, Ollama, Groq, and more.
- PowershAI PowerShell module that brings AI to terminal on Windows, including support for Ollama
- orbiton Configuration-free text editor and IDE with support for tab completion with Ollama.
Apple Vision Pro
Database
- pgai - PostgreSQL as a vector database (Create and search embeddings from Ollama models using pgvector)
- MindsDB (Connects Ollama models with nearly 200 data platforms and apps)
- chromem-go with example
- Kangaroo (AI-powered SQL client and admin tool for popular databases)
Package managers
Libraries
- LangChain and LangChain.js with example
- Firebase Genkit
- crewAI
- Yacana (User-friendly multi-agent framework for brainstorming and executing predetermined flows with built-in tool integration)
- Spring AI with reference and example
- LangChainGo with example
- LangChain4j with example
- LangChainRust with example
- LangChain for .NET with example
- LLPhant
- LlamaIndex and LlamaIndexTS
- LiteLLM
- OllamaFarm for Go
- OllamaSharp for .NET
- Ollama for Ruby
- Ollama-rs for Rust
- Ollama-hpp for C++
- Ollama4j for Java
- ModelFusion Typescript Library
- OllamaKit for Swift
- Ollama for Dart
- Ollama for Laravel
- LangChainDart
- Semantic Kernel - Python
- Haystack
- Elixir LangChain
- Ollama for R - rollama
- Ollama for R - ollama-r
- Ollama-ex for Elixir
- Ollama Connector for SAP ABAP
- Testcontainers
- Portkey
- PromptingTools.jl with an example
- LlamaScript
- llm-axe (Python Toolkit for Building LLM Powered Apps)
- Gollm
- Gollama for Golang
- Ollamaclient for Golang
- High-level function abstraction in Go
- Ollama PHP
- Agents-Flex for Java with example
- Parakeet is a GoLang library, made to simplify the development of small generative AI applications with Ollama.
- Haverscript with examples
- Ollama for Swift
- Swollama for Swift with DocC
- GoLamify
- Ollama for Haskell
- multi-llm-ts (A Typescript/JavaScript library allowing access to different LLM in unified API)
- LlmTornado (C# library providing a unified interface for major FOSS & Commercial inference APIs)
Mobile
- Enchanted
- Maid
- Ollama App (Modern and easy-to-use multi-platform client for Ollama)
- ConfiChat (Lightweight, standalone, multi-platform, and privacy focused LLM chat interface with optional encryption)
Extensions & Plugins
- Raycast extension
- Discollama (Discord bot inside the Ollama discord channel)
- Continue
- Vibe (Transcribe and analyze meetings with Ollama)
- Obsidian Ollama plugin
- Logseq Ollama plugin
- NotesOllama (Apple Notes Ollama plugin)
- Dagger Chatbot
- Discord AI Bot
- Ollama Telegram Bot
- Hass Ollama Conversation
- Rivet plugin
- Obsidian BMO Chatbot plugin
- Cliobot (Telegram bot with Ollama support)
- Copilot for Obsidian plugin
- Obsidian Local GPT plugin
- Open Interpreter
- Llama Coder (Copilot alternative using Ollama)
- Ollama Copilot (Proxy that allows you to use ollama as a copilot like Github copilot)
- twinny (Copilot and Copilot chat alternative using Ollama)
- Wingman-AI (Copilot code and chat alternative using Ollama and Hugging Face)
- Page Assist (Chrome Extension)
- Plasmoid Ollama Control (KDE Plasma extension that allows you to quickly manage/control Ollama model)
- AI Telegram Bot (Telegram bot using Ollama in backend)
- AI ST Completion (Sublime Text 4 AI assistant plugin with Ollama support)
- Discord-Ollama Chat Bot (Generalized TypeScript Discord Bot w/ Tuning Documentation)
- ChatGPTBox: All in one browser extension with Integrating Tutorial
- Discord AI chat/moderation bot Chat/moderation bot written in python. Uses Ollama to create personalities.
- Headless Ollama (Scripts to automatically install ollama client & models on any OS for apps that depends on ollama server)
- Terraform AWS Ollama & Open WebUI (A Terraform module to deploy on AWS a ready-to-use Ollama service, together with its front end Open WebUI service.)
- node-red-contrib-ollama
- Local AI Helper (Chrome and Firefox extensions that enable interactions with the active tab and customisable API endpoints. Includes secure storage for user prompts.)
- vnc-lm (Discord bot for messaging with LLMs through Ollama and LiteLLM. Seamlessly move between local and flagship models.)
- LSP-AI (Open-source language server for AI-powered functionality)
- QodeAssist (AI-powered coding assistant plugin for Qt Creator)
- Obsidian Quiz Generator plugin
- AI Summmary Helper plugin
- TextCraft (Copilot in Word alternative using Ollama)
- Alfred Ollama (Alfred Workflow)
- TextLLaMA A Chrome Extension that helps you write emails, correct grammar, and translate into any language
Supported backends
- llama.cpp project founded by Georgi Gerganov.
Observability
- OpenLIT is an OpenTelemetry-native tool for monitoring Ollama Applications & GPUs using traces and metrics.
- HoneyHive is an AI observability and evaluation platform for AI agents. Use HoneyHive to evaluate agent performance, interrogate failures, and monitor quality in production.
- Langfuse is an open source LLM observability platform that enables teams to collaboratively monitor, evaluate and debug AI applications.