What is Ollama?
Ollama is a framework for deploying and managing open-source large language models locally. Because it dramatically simplifies the installation and configuration of these models, it was widely acclaimed on release and has since earned 46k stars on GitHub.
Whether it is the famous Llama family or the latest AI newcomer Mistral, all sorts of open-source large language models can be installed and run with a single Ollama command. The full list of supported models is available on the Ollama website. For example:
| Model | Parameters | Size | Download |
|---|---|---|---|
| Llama 2 | 7B | 3.8GB | `ollama run llama2` |
| Mistral | 7B | 4.1GB | `ollama run mistral` |
This article walks through getting started with Ollama.
How to install the Ollama framework?
Ollama supports all major platforms: macOS, Windows, and Linux, and also provides a Docker image.
You can download Ollama from the official website or from GitHub, then install the framework in one step:
Because Windows support in Ollama is still fairly new and its configuration there is not yet complete, the examples below run Ollama on Linux.
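On Linux, the usual route is the official install script (the same install.sh analyzed later in this article); the URL below is its published location, worth verifying on the Ollama website before piping it to a shell:

```shell
# Download and run the official Linux install script
curl -fsSL https://ollama.com/install.sh | sh
```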
Running the Ollama service
After installation, the Ollama service usually starts automatically and is registered to start on boot. You can check whether Ollama is running with the command below; "Active: active (running)" in the example output means the service is up.
```shell
$ systemctl status ollama
● ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/ollama.service.d
             └─environment.conf
     Active: active (running) since Thu 2024-03-07 09:09:39 HKT; 4 days ago
   Main PID: 19975 (ollama)
      Tasks: 29 (limit: 69456)
     Memory: 1.1G
        CPU: 14min 44.702s
     CGroup: /system.slice/ollama.service
             └─19975 /usr/local/bin/ollama serve
```
On Linux, if Ollama is not running, you can start the service with `ollama serve`, or with `sudo systemctl start ollama`.
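For routine management, the matching `systemctl` commands are plain systemd usage rather than anything Ollama-specific; for example:

```shell
sudo systemctl start ollama      # start the service
sudo systemctl stop ollama       # stop the service
sudo systemctl restart ollama    # restart, e.g. after changing its configuration
systemctl is-enabled ollama      # confirm start-on-boot is enabled
```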
Reading the Linux install script install.sh shows why this works: the script registers `ollama serve` as a systemd service, so `systemctl` can be used to start and stop the ollama process.
status "Creating ollama systemd service..."
cat <<EOF | $SUDO tee /etc/systemd/system/ollama.service >/dev/null
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=$BINDIR/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=$PATH"
With the Ollama service running, you can check the installed version and the available commands:
```shell
~$ ollama -v
ollama version is 0.1.20
~$ ollama --help
Large language model runner

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.
```
How to download and run a large language model?
With the Ollama framework installed, a single command now runs a large language model locally. Taking the famous Llama as an example: `ollama run llama2`.
If the requested model has not been downloaded yet, this command first runs `ollama pull llama2` to fetch the model locally, then runs it.
Once the download completes, a session looks like this:
```shell
:~$ ollama run llama2
>>> who are you?
I am LLaMA, an AI assistant developed by Meta AI that can understand and respond to human input in a conversational manner. I am trained on a massive dataset of text from the internet and can
generate human-like responses to a wide range of topics and questions. I can be used to create chatbots, virtual assistants, and other applications that require natural language understanding and
generation capabilities.

>>> Send a message (/? for help)
```
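Inside the interactive prompt, slash commands control the session itself; `/?` lists them, and `/bye` exits:

```shell
>>> /?      # list the available slash commands
>>> /bye    # end the interactive session
```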
REST API
Ollama also provides a REST API:
```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```
The response looks like this:
```json
{
  "model": "llama2",
  "created_at": "2024-02-26T04:35:10.787352404Z",
  "response": "The sky appears blue because of a phenomenon called Rayleigh scattering, which occurs when sunlight enters Earth's atmosphere. The sunlight encounters tiny molecules of gases such as nitrogen and oxygen, which scatter the light in all directions. The shorter wavelengths of light, such as blue and violet, are scattered more than the longer wavelengths, such as red and orange. This is known as Rayleigh scattering, named after Lord Rayleigh, who first described the phenomenon in the late 19th century. As a result of this scattering, the light that reaches our eyes from the sun appears blue, especially when viewed from a distance. The closer we get to the horizon, the more the blue color appears to fade, as the light has to travel through more of the atmosphere, which scatters the shorter wavelengths even more. It's worth noting that the exact shade of blue can vary depending on the time of day and atmospheric conditions. For example, during sunrise and sunset, when the sun is low in the sky, the sky can take on a more orange or red hue due to the scattering of light by atmospheric particles. So, to summarize, the sky appears blue because of the way light interacts with the tiny molecules of gases in Earth's atmosphere, particularly nitrogen and oxygen.",
  "done": true,
  "total_duration": 7001870820,
  "load_duration": 4930376,
  "prompt_eval_duration": 60907000,
  "eval_count": 309,
  "eval_duration": 6931593000
}
```
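The duration fields are reported in nanoseconds, so generation speed can be computed directly from the response; checking the numbers above:

```shell
# tokens/s = eval_count / (eval_duration in seconds)
# 309 / (6931593000 / 1e9) ≈ 44.6 tokens/s
awk 'BEGIN { printf "%.1f tokens/s\n", 309 / (6931593000 / 1e9) }'
```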
With the API you can build more flexible functionality, for example pairing it with an IDE plugin to create a local coding assistant; see the following article:
Getting started with AI from zero: building a local coding assistant
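For interactive clients such as editor plugins, streaming is usually preferable; with `"stream": true` (the API default) the same endpoint returns one JSON object per line as tokens are generated:

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": true
}'
```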
FAQ
How to view the runtime logs?
On Linux, run `journalctl -u ollama` to view the service logs.
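Two common variants, both standard journalctl flags:

```shell
journalctl -u ollama       # full log for the ollama unit
journalctl -u ollama -f    # follow new log lines as requests arrive
```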
How to configure a local large model to serve the LAN?
On Linux, create the configuration file below and set the environment variable `OLLAMA_HOST` to the address on which to serve the LAN, then restart the Ollama service.
```shell
:~$ cat /etc/systemd/system/ollama.service.d/environment.conf
[Service]
Environment=OLLAMA_HOST=0.0.0.0:11434
```
With this configuration in place, a single GPU server can serve large language models to the whole local network.
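Because this is a systemd drop-in file, reload systemd before restarting, then verify from another machine; `<server-ip>` below is a placeholder for the server's LAN address:

```shell
sudo systemctl daemon-reload    # pick up the new drop-in file
sudo systemctl restart ollama
# from another machine on the LAN:
curl http://<server-ip>:11434/api/tags    # lists the models the server offers
```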
With multiple local GPUs, how to run Ollama on specific GPUs?
On Linux, create the configuration file below and set the environment variable `CUDA_VISIBLE_DEVICES` to the GPUs that should run Ollama, then restart the Ollama service.
```shell
:~$ cat /etc/systemd/system/ollama.service.d/environment.conf
[Service]
Environment=CUDA_VISIBLE_DEVICES=1,2
```
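The same daemon-reload and restart step applies here; afterwards, while a model is generating, `nvidia-smi` should show the ollama process only on the selected GPU indices (1 and 2 in this example):

```shell
sudo systemctl daemon-reload && sudo systemctl restart ollama
nvidia-smi    # check which GPUs host the ollama process during generation
```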
Where are downloaded models stored?
By default, the models are stored at the following paths on each operating system:
- macOS: `~/.ollama/models`
- Linux: `/usr/share/ollama/.ollama/models`
- Windows: `C:\Users\<username>\.ollama\models`
How to change the model storage path?
When Ollama is installed on Linux, the installer creates an ollama user by default and stores the model files under that user's home directory at /usr/share/ollama/.ollama/models. But because model files tend to be very large, you may want to keep them on a dedicated data disk, which means changing the storage path.
The official way is to set the environment variable "OLLAMA_MODELS", but when I tried this on Linux it did not work.
Analyzing the Linux install script install.sh revealed the cause: the script creates an ollama user and group and stores the models under that user's directory /usr/share/ollama/.ollama/models, and some operations on the ollama account did not take effect from my own account. Even after manually adding my account to the ollama group, permission problems remained and my changes to the ollama account's directories did not take effect.
Since the dedicated ollama account brought me no extra benefit, I ended up changing the model storage path with the following steps:
- Edit the install script install.sh to remove the step that creates the ollama user, like this:

```shell
# if ! id ollama >/dev/null 2>&1; then
#     status "Creating ollama user..."
#     $SUDO useradd -r -s /bin/false -m -d /usr/share/ollama ollama
# fi
# status "Adding current user to ollama group..."
# $SUDO usermod -a -G ollama $(whoami)
```
- Edit install.sh so the ollama service runs under my own account, like this:

```shell
status "Creating ollama systemd service..."
cat <<EOF | $SUDO tee /etc/systemd/system/ollama.service >/dev/null
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=$BINDIR/ollama serve
User=<myusername>
Group=<myusername>
```
- Edit install.sh to add the environment variable `OLLAMA_MODELS` to the service configuration above, specifying the new storage path, then install ollama with this modified script:

```shell
Environment="OLLAMA_MODELS=/home/paco/lab/LLM/ollama/OLLAMA_MODELS"
```
Alternatively, after installation, create the configuration file below, set the environment variable `OLLAMA_MODELS` to the storage path, and restart the Ollama service:

```shell
:~$ cat /etc/systemd/system/ollama.service.d/environment.conf
[Service]
Environment=OLLAMA_MODELS=<path>/OLLAMA_MODELS
```
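A quick way to confirm the new location is in use (`<path>` is the same placeholder as in the configuration above):

```shell
sudo systemctl daemon-reload && sudo systemctl restart ollama
ollama pull llama2
ls <path>/OLLAMA_MODELS    # the blobs/ and manifests/ directories should appear here
```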