GPUStack

Simple, scalable AI model deployment on GPU clusters

Documentation | License | WeChat | Discord | Follow on X (Twitter)


English | 简体中文 | 日本語


demo

GPUStack is an open-source GPU cluster manager for running AI models.

Key Features

  • Broad GPU Compatibility: Seamlessly supports GPUs from various vendors across Apple Macs, Windows PCs, and Linux servers.
  • Extensive Model Support: Supports a wide range of models including LLMs, VLMs, image models, audio models, embedding models, and rerank models.
  • Flexible Inference Backends: Flexibly integrates with multiple inference backends including llama-box (llama.cpp & stable-diffusion.cpp), vox-box, vLLM and Ascend MindIE.
  • Multi-Version Backend Support: Run multiple versions of inference backends concurrently to meet the diverse runtime requirements of different models.
  • Distributed Inference: Supports single-node and multi-node multi-GPU inference, including heterogeneous GPUs across vendors and runtime environments.
  • Scalable GPU Architecture: Easily scale up by adding more GPUs or nodes to your infrastructure.
  • Robust Model Stability: Ensures high availability with automatic failure recovery, multi-instance redundancy, and load balancing for inference requests.
  • Intelligent Deployment Evaluation: Automatically assess model resource requirements, backend and architecture compatibility, OS compatibility, and other deployment-related factors.
  • Automated Scheduling: Dynamically allocate models based on available resources.
  • Lightweight Python Package: Minimal dependencies and low operational overhead.
  • OpenAI-Compatible APIs: Fully compatible with OpenAI's API specifications for seamless integration.
  • User & API Key Management: Simplified management of users and API keys.
  • Real-Time GPU Monitoring: Track GPU performance and utilization in real time.
  • Token and Rate Metrics: Monitor token usage and API request rates.

Installation

Linux or macOS

GPUStack provides a script that installs it as a service on systemd- or launchd-based systems, listening on port 80 by default. To install GPUStack using this method, just run:

curl -sfL https://get.gpustack.ai | sh -s -
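
To grow the installation into a cluster, the same script can register additional workers against an existing server. A minimal sketch, assuming a server address and a token read from the server (see the Installation Documentation for the authoritative flags):

# Register this node as a worker of an existing GPUStack server.
# http://your_gpustack_server and mytoken are placeholders; the token can
# typically be read from /var/lib/gpustack/token on the server.
curl -sfL https://get.gpustack.ai | sh -s - --server-url http://your_gpustack_server --token mytoken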

Windows

Run PowerShell as administrator (avoid using PowerShell ISE), then run the following command to install GPUStack:

Invoke-Expression (Invoke-WebRequest -Uri "https://get.gpustack.ai" -UseBasicParsing).Content

Other Installation Methods

For manual installation, docker installation or detailed configuration options, please refer to the Installation Documentation.
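
As a rough illustration of the Docker route on an NVIDIA host with the NVIDIA Container Toolkit installed, the command below is a sketch; verify the image tag, mounts, and flags against the Installation Documentation:

# Sketch of a Docker-based install (NVIDIA GPUs; Container Toolkit required).
# The image name and data volume follow the project's published image; confirm
# the exact flags in the Installation Documentation before relying on this.
docker run -d --name gpustack --restart unless-stopped \
  --gpus all --network host --ipc host \
  -v gpustack-data:/var/lib/gpustack \
  gpustack/gpustack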

Getting Started

  1. Run and chat with the llama3.2 model:
gpustack chat llama3.2 "tell me a joke."
  2. Run and generate an image with the stable-diffusion-v3-5-large-turbo model:

💡 Tip

This command downloads the model (~12GB) from Hugging Face. The download time depends on your network speed. Ensure you have enough disk space and at least 12 GB of VRAM to run the model. If you encounter issues, you can skip this step and move on to the next one.

gpustack draw hf.co/gpustack/stable-diffusion-v3-5-large-turbo-GGUF:stable-diffusion-v3-5-large-turbo-Q4_0.gguf \
"A minion holding a sign that says 'GPUStack'. The background is filled with futuristic elements like neon lights, circuit boards, and holographic displays. The minion is wearing a tech-themed outfit, possibly with LED lights or digital patterns. The sign itself has a sleek, modern design with glowing edges. The overall atmosphere is high-tech and vibrant, with a mix of dark and neon colors." \
--sample-steps 5 --show

Once the command completes, the generated image will appear in the default viewer. You can experiment with the prompt and CLI options to customize the output.

Generated Image

  3. Open http://your_host_ip in the browser to access the GPUStack UI. Log in to GPUStack with username admin and the default password. You can run the following command to get the password for the default setup:

Linux or macOS

cat /var/lib/gpustack/initial_admin_password

Windows

Get-Content -Path "$env:APPDATA\gpustack\initial_admin_password" -Raw
  4. Click Playground - Chat in the navigation menu. Now you can chat with the LLM in the UI playground.

Playground Screenshot

  5. Click API Keys in the navigation menu, then click the New API Key button.

  6. Fill in the Name and click the Save button.

  7. Copy the generated API key and save it somewhere safe. Please note that you can only see it once on creation.

  8. Now you can use the API key to access the OpenAI-compatible API. For example, using curl:

export GPUSTACK_API_KEY=your_api_key
curl http://your_gpustack_server_url/v1-openai/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GPUSTACK_API_KEY" \
  -d '{
    "model": "llama3.2",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ],
    "stream": true
  }'
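
Other OpenAI-style endpoints are served under the same path for matching model types. For example, here is a hedged embeddings request; the model name bge-m3 is an assumed deployment name for an embedding model you have running:

# Assumes an embedding model deployed under the name "bge-m3" (assumption);
# the endpoint mirrors the OpenAI embeddings API under /v1-openai.
curl http://your_gpustack_server_url/v1-openai/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GPUSTACK_API_KEY" \
  -d '{
    "model": "bge-m3",
    "input": "GPUStack is an open-source GPU cluster manager."
  }'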

Supported Platforms

  • macOS
  • Linux
  • Windows

Supported Accelerators

  • NVIDIA CUDA (Compute Capability 6.0 and above)
  • Apple Metal (M-series chips)
  • AMD ROCm
  • Ascend CANN
  • Hygon DTK
  • Moore Threads MUSA
  • Iluvatar Corex

We plan to support the following accelerators in future releases.

  • Intel oneAPI
  • Qualcomm AI Engine

Supported Models

GPUStack uses llama-box (bundled llama.cpp and stable-diffusion.cpp server), vLLM, Ascend MindIE and vox-box as the backends and supports a wide range of models. Models from the following sources are supported:

  1. Hugging Face

  2. ModelScope

  3. Local File Path

Example Models:

Category                        Models
Large Language Models (LLMs)    Qwen, LLaMA, Mistral, DeepSeek, Phi, Gemma
Vision Language Models (VLMs)   Llama3.2-Vision, Pixtral, Qwen2.5-VL, LLaVA, InternVL2.5
Diffusion Models                Stable Diffusion, FLUX
Embedding Models                BGE, BCE, Jina
Reranker Models                 BGE, BCE, Jina
Audio Models                    Whisper (Speech-to-Text), CosyVoice (Text-to-Speech)

For the full list of supported models, please refer to the supported models section in the inference backends documentation.
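
The examples in Getting Started already show both reference styles: a plain model name (llama3.2) and a fully qualified Hugging Face reference. A sketch of the qualified form, with placeholder repository and file names rather than a verified model:

# Hypothetical placeholders: substitute a real GGUF repository and file from
# Hugging Face. The pattern mirrors the draw example in Getting Started.
gpustack chat hf.co/<org>/<repo>:<model-file>.gguf "tell me a joke."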

OpenAI-Compatible APIs

GPUStack serves OpenAI-compatible APIs under the /v1-openai path:

For example, you can use the official OpenAI Python API library to consume the APIs:

from openai import OpenAI
client = OpenAI(base_url="http://your_gpustack_server_url/v1-openai", api_key="your_api_key")

completion = client.chat.completions.create(
  model="llama3.2",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)

print(completion.choices[0].message)
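
To verify connectivity and your API key, you can also list the models being served; a sketch assuming the standard OpenAI models route is exposed under the same /v1-openai path:

# List the models currently served, to sanity-check the endpoint and key.
curl http://your_gpustack_server_url/v1-openai/models \
  -H "Authorization: Bearer $GPUSTACK_API_KEY"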

GPUStack users can generate their own API keys in the UI.

Documentation

Please see the official docs site for complete documentation.

Build

  1. Install Python (version 3.10 to 3.12).

  2. Run make build.

You can find the built wheel package in the dist directory.
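
To smoke-test the result, install the wheel into a clean virtual environment; the exact filename depends on the version that was built:

# The wheel filename varies with the built version; the glob below assumes
# a single build artifact in dist/.
pip install dist/gpustack-*.whl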

Contributing

Please read the Contributing Guide if you're interested in contributing to GPUStack.

Join Community

If you have any issues or suggestions, feel free to join our Community for support.

License

Copyright (c) 2024 The GPUStack authors

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License in the LICENSE file.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.