GPUStack

Simple, scalable AI model deployment on GPU clusters


English | 简体中文 | 日本語



GPUStack is an open-source GPU cluster manager for running AI models.

Key Features

  • Broad GPU Compatibility: Seamlessly supports GPUs from various vendors across Apple Macs, Windows PCs, and Linux servers.
  • Extensive Model Support: Supports a wide range of models including LLMs, VLMs, image models, audio models, embedding models, and rerank models.
  • Flexible Inference Backends: Flexibly integrates with multiple inference backends including vLLM, Ascend MindIE, llama-box (llama.cpp & stable-diffusion.cpp) and vox-box.
  • Multi-Version Backend Support: Run multiple versions of inference backends concurrently to meet the diverse runtime requirements of different models.
  • Distributed Inference: Supports single-node and multi-node multi-GPU inference, including heterogeneous GPUs across vendors and runtime environments.
  • Scalable GPU Architecture: Easily scale up by adding more GPUs or nodes to your infrastructure.
  • Robust Model Stability: Ensures high availability with automatic failure recovery, multi-instance redundancy, and load balancing for inference requests.
  • Intelligent Deployment Evaluation: Automatically assess model resource requirements, backend and architecture compatibility, OS compatibility, and other deployment-related factors.
  • Automated Scheduling: Dynamically allocate models based on available resources.
  • Lightweight Python Package: Minimal dependencies and low operational overhead.
  • OpenAI-Compatible APIs: Fully compatible with OpenAI's API specifications for seamless integration.
  • User & API Key Management: Simplified management of users and API keys.
  • Real-Time GPU Monitoring: Track GPU performance and utilization in real time.
  • Token and Rate Metrics: Monitor token usage and API request rates.

Installation

Linux

If you are using NVIDIA GPUs, ensure Docker and the NVIDIA Container Toolkit are installed on your system. Then run the following command to start the GPUStack server:

docker run -d --name gpustack \
      --restart=unless-stopped \
      --gpus all \
      --network=host \
      --ipc=host \
      -v gpustack-data:/var/lib/gpustack \
      gpustack/gpustack

For more details on the installation or other GPU hardware platforms, please refer to the Installation Documentation.

After the server starts, run the following command to get the default admin password:

docker exec gpustack cat /var/lib/gpustack/initial_admin_password

Open your browser and navigate to http://your_host_ip to access the GPUStack UI. Use the default username admin and the password you retrieved above to log in.

macOS & Windows

A desktop installer is available for macOS and Windows — see the documentation for installation details.

Deploy a Model

  1. Navigate to the Catalog page in the GPUStack UI.

  2. Select the Qwen3 model from the list of available models.

  3. After the deployment compatibility checks pass, click the Save button to deploy the model.


  4. GPUStack will start downloading the model files and deploying the model. When the deployment status shows Running, the model has been deployed successfully.


  5. Click Playground - Chat in the navigation menu and verify that the model qwen3 is selected in the top-right Model dropdown. Now you can chat with the model in the UI playground.


Use the model via API

  1. Hover over the user avatar and navigate to the API Keys page, then click the New API Key button.

  2. Fill in the Name and click the Save button.

  3. Copy the generated API key and save it somewhere safe. Please note that you can only see it once on creation.

  4. You can now use the API key to access the OpenAI-compatible API endpoints provided by GPUStack. For example, use curl as follows:

# Replace `your_api_key` and `your_gpustack_server_url`
# with your actual API key and GPUStack server URL.
export GPUSTACK_API_KEY=your_api_key
curl http://your_gpustack_server_url/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GPUSTACK_API_KEY" \
  -d '{
    "model": "qwen3",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Tell me a joke."
      }
    ],
    "stream": true
  }'
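
The same streaming request can be issued from Python with the official OpenAI client library. This is a minimal sketch: the base URL, API key, and model name mirror the curl example above, and the loop prints tokens as they arrive.

from openai import OpenAI

client = OpenAI(
    base_url="http://your_gpustack_server_url/v1",
    api_key="your_api_key",
)

# stream=True makes the client yield chunks as tokens are generated.
stream = client.chat.completions.create(
    model="qwen3",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a joke."},
    ],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()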

Supported Platforms

  • Linux
  • macOS
  • Windows

Supported Accelerators

  • NVIDIA CUDA (Compute Capability 6.0 and above)
  • Apple Metal (M-series chips)
  • AMD ROCm
  • Ascend CANN
  • Hygon DTK
  • Moore Threads MUSA
  • Iluvatar Corex
  • Cambricon MLU

Supported Models

GPUStack uses vLLM, Ascend MindIE, llama-box (a bundled llama.cpp and stable-diffusion.cpp server), and vox-box as its inference backends and supports a wide range of models. Models from the following sources are supported:

  1. Hugging Face

  2. ModelScope

  3. Local File Path

Example Models

Large Language Models (LLMs): Qwen, LLaMA, Mistral, DeepSeek, Phi, Gemma
Vision Language Models (VLMs): Llama3.2-Vision, Pixtral, Qwen2.5-VL, LLaVA, InternVL3
Diffusion Models: Stable Diffusion, FLUX
Embedding Models: BGE, BCE, Jina, Qwen3-Embedding
Reranker Models: BGE, BCE, Jina, Qwen3-Reranker
Audio Models: Whisper (Speech-to-Text), CosyVoice (Text-to-Speech)

For the full list of supported models, please refer to the supported models section in the inference backends documentation.
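
As an illustration of how non-LLM models are consumed, the sketch below transcribes an audio file with a deployed speech-to-text model through the OpenAI-compatible API described in the next section. The deployment name whisper-large-v3 and the input file name are hypothetical; substitute whatever you deployed, and note that audio endpoint availability depends on the backend serving the model.

from openai import OpenAI

client = OpenAI(
    base_url="http://your_gpustack_server_url/v1-openai",
    api_key="your_api_key",
)

# Transcribe a local audio file with a deployed Whisper-style model.
with open("speech.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-large-v3",  # hypothetical deployment name
        file=audio_file,
    )

print(transcript.text)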

OpenAI-Compatible APIs

GPUStack serves OpenAI-compatible APIs under the /v1-openai path.

For example, you can use the official OpenAI Python API library to consume the APIs:

from openai import OpenAI
client = OpenAI(base_url="http://your_gpustack_server_url/v1-openai", api_key="your_api_key")

completion = client.chat.completions.create(
  model="llama3.2",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)

print(completion.choices[0].message)

GPUStack users can generate their own API keys in the UI.
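
Other OpenAI-compatible endpoints follow the same pattern. For example, here is a minimal sketch of an embeddings request, assuming an embedding model from the table above is deployed under the hypothetical name bge-m3:

from openai import OpenAI

client = OpenAI(
    base_url="http://your_gpustack_server_url/v1-openai",
    api_key="your_api_key",
)

# Embed a batch of input strings in a single request.
response = client.embeddings.create(
    model="bge-m3",  # hypothetical deployment name
    input=[
        "GPUStack is an open-source GPU cluster manager.",
        "It serves OpenAI-compatible APIs.",
    ],
)

for item in response.data:
    print(len(item.embedding))  # dimensionality of each returned vector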

Documentation

Please see the official docs site for complete documentation.

Build

  1. Install Python (version 3.10 to 3.12).

  2. Run make build.

You can find the built wheel package in the dist directory.

Contributing

Please read the Contributing Guide if you're interested in contributing to GPUStack.

Join Community

If you have any issues or suggestions, feel free to join our Community for support.

License

Copyright (c) 2024 The GPUStack authors

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License in the LICENSE file.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.