-'To use the desired version of vLLM/llama-box/vox-box, the system will automatically create a virtual environment in the online environment to install the corresponding version. After a GPUStack upgrade, the backend version will remain fixed. {link}',
+'To use the desired version of {backend}, the system automatically creates a virtual environment (in online environments) and installs the corresponding version. After a GPUStack upgrade, the backend version remains fixed. {link}',
'models.form.gpuselector':'GPU Selector',
'models.form.backend.llamabox':
'For GGUF format models, supports Linux, macOS, and Windows.',
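The change above replaces the hardcoded `vLLM/llama-box/vox-box` list with a `{backend}` placeholder, so one message serves every backend. A minimal sketch of how such ICU-style placeholders get substituted at render time, assuming a simple string-interpolation helper; the message id `models.form.backend.versionTips` and the standalone `formatMessage` function are hypothetical stand-ins, and real code would go through the app's i18n library:

```typescript
// Hypothetical message table; the id below is illustrative, not the real key.
const messages: Record<string, string> = {
  'models.form.backend.versionTips':
    'To use the desired version of {backend}, the system automatically ' +
    'creates a virtual environment (in online environments) and installs ' +
    'the corresponding version. After a GPUStack upgrade, the backend ' +
    'version remains fixed. {link}',
};

// Substitute each {placeholder} with its value; unknown placeholders pass through.
function formatMessage(id: string, values: Record<string, string>): string {
  return (messages[id] ?? id).replace(/\{(\w+)\}/g, (match, key) =>
    key in values ? values[key] : match,
  );
}

// The same message now covers vLLM, llama-box, and vox-box:
console.log(formatMessage('models.form.backend.versionTips', {
  backend: 'vLLM',
  link: '<a href="#">Learn more</a>',
}));
```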
@@ -114,6 +114,8 @@ export default {
'models.table.vram.allocated':'Allocated VRAM',
'models.form.backend.warning':
'GGUF format models use the llama-box backend.',
+'models.form.ollama.warning':
+'Ollama models are deployed using the llama-box backend.',
'models.form.backend.warning.llamabox':
'To use the llama-box backend, specify the full path to the model file (e.g., <span style="font-weight: 700">/data/models/model.gguf</span>). For sharded models, provide the path to the first shard (e.g., <span style="font-weight: 700">/data/models/model-00001-of-00004.gguf</span>).',
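The warning above encodes a concrete path rule: the full path to the `.gguf` file, and for sharded models the path to the first shard. A small sketch of a validator enforcing that rule; `isValidLlamaBoxPath` is a hypothetical helper written for illustration, not part of the codebase:

```typescript
// Hypothetical helper mirroring the rule the warning describes: llama-box
// needs the full file path, and sharded models must point at shard 00001.
function isValidLlamaBoxPath(path: string): boolean {
  if (!path.endsWith('.gguf')) return false;
  const shard = path.match(/-(\d{5})-of-\d{5}\.gguf$/);
  // Either an unsharded file, or the first shard of a sharded model.
  return shard === null || shard[1] === '00001';
}

isValidLlamaBoxPath('/data/models/model.gguf');                // true
isValidLlamaBoxPath('/data/models/model-00001-of-00004.gguf'); // true
isValidLlamaBoxPath('/data/models/model-00002-of-00004.gguf'); // false
```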
-'To use the desired version of vLLM/llama-box/vox-box, the system will automatically create a virtual environment in the online environment to install the corresponding version. After a GPUStack upgrade, the backend version will remain fixed. {link}',
+'To use the desired version of {backend}, the system automatically creates a virtual environment (in online environments) and installs the corresponding version. After a GPUStack upgrade, the backend version remains fixed. {link}',
'models.form.gpuselector':'Селектор GPU',
'models.form.backend.llamabox':
'Для моделей формата GGUF. Поддержка Linux, macOS и Windows.',
@@ -113,6 +113,8 @@ export default {
'models.table.vram.allocated':'Allocated VRAM',
'models.form.backend.warning':
'GGUF format models use the llama-box backend.',
+'models.form.ollama.warning':
+'Ollama models are deployed using the llama-box backend.',
'models.form.backend.warning.llamabox':`To use the llama-box backend, specify the full path to the model file (e.g., <span style="font-weight: 700">/data/models/model.gguf</span>). For sharded models, provide the path to the first shard (e.g., <span style="font-weight: 700">/data/models/model-00001-of-00004.gguf</span>).`,