# Troubleshooting
Issues specific to Llamactl deployment and operation.
## Configuration Issues

### Invalid Configuration

Problem: Invalid configuration preventing startup

Solutions:

1. Use a minimal configuration and reintroduce settings one at a time.
2. Check data directory permissions.

Both steps are sketched after this list.
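A minimal sketch of both steps, assuming the config lives at `~/.config/llamactl/config.yaml` with a `server` section, and that the data directory defaults to `~/.local/share/llamactl` (the same path used for logs later on this page). Verify both assumptions against your llamactl version's configuration reference.

```bash
# Hypothetical minimal config; the section and key names are assumptions,
# so check the llamactl configuration reference for the real schema.
cat > ~/.config/llamactl/config.yaml << 'EOF'
server:
  host: 127.0.0.1
  port: 8080
EOF

# Make sure the data directory exists and is writable by the llamactl user
ls -ld ~/.local/share/llamactl
chmod u+rwx ~/.local/share/llamactl
```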
## Instance Management Issues

### Model Loading Failures
Problem: Instance fails to start with model loading errors
Common Solutions:

- llama-server not found: Ensure the llama-server binary is in your PATH (see the checks after this list)
- Wrong model format: Ensure the model is in GGUF format
- Insufficient memory: Use a smaller model or reduce the context size
- Path issues: Use absolute paths to model files
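A couple of quick shell checks for the PATH, format, and path cases:

```bash
# Verify the llama-server binary is on PATH
which llama-server

# Confirm the model file exists at an absolute path and is GGUF
# (every GGUF file starts with the magic bytes "GGUF")
ls -lh /path/to/model.gguf
head -c 4 /path/to/model.gguf; echo
```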
### Memory Issues
Problem: Out of memory errors or system becomes unresponsive
Solutions:

1. Reduce the context size (see the example after this list).
2. Use quantized models: try Q4_K_M instead of higher-precision variants.
3. Use smaller model variants (7B instead of 13B).
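Context size directly drives KV-cache memory, so lowering it is the quickest win. With llama-server this is the `--ctx-size` flag (the value below is illustrative):

```bash
# A smaller context window means a smaller KV cache and lower memory use
llama-server --model /path/to/model.gguf --ctx-size 2048
```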
### GPU Configuration
Problem: GPU not being used effectively
Solutions:

1. Configure GPU layers, as in the example below.
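The `--n-gpu-layers` flag (the same one used in the direct-testing example later on this page) controls how many model layers are offloaded to the GPU; raise it until the whole model fits, or lower it if you run out of VRAM:

```bash
# Offload 35 layers to the GPU; tune the number to fit your VRAM
llama-server --model /path/to/model.gguf --n-gpu-layers 35

# On NVIDIA systems, confirm the process actually shows up on the GPU
nvidia-smi
```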
### Advanced Instance Issues
Problem: Complex model loading, performance, or compatibility issues
Since llamactl uses llama-server under the hood, many instance-related issues are actually llama.cpp issues. For advanced troubleshooting:
Resources:
- llama.cpp Documentation: https://github.com/ggml-org/llama.cpp
- llama.cpp Issues: https://github.com/ggml-org/llama.cpp/issues
- llama.cpp Discussions: https://github.com/ggml-org/llama.cpp/discussions
Testing directly with llama-server:

```bash
# Test your model and parameters directly with llama-server
llama-server --model /path/to/model.gguf --port 8081 --n-gpu-layers 35
```

This helps determine whether the issue lies in llamactl or in the underlying llama.cpp/llama-server.
## API and Network Issues

### CORS Errors
Problem: Web UI shows CORS errors in browser console
Solutions:

1. Configure allowed origins, as sketched below.
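A hypothetical config sketch; the `allowed_origins` key and its placement under `server` are assumptions, so confirm the real key name in the llamactl configuration reference:

```bash
# Hypothetical key names; verify against the llamactl config reference.
# Note this overwrites the config file rather than merging into it.
cat > ~/.config/llamactl/config.yaml << 'EOF'
server:
  host: 0.0.0.0
  port: 8080
  allowed_origins:
    - "http://localhost:3000"
    - "https://your-domain.example"
EOF
```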
### Authentication Issues
Problem: API requests failing with authentication errors
Solutions:

1. Disable authentication temporarily to confirm it is the cause.
2. Configure API keys.
3. Send the correct Authorization header (see the example after this list).
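A sketch of an authenticated request against the instances endpoint used elsewhere on this page; the Bearer scheme is an assumption, so confirm the expected header format in the llamactl API docs:

```bash
# Assumed Bearer-token scheme; confirm against the llamactl API docs
curl -H "Authorization: Bearer your-api-key" \
  http://localhost:8080/api/v1/instances
```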
## Debugging and Logs

### Viewing Instance Logs
```bash
# Get instance logs via API
curl http://localhost:8080/api/v1/instances/{name}/logs

# Or check log files directly
tail -f ~/.local/share/llamactl/logs/{instance-name}.log
```
### Enable Debug Logging
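The exact switch depends on your llamactl version; a common pattern is a log-level environment variable, and the variable name below is purely an assumption:

```bash
# Hypothetical variable name; check the llamactl docs for the real
# log-level setting (it may be a config key or CLI flag instead).
LLAMACTL_LOG_LEVEL=debug llamactl
```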
## Getting Help
When reporting issues, include:

- System information (see the commands below)
- Configuration file (remove sensitive keys)
- Relevant log output
- Steps to reproduce the issue
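A few commands that capture the basics; the `--version` flags are assumptions (many builds support them, but verify yours):

```bash
# OS, kernel, and architecture
uname -a

# Tool versions (the --version flags are assumed; verify for your builds)
llamactl --version
llama-server --version
```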