https://github.com/ggerganov/llama.cpp
The main goal of llama.cpp is to run the LLaMA model using 4-bit integer quantization on a MacBook.
Supported platforms:
- macOS
- Linux
- Windows (via CMake)
- Docker
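On the Unix-like platforms above, a typical build-quantize-run session can be sketched as follows. This assumes `make` with a working C/C++ toolchain and an already-converted f16 GGML model file; the model paths are illustrative, and the exact `quantize` arguments vary by version.

```shell
# Clone the repository and build the tools (assumes make and a C/C++ toolchain)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Quantize an f16 GGML model down to 4-bit (q4_0).
# Paths are illustrative; the quantization-type argument differs across versions.
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin q4_0

# Run inference with the quantized model: -m selects the model file,
# -p sets the prompt, -n limits the number of tokens generated.
./main -m ./models/7B/ggml-model-q4_0.bin -n 128 -p "Building a website can be done in 10 simple steps:"
```

On Windows the same steps go through CMake instead of `make`, and the Docker images bundle the build so only the run step is needed.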
Supported models:
- LLaMA