Fast and accurate GGUF models for your CPU
GGUF is a binary file format designed for efficient storage and fast large language model (LLM) loading with GGML, a C-based tensor library for machine learning.
GGUF encapsulates all necessary components for inference, including the tokenizer and code, within a single file. It supports the conversion of various language models, such as Llama 3, Phi, and Qwen2. Moreover, it facilitates model quantization to lower precisions to improve speed and memory efficiency on CPUs.
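Because everything needed for inference ships in the one file, loading a GGUF model is a single call. Here is a minimal sketch using the llama-cpp-python bindings (a separate project built on llama.cpp, not covered further in this article); the file name is a placeholder for any GGUF model on disk:

```python
# Minimal sketch: run inference from a single GGUF file with the
# llama-cpp-python bindings (pip install llama-cpp-python).
# The file name below is a placeholder.
from llama_cpp import Llama

# The tokenizer, vocabulary, and hyperparameters are all read from the
# GGUF file itself; no separate tokenizer or config files are needed.
llm = Llama(model_path="gemma-2-9b-it-Q4_K_M.gguf")

output = llm("What is the GGUF file format?", max_tokens=64)
print(output["choices"][0]["text"])
```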
We often write "GGUF quantization" but GGUF itself is only a file format, not a quantization method. There are several quantization algorithms implemented in llama.cpp to reduce the model size and serialize the resulting model in the GGUF format.
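In other words, the algorithm is selected when you run llama.cpp's llama-quantize tool; the GGUF file merely stores the result. A hedged sketch, assuming llama.cpp is already built in the current directory and a 16-bit GGUF file exists (paths are placeholders):

```python
# Sketch: apply one of llama.cpp's quantization algorithms to a GGUF file.
# The last argument names the algorithm/type, e.g. Q4_K_M, Q5_K_M, or Q8_0.
# Paths are placeholders; llama-quantize must already be built.
import subprocess

subprocess.run(
    [
        "./llama-quantize",
        "model-f16.gguf",     # input: 16-bit GGUF
        "model-Q4_K_M.gguf",  # output: quantized GGUF
        "Q4_K_M",             # quantization type (a K-Quantization variant)
    ],
    check=True,
)
```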
In this article, we will see how to accurately quantize an LLM and convert it to GGUF, using an importance matrix (imatrix) and the K-Quantization method. I provide the GGUF conversion code for Gemma 2 Instruct, using an imatrix. It works the same with other models supported by llama.cpp: Qwen2, Llama 3, Phi-3, etc. We will also see how to evaluate the accuracy of the quantization and the inference throughput of the resulting models.
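As a preview, the pipeline detailed in the rest of this article boils down to three steps: convert the Hugging Face checkpoint to a 16-bit GGUF with llama.cpp's convert_hf_to_gguf.py script, compute the importance matrix with llama-imatrix on a calibration text file, and quantize with that imatrix. A rough sketch, with all file names as placeholders:

```python
# Rough end-to-end sketch of the pipeline covered in this article.
# All file names are placeholders; llama.cpp's conversion script and
# binaries are assumed to be available in the current directory.
import subprocess

def run(cmd):
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Convert the Hugging Face checkpoint to a 16-bit GGUF file.
run(["python", "convert_hf_to_gguf.py", "gemma-2-9b-it",
     "--outfile", "gemma-2-9b-it-f16.gguf", "--outtype", "f16"])

# 2. Compute the importance matrix on a calibration text file.
run(["./llama-imatrix", "-m", "gemma-2-9b-it-f16.gguf",
     "-f", "calibration.txt", "-o", "imatrix.dat"])

# 3. Quantize with the imatrix, using a K-Quantization type.
run(["./llama-quantize", "--imatrix", "imatrix.dat",
     "gemma-2-9b-it-f16.gguf", "gemma-2-9b-it-Q4_K_M.gguf", "Q4_K_M"])
```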