Commit: 4a7129acd2e939b92d70dd568c746f2fa078232c
Author: Georgi Gerganov <[email protected]>
Date:   2023-03-25 16:30:32 +0200
Diffstat:
 README.md | 10 +---------

Remove obsolete information from README


diff --git a/README.md b/README.md
index 0830074bf5f4a8fdb4520972596e471370ab9288..8a84324b1ce13fd22bedf4dfaf993d8b1f95faf8 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,7 @@
 The main goal is to run the model using 4-bit quantization on a MacBook
 
 - Plain C/C++ implementation without dependencies
-- Apple silicon first-class citizen - optimized via ARM NEON
+- Apple silicon first-class citizen - optimized via ARM NEON and Accelerate framework
 - AVX2 support for x86 architectures
 - Mixed F16 / F32 precision
 - 4-bit quantization support
@@ -322,14 +322,6 @@
 ```bash
 docker run -v /llama/models:/models ghcr.io/ggerganov/llama.cpp:light -m /models/7B/ggml-model-q4_0.bin -p "Building a website can be done in 10 simple steps:" -n 512
 ```
-
-## Limitations
-
-- Probably the token sampling can be improved
-- The Accelerate framework is actually currently unused since I found that for tensor shapes typical for the Decoder,
-  there is no benefit compared to the ARM_NEON intrinsics implementation. Of course, it's possible that I simply don't
-  know how to utilize it properly. But in any case, you can even disable it with `LLAMA_NO_ACCELERATE=1 make` and the
-  performance will be the same, since no BLAS calls are invoked by the current implementation
 
 ### Contributing
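
For context, here is a minimal sketch of the two build invocations the removed "Limitations" note refers to, assuming the project's Makefile at this commit. The `LLAMA_NO_ACCELERATE=1 make` form is quoted from the removed text; the claim that performance was unchanged with it is the original note's, since at the time no BLAS calls were issued for the decoder path.

```bash
# Default build on Apple silicon; the updated README line now advertises
# ARM NEON together with the Accelerate framework.
make

# Build with the Accelerate framework disabled, as the removed note described.
LLAMA_NO_ACCELERATE=1 make
```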