Commit
fd3f3d748f5776e39c662868f3cd88e988156d6b
Author
Georgi Gerganov <[email protected]>
Date
2022-09-29 23:37:59 +0300
Diffstat
 README.md | 6 +++---

Update README.md


diff --git a/README.md b/README.md
index 143ebd6a79f8669172e4a499c30245fb79c3038f..9185a3c8aaab5f3ceb1cf5cb51fecfe1616983ca 100644
--- a/README.md
+++ b/README.md
@@ -27,7 +27,7 @@
 For a quick demo, simply run `make base.en`:
 
 # whisper.cpp
-# whisper.cpp
+whisper_model_load: model size  =   140.54 MB
 $ make base.en
 
 gcc -pthread -O3 -mavx -mavx2 -mfma -mf16c -c ggml.c
@@ -125,14 +125,14 @@ Note that `whisper.cpp` runs only with 16-bit WAV files, so make sure to convert your input before running the tool.
 For example, you can use `ffmpeg` like this:
 
 # whisper.cpp
-# whisper.cpp
+whisper_model_load: model size  =   140.54 MB
 ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
 ```
 
 Here is another example of transcribing a [3:24 min speech](https://upload.wikimedia.org/wikipedia/commons/1/1f/George_W_Bush_Columbia_FINAL.ogg) in less than a minute, using `medium.en` model:
 
 # whisper.cpp
-# whisper.cpp
+whisper_model_load: model size  =   140.54 MB
 $ ./main -m models/ggml-medium.en.bin -f samples/gb1.wav -t 8
 whisper_model_load: loading model from 'models/ggml-medium.en.bin'
 whisper_model_load: n_vocab       = 51864
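The diff touches the two usage examples in the README: converting input audio with `ffmpeg` and transcribing with `./main`. For context, here is a minimal sketch of that workflow as one script, assuming `ffmpeg` is installed, `whisper.cpp` has been built with `make`, and a ggml model file has been downloaded; the file names are illustrative, not from the commit:

```shell
#!/bin/sh
set -e

# whisper.cpp runs only with 16-bit WAV input, so convert the source
# audio to 16 kHz mono 16-bit PCM first (as the README notes).
ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav

# Transcribe with the medium English model using 8 threads,
# matching the invocation shown in the diff above.
./main -m models/ggml-medium.en.bin -f output.wav -t 8
```

The `-ar 16000 -ac 1 -c:a pcm_s16le` flags set the sample rate, channel count, and codec to the format whisper.cpp expects.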