~/Projects/faster-whisper
git clone https://code.lsong.org/faster-whisper
Commit: cda834c8ea76c2cab9da19031815c1e937a88c7f
Author: Guillaume Klein <[email protected]>
Date: 2023-02-16 17:01:19 +0100
Diffstat:
README.md        | 3 ---
requirements.txt | 2 +-
Update CTranslate2 to 3.6.0
diff --git a/README.md b/README.md
index 6e627cd49bb2193e0e1b5272db1e3fbceaf8e8d5..9d1d5f068923b1a181b1ca558b1032d23ee83a5c 100644
--- a/README.md
+++ b/README.md
@@ -50,9 +50,6 @@
 GPU execution requires the NVIDIA libraries cuBLAS 11.x and cuDNN 8.x to be installed on the system. Please refer to the [CTranslate2 documentation](https://opennmt.net/CTranslate2/installation.html).
 
 This repository demonstrates how to implement the Whisper transcription using [CTranslate2](https://github.com/OpenNMT/CTranslate2/), which is a fast inference engine for Transformer models.
-This repository demonstrates how to implement the Whisper transcription using [CTranslate2](https://github.com/OpenNMT/CTranslate2/), which is a fast inference engine for Transformer models.
-
-This repository demonstrates how to implement the Whisper transcription using [CTranslate2](https://github.com/OpenNMT/CTranslate2/), which is a fast inference engine for Transformer models.
 This implementation is up to 4 times faster than [openai/whisper](https://github.com/openai/whisper) for the same accuracy while using less memory. The efficiency can be further improved with 8-bit quantization on both CPU and GPU.
 
 ### Model conversion
diff --git a/requirements.txt b/requirements.txt
index 8b6def3b29246d58b421c2a05adb59b52497a39a..160f19d6bd170b5d3c635cdbb805d39a694a52bc 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,3 +1,3 @@
 av==10.*
-ctranslate2>=3.5.1,<4
+ctranslate2>=3.6,<4
 tokenizers==0.13.*
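The requirements.txt change raises the lower bound of the ctranslate2 pin from 3.5.1 to 3.6 while keeping the `<4` ceiling. As a hypothetical sketch (not part of the commit), the new specifier `>=3.6,<4` can be checked against a version string with a small helper; `satisfies_pin` is an assumed name, not anything from the repository:

```python
def satisfies_pin(version_str: str) -> bool:
    """Check a version string against the requirement 'ctranslate2>=3.6,<4'.

    Only compares the (major, minor) components, which is enough for
    this particular range; a real resolver would use a full version parser.
    """
    major_minor = tuple(int(p) for p in version_str.split(".")[:2])
    return (3, 6) <= major_minor < (4, 0)
```

For example, `satisfies_pin("3.6.0")` accepts the newly required release, while the previously allowed `"3.5.1"` is now rejected.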