git clone https://code.lsong.org/faster-whisper
Commit: e44a8c7ba056bbdb3a9ac94a37ba87b765d7ced3
Author: Guillaume Klein <[email protected]>
Date:   2023-03-22 21:07:27 +0100

    Update the README following the PyPI release

Diffstat:
 README.md | 16 ++++++++++------
diff --git a/README.md b/README.md
index 98c4fca37ecfbc74d328ded5e829b3213181522d..4eb719d140903b950436daf4fa41577299a76bad 100644
--- a/README.md
+++ b/README.md
@@ -37,25 +37,29 @@
 ## Installation
 
 ```bash
-pip install -e .[conversion]
+pip install faster-whisper
 ```
 
-The model conversion requires the modules `transformers` and `torch` which are installed by the `[conversion]` requirement. Once a model is converted, these modules are no longer needed and the installation could be simplified to:
+The model conversion script requires the modules `transformers` and `torch` which can be installed with the `[conversion]` extra requirement:
 
 ```bash
-pip install -e .
+pip install faster-whisper[conversion]
 ```
 
-It is also possible to install the module without cloning the Git repository:
+**Other installation methods:**
 
 ```bash
 # Install the master branch:
-pip install "faster-whisper @ https://github.com/guillaumekln/faster-whisper/archive/refs/heads/master.tar.gz"
+pip install --force-reinstall "faster-whisper @ https://github.com/guillaumekln/faster-whisper/archive/refs/heads/master.tar.gz"
 
 # Install a specific commit:
-pip install "faster-whisper @ https://github.com/guillaumekln/faster-whisper/archive/<commit>.tar.gz"
+pip install --force-reinstall "faster-whisper @ https://github.com/guillaumekln/faster-whisper/archive/<commit>.tar.gz"
+
+# Install for development:
+git clone https://github.com/guillaumekln/faster-whisper.git
+pip install -e faster-whisper/
 ```
 
 ### GPU support
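Once installed from PyPI, the package is used from Python. The sketch below is illustrative and not part of this commit: `transcribe_file` and `format_segment` are hypothetical helper names, and the model-directory argument is an assumption (at this stage of the project, `WhisperModel` loaded a CTranslate2-converted model from a local path).

```python
def format_segment(start, end, text):
    """Render one transcription segment as '[0.00s -> 2.50s] hello'."""
    return "[%.2fs -> %.2fs] %s" % (start, end, text)


def transcribe_file(model_dir, audio_path):
    """Transcribe an audio file with faster-whisper (requires `pip install faster-whisper`).

    model_dir is assumed to point at a CTranslate2-converted Whisper model.
    Returns the formatted segments as a list of strings.
    """
    from faster_whisper import WhisperModel

    model = WhisperModel(model_dir)
    segments, info = model.transcribe(audio_path, beam_size=5)
    return [format_segment(s.start, s.end, s.text) for s in segments]
```

The `transcribe` call returns a generator of segments, so the list comprehension above is where the actual decoding work happens.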