<!-- For the future: Coveralls for code coverage -->

The FAVE-asr package provides a system for the automated transcription of sociolinguistic interview data on local machines, for use by aligners like [FAVE](https://github.com/JoFrhwld/FAVE) or the [Montreal Forced Aligner](https://montreal-forced-aligner.readthedocs.io/en/latest/). It provides functions to label different speakers in the same audio (diarization), transcribe speech, and output TextGrids with phrase- or word-level alignments.

## Example Use Cases

- You want a transcription of an interview for more detailed hand correction.
- You want to transcribe a large corpus and your analysis can tolerate a small error rate.
- You want to make an audio corpus into a text corpus.
- You want to know the number of speakers in an audio file.

For examples of how to use the package, see the [Usage](usage/) pages.
## Installation

To install fave-asr using pip, run the following command in your terminal:

```bash
pip install fave-asr
```

### Other software required

* `ffmpeg` is needed to process the audio. You can [download it from their website](https://ffmpeg.org/download.html).
## Not another transcription service

There are several services which automate the process of transcribing audio, including

- [DARLA CAVE](http://darla.dartmouth.edu/cave)
- [Otter AI](https://otter.ai/)

Unlike other services, `fave-asr` does not require uploading your data to other servers and instead focuses on processing audio on your own computer. Audio data can contain highly confidential information, and uploading this data to other services may not comply with ethical or legal data protection obligations. The goal of `fave-asr` is to serve those use cases where data protection makes local transcription necessary while making the process as seamless as cloud-based transcription services.

### Example

As an example, we'll transcribe an audio interview of Snoop Dogg by the 85 South Media podcast and output it as a TextGrid.
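Based on the project's earlier usage notes, a run might look like the following minimal sketch (the `pipeline` module name, the `transcribe_and_diarize` signature, and the model names are taken from those notes and may change as the package develops):

```python
import os

def transcribe_example(audio_file):
    """Sketch: diarize and transcribe one audio file with fave-asr's pipeline."""
    import pipeline  # the pipeline module distributed with fave-asr

    return pipeline.transcribe_and_diarize(
        audio_file,
        os.environ["HF_TOKEN"],  # your HuggingFace token
        "medium.en",             # recommended for English data; otherwise "large"
        "cpu",                   # use "cuda" if you can run on a GPU
    )
```

Writing the resulting segments out as a TextGrid is covered in the [Usage](usage/) pages.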
## HuggingFace models used

Artificial Intelligence models are powerful and in the wrong hands can be dangerous. The models used by fave-asr are cost-free, but you need to accept additional terms of use.

To use these models:

1. On HuggingFace, [create an account](https://huggingface.co/join) or [log in](https://huggingface.co/login)
Keep track of your token and keep it safe (e.g. don't accidentally upload it to GitHub).
We suggest creating an environment variable for your token so that you don't need to paste it into your files.

## Creating an environment variable for your token

Storing your tokens as environment variables is a good way to avoid accidentally leaking them. Instead of pasting the token into your code and deleting it before you commit, you can access it from Python with `os.environ["HF_TOKEN"]`. This also makes your code more readable: it's obvious what `HF_TOKEN` refers to, while a bare string of letters and numbers isn't clear.
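For example (a minimal sketch; `os.environ.get` returns `None` when the variable is unset, so a missing token is easy to detect):

```python
import os

# Read the token from the environment rather than hard-coding it.
hf_token = os.environ.get("HF_TOKEN")

if hf_token is None:
    print("HF_TOKEN is not set; see the platform instructions below.")
```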

### Linux and Mac

On Linux and Mac you can store your token in `.bashrc`:

1. Open `$HOME/.bashrc` in a text editor
2. At the end of that file, add the line `HF_TOKEN='<your token>' ; export HF_TOKEN`, replacing `<your token>` with [your HuggingFace token](https://hf.co/settings/tokens)
3. Add the changes to your current session using `source $HOME/.bashrc`
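The three steps above can also be done from the terminal (a sketch; substitute your actual HuggingFace token for `<your token>` before running, and note that this appends to `$HOME/.bashrc`):

```bash
# Append the export line to your shell startup file
# (substitute your real HuggingFace token for <your token>).
echo "HF_TOKEN='<your token>' ; export HF_TOKEN" >> "$HOME/.bashrc"

# Load the change into the current session.
source "$HOME/.bashrc"
```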
### Windows
On Windows, use the `setx` command to create an environment variable.

```
setx HF_TOKEN <your token>
```
You need to restart the command line afterwards for the environment variable to become available. If you try to use the variable in the same window where you set it, you will run into problems.