# BatDetect2
<img align="left" width="64" height="64" src="ims/bat_icon.png">

Code for detecting and classifying bat echolocation calls in high frequency audio recordings.

## Getting started

### Python Environment

We recommend using an isolated Python environment to avoid dependency issues. Choose one of the following options:

* Install the Anaconda Python 3.10 distribution for your operating system from [here](https://www.continuum.io/downloads). Create a new environment and activate it:

```bash
conda create -y --name batdetect2 python==3.10
conda activate batdetect2
```

* If you already have Python installed (version >= 3.8, < 3.11) and prefer using virtual environments, then:

```bash
python -m venv .venv
source .venv/bin/activate
```

### Installing BatDetect2

You can use pip to install `batdetect2`:

```bash
pip install batdetect2
```

Alternatively, download this code from the repository (by clicking on the green button at the top right) and unzip it.
Once unzipped, run this from the extracted folder:

```bash
pip install .
```

Make sure you have the environment activated before installing `batdetect2`.
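
To confirm the installation worked, a quick sanity check (a minimal sketch, assuming the environment above is active) is to import the package from Python:

```python
# Minimal check (assumes the environment created above is activated and
# batdetect2 is installed): the import should succeed without errors.
from batdetect2 import api

print("batdetect2 API loaded from module:", api.__name__)
```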

## Try the model
1) You can try a demo of the model (for UK species) on [huggingface](https://huggingface.co/spaces/macaodha/batdetect2).

2) Alternatively, click [here](https://colab.research.google.com/github/macaodha/batdetect2/blob/master/batdetect2_notebook.ipynb) to run the model using Google Colab. You can also run this notebook locally.

## Running the model on your own data

After following the above steps to install the code, you can run the model on your own data.

### Using the command line

You can run the model by opening the command line and typing:

```bash
batdetect2 detect AUDIO_DIR ANN_DIR DETECTION_THRESHOLD
```
e.g.
```bash
batdetect2 detect example_data/audio/ example_data/anns/ 0.3
```

`AUDIO_DIR` is the path on your computer to the audio wav files of interest.
`ANN_DIR` is the path on your computer where the model predictions will be saved. The model will output both `.csv` and `.json` results for each audio file.
`DETECTION_THRESHOLD` is a number between 0 and 1 specifying the cut-off threshold applied to the calls. A smaller number will result in more calls detected, but with the chance of introducing more mistakes.
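
The exact fields in the saved predictions can vary, so as a rough, schema-agnostic sketch (the file name below is only a placeholder for one of the `.json` files written to your `ANN_DIR`), you can inspect a result like this:

```python
# Illustrative sketch: peek at one of the JSON prediction files written to ANN_DIR.
# The path below is a placeholder; point it at a real output file on your machine.
import json

with open("example_data/anns/your_recording.json") as f:
    preds = json.load(f)

# Print the top-level structure so you can see which fields the detections carry
# before wiring the results into your own analysis.
if isinstance(preds, dict):
    print(sorted(preds.keys()))
else:
    print(type(preds).__name__, len(preds))
```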

There are also optional arguments, e.g. you can request that the model outputs features (i.e. estimated call parameters) such as duration, max_frequency, etc. by setting the flag `--spec_features`. These will be saved as `*_spec_features.csv` files:
`batdetect2 detect example_data/audio/ example_data/anns/ 0.3 --spec_features`

You can also specify which model to use by setting the `--model_path` argument. If not specified, it will default to using a model trained on UK data, e.g.
`batdetect2 detect example_data/audio/ example_data/anns/ 0.3 --model_path models/Net2DFast_UK_same.pth.tar`

### Using the Python API

If you prefer to process your data within a Python script, then you can use the `batdetect2` Python API.

```python
from batdetect2 import api

AUDIO_FILE = "example_data/audio/20170701_213954-MYOMYS-LR_0_0.5.wav"

# Process a whole file
results = api.process_file(AUDIO_FILE)

# Load audio and compute spectrograms
audio = api.load_audio(AUDIO_FILE)
spec = api.generate_spectrogram(audio)

# Process the audio or the spectrogram with the model
detections, features, spec = api.process_audio(audio)
detections, features = api.process_spectrogram(spec)

# You can integrate the detections or the extracted features
# into your custom analysis pipeline
# ...
```
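
For example, here is a short sketch (assuming your recordings are `.wav` files in `example_data/audio/`) that batch-processes a directory by calling `api.process_file` on each file:

```python
# Hedged sketch: apply the documented api.process_file call to every .wav file
# in a directory and collect the results keyed by file path.
from pathlib import Path

from batdetect2 import api

AUDIO_DIR = Path("example_data/audio")  # adjust to point at your own recordings

results = {}
for wav_path in sorted(AUDIO_DIR.glob("*.wav")):
    results[str(wav_path)] = api.process_file(str(wav_path))

print(f"Processed {len(results)} file(s)")
```
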
## Training the model on your own data
Take a look at the steps outlined in the finetuning readme [here](bat_detect/finetune/readme.md) for a description of how to train your own model.

## Data and annotations
The raw audio data and annotations used to train the models in the paper will be added soon.
The audio interface used to annotate audio data for training and evaluation is available [here](https://github.com/macaodha/batdetect2_GUI).

## Warning
The models developed and shared as part of this repository should be used with caution.
While they have been evaluated on held-out audio data, great care should be taken when using the model outputs for any form of biodiversity assessment.
Your data may differ, and as a result it is very strongly recommended that you validate the model first using data with known species to ensure that the outputs can be trusted.

## FAQ
For more information, please consult our [FAQ](faq.md).

## Reference
If you find our work useful in your research, please consider citing our paper, which you can find [here](https://www.biorxiv.org/content/10.1101/2022.12.14.520490v1):
```
@article{batdetect2_2022,
    ...
}
```
## Acknowledgements
Thanks to all the contributors who spent time collecting and annotating audio data.