Minor fixes to readme and updated requirements.

macaodha 2022-12-13 17:56:45 +00:00
parent 895c93c022
commit 20218e023c
6 changed files with 49 additions and 55 deletions

View File

@@ -1,38 +1,46 @@
# BatDetect2
<img align="left" width="64" height="64" src="ims/bat_icon.png">
Code for detecting and classifying bat echolocation calls in high frequency audio recordings.
### Getting started
-1) Install the Anaconda Python 3.9 distribution for your operating system from [here](https://www.continuum.io/downloads).
+1) Install the Anaconda Python 3.10 distribution for your operating system from [here](https://www.continuum.io/downloads).
2) Download this code from the repository (by clicking on the green button on top right) and unzip it.
3) Create a new environment and install the required packages:
-`conda create -y --name batdetect python==3.9`
-`conda activate batdetect`
+`conda create -y --name batdetect2 python==3.10`
+`conda activate batdetect2`
`conda install --file requirements.txt`
-### Try the model on Colab
+### Try the model
-Click [here](https://colab.research.google.com/github/macaodha/batdetect2/blob/master/batdetect2_notebook.ipynb) to run run the model using Colab.
+Click [here](https://colab.research.google.com/github/macaodha/batdetect2/blob/master/batdetect2_notebook.ipynb) to run the model using Google Colab.
+You can also run this notebook locally.
### Running the model on your own data
-After following the above steps you can run the model on your own data by opening the command line where the code is located and typing:
+After following the above steps to install the code you can run the model on your own data by opening the command line where the code is located and typing:
`python run_batdetect.py AUDIO_DIR ANN_DIR DETECTION_THRESHOLD`
-`AUDIO_DIR` is the path on your computer to the files of interest.
+`AUDIO_DIR` is the path on your computer to the audio wav files of interest.
-`ANN_DIR` is the path on your computer where the detailed predictions will be saved. The model will output both `.csv` and `.json` results for each audio file.
+`ANN_DIR` is the path on your computer where the model predictions will be saved. The model will output both `.csv` and `.json` results for each audio file.
`DETECTION_THRESHOLD` is a number between 0 and 1 specifying the cut-off threshold applied to the calls. A smaller number will result in more calls detected, but with the chance of introducing more mistakes:
`python run_batdetect.py example_data/audio/ example_data/anns/ 0.3`
-There are also optional arguments e.g. you can request that the model outputs features (i.e. call parameters) such as duration, max_frequency, etc. by setting the flag `--spec_features`. These will be saved as `*_spec_features.csv` files:
+There are also optional arguments, e.g. you can request that the model outputs features (i.e. estimated call parameters) such as duration, max_frequency, etc. by setting the flag `--spec_features`. These will be saved as `*_spec_features.csv` files:
`python run_batdetect.py example_data/audio/ example_data/anns/ 0.3 --spec_features`
You can also specify which model to use by setting the `--model_path` argument. If not specified, it will default to using a model trained on UK data.
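For context, the CLI documented above can also be driven from Python. A minimal sketch, assuming it is run from the repository root and using only the example paths and threshold shown in this README:

```python
# a minimal sketch: invoking the run_batdetect.py CLI from Python,
# with the example paths and threshold documented above
import subprocess

subprocess.run(
    ["python", "run_batdetect.py",
     "example_data/audio/", "example_data/anns/", "0.3"],
    check=True,  # raise CalledProcessError if the script exits with an error
)
```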
-### Requirements
-The code has been tested using Python3.9 with the following package versions described in `requirements.txt`.
+### Data and annotations
+The raw audio data and annotations used to train the models in the paper will be added soon.
+### Warning
+Note the models developed and shared as part of this repository should be used with caution.
+While they have been evaluated on held out audio data, great care should be taken when using the models for any form of biodiversity assessment.
+Your data may differ, and as a result it is very strongly recommended that you validate the model first using data with known species to ensure that the outputs can be trusted.
### FAQ
@@ -43,12 +51,12 @@ For more information please consult our [FAQ](faq.md).
If you find our work useful in your research please consider citing our paper:
```
@article{batdetect2_2022,
-    author = {TODO},
-    title = {TODO},
-    journal = {TODOD},
+    title = {Towards a General Approach for Bat Echolocation Detection and Classification},
+    author = {Mac Aodha, Oisin and Mart\'{i}nez Balvanera, Santiago and Damstra, Elise and Cooke, Martyn and Eichinski, Philip and Browning, Ella and Barataud, Michel and Boughey, Katherine and Coles, Roger and Giacomini, Giada and MacSwiney G., M. Cristina and K. Obrist, Martin and Parsons, Stuart and Sattler, Thomas and Jones, Kate E.},
+    journal = {bioRxiv},
    year = {2022}
}
```
### Acknowledgements
-TODO
+Thanks to all the contributors who spent time collecting and annotating audio data.

View File

@@ -74,7 +74,7 @@ def run_nms(outputs, params, sampling_rate):
def non_max_suppression(heat, kernel_size):
    # kernel can be an int or list/tuple
    if type(kernel_size) is int:
        kernel_size_h = kernel_size
        kernel_size_w = kernel_size
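For context, non-max suppression over a detection heatmap is typically implemented with max pooling: a location survives only if it is the peak of its neighbourhood. A minimal, self-contained sketch of that idea (names and shapes here are illustrative, not the repository's exact code):

```python
import torch
import torch.nn.functional as F

def nms_maxpool(heat, kernel_size=3):
    # keep a response only where it equals the maximum of its local
    # neighbourhood; all non-peak heatmap values are zeroed out
    pad = (kernel_size - 1) // 2
    hmax = F.max_pool2d(heat, kernel_size, stride=1, padding=pad)
    return heat * (hmax == heat).float()

heat = torch.rand(1, 1, 128, 64)   # (batch, channel, freq, time)
peaks = nms_maxpool(heat, kernel_size=9)
```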
@@ -94,7 +94,7 @@ def get_topk_scores(scores, K):
    topk_scores, topk_inds = torch.topk(scores.view(batch, -1), K)
    topk_inds = topk_inds % (height * width)
-    topk_ys = (topk_inds // width).long()
+    topk_ys = torch.div(topk_inds, width, rounding_mode='floor').long()
    topk_xs = (topk_inds % width).long()
    return topk_scores, topk_ys, topk_xs
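The `topk_ys` change replaces tensor floor division (`//`), which newer PyTorch versions flag with a deprecation warning, with the supported `torch.div(..., rounding_mode='floor')` spelling. A standalone check that the two forms agree when unflattening indices into (row, column) coordinates:

```python
import torch

width = 8
topk_inds = torch.tensor([5, 17, 42])

# old spelling (emits a deprecation warning on newer PyTorch):
# topk_ys = (topk_inds // width).long()
topk_ys = torch.div(topk_inds, width, rounding_mode='floor').long()
topk_xs = (topk_inds % width).long()
print(topk_ys.tolist(), topk_xs.tolist())  # [0, 2, 5] [5, 1, 2]
```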

View File

@@ -77,7 +77,7 @@ def load_audio_file(audio_file, time_exp_fact, target_samp_rate, scale=False):
    # resample - need to do this after correcting for time expansion
    sampling_rate_old = sampling_rate
    sampling_rate = target_samp_rate
-    audio_raw = librosa.resample(audio_raw, sampling_rate_old, sampling_rate, res_type='polyphase')
+    audio_raw = librosa.resample(audio_raw, orig_sr=sampling_rate_old, target_sr=sampling_rate, res_type='polyphase')
    # convert to float32 and scale
    audio_raw = audio_raw.astype(np.float32)
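This edit is needed because librosa 0.9 made the sample-rate arguments of `librosa.resample` keyword-only, so 0.8-style positional calls raise a TypeError. A minimal sketch (the rates below are illustrative, not the project's settings):

```python
import numpy as np
import librosa

audio = np.random.randn(44100).astype(np.float32)

# keyword-only in librosa >= 0.9; the 0.8-style positional call
# librosa.resample(audio, 44100, 22050) no longer works
resampled = librosa.resample(audio, orig_sr=44100, target_sr=22050,
                             res_type='polyphase')
```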
@@ -135,7 +135,7 @@ def gen_mag_spectrogram(x, fs, ms, overlap_perc):
    step = nfft - noverlap
    # compute spec
-    spec, _ = librosa.core.spectrum._spectrogram(x, power=1, n_fft=nfft, hop_length=step, center=False)
+    spec, _ = librosa.core.spectrum._spectrogram(y=x, power=1, n_fft=nfft, hop_length=step, center=False)
    # remove DC component and flip vertical orientation
    spec = np.flipud(spec[1:, :])
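`librosa.core.spectrum._spectrogram` is a private helper whose audio argument likewise became keyword-only (`y=`) in librosa 0.9, hence the one-character fix above. A sketch of the equivalent power-1 (magnitude) spectrogram via the public STFT API, with illustrative parameters:

```python
import numpy as np
import librosa

x = np.random.randn(25600).astype(np.float32)
nfft, step = 512, 128

# magnitude spectrogram via the public API, equivalent to the
# private _spectrogram(power=1, ...) call patched above
spec = np.abs(librosa.stft(y=x, n_fft=nfft, hop_length=step, center=False))

# drop the DC bin and flip so high frequencies sit at the top,
# mirroring gen_mag_spectrogram
spec = np.flipud(spec[1:, :])
```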

File diff suppressed because one or more lines are too long

View File

@@ -1,10 +1,8 @@
-librosa==0.8.1
-matplotlib==3.3.4
-numpy==1.20.2
-pandas==1.3.4
-scikit_learn==1.0.1
-scipy==1.5.3
-six==1.16.0
-torch==1.9.0
-torchaudio==0.9.0a0+33b2469
-torchvision==0.10.0
+librosa==0.9.2
+matplotlib==3.6.2
+numpy==1.23.4
+pandas==1.5.2
+scikit_learn==1.2.0
+torch==1.13.0
+torchaudio==0.13.0
+torchvision==0.14.0
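A minimal sketch for sanity-checking that an environment matches the new pins (`startswith` tolerates local build suffixes such as `+cu117` on the torch packages):

```python
# quick environment check against the pins in requirements.txt
import librosa, matplotlib, numpy, pandas, sklearn, torch, torchaudio, torchvision

expected = {librosa: "0.9.2", matplotlib: "3.6.2", numpy: "1.23.4",
            pandas: "1.5.2", sklearn: "1.2.0", torch: "1.13.0",
            torchaudio: "0.13.0", torchvision: "0.14.0"}
for mod, version in expected.items():
    assert mod.__version__.startswith(version), (mod.__name__, mod.__version__)
```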

View File

@@ -37,10 +37,10 @@ def main(args):
if __name__ == "__main__":
-    info_str = '\nBatDetect - Detection and Classification\n' + \
+    info_str = '\nBatDetect2 - Detection and Classification\n' + \
               ' Assumes audio files are mono, not stereo.\n' + \
               ' Spaces in the input paths will throw an error. Wrap in quotes "".\n' + \
-               ' Input files should be short in duration e.g. < 1 minute.\n'
+               ' Input files should be short in duration e.g. < 30 seconds.\n'
    print(info_str)
    parser = argparse.ArgumentParser()
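Since the banner now recommends inputs under 30 seconds, here is a hedged sketch for chunking longer recordings before running the CLI; the file name is illustrative, and `soundfile` is assumed available (it ships as a librosa dependency but is not pinned in requirements.txt):

```python
# split a long wav into <= 30 s chunks at its native sample rate
import librosa
import soundfile as sf

audio, sr = librosa.load("long_recording.wav", sr=None)
chunk = 30 * sr
for i in range(0, len(audio), chunk):
    sf.write(f"long_recording_part{i // chunk:03d}.wav",
             audio[i:i + chunk], sr)
```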
@@ -58,7 +58,7 @@ if __name__ == "__main__":
    parser.add_argument('--save_preds_if_empty', action='store_true', default=False, dest='save_preds_if_empty',
                        help='Save empty annotation file if no detections made.')
    parser.add_argument('--model_path', type=str, default='',
-                        help='Path to trained BatDetect model')
+                        help='Path to trained BatDetect2 model')
    args = vars(parser.parse_args())
    args['spec_slices'] = False  # used for visualization