The baRulho package is intended to facilitate acoustic analysis of (animal) sound transmission experiments. Such studies typically aim to quantify changes in signal structure when transmitted in a given habitat by broadcasting and re-recording animal sounds at increasing distances. We will refer to these changes in signal structure as ‘degradation’ for the sake of simplicity. The package offers a workflow with functions to prepare the data set for analysis as well as to calculate and visualize several degradation metrics. baRulho builds upon functions and data formats from the warbleR and seewave packages, so some experience with these packages is advised.

The main features of the package are:

  • The use of loops to apply tasks through acoustic signals referenced in a selection table (sensu warbleR)
  • The production of image files with graphic representations of sound in time and/or frequency that let users verify acoustic analyses
  • The use of extended selection tables (sensu warbleR) as the object format to input acoustic data and annotations (except for atmospheric_attenuation()) and to output results
  • The use of parallelization to distribute tasks among several cores to improve computational efficiency

The package can be installed and loaded from CRAN as follows:

# From CRAN would be
install.packages("baRulho")

# load package
library(baRulho)

To install the latest development version from GitHub you will need the R package devtools:

# From github
devtools::install_github("maRce10/baRulho")

# load package
library(baRulho)

# also set a working directory, for this example we will use a temporary
# directory
td <- tempdir()

For this vignette we will also need a few more packages:

library(warbleR)
library(ggplot2)
library(viridis)

Inputting acoustic data and annotations

The package requires the data to be input as extended selection tables. An extended selection table is an object class in R that contains both the annotations (locations of signals in time and frequency) and the corresponding acoustic data as wave objects. Therefore, these are self-contained objects since the original sound files are no longer needed to perform acoustic analyses. These objects are created by the selection_table() function from warbleR. Take a look at the intro to warbleR vignette for more details.
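As a minimal sketch of this step, the following creates an extended selection table from the example recording and annotations that ship with warbleR (these stand in for real transmission-experiment data; `Phae.long1` is an example wave object and `lbh_selec_table` its annotations):

```r
# minimal sketch using warbleR's built-in example data
library(warbleR)

# load an example wave object and its annotations
data(list = c("Phae.long1", "lbh_selec_table"))

# write the wave to a temporary directory so selection_table() can find it
td <- tempdir()
tuneR::writeWave(Phae.long1, file.path(td, "Phae.long1.wav"))

# keep only the annotations referring to that sound file
st <- lbh_selec_table[lbh_selec_table$sound.files == "Phae.long1.wav", ]

# create the extended selection table; the acoustic data are embedded
# as wave objects, so the original file is no longer needed
est <- selection_table(X = st, extended = TRUE, confirm.extended = FALSE,
    pb = FALSE, path = td)

is_extended_selection_table(est)
```

Once created, `est` can be passed directly to baRulho functions in the steps below.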

Glossary

-Model signal: signal in which transmission properties will be studied, usually found in the original field recordings or synthetic sound files.

-Reference signal: signal to use as a pattern to compare against. Usually created by re-recording a model signal broadcast at 1 m from the source (speaker).

-Signal type: signal category. For instance, song types (e.g. A, B, C) or call types (e.g. alert, foraging).

-Ambient noise: energy from background sounds in the recording, excluding signals of interest.

-Test signal: signals re-recorded far from the source to test for transmission/degradation (also referred to as ‘re-recorded’ signals).

-Degradation: term used to describe any changes in the structure of a signal when transmitted in a given habitat (note that there is no agreement on this terminology in the scientific community).

 


Workflow of sound processing and analysis

A common sequence of steps to experimentally test hypotheses related to signal transmission is depicted in the following diagram:

[Figure: analysis workflow diagram]

 

baRulho offers functions for critical steps in this workflow (those in black, including ‘checks’) that require acoustic data manipulation and analysis. Additional functions from warbleR can be used (and are used in this vignette) to complement those in baRulho. All these tools are presented following the above workflow.

 

Synthesize sounds

We often want to figure out how transmission properties vary across a range of frequencies. For instance, Tobias et al. (2010) studied whether acoustic adaptation (a special case of sensory drive; Morton 1975) could explain song evolution in Amazonian avian communities. To test this, the authors created synthetic pure-tone sounds that were broadcast and re-recorded in different habitats. This is the procedure for creating the synthetic sounds as they described it:

“Tones were synthesized at six different frequencies (0.5, 1.0, 2.0, 3.0, 4.0, and 5.0 kHz) to encompass the range of maximum avian auditory sensitivity (Dooling 1982). At each frequency, we generated two sequences of two 100-msec tones. One sequence had a relatively short interval of 150 msec, close to the mean internote interval in our sample (152± 4 msec). The other sequence had a longer interval of 250 msec, close to the mean maximum internote interval in our sample (283± 74 msec). The first sequence reflects a fast-paced song and the second a slower paced song (sensu Slabbekoorn et al. 2007). The master file (44100 Hz/16 bit WAV) thereby consisted of a series of 12 pairs of artificial 100-ms constant-frequency tones at six different frequencies (0.5, 1.0, 2.0, 3.0, 4.0, and 5.0 kHz).”

We can synthesize the same pure tones using the function sim_songs() from the warbleR package. The function requires 1) the number of tones to synthesize (argument n), 2) the duration of the tones (durs, in seconds), 3) the duration of the intervals (gaps, in seconds) and 4) the frequencies of the tones (freqs, in kHz). In addition, the argument diff.fun should be set to “pure.tone” and the argument harms to 1 to remove harmonics. In our case we need six tones of 100 ms at 0.5, 1, 2, 3, 4, and 5 kHz separated by intervals of 150 ms (at least for the first synthetic file described in Tobias et al. 2010). We can also get a selection table (sensu warbleR) with the time and frequency location of every sound, which will be required later to make the master sound file. To get the selection table we need to set the argument selec.table = TRUE. This can be done as follows:

# synthesize
synth.l <- sim_songs(n = 6, durs = 0.1, freqs = c(0.5, 1:5), harms = 1, gaps = 0.15, 
    diff.fun = "pure.tone", selec.table = TRUE, path = td)

# plot spectro
spectro(synth.l$wave, scale = FALSE, palette = reverse.topo.colors, grid = FALSE, 
    flim = c(0, 6), collevels = seq(-20, 0, 1))

[Figure: spectrogram of the synthetic sounds]

 

The function returns a list in which the first element is the selection table and the second one the wave object:

class(synth.l)
## [1] "list"
names(synth.l)
## [1] "selec.table" "wave"
synth.l$selec.table
##               sound.files selec start  end bottom.freq top.freq
##   2020-01-09_17:00:50.wav     1  0.15 0.25         0.5      0.5
##   2020-01-09_17:00:50.wav     2  0.40 0.50         1.0      1.0
##   2020-01-09_17:00:50.wav     3  0.65 0.75         2.0      2.0
##   2020-01-09_17:00:50.wav     4  0.90 1.00         3.0      3.0
##   2020-01-09_17:00:50.wav     5  1.15 1.25         4.0      4.0
##   2020-01-09_17:00:50.wav     6  1.40 1.50         5.0      5.0

 

The function also saves the associated ‘.wav’ file in the working directory (in this example tempdir()).

list.files(path = td, pattern = "\\.wav$")
## [1] "2020-01-09_17:00:50.wav"

 

Create master sound file for playback

The function master_sound_file() creates a master sound file (as you probably guessed) for playback experiments. The function takes wave objects from an extended selection table containing the model signals and concatenates them into a single sound file, with some silence between signals whose duration can be modified. master_sound_file() adds acoustic markers at the start and end of the playback that can be used to time-sync re-recorded signals, which streamlines quantification of acoustic degradation. The following example shows how to create a master sound file using the synthetic sounds generated above. For the synthetic sounds we need to add a little space between the top and bottom frequency because sim_songs() makes those values exactly the same for pure tones:

# extract selection table
st <- synth.l$selec.table

# add freq range (0.5 kHz)
st$bottom.freq <- st$bottom.freq - 0.25
st$top.freq <- st$top.freq + 0.25

# make an extended selection table
synth.est <- selection_table(X = st, extended = TRUE, pb = FALSE, confirm.extended = FALSE, 
    path = td)

# create master sound file
synth.master.sf <- master_sound_file(X = synth.est, file.name = "synthetic_master", 
    dest.path = td, gap.duration = 0.15)

 

The function saves the master sound file as a wave file and returns a selection table in the R environment with the time and frequency ‘coordinates’ of the signals in that file. We can look at the spectrogram of the output file using the warbleR function spectrograms() as follows:

# plot spectro (saved in working directory)
spectrograms(synth.master.sf, path = td, by.song = "sound.files", xl = 3, collevels = seq(-60, 
    0, 5), osci = TRUE)