| 15:40 |
-
Cock-a-doodle-doo: Audio Features of the Perceived Annoyance and Unpleasantness of Crowing Roosters
Christoph Reuter, Isabella Czedik-Eysenberg, Anja-Xiaoxing Cui, Marik Roos, Sarah Ambros, Jörg Jewanski, Matthias Eder, Jörg Mühlhans, Felix Klooss, Dijana Popovic, Veronika Weber, Matthias Bertsch, Michael Oehler
[Abstract]
With a sound pressure level of 142 dB(SPL) (measured at the rooster's ear, Claes et al. 2017), the rooster, which crows not only at
sunrise (Shimmura/Yoshimura 2013), ranks among the loudest of all domestic animals; its vocal power was already the stuff of legend in the
"Bremer Stadtmusikanten" (Grimm/Grimm 1819). Even at a greater distance, its crowing is often regarded as a disturbance of the peace and
is the cause of legal disputes (§12/§3(2) Landesimmissionsschutzgesetz).
Beyond the sound level, which sound features contribute to the perceived annoyance and unpleasantness of crowing
roosters?
To address this question, the crowing of 50 roosters was rated by 51 participants (26 m/25 f, age: 21-81, mean: 46) on
two scales from 1 to 100 for unpleasantness and annoyance. The rated sounds were analyzed with signal-analysis
toolboxes for 180 audio features, which were then examined for correlations with the listeners' ratings.
The results showed that annoyance ratings, much like unpleasantness ratings, are associated above all with strong roughness
and a high share of spectral energy around 4000 Hz. From this, a regression model can be derived by which 61% of
annoyance is predicted from roughness (beta = 0.404) and the amplitudes around 4000 Hz (beta = 0.630)
(p < 0.001). It is also noteworthy that older participants (>= 40 years, n = 26) perceive roosters as more annoying/unpleasant
than younger ones (n = 25; Mann-Whitney U test: annoyance: U = 229.0, p = 0.036; unpleasantness: U = 224.5, p = 0.029).
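As an illustration of the reported two-predictor model (61% explained variance; beta = 0.404 for roughness, beta = 0.630 for the 4000 Hz band), the sketch below fits a standardized linear regression on simulated data. All data values and variable names are illustrative assumptions; this is not the study's analysis code.

```python
import numpy as np

def fit_standardized_betas(X, y):
    """Fit y ~ X after z-scoring all variables; return standardized betas."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return betas

rng = np.random.default_rng(0)
n = 50  # one crow recording per rooster, as in the study design
roughness = rng.normal(size=n)       # simulated roughness values
energy_4khz = rng.normal(size=n)     # simulated 4 kHz band amplitudes
# Simulated annoyance ratings built from the reported effect directions
annoyance = 0.404 * roughness + 0.630 * energy_4khz + 0.3 * rng.normal(size=n)

betas = fit_standardized_betas(np.column_stack([roughness, energy_4khz]),
                               annoyance)
# Both predictors load positively; the 4 kHz band loads more strongly
```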
|
| 15:48 |
-
Effects of Irrelevant Speech on Immediate Serial Recall: The Role of Phonological Complexity
Abdullah Jelelati, Larissa Leist, Thomas Lachmann, Maria Klatte
[Abstract]
The "irrelevant sound effect" (ISE) is defined as the impairment of serial recall performance for visually presented verbal items by
task-irrelevant background sounds. The most prominent characteristic of the ISE is the "changing-state effect" (CSE): the ISE is
more pronounced when the irrelevant sound is composed of changing auditory tokens rather than repetitions of a single token.
The CSE has been attributed to the pre-attentive, obligatory processing of auditory streams, which, in the case of changing-state
streams, evoke serial order representations that interfere with the deliberate serial rehearsal of the list items. The disruptive
potential of speech in the irrelevant sound paradigm has made it the subject of several investigations, yet only a few have
focused on which speech-like features are crucial to produce disruption. In this experiment, we explored the roles of the
phonological complexity and changing-state character of the background speech in the ISE. Participants' serial recall
performance for visually presented digits was assessed under five sound conditions: 1. silent control, 2. consonant-vowel (CV)
steady-state syllables, 3. CV changing-state syllables, 4. CCVCC steady-state syllables, and 5. CCVCC changing-state syllables.
The results confirmed a significant CSE, but there was no statistical difference in ISE magnitude between sequences of simple
and complex speech tokens.
|
| 15:56 |
-
Position-dependent Emergence of the Auditory Looming Bias
Tobias Greif, Karolina Ignatiadis, Regina Pfennigschmidt, Brigitta Tóth, Robert Baumgartner
[Abstract]
The auditory looming bias denotes an increased salience of approaching sounds, compared to receding ones. Previous work suggests that this bias
constitutes an innate warning mechanism for potentially threatening stimuli that may have developed under evolutionary pressure. However, it
has not yet been investigated whether the position or direction a sound approaches from or recedes to influences the auditory looming bias. Here, we
manipulated the spectral properties of sounds to artificially generate looming or receding percepts from four different positions (front-up, front-down,
back-up, back-down). Fourteen listeners discriminated between looming and receding sounds while electroencephalography (EEG) was recorded.
Exploratory cluster-based permutation tests revealed that the auditory looming bias on a neural level was only elicited by stimuli that were presented
from the back, and not from other positions. We discuss potential reasons for this finding in light of the hypothesized evolutionary origin.
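A toy version of the cluster-based permutation statistic can make the analysis concrete: looming-minus-receding differences are sign-flipped across the 14 listeners to build a null distribution of maximal cluster masses. The threshold, cluster definition, and simulated single-channel data are simplified assumptions; real EEG analyses operate on full sensor-time data.

```python
import numpy as np

def max_cluster_mass(data, thresh):
    """Mass (summed t values) of the largest positive supra-threshold cluster."""
    t = data.mean(axis=0) / (data.std(axis=0, ddof=1) / np.sqrt(len(data)))
    best, cur = 0.0, 0.0
    for v in t:
        cur = cur + v if v > thresh else 0.0  # grow cluster or reset it
        best = max(best, cur)
    return best

def cluster_perm_test(diff, n_perm=500, thresh=2.0, seed=1):
    """diff: listeners x timepoints (looming minus receding). Returns p-value."""
    rng = np.random.default_rng(seed)
    observed = max_cluster_mass(diff, thresh)
    null = [max_cluster_mass(diff * rng.choice([-1, 1], size=(len(diff), 1)),
                             thresh) for _ in range(n_perm)]
    return (1 + sum(m >= observed for m in null)) / (n_perm + 1)

rng = np.random.default_rng(0)
diff = rng.normal(size=(14, 100))   # 14 listeners, 100 timepoints, no effect
diff[:, 40:60] += 1.5               # inject an effect, e.g. for back positions
p = cluster_perm_test(diff)         # small p: cluster unlikely under the null
```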
|
| 16:04 |
-
Examining the role of the pupil-linked arousal system in perceptual decision-making using a dynamic sound localization task
Fabian Dorok, David Meijer, Burcu Bayram, Roberto Barumerli, Tobias Greif, Michelle Spierings, Ulrich Pomper, Robert Baumgartner
[Abstract]
There is broad consensus that perceiving organisms combine sensory input with prior beliefs about the environment to make
precise perceptual judgements about hidden states in the sensory world. In dynamic environments where relevant statistical
properties can readily change, beliefs need to be updated constantly to remain reliable and relevant. Empirical evidence suggests
that the pupil-linked locus coeruleus noradrenaline (LC-NA) arousal system is a key physiological mechanism behind this updating
process. However, in the context of sound localization, it remains uncertain whether arousal-mediated belief updating can only be
observed when predictions are explicitly requested and when stimuli contain visual spatial information too. We seek to close this
gap by investigating the role of the pupil-linked arousal system in a dynamic sound localization task with auditory and audio-visual
stimulation conditions. Preliminary results drawn from behavioural data and latent variables extracted from a Bayesian inference
model provide first evidence that signatures of constant belief updating can indeed also be observed for implicit and automatic
perceptual judgements. Furthermore, the process seems to create weaker beliefs based on purely auditory stimuli, but it does not
strictly depend on visual spatial information.
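The kind of belief updating such Bayesian inference models formalize can be sketched as a conjugate-Gaussian update, in which a less reliable observation shifts the belief less, consistent with weaker beliefs under purely auditory stimulation. The function and all parameter values below are illustrative assumptions, not the study's model.

```python
def update_belief(mu, var, obs, obs_var):
    """One conjugate-Gaussian update of a location belief (mean, variance)."""
    gain = var / (var + obs_var)      # learning rate = relative precision
    mu_new = mu + gain * (obs - mu)   # shift the belief toward the observation
    var_new = (1.0 - gain) * var      # the belief becomes more precise
    return mu_new, var_new

# A less reliable (auditory-only) cue shifts the belief less than a more
# reliable (audio-visual) one, mirroring the "weaker beliefs" observation.
mu_a, _ = update_belief(0.0, 1.0, 10.0, obs_var=4.0)   # noisy auditory cue
mu_av, _ = update_belief(0.0, 1.0, 10.0, obs_var=0.5)  # precise AV cue
```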
|
| 16:12 |
-
Task-irrelevant Speech and Environmental Sounds Differentially Affect Serial Verbal and Spatial Short-term Memory in Children and Adults
Larissa Leist, Thomas Lachmann, Maria Klatte
[Abstract]
Short-term memory for visually presented material is reliably impaired by task-irrelevant sounds that the participants are instructed to ignore. This so-
called Irrelevant Sound Effect (ISE) has been attributed to attentional capture, and to specific interference between preattentive, automatic sound
processing and deliberate processes involved in retention of the memory lists. The ISE is stronger with changing speech tokens (e.g., words, syllables)
when compared to repetitions of single tokens (changing-state effect).
Aiming to further explore the roles of attention control and specific interference in the ISE, we analyzed the effects of changing-state and steady-state
syllables (Exp.1) and narrative, foreign speech and environmental sounds (Exp.2) on serial order reconstruction of visually presented verbal and spatial
items in children (n=218) and adults (n=178).
For the verbal task, the results show a greater disruption with syllable sequences in children when compared to adults (Exp.1) but age-equivalent
impairments due to narrative speech and non-speech sounds (Exp. 2). For the spatial task, performance was unaffected in adults. Children were
affected by narrative speech and non-speech sounds (Exp. 2) but not by steady-state or changing-state syllables (Exp. 1).
These findings indicate different mechanisms underlying the effects of background speech, changing-state syllables and environmental sounds.
|
| 16:20 |
-
Modeling the Temporal Weighting of Loudness and Temporal Loudness Integration
Martin Gottschalk, Jan Hots, Daniel Oberfeld-Twistel, Jesko Verhey
[Abstract]
The loudness (perceived intensity) of a sound depends on basic signal parameters such as level, bandwidth, and duration. For example, loudness increases with increasing duration, a phenomenon known as temporal integration of loudness. Recent measurements also show very consistently that the different temporal portions of a signal contribute to the loudness judgement to different degrees. For instance, the onset of a signal is more important for the loudness judgement than the rest of the signal (primacy effect). This primacy effect has been shown to be frequency-specific: for spectrally distant signal components, primacy effects occur independently of one another. Established dynamic loudness models have so far been unable to predict the primacy effect satisfactorily. A key problem is that, to predict temporal integration, these models assume that the time course of instantaneous loudness is smoothed with a low-pass filter and that the maximum of this smoothed function determines the overall loudness. This contribution describes an extension of an established loudness model that includes a critical-band-specific temporal weighting function, making it possible to predict the frequency-specific primacy effect. To this end, the mean, rather than the maximum or a percentile, must be taken as the estimate of overall loudness. It is shown that, with the corresponding modifications, the established loudness model can still predict the effect of other signal parameters, such as duration, on loudness.
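A minimal sketch of the described modification, assuming a simple exponential onset weighting: each band's instantaneous-loudness track is multiplied by a temporal weighting function that emphasizes the onset, and the weighted mean (rather than the maximum of a low-pass-smoothed track) serves as the overall loudness estimate. The weight shape, time constant, and frame rate are illustrative, not the published model.

```python
import numpy as np

def overall_loudness(inst_loudness, fs=1000.0, tau=0.2):
    """inst_loudness: bands x frames of instantaneous loudness; one estimate."""
    n_frames = inst_loudness.shape[1]
    t = np.arange(n_frames) / fs
    w = 1.0 + np.exp(-t / tau)        # onset frames receive up to double weight
    per_band = (inst_loudness * w).sum(axis=1) / w.sum()  # weighted mean, per band
    return float(per_band.sum())      # sum specific loudness across bands

# Energy at the signal onset yields a larger estimate than the same energy
# at the end -- the primacy effect the model modification targets.
onset = np.zeros((1, 500)); onset[0, :100] = 1.0     # energy at the onset
offset = np.zeros((1, 500)); offset[0, -100:] = 1.0  # same energy at the end
```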
|
| 16:28 |
-
Can Semantic Context Information Mask Rollover in Hearing-Impaired Listeners?
Lukas Jürgensen, Kenneth Meilstrup Jacobsen, Michal Fereczkowski, Tobias Neher
[Abstract]
The main purpose of hearing aids is to improve speech audibility by providing level-dependent gain. At low levels, more audibility usually leads to better
speech intelligibility. However, this is not necessarily the case at high levels, where increasing the presentation level can lead to poorer intelligibility, i.e.,
'rollover'. This rollover effect has been observed in both normal-hearing and hearing-impaired listeners. While rollover typically first occurs at relatively high levels when speech is presented in quiet, it has also been observed at conversational levels under
noisy conditions. Additionally, top-down processes, such as the use of semantic context information, have been found to mask rollover in normal-hearing
listeners. This study investigated the influence of semantic context information on rollover in hearing-impaired listeners. Speech intelligibility measurements were
performed with three speech materials differing in terms of semantic context (monosyllabic words, context-free and context-rich sentences) at three
speech levels (55, 65 and 75 dB SPL) with individually determined noise levels. To ensure audibility, a wearable hearing-aid simulator providing individual
linear amplification was used. It is expected that clear rollover will be found for the speech materials without semantic context but not for the context-rich sentences.
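The rollover concept can be illustrated with a toy intelligibility function in which audibility-driven growth is offset by a high-level penalty, and semantic context adds a benefit that can conceal the decline. All shapes and parameter values are invented for illustration and are not the study's model.

```python
import math

def intelligibility(level_db, context_benefit=0.0):
    """Toy percent correct: logistic audibility growth minus high-level penalty."""
    growth = 100.0 / (1.0 + math.exp(-(level_db - 45.0) / 5.0))
    penalty = max(0.0, level_db - 65.0) * 1.5  # decline ("rollover") above 65 dB
    return min(100.0, max(0.0, growth - penalty + context_benefit))

# Without context, scores drop from 65 to 75 dB SPL (clear rollover); a
# context benefit near ceiling largely masks that drop.
drop_no_context = intelligibility(65) - intelligibility(75)
drop_with_context = intelligibility(65, 15) - intelligibility(75, 15)
```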
|