Questions tagged [signal-detection]

Signal Detection Theory (SDT) explains how a receiver detects a signal in noise as a function of the receiver's sensitivity to the signal and the receiver's bias, that is, its tendency to assert that the signal is present whether or not it actually is.

Given that a signal may be present or not, and the receiver may assert that the signal is present or not, there are four possibilities:

                                         Signal:
                                  Present     Not present
            Receiver:          ---------------------------
                              |             |             |
                'Present'     |     Hit     | False alarm |
                              |             |             |
                               ---------------------------
                              |             |             |
                'Not present' |    Miss     |   Correct   |
                              |             |  rejection  |
                               ---------------------------

The number of Hits divided by (Hits + Misses) is the hit rate ($h$), and the number of False alarms divided by (False alarms + Correct rejections) is the false alarm rate ($fa$). These can be decomposed into the sensitivity ($d'$) and bias ($c$) of the receiver:
\begin{align} d' &= \Phi^{-1}(h) - \Phi^{-1}(fa) \\ c &= \frac{\Phi^{-1}(h) + \Phi^{-1}(fa)}{2} \end{align}

where $\Phi^{-1}$ is the inverse cumulative distribution function of the standard normal distribution (i.e., the $z$-score transform).

Note that the "four possibilities" above constitute a confusion matrix, and that the hit and false alarm rates are related to many standard metrics for confusion matrices. Specifically, the hit rate is the same as sensitivity (recall), and the false alarm rate is the same as $1-$specificity.
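For concreteness, here is a minimal sketch of how $h$, $fa$, $d'$ and $c$ can be computed from the four cells above (assuming Python with SciPy; the function name and the example counts are illustrative, not part of this tag description):

```python
# Minimal sketch: hit rate, false-alarm rate, d' and c from the four cells
# of the confusion matrix above. It follows the formulas in this description;
# note that some texts define the bias c with the opposite sign.
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    h = hits / (hits + misses)                                # hit rate = sensitivity (recall)
    fa = false_alarms / (false_alarms + correct_rejections)   # false-alarm rate = 1 - specificity
    d_prime = norm.ppf(h) - norm.ppf(fa)                      # Phi^{-1}(h) - Phi^{-1}(fa)
    c = (norm.ppf(h) + norm.ppf(fa)) / 2                      # bias, per the formula above
    return h, fa, d_prime, c

# Illustrative counts: 40 hits, 10 misses, 15 false alarms, 35 correct rejections
print(sdt_measures(40, 10, 15, 35))  # h = 0.8, fa = 0.3, d' ≈ 1.37, c ≈ 0.16
```

An observed rate of exactly 0 or 1 makes $\Phi^{-1}$ infinite, which is why corrections are often applied in practice (see the question on 100% hit rates below).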

98 questions

38 votes, 3 answers

What percentage of a population needs a test in order to estimate prevalence of a disease? Say, COVID-19

A group of us got to discussing what percentage of a population needs to be tested for COVID-19 in order to estimate the true prevalence of the disease. It got complicated, and we ended the night (over zoom) arguing about signal detection and…

14 votes, 3 answers

Is it valid to analyze signal detection data without employing metrics derived from signal detection theory?

A signal detection experiment typically presents the observer (or diagnostic system) with either a signal or a non-signal, and the observer is asked to report whether they think the presented item is a signal or non-signal. Such experiments yield…
Mike Lawrence

13 votes, 1 answer

Connections between $d^\prime$ (d-prime) and AUC (Area Under the ROC Curve); underlying assumptions

In machine learning we may use the area under the ROC curve (often abbreviated AUC, or AUROC) to summarise how well a system can discriminate between two categories. In signal detection theory often the $d'$ (sensitivity index) is used for a similar…
Dan Stowell

11 votes, 2 answers

d prime with 100% hit rate probability and 0% false alarm probability

I would like to calculate d prime for a memory task that involves detecting old and new items. The problem I have is that some of the subjects have a hit rate of 1 and/or a false alarm rate of 0, which makes the probabilities 100% and 0%, respectively. …
A.Rainer

10 votes, 2 answers

Assessing peaks in time series of cell signal data

I am measuring for the existence of a response in cell signal measurements. What I did was first apply a smoothing algorithm (Hanning) to the time series of data, then detect peaks. What I get is this: If I wanted to make the detection of the…
Radek

10 votes, 5 answers

Detecting parts of a song

Hopefully this is not too subjective... I'm looking for some direction in efforts to detect the different "parts" of a song, regardless of musical style. I have no idea where to look, but trusting in the power of the other StackOverflow sites, I…
themirror

8 votes, 2 answers

Frequency jump detection

It is generally known that 'jumps' in frequency data are difficult to estimate. In the current literature, many different techniques for estimating such jumps have been tested, often with satisfactory results. A summarizing paper about some of…
Jean-Paul

6 votes, 1 answer

Why use d-prime instead of percent correct?

In signal detection theory, people often use $d'$ to assess performance. Apart from the fact that $d'$ is in $z$ units (units of measurement transformed to standard deviation units, i.e., $z$ scores), making it comparable regardless of the original…
user41270

6 votes, 1 answer

Are Cohen's d (effect size) and d prime from signal detection theory measuring the same thing?

Are d prime (d') in signal detection theory and Cohen's d (mainly reported in the context of the general linear model) measures of the same thing (i.e., the difference of the means in SD units), and just termed differently? Or is there any…

6 votes, 3 answers

What is the two alternative forced choice paradigm?

Is a two alternative forced choice paradigm (2AFC) an experimental design?

6 votes, 4 answers

How to calculate confidence intervals for Precision & Recall (from a signal detection matrix)?

I built a detector to detect a binary outcome and then took a random sample from the population. From this, I can create a signal detection/confusion matrix (hit, miss, false alarm, correct rejection) [aka: TP, FN, FP, TN] and then calculate…

5 votes, 4 answers

Expected value of q given y is a weighted average of mean q and y

It is assumed that: 1) $y=q+u$, where $q$ is productivity and $y$ is a test score that measures true productivity. $u$ is a normally distributed error term, independent of $q$, with zero mean and constant variance; $q$ is also assumed to be normally…
Fusscreme

5 votes, 1 answer

Trend Analysis of feature importance over time in R

I'm running an experiment on a Streaming Classification Model (an Online Random Forest) that I've created. If that is a completely foreign concept to you here is a presentation I did on it recently:…

5 votes, 0 answers

How to improve estimation of a deconvolved density

I have the following problem: Y = X + e, with Y = total reaction time (noisy signal), X = selection time (signal), and e = discrimination time (noise). I am interested in the distribution of X and have only samples for Y and e. The experiment had unpaired…

5 votes, 1 answer

Neural networks and signal-to-noise ratio

My guess is that neural networks do not work very well in noisy environments, i.e. the lower the signal-to-noise ratio, the worse the result of a neural network, if compared to other statistical modeling tools. Thus, for example, neural networks are…