Is there a constraint on the sum of the type I and type II error probabilities?

Is it true that if $H_0$ and $H_a$ are complementary hypotheses in a binomial trial, i.e., $H_a$ is the negation of $H_0$, then the type I error probability $\alpha$ plus the type II error probability $\beta$ equals 1? Or is that sum always less than 1, or can it sometimes even be greater than 1?

hypothesis-testing statistical-significance mathematical-statistics p-value type-i-and-ii-errors

asked Nov 29 at 14:44 by user2925716, edited Nov 29 at 15:47 by gung

  • If you mean the probability of the errors, then no. Type I error and type II error are not complementary events in general.
    – StubbornAtom
    Nov 29 at 14:51

  • OK. Is that sum always less than 1, or can it sometimes even be greater than 1?
    – user2925716
    Nov 29 at 14:52

  • Please mention 'that sum' of the probabilities if that's what you mean, instead of the sum of the two errors.
    – StubbornAtom
    Nov 29 at 14:59

  • I really mean $\alpha+\beta$, as these two error probabilities are usually denoted: $\alpha$ is the probability that we reject $H_0$ when $H_0$ is true, and $\beta$ is the probability that we accept $H_0$ when $H_1$ is true.
    – user2925716
    Nov 29 at 15:02

  • Okay. But that's not what your post says - that's what I am saying.
    – StubbornAtom
    Nov 29 at 15:04

3 Answers

Accepted answer (4 votes). Answered Nov 29 at 17:37 by Dilip Sarwate, edited Nov 29 at 19:28.

For an arbitrarily chosen decision rule -- meaning that the decision rule can be anything that you make up just for the heck of it, it doesn't need to be sensible in any sense of the word -- the arithmetic sum of the Type I and Type II error probabilities can be any number in $[0,2]$ as the answer by @Bjorn points out.



Example 1: The observation $X$ always has positive value when $H_0$ is true and always has negative value when $H_a$ is true. The decision rule is



$$X \overset{H_0}{\underset{H_a}{\gtrless}} 0$$



leading to both the Type I and Type II error probabilities having value $0$ and so their sum is also $0$.



Example 2: As in Example 1, the observation $X$ always has positive value when $H_0$ is true and always has negative value when $H_a$ is true. But now the decision rule is



$$X \overset{H_a}{\underset{H_0}{\gtrless}} 0$$



which is exactly bass-ackwards from the decision rule in Example 1 (hey, I said upfront that we are going to consider arbitrary decision rules, not necessarily only sensible ones!). Now, since the OP asks in a comment for an explanation of my assertion (in a previous version of this answer) that "both the Type I and Type II error probabilities have value $1$ and so their sum is $2$", here goes.




If the null hypothesis is true, then in our model the observation $X$ is always a positive number. But the decision rule is that whenever the observation has positive value, we are going to reject the null hypothesis. Continuing to remember that the null hypothesis is true, we see that our decision rule tells us to always reject the null hypothesis (when it is true). So, what is the probability that we reject the null hypothesis when in fact the null hypothesis is true? $100\%$, right? A similar argument applies to the Type II error probability. If the null hypothesis is actually false, the observation must have negative value in our model. But the decision rule perversely insists that we must refuse to reject the null when the observation has negative value, which happens only when the null hypothesis is false. So, the probability of failing to reject the null hypothesis when in fact the null hypothesis is false (which is what a Type II error is) must be $100\%$ too, right?



Thus, both the Type I error probability and the Type II error probability have value 1 for this (admittedly contrived) example of a decision rule, and so their arithmetic sum must be 2.




Hopefully, the above is enough of a "basic computation of this fact" as the OP desires, or is it necessary to resort to the Peano axioms to prove that $1+1=2$?



Example 3: The observation $X \sim U[-2,1]$ whenever $H_0$ is true, while $X \sim U[-1,2]$ whenever $H_a$ is true. The decision rule is



$$X \overset{H_0}{\underset{H_a}{\gtrless}} 0$$



leading to both the Type I and Type II error probabilities having value $\frac{2}{3}$, and so their sum is $\frac{4}{3}$.



Example 4: The observation $X \sim U[-2,1]$ whenever $H_0$ is true, while $X \sim U[-1,2]$ whenever $H_a$ is true. But now the decision rule is



$$X \overset{H_a}{\underset{H_0}{\gtrless}} 0$$



which is more sensible than the decision rule in Example 3, and it leads to both the Type I and Type II error probabilities having value $\frac{1}{3}$, and so their sum is $\frac{2}{3} < 1$.
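
All four error sums are easy to verify numerically. The following minimal Monte Carlo sketch in Python is illustrative only (it is not part of the original answer; the point masses at $\pm 1$ stand in for the always-positive/always-negative observations of Examples 1 and 2):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 1_000_000  # Monte Carlo draws under each hypothesis

    def error_sum(draw_h0, draw_ha, reject):
        # Estimate alpha + beta for the rule: reject H0 exactly when reject(x) is True.
        alpha = np.mean(reject(draw_h0(N)))   # P(reject H0 | H0 true)
        beta = np.mean(~reject(draw_ha(N)))   # P(fail to reject H0 | Ha true)
        return alpha + beta

    # Examples 1 and 2: X = +1 under H0 and X = -1 under Ha.
    pos = lambda n: np.ones(n)
    neg = lambda n: -np.ones(n)
    print(error_sum(pos, neg, lambda x: x < 0))  # Example 1's rule: sum = 0.0
    print(error_sum(pos, neg, lambda x: x > 0))  # Example 2's rule: sum = 2.0

    # Examples 3 and 4: X ~ U[-2, 1] under H0 and X ~ U[-1, 2] under Ha.
    u0 = lambda n: rng.uniform(-2, 1, n)
    ua = lambda n: rng.uniform(-1, 2, n)
    print(error_sum(u0, ua, lambda x: x < 0))    # Example 3's rule: sum ~= 4/3
    print(error_sum(u0, ua, lambda x: x > 0))    # Example 4's rule: sum ~= 2/3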

  • So why does @AdamO say the following, and what's wrong with his observation? "P(decomposing corpse | dead) + P(looking alright | alive) > 1. Is this surprising or interesting? No. IRL one of those error probabilities will always be 0 and the other is less than or equal to 1 depending on how good or stupid the test is."
    – user2925716
    Nov 29 at 18:01

  • @user2925716 You are the one interested in asking what the arithmetic sum of the Type I and Type II error probabilities is. This arithmetic sum has no meaning at all in decision making; it is a weighted sum of the two error probabilities that is of interest in Bayesian decision making, and so AdamO is quite correct in his assertion that the fact that the sum you are interested in can have a value greater than $1$ is neither surprising nor interesting.
    – Dilip Sarwate
    Nov 29 at 18:11

  • Yes, but what about his argument that one of these two probabilities is always zero, so it can never be the case that the sum is larger than $1$? In fact, I do not understand why your example yields the number $2$; can you explain this to a beginner (me)?
    – user2925716
    Nov 29 at 18:23

  • @DilipSarwate: Strictly, if you're going into Bayesianism, then you don't have a "probability of Type I/II error" at all, because those terms presuppose a Frequentist approach. But then the OP's question is meaningless in the first instance, so I fear we have to stay on the Frequentist side of the fence.
    – Kevin
    Nov 29 at 18:23

  • @user2925716: AdamO is not claiming that "either $\alpha = 0$ or $\beta = 0$." He is claiming (correctly) that both $\alpha$ and $\beta$ are conditional probabilities that cannot be legally summed.
    – Kevin
    Nov 29 at 18:25


Answer (5 votes). Answered Nov 29 at 15:21 by AdamO, edited Nov 29 at 15:43.

Case 1: the null hypothesis is true. The type II error is 0. The type I error is less than the nominal size of the test unless the test is biased. It can be as high as 1 if the test decision is "reject the null every time".



Case 2: the null hypothesis is false. The type I error is 0. The type II error can be as high as 1 if the test decision is "do not reject the null any time".



To conflate Bayesian and frequentist terminology: you can't speak of Pr(Type I error) without "conditioning" on, or knowing, that $H_0$ is true. A nice bit of frequentist notation is this: $P_{H_0}(\text{Event})$ refers to probabilities of events or outcomes under the probability model where the null is true, or equivalently $P_{\theta = \theta_0}(\text{Event})$.



If you want to be crazy and sum together probabilities that don't make sense, you can conceive of two values $\theta \ne \theta_0$ and $\theta = \theta_0$ for which the Type I and Type II errors add to more than 1. For instance:




P(decomposing corpse | dead) + P(looking alright | alive) > 1




Is this surprising or interesting? No. IRL one of those error probabilities will always be 0 and the other is less than or equal to 1 depending on how good or stupid the test is.
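
To connect this to the binomial setting of the question, here is a minimal sketch (an illustration added here, not AdamO's own example) of such a mismatched sum: a one-sided rejection region for $H_0: p = 0.5$ is evaluated against a true value on the other side of the null, so the two conditional error probabilities add to more than 1. The cutoff and parameter values are arbitrary:

    from scipy.stats import binom

    n = 20
    p0, p1 = 0.5, 0.4        # null value, and the true value under the alternative
    c = 14                   # reject H0: p = 0.5 whenever the success count X > 14

    alpha = binom.sf(c, n, p0)   # P(X > 14 | p = 0.5) ~= 0.021, the type I error
    beta = binom.cdf(c, n, p1)   # P(X <= 14 | p = 0.4) ~= 0.998, the type II error
    print(alpha + beta)          # ~= 1.02: the sum of the two exceeds 1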

  • Is it so irrational to add $\alpha$ and $\beta$ and seek the minimum of their sum?
    – user2925716
    Nov 29 at 15:54

  • Not necessarily, provided you specify (a) what the specific alternative hypothesis is at which $\beta$ will be evaluated and (b) what you are varying in order to minimize the sum. The sum could be viewed as a proxy for a combination of losses and prior probabilities associated with the two hypotheses.
    – whuber
    Nov 29 at 16:01

  • @user2925716 I wouldn't. Even in the context of a power analysis, where we speculate as to the possible value(s) of $\theta$ where the alternative may hold, this "probability of error" statement only makes sense when the costs of Type 1 and Type 2 errors are the same. But they are not. I report them separately, and discuss the implications of each error.
    – AdamO
    Nov 29 at 16:23

  • @whuber What does it mean to "specify the alternative hypothesis"? If that is the probability model for the outcome, then there is no type I error because the null is wrong. The point is being explicit about what you're comparing. You can add the probabilities together, but as I argue, the sum does not represent a probability.
    – AdamO
    Nov 29 at 18:16

  • Adam, I'm not trying to suggest the sum represents a probability. I am only responding constructively to the OP's query concerning "is it so irrational." Within the limited space permitted by a comment, I was trying to suggest that such a sum can be interpreted as (proportional to) a posterior expected loss. "Specify the alternative hypothesis" means to stipulate a specific distribution within the set known as "$H_A$," which is usually not just one distribution. Unless one does so, $\beta$ is undefined.
    – whuber
    Nov 29 at 20:18


Answer (2 votes). Answered by Björn.

It is true that with a standard hypothesis test you either reject the null hypothesis or you do not. That is, type II error + power = 1 under $H_A$, and non-rejection probability + type I error = 1 under $H_0$.



However, the statement the way you phrase it is not true. Type I and type II errors cannot happen under the same scenario within the traditional frequentist hypothesis-testing paradigm. That is, either $H_0$ is true, in which case you can either wrongly reject (type I error) or correctly not reject the null hypothesis, or $H_a$ is true, in which case you can either correctly reject $H_0$ (how often you do this on average is the power) or wrongly not reject (type II error).
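
A quick numeric check of these two complement identities, using an arbitrary binomial test (a sketch added for illustration, not part of the original answer):

    from scipy.stats import binom

    n, p0, p1 = 20, 0.5, 0.7     # null value, and one specific value inside Ha
    c = 14                       # reject H0 whenever the success count X > 14

    alpha = binom.sf(c, n, p0)           # type I error probability under H0
    non_rejection = binom.cdf(c, n, p0)  # P(do not reject | H0)
    power = binom.sf(c, n, p1)           # P(correctly reject | p = p1)
    beta = binom.cdf(c, n, p1)           # type II error probability at p = p1

    print(alpha + non_rejection)  # 1.0: complements under H0
    print(power + beta)           # 1.0: complements at this point inside Ha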

  • Please see the last added line in the question.
    – user2925716
    Nov 29 at 14:57

  • Regarding the last line: The sum can be any number in (0, 2). It even depends on where exactly you are within $H_A$.
    – Björn
    Nov 29 at 16:05

  • In the other answer they've just proved that the sum $\alpha+\beta$ is in $[0,1]$...
    – user2925716
    Nov 29 at 16:07

  • It's unclear to which answer you refer -- I cannot find any such proof.
    – whuber
    Dec 1 at 14:42











Your Answer





StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");

StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "65"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});

function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});


}
});














draft saved

draft discarded


















StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstats.stackexchange.com%2fquestions%2f379423%2fis-there-a-constraint-on-the-sum-of-the-type-i-type-ii-error-probabilities%23new-answer', 'question_page');
}
);

Post as a guest















Required, but never shown

























3 Answers
3






active

oldest

votes








3 Answers
3






active

oldest

votes









active

oldest

votes






active

oldest

votes








up vote
4
down vote



accepted










For an arbitrarily chosen decision rule -- meaning that the decision rule can be anything that you make up just for the heck of it, it doesn't need to be sensible in any sense of the word -- the arithmetic sum of the Type I and Type II error probabilities can be any number in $[0,2]$ as the answer by @Bjorn points out.



Example 1: The observation $X$ always has positive value when $H_0$ is true and always has negative value when $H_a$ is true. The decision rule is



$$X begin{array}{c}H_0\gtrless\{H_a}end{array} 0$$



leading to both the Type I and Type II error probabilities having value $0$ and so their sum is also $0$.



Example 2: As in Example 1, the observation $X$ always has positive value when $H_0$ is true and always has negative value when $H_a$ is true. But now the decision rule is



$$X begin{array}{c} H_a\gtrless\{H_0}end{array} 0$$



which is exactly bass ackwards from the decision rule in Example 1 (Hey, I said upfront that we are going to consider arbitrary decision rules, not necessarily only sensible ones!!). Now, since the OP asks in a comment for an explanation of my assertion (in a previous version of this answer) that "both the Type I and Type II error probabilities have value $1$ and so their sum is $2$.", here goes.




If the null hypothesis is true, then in our model, the observation $X$ is always a positive number. But the decision rule is that whenever the observation has positive value, we are going to reject the null hypothesis. Continuing to remember that the null hypothesis is true, we see that our decision rule tells us to always reject the null hypothesis (when it is true). So, what is the probability that we reject the null hypothesis when in fact the null hypothesis is true? $100%$, right? A similar argument applies to the Type II error probability. If the null hypothesis is actually false, the observation must have negative value in our model. But the decision rule perversely insists that we must refuse to reject the null when the observation has negative value which happens only when the null hypothesis is false. So, the probability of failing to reject the null hypothesis when in fact the null hypothesis is false (which is what a Type II error is) must be $100%$ too, right?



Thus, both the Type I error probability and the Type II error probability have value 1 for this (admittedly contrived) example of a decision rule, and so their arithmetic sum must be 2.




Hopefully, the above is enough of a "basic computation of this fact" as the OP desires, or is necessary to resort to the Peano axioms to prove that $1+1=2$?



Example 3: The observation $X sim U[-2,1]$ whenever $H_0$ is true while $X sim U[-1,2]$ whenever $H_a$ is true. The decision rule is



$$X begin{array}{c}H_0\gtrless\{H_a}end{array} 0$$



leading to both the Type I and Type II error probabilities having value $frac{2}{3}$ and so their sum is $frac 43$.



Example 4: The observation $X sim U[-2,1]$ whenever $H_0$ is true while $X sim U[-1,2]$ whenever $H_a$ is true. But now the decision rule is



$$X begin{array}{c}H_a\gtrless\{H_0}end{array} 0$$



which is more sensible than the decision rule in Example 3, and it leads to both the Type I and Type II error probabilities having value $frac{1}{3}$ and so their sum is $frac 23 < 1$.






share|cite|improve this answer























  • So why @AdamO says: what's wrong with his observation? > P(decomposing corpse | dead) + P(looking alright | alive) > 1 Is this surprising or interesting? No. IRL one of those error probabilities will always be 0 and the other is less than or equal to 1 depending on how good or stupid the test is.
    – user2925716
    Nov 29 at 18:01








  • 3




    @user2925716 You are the one interested in asking what the arithmetic sum of the Type I and Type II error probabilities is. This arithmetic sum has no meaning at all in decision making; it is a weighted sum of the two error probabilities that is of interest in Bayesian decision making, and so AdamO is quite correct in his assertion that the fact that sum that you are interested in can have value greater than $1$ is neither surprising nor interesting.
    – Dilip Sarwate
    Nov 29 at 18:11












  • Yes, but what about his argument that one of these two probabilities is always zero so it never can be the case that that sum is larger than $1$ ? In fact, I do not understand why your example yields number $2$, can you explain this to a beginner (me)?
    – user2925716
    Nov 29 at 18:23










  • @DilipSarwate: Strictly, if you're going into Bayesianism, then you don't have "probability of Type I/II error" at all, because those terms presuppose a Frequentist approach. But then OP's question is meaningless in the first instance, so I fear we have to stay on the Frequentist side of the fence.
    – Kevin
    Nov 29 at 18:23






  • 2




    @user2925716: AdamO is not claiming that "either $alpha =0$ or $beta = 0$." He is claiming (correctly) that both $alpha$ and $beta$ are conditional probabilities that cannot be legally summed.
    – Kevin
    Nov 29 at 18:25

















up vote
4
down vote



accepted










For an arbitrarily chosen decision rule -- meaning that the decision rule can be anything that you make up just for the heck of it, it doesn't need to be sensible in any sense of the word -- the arithmetic sum of the Type I and Type II error probabilities can be any number in $[0,2]$ as the answer by @Bjorn points out.



Example 1: The observation $X$ always has positive value when $H_0$ is true and always has negative value when $H_a$ is true. The decision rule is



$$X begin{array}{c}H_0\gtrless\{H_a}end{array} 0$$



leading to both the Type I and Type II error probabilities having value $0$ and so their sum is also $0$.



Example 2: As in Example 1, the observation $X$ always has positive value when $H_0$ is true and always has negative value when $H_a$ is true. But now the decision rule is



$$X begin{array}{c} H_a\gtrless\{H_0}end{array} 0$$



which is exactly bass ackwards from the decision rule in Example 1 (Hey, I said upfront that we are going to consider arbitrary decision rules, not necessarily only sensible ones!!). Now, since the OP asks in a comment for an explanation of my assertion (in a previous version of this answer) that "both the Type I and Type II error probabilities have value $1$ and so their sum is $2$.", here goes.




If the null hypothesis is true, then in our model, the observation $X$ is always a positive number. But the decision rule is that whenever the observation has positive value, we are going to reject the null hypothesis. Continuing to remember that the null hypothesis is true, we see that our decision rule tells us to always reject the null hypothesis (when it is true). So, what is the probability that we reject the null hypothesis when in fact the null hypothesis is true? $100%$, right? A similar argument applies to the Type II error probability. If the null hypothesis is actually false, the observation must have negative value in our model. But the decision rule perversely insists that we must refuse to reject the null when the observation has negative value which happens only when the null hypothesis is false. So, the probability of failing to reject the null hypothesis when in fact the null hypothesis is false (which is what a Type II error is) must be $100%$ too, right?



Thus, both the Type I error probability and the Type II error probability have value 1 for this (admittedly contrived) example of a decision rule, and so their arithmetic sum must be 2.




Hopefully, the above is enough of a "basic computation of this fact" as the OP desires, or is necessary to resort to the Peano axioms to prove that $1+1=2$?



Example 3: The observation $X sim U[-2,1]$ whenever $H_0$ is true while $X sim U[-1,2]$ whenever $H_a$ is true. The decision rule is



$$X begin{array}{c}H_0\gtrless\{H_a}end{array} 0$$



leading to both the Type I and Type II error probabilities having value $frac{2}{3}$ and so their sum is $frac 43$.



Example 4: The observation $X sim U[-2,1]$ whenever $H_0$ is true while $X sim U[-1,2]$ whenever $H_a$ is true. But now the decision rule is



$$X begin{array}{c}H_a\gtrless\{H_0}end{array} 0$$



which is more sensible than the decision rule in Example 3, and it leads to both the Type I and Type II error probabilities having value $frac{1}{3}$ and so their sum is $frac 23 < 1$.






share|cite|improve this answer























  • So why @AdamO says: what's wrong with his observation? > P(decomposing corpse | dead) + P(looking alright | alive) > 1 Is this surprising or interesting? No. IRL one of those error probabilities will always be 0 and the other is less than or equal to 1 depending on how good or stupid the test is.
    – user2925716
    Nov 29 at 18:01








  • 3




    @user2925716 You are the one interested in asking what the arithmetic sum of the Type I and Type II error probabilities is. This arithmetic sum has no meaning at all in decision making; it is a weighted sum of the two error probabilities that is of interest in Bayesian decision making, and so AdamO is quite correct in his assertion that the fact that sum that you are interested in can have value greater than $1$ is neither surprising nor interesting.
    – Dilip Sarwate
    Nov 29 at 18:11












  • Yes, but what about his argument that one of these two probabilities is always zero so it never can be the case that that sum is larger than $1$ ? In fact, I do not understand why your example yields number $2$, can you explain this to a beginner (me)?
    – user2925716
    Nov 29 at 18:23










  • @DilipSarwate: Strictly, if you're going into Bayesianism, then you don't have "probability of Type I/II error" at all, because those terms presuppose a Frequentist approach. But then OP's question is meaningless in the first instance, so I fear we have to stay on the Frequentist side of the fence.
    – Kevin
    Nov 29 at 18:23






  • 2




    @user2925716: AdamO is not claiming that "either $alpha =0$ or $beta = 0$." He is claiming (correctly) that both $alpha$ and $beta$ are conditional probabilities that cannot be legally summed.
    – Kevin
    Nov 29 at 18:25















up vote
4
down vote



accepted







up vote
4
down vote



accepted






For an arbitrarily chosen decision rule -- meaning that the decision rule can be anything that you make up just for the heck of it, it doesn't need to be sensible in any sense of the word -- the arithmetic sum of the Type I and Type II error probabilities can be any number in $[0,2]$ as the answer by @Bjorn points out.



Example 1: The observation $X$ always has positive value when $H_0$ is true and always has negative value when $H_a$ is true. The decision rule is



$$X begin{array}{c}H_0\gtrless\{H_a}end{array} 0$$



leading to both the Type I and Type II error probabilities having value $0$ and so their sum is also $0$.



Example 2: As in Example 1, the observation $X$ always has positive value when $H_0$ is true and always has negative value when $H_a$ is true. But now the decision rule is



$$X begin{array}{c} H_a\gtrless\{H_0}end{array} 0$$



which is exactly bass ackwards from the decision rule in Example 1 (Hey, I said upfront that we are going to consider arbitrary decision rules, not necessarily only sensible ones!!). Now, since the OP asks in a comment for an explanation of my assertion (in a previous version of this answer) that "both the Type I and Type II error probabilities have value $1$ and so their sum is $2$.", here goes.




If the null hypothesis is true, then in our model, the observation $X$ is always a positive number. But the decision rule is that whenever the observation has positive value, we are going to reject the null hypothesis. Continuing to remember that the null hypothesis is true, we see that our decision rule tells us to always reject the null hypothesis (when it is true). So, what is the probability that we reject the null hypothesis when in fact the null hypothesis is true? $100%$, right? A similar argument applies to the Type II error probability. If the null hypothesis is actually false, the observation must have negative value in our model. But the decision rule perversely insists that we must refuse to reject the null when the observation has negative value which happens only when the null hypothesis is false. So, the probability of failing to reject the null hypothesis when in fact the null hypothesis is false (which is what a Type II error is) must be $100%$ too, right?



Thus, both the Type I error probability and the Type II error probability have value 1 for this (admittedly contrived) example of a decision rule, and so their arithmetic sum must be 2.




Hopefully, the above is enough of a "basic computation of this fact" as the OP desires, or is necessary to resort to the Peano axioms to prove that $1+1=2$?



Example 3: The observation $X sim U[-2,1]$ whenever $H_0$ is true while $X sim U[-1,2]$ whenever $H_a$ is true. The decision rule is



$$X begin{array}{c}H_0\gtrless\{H_a}end{array} 0$$



leading to both the Type I and Type II error probabilities having value $frac{2}{3}$ and so their sum is $frac 43$.



Example 4: The observation $X sim U[-2,1]$ whenever $H_0$ is true while $X sim U[-1,2]$ whenever $H_a$ is true. But now the decision rule is



$$X begin{array}{c}H_a\gtrless\{H_0}end{array} 0$$



which is more sensible than the decision rule in Example 3, and it leads to both the Type I and Type II error probabilities having value $frac{1}{3}$ and so their sum is $frac 23 < 1$.






share|cite|improve this answer














For an arbitrarily chosen decision rule -- meaning that the decision rule can be anything that you make up just for the heck of it, it doesn't need to be sensible in any sense of the word -- the arithmetic sum of the Type I and Type II error probabilities can be any number in $[0,2]$ as the answer by @Bjorn points out.



Example 1: The observation $X$ always has positive value when $H_0$ is true and always has negative value when $H_a$ is true. The decision rule is



$$X begin{array}{c}H_0\gtrless\{H_a}end{array} 0$$



leading to both the Type I and Type II error probabilities having value $0$ and so their sum is also $0$.



Example 2: As in Example 1, the observation $X$ always has positive value when $H_0$ is true and always has negative value when $H_a$ is true. But now the decision rule is



$$X begin{array}{c} H_a\gtrless\{H_0}end{array} 0$$



which is exactly bass ackwards from the decision rule in Example 1 (Hey, I said upfront that we are going to consider arbitrary decision rules, not necessarily only sensible ones!!). Now, since the OP asks in a comment for an explanation of my assertion (in a previous version of this answer) that "both the Type I and Type II error probabilities have value $1$ and so their sum is $2$.", here goes.




If the null hypothesis is true, then in our model, the observation $X$ is always a positive number. But the decision rule is that whenever the observation has positive value, we are going to reject the null hypothesis. Continuing to remember that the null hypothesis is true, we see that our decision rule tells us to always reject the null hypothesis (when it is true). So, what is the probability that we reject the null hypothesis when in fact the null hypothesis is true? $100%$, right? A similar argument applies to the Type II error probability. If the null hypothesis is actually false, the observation must have negative value in our model. But the decision rule perversely insists that we must refuse to reject the null when the observation has negative value which happens only when the null hypothesis is false. So, the probability of failing to reject the null hypothesis when in fact the null hypothesis is false (which is what a Type II error is) must be $100%$ too, right?



Thus, both the Type I error probability and the Type II error probability have value 1 for this (admittedly contrived) example of a decision rule, and so their arithmetic sum must be 2.




Hopefully, the above is enough of a "basic computation of this fact" as the OP desires, or is necessary to resort to the Peano axioms to prove that $1+1=2$?



Example 3: The observation $X sim U[-2,1]$ whenever $H_0$ is true while $X sim U[-1,2]$ whenever $H_a$ is true. The decision rule is



$$X begin{array}{c}H_0\gtrless\{H_a}end{array} 0$$



leading to both the Type I and Type II error probabilities having value $frac{2}{3}$ and so their sum is $frac 43$.



Example 4: The observation $X sim U[-2,1]$ whenever $H_0$ is true while $X sim U[-1,2]$ whenever $H_a$ is true. But now the decision rule is



$$X begin{array}{c}H_a\gtrless\{H_0}end{array} 0$$



which is more sensible than the decision rule in Example 3, and it leads to both the Type I and Type II error probabilities having value $frac{1}{3}$ and so their sum is $frac 23 < 1$.







share|cite|improve this answer














share|cite|improve this answer



share|cite|improve this answer








edited Nov 29 at 19:28

























answered Nov 29 at 17:37









Dilip Sarwate

29.5k252146




29.5k252146












  • So why @AdamO says: what's wrong with his observation? > P(decomposing corpse | dead) + P(looking alright | alive) > 1 Is this surprising or interesting? No. IRL one of those error probabilities will always be 0 and the other is less than or equal to 1 depending on how good or stupid the test is.
    – user2925716
    Nov 29 at 18:01








  • 3




    @user2925716 You are the one interested in asking what the arithmetic sum of the Type I and Type II error probabilities is. This arithmetic sum has no meaning at all in decision making; it is a weighted sum of the two error probabilities that is of interest in Bayesian decision making, and so AdamO is quite correct in his assertion that the fact that sum that you are interested in can have value greater than $1$ is neither surprising nor interesting.
    – Dilip Sarwate
    Nov 29 at 18:11












  • Yes, but what about his argument that one of these two probabilities is always zero so it never can be the case that that sum is larger than $1$ ? In fact, I do not understand why your example yields number $2$, can you explain this to a beginner (me)?
    – user2925716
    Nov 29 at 18:23










  • @DilipSarwate: Strictly, if you're going into Bayesianism, then you don't have "probability of Type I/II error" at all, because those terms presuppose a Frequentist approach. But then OP's question is meaningless in the first instance, so I fear we have to stay on the Frequentist side of the fence.
    – Kevin
    Nov 29 at 18:23






  • 2




    @user2925716: AdamO is not claiming that "either $alpha =0$ or $beta = 0$." He is claiming (correctly) that both $alpha$ and $beta$ are conditional probabilities that cannot be legally summed.
    – Kevin
    Nov 29 at 18:25




















  • So why @AdamO says: what's wrong with his observation? > P(decomposing corpse | dead) + P(looking alright | alive) > 1 Is this surprising or interesting? No. IRL one of those error probabilities will always be 0 and the other is less than or equal to 1 depending on how good or stupid the test is.
    – user2925716
    Nov 29 at 18:01








  • 3




    @user2925716 You are the one interested in asking what the arithmetic sum of the Type I and Type II error probabilities is. This arithmetic sum has no meaning at all in decision making; it is a weighted sum of the two error probabilities that is of interest in Bayesian decision making, and so AdamO is quite correct in his assertion that the fact that sum that you are interested in can have value greater than $1$ is neither surprising nor interesting.
    – Dilip Sarwate
    Nov 29 at 18:11












  • Yes, but what about his argument that one of these two probabilities is always zero so it never can be the case that that sum is larger than $1$ ? In fact, I do not understand why your example yields number $2$, can you explain this to a beginner (me)?
    – user2925716
    Nov 29 at 18:23










  • @DilipSarwate: Strictly, if you're going into Bayesianism, then you don't have "probability of Type I/II error" at all, because those terms presuppose a Frequentist approach. But then OP's question is meaningless in the first instance, so I fear we have to stay on the Frequentist side of the fence.
    – Kevin
    Nov 29 at 18:23






  • 2




    @user2925716: AdamO is not claiming that "either $alpha =0$ or $beta = 0$." He is claiming (correctly) that both $alpha$ and $beta$ are conditional probabilities that cannot be legally summed.
    – Kevin
    Nov 29 at 18:25


















So why @AdamO says: what's wrong with his observation? > P(decomposing corpse | dead) + P(looking alright | alive) > 1 Is this surprising or interesting? No. IRL one of those error probabilities will always be 0 and the other is less than or equal to 1 depending on how good or stupid the test is.
– user2925716
Nov 29 at 18:01






So why @AdamO says: what's wrong with his observation? > P(decomposing corpse | dead) + P(looking alright | alive) > 1 Is this surprising or interesting? No. IRL one of those error probabilities will always be 0 and the other is less than or equal to 1 depending on how good or stupid the test is.
– user2925716
Nov 29 at 18:01






3




3




@user2925716 You are the one interested in asking what the arithmetic sum of the Type I and Type II error probabilities is. This arithmetic sum has no meaning at all in decision making; it is a weighted sum of the two error probabilities that is of interest in Bayesian decision making, and so AdamO is quite correct in his assertion that the fact that sum that you are interested in can have value greater than $1$ is neither surprising nor interesting.
– Dilip Sarwate
Nov 29 at 18:11






@user2925716 You are the one interested in asking what the arithmetic sum of the Type I and Type II error probabilities is. This arithmetic sum has no meaning at all in decision making; it is a weighted sum of the two error probabilities that is of interest in Bayesian decision making, and so AdamO is quite correct in his assertion that the fact that sum that you are interested in can have value greater than $1$ is neither surprising nor interesting.
– Dilip Sarwate
Nov 29 at 18:11














Yes, but what about his argument that one of these two probabilities is always zero so it never can be the case that that sum is larger than $1$ ? In fact, I do not understand why your example yields number $2$, can you explain this to a beginner (me)?
– user2925716
Nov 29 at 18:23




Yes, but what about his argument that one of these two probabilities is always zero so it never can be the case that that sum is larger than $1$ ? In fact, I do not understand why your example yields number $2$, can you explain this to a beginner (me)?
– user2925716
Nov 29 at 18:23












@DilipSarwate: Strictly, if you're going into Bayesianism, then you don't have "probability of Type I/II error" at all, because those terms presuppose a Frequentist approach. But then OP's question is meaningless in the first instance, so I fear we have to stay on the Frequentist side of the fence.
– Kevin
Nov 29 at 18:23




@DilipSarwate: Strictly, if you're going into Bayesianism, then you don't have "probability of Type I/II error" at all, because those terms presuppose a Frequentist approach. But then OP's question is meaningless in the first instance, so I fear we have to stay on the Frequentist side of the fence.
– Kevin
Nov 29 at 18:23




2




2




@user2925716: AdamO is not claiming that "either $alpha =0$ or $beta = 0$." He is claiming (correctly) that both $alpha$ and $beta$ are conditional probabilities that cannot be legally summed.
– Kevin
Nov 29 at 18:25






@user2925716: AdamO is not claiming that "either $alpha =0$ or $beta = 0$." He is claiming (correctly) that both $alpha$ and $beta$ are conditional probabilities that cannot be legally summed.
– Kevin
Nov 29 at 18:25














up vote
5
down vote













Case 1: the null hypothesis is true. The type II error is 0. The type I error is less than the nominal size of the test unless the test is biased. It can be as high as 1 if the test decision is "reject the null every time".



Case 2: the null hypothesis is false. The type I error is 0. the type II error can be as high as 1 if the test decision is "do not reject the null any time".



To conflate Bayesian and frequentist terminology : you can't speak of the Pr(Type 1 error) without "conditioning" or knowing H_0 is true. A nice bit of frequentist notation is this: $P_{H_0}(text{Event})$ to refer to probabilities of events or outcomes under the probability model where the null is true, or $P_{theta = theta_0}(text{Event})$ equivalently.



If you want to be crazy and sum together probabilities that don't make sense, you can conceive of two values of $theta ne theta_0$ and $theta=theta_0$ for which the Type 1 and Type 2 errors add to more than 1. For instance:




P(decomposing corpse | dead) + P(looking alright | alive) > 1




Is this surprising or interesting? No. IRL one of those error probabilities will always be 0 and the other is less than or equal to 1 depending on how good or stupid the test is.






share|cite|improve this answer























  • Is it so irrational to add $alpha$ and $beta$ and seek for their sum minimum?
    – user2925716
    Nov 29 at 15:54






  • 1




    Not necessarily, provided you specify (a) what the specific alternative hypothesis is at which $beta$ will be evaluated and (b) what you are varying in order to minimize the sum. The sum could be viewed as a proxy for a combination of losses and prior probabilities associated with two hypotheses.
    – whuber
    Nov 29 at 16:01










  • @user2925716 I wouldn't. Even in the context of a power analysis, where we speculate as to the possible value(s) of $theta$ where the alternative may hold, this "probability of error" statement only makes sense when the costs of Type 1 and Type 2 errors are the same. But they are not. I report them separately, and discuss the implications of each error.
    – AdamO
    Nov 29 at 16:23








  • 2




    @whuber what does it mean to "specify the alternative hypothesis"? if that is the probability model for the outcome then there is no type I error because the null is wrong. The point is being explicit about what you're comparing. You can add the probabilities together, but as I argue the sum does not represent a probability.
    – AdamO
    Nov 29 at 18:16










  • Adam, I'm not trying to suggest the sum represents a probability. I am only responding constructively to the OP's query concerning "is it so irrational." Within the limited space permitted by a comment, I was trying to suggest that such a sum can be interpreted as (proportional to) a posterior expected loss. "Specify the alternative hypothesis" means to stipulate a specific distribution within the set known as "$H_A,$" which is usually not just one distribution. Unless one does so, $beta$ is undefined.
    – whuber
    Nov 29 at 20:18

















up vote
5
down vote













Case 1: the null hypothesis is true. The type II error is 0. The type I error is less than the nominal size of the test unless the test is biased. It can be as high as 1 if the test decision is "reject the null every time".



Case 2: the null hypothesis is false. The type I error is 0. the type II error can be as high as 1 if the test decision is "do not reject the null any time".



To conflate Bayesian and frequentist terminology : you can't speak of the Pr(Type 1 error) without "conditioning" or knowing H_0 is true. A nice bit of frequentist notation is this: $P_{H_0}(text{Event})$ to refer to probabilities of events or outcomes under the probability model where the null is true, or $P_{theta = theta_0}(text{Event})$ equivalently.



If you want to be crazy and sum together probabilities that don't make sense, you can conceive of two values of $theta ne theta_0$ and $theta=theta_0$ for which the Type 1 and Type 2 errors add to more than 1. For instance:




P(decomposing corpse | dead) + P(looking alright | alive) > 1




Is this surprising or interesting? No. IRL one of those error probabilities will always be 0 and the other is less than or equal to 1 depending on how good or stupid the test is.






share|cite|improve this answer























  • Is it so irrational to add $alpha$ and $beta$ and seek for their sum minimum?
    – user2925716
    Nov 29 at 15:54






  • 1




    Not necessarily, provided you specify (a) what the specific alternative hypothesis is at which $beta$ will be evaluated and (b) what you are varying in order to minimize the sum. The sum could be viewed as a proxy for a combination of losses and prior probabilities associated with two hypotheses.
    – whuber
    Nov 29 at 16:01










  • @user2925716 I wouldn't. Even in the context of a power analysis, where we speculate as to the possible value(s) of $theta$ where the alternative may hold, this "probability of error" statement only makes sense when the costs of Type 1 and Type 2 errors are the same. But they are not. I report them separately, and discuss the implications of each error.
    – AdamO
    Nov 29 at 16:23








  • 2




    @whuber what does it mean to "specify the alternative hypothesis"? if that is the probability model for the outcome then there is no type I error because the null is wrong. The point is being explicit about what you're comparing. You can add the probabilities together, but as I argue the sum does not represent a probability.
    – AdamO
    Nov 29 at 18:16










  • Adam, I'm not trying to suggest the sum represents a probability. I am only responding constructively to the OP's query concerning "is it so irrational." Within the limited space permitted by a comment, I was trying to suggest that such a sum can be interpreted as (proportional to) a posterior expected loss. "Specify the alternative hypothesis" means to stipulate a specific distribution within the set known as "$H_A,$" which is usually not just one distribution. Unless one does so, $beta$ is undefined.
    – whuber
    Nov 29 at 20:18















up vote
5
down vote










up vote
5
down vote









Case 1: the null hypothesis is true. The type II error is 0. The type I error is less than the nominal size of the test unless the test is biased. It can be as high as 1 if the test decision is "reject the null every time".



Case 2: the null hypothesis is false. The type I error is 0. the type II error can be as high as 1 if the test decision is "do not reject the null any time".



To conflate Bayesian and frequentist terminology : you can't speak of the Pr(Type 1 error) without "conditioning" or knowing H_0 is true. A nice bit of frequentist notation is this: $P_{H_0}(text{Event})$ to refer to probabilities of events or outcomes under the probability model where the null is true, or $P_{theta = theta_0}(text{Event})$ equivalently.



If you want to be crazy and sum together probabilities that don't make sense, you can conceive of two values of $theta ne theta_0$ and $theta=theta_0$ for which the Type 1 and Type 2 errors add to more than 1. For instance:




P(decomposing corpse | dead) + P(looking alright | alive) > 1




Is this surprising or interesting? No. IRL one of those error probabilities will always be 0 and the other is less than or equal to 1 depending on how good or stupid the test is.






share|cite|improve this answer














Case 1: the null hypothesis is true. The type II error is 0. The type I error is less than the nominal size of the test unless the test is biased. It can be as high as 1 if the test decision is "reject the null every time".



Case 2: the null hypothesis is false. The type I error is 0. the type II error can be as high as 1 if the test decision is "do not reject the null any time".



To conflate Bayesian and frequentist terminology : you can't speak of the Pr(Type 1 error) without "conditioning" or knowing H_0 is true. A nice bit of frequentist notation is this: $P_{H_0}(text{Event})$ to refer to probabilities of events or outcomes under the probability model where the null is true, or $P_{theta = theta_0}(text{Event})$ equivalently.



If you want to be crazy and sum together probabilities that don't make sense, you can conceive of two values of $theta ne theta_0$ and $theta=theta_0$ for which the Type 1 and Type 2 errors add to more than 1. For instance:




P(decomposing corpse | dead) + P(looking alright | alive) > 1




Is this surprising or interesting? No. IRL one of those error probabilities will always be 0 and the other is less than or equal to 1 depending on how good or stupid the test is.







share|cite|improve this answer














share|cite|improve this answer



share|cite|improve this answer








edited Nov 29 at 15:43

























answered Nov 29 at 15:21









AdamO

32.3k257138




32.3k257138












  • Is it so irrational to add $alpha$ and $beta$ and seek for their sum minimum?
    – user2925716
    Nov 29 at 15:54






  • 1




    Not necessarily, provided you specify (a) what the specific alternative hypothesis is at which $beta$ will be evaluated and (b) what you are varying in order to minimize the sum. The sum could be viewed as a proxy for a combination of losses and prior probabilities associated with two hypotheses.
    – whuber
    Nov 29 at 16:01










  • @user2925716 I wouldn't. Even in the context of a power analysis, where we speculate as to the possible value(s) of $theta$ where the alternative may hold, this "probability of error" statement only makes sense when the costs of Type 1 and Type 2 errors are the same. But they are not. I report them separately, and discuss the implications of each error.
    – AdamO
    Nov 29 at 16:23








  • 2




    @whuber what does it mean to "specify the alternative hypothesis"? if that is the probability model for the outcome then there is no type I error because the null is wrong. The point is being explicit about what you're comparing. You can add the probabilities together, but as I argue the sum does not represent a probability.
    – AdamO
    Nov 29 at 18:16










  • Adam, I'm not trying to suggest the sum represents a probability. I am only responding constructively to the OP's query concerning "is it so irrational." Within the limited space permitted by a comment, I was trying to suggest that such a sum can be interpreted as (proportional to) a posterior expected loss. "Specify the alternative hypothesis" means to stipulate a specific distribution within the set known as "$H_A,$" which is usually not just one distribution. Unless one does so, $beta$ is undefined.
    – whuber
    Nov 29 at 20:18




















  • Is it so irrational to add $alpha$ and $beta$ and seek for their sum minimum?
    – user2925716
    Nov 29 at 15:54






  • 1




    Not necessarily, provided you specify (a) what the specific alternative hypothesis is at which $beta$ will be evaluated and (b) what you are varying in order to minimize the sum. The sum could be viewed as a proxy for a combination of losses and prior probabilities associated with two hypotheses.
    – whuber
    Nov 29 at 16:01










  • @user2925716 I wouldn't. Even in the context of a power analysis, where we speculate as to the possible value(s) of $theta$ where the alternative may hold, this "probability of error" statement only makes sense when the costs of Type 1 and Type 2 errors are the same. But they are not. I report them separately, and discuss the implications of each error.
    – AdamO
    Nov 29 at 16:23








  • 2




    @whuber what does it mean to "specify the alternative hypothesis"? if that is the probability model for the outcome then there is no type I error because the null is wrong. The point is being explicit about what you're comparing. You can add the probabilities together, but as I argue the sum does not represent a probability.
    – AdamO
    Nov 29 at 18:16










  • Adam, I'm not trying to suggest the sum represents a probability. I am only responding constructively to the OP's query concerning "is it so irrational." Within the limited space permitted by a comment, I was trying to suggest that such a sum can be interpreted as (proportional to) a posterior expected loss. "Specify the alternative hypothesis" means to stipulate a specific distribution within the set known as "$H_A,$" which is usually not just one distribution. Unless one does so, $beta$ is undefined.
    – whuber
    Nov 29 at 20:18


















up vote
2
down vote













It is true that with a standard hypothesis test you either reject the null hypothesis or you do not. I.e., type II error probability + power = 1 under $H_A$, and non-rejection probability + type I error probability = 1 under $H_0$.

However, the statement as you phrase it is not true. Type I and type II errors cannot happen under the same scenario within the traditional frequentist hypothesis-testing paradigm. I.e., either $H_0$ is true, in which case you can either wrongly reject it (type I error) or not reject it, or $H_a$ is true, in which case you can either correctly reject $H_0$ (how often you do this on average is the power) or wrongly fail to reject (type II error).

answered Nov 29 at 14:53

Björn
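
As a quick numerical check of these two identities, here is a minimal Python sketch; the Binomial(20, 0.5) null, the point alternative p_a = 0.7, and the rejection region {X >= 15} are arbitrary illustrative choices, not part of the answer itself.

from scipy.stats import binom

n, p0, p_a, k = 20, 0.5, 0.7, 15

# Under H0: non-rejection probability + type I error probability = 1.
non_reject = binom.cdf(k - 1, n, p0)  # P(X <= k-1 | p0), i.e. no rejection
alpha = binom.sf(k - 1, n, p0)        # P(X >= k | p0), i.e. type I error
assert abs(non_reject + alpha - 1.0) < 1e-12

# Under H_A at the specific point p_a: type II error probability + power = 1.
beta = binom.cdf(k - 1, n, p_a)       # P(X <= k-1 | p_a), i.e. type II error
power = binom.sf(k - 1, n, p_a)       # P(X >= k | p_a), i.e. power
assert abs(beta + power - 1.0) < 1e-12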





















  • Please see the last added line in the question.
    – user2925716
    Nov 29 at 14:57

  • Regarding the last line: the sum can be any number in (0, 2). It even depends on where exactly you are within $H_A$ (see the numerical sketch after this thread).
    – Björn
    Nov 29 at 16:05

  • In the other answer they've just proved that the sum $\alpha+\beta$ is in $[0,1]$...
    – user2925716
    Nov 29 at 16:07

  • It's unclear to which answer you refer--I cannot find any such proof.
    – whuber
    Dec 1 at 14:42
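
To see the (0, 2) range numerically, here is a minimal Python sketch; the Binomial(20, 0.5) null, the one-sided rejection region {X >= 15}, and the grid of alternatives are arbitrary illustrative choices.

from scipy.stats import binom

n, p0, k = 20, 0.5, 15

alpha = binom.sf(k - 1, n, p0)        # P(X >= k | p0): type I error

for p_a in (0.55, 0.7, 0.9, 0.3):     # different points inside H_A
    beta = binom.cdf(k - 1, n, p_a)   # P(X <= k-1 | p_a): type II error at p_a
    print(f"p_a = {p_a}: alpha + beta = {alpha + beta:.3f}")

# Near p_a = p0 the sum approaches 1; far above p0 it falls toward alpha;
# and at p_a = 0.3, where this one-sided region has essentially no power,
# the sum exceeds 1. So the sum is not pinned to 1, nor confined to [0, 1].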














