By Bayes’ rule, the posterior probability of y = 1 can be expressed as (with η = P(y = 1) the class prior):

p(y = 1 | ϕ(x)) = η P(ϕ(x) | y = 1) / (η P(ϕ(x) | y = 1) + (1 − η) P(ϕ(x) | y = −1)).

(Failure of OOD detection under invariant classifier) Consider an out-of-distribution input which contains the environmental feature: ϕ_out(x) = M_inv z_out + M_e z_e, where z_out ⊥ z_inv. Given the invariant classifier (cf. Lemma 2), the posterior probability for the OOD input is p(y = 1 | ϕ_out) = σ(2 σ_e p⊤z_e + log η/(1 − η)), where σ is the logistic function. Thus for arbitrary confidence 0 < c := P(y = 1 | ϕ_out) < 1, there exists ϕ_out(x) with z_e such that p⊤z_e = (1/(2 σ_e)) log (c(1 − η))/(η(1 − c)).

Proof. Consider an out-of-distribution input x_out with M_inv = [I_{s×s}; 0_{1×s}] and M_e = [0_{s×e}; p⊤]; then the feature representation is ϕ(x) = [z_out; p⊤z_e], where p is the unit-norm vector defined in Lemma 2.

Then we have P(y = 1 | ϕ_out) = P(y = 1 | z_out, p⊤z_e) = σ(2 σ_e p⊤z_e + log η/(1 − η)), where σ is the logistic function. Thus for arbitrary confidence 0 < c := P(y = 1 | ϕ_out) < 1, there exists ϕ_out(x) with z_e such that p⊤z_e = (1/(2 σ_e)) log (c(1 − η))/(η(1 − c)). ∎
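The inversion step in the proof can be checked numerically. The sketch below uses the reconstructed notation (σ_e is the scalar in the posterior's log-odds, η = P(y = 1), and `pz_e` stands for p⊤z_e; the function names are illustrative, not from the paper) and verifies that for any target confidence c, the stated choice of p⊤z_e makes the posterior equal exactly c:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def posterior(pz_e, sigma_e, eta):
    # Posterior from the lemma: sigma(2*sigma_e * p^T z_e + log(eta/(1-eta)))
    return sigmoid(2.0 * sigma_e * pz_e + math.log(eta / (1.0 - eta)))

def required_projection(c, sigma_e, eta):
    # Solve posterior(...) = c for p^T z_e:
    # (1/(2*sigma_e)) * log(c*(1-eta) / (eta*(1-c)))
    return math.log(c * (1.0 - eta) / (eta * (1.0 - c))) / (2.0 * sigma_e)

# Any target confidence c in (0, 1) is attainable by an OOD input,
# which is the failure mode the lemma describes.
eta, sigma_e = 0.7, 1.0
for c in (0.01, 0.5, 0.99):
    pz_e = required_projection(c, sigma_e, eta)
    assert abs(posterior(pz_e, sigma_e, eta) - c) < 1e-9
```

The check passes for extreme confidences as well, illustrating that an invariant classifier places no constraint on its confidence over such OOD inputs.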

Remark: In a more general case, z_out can be modeled as a random vector that is independent of the in-distribution labels y = 1 and y = −1 and of the environmental features: z_out ⊥ y and z_out ⊥ z_e. Hence in Eq. 5 we have P(z_out | y = 1) = P(z_out | y = −1) = P(z_out). Then P(y = 1 | ϕ_out) = σ(2 σ_e p⊤z_e + log η/(1 − η)), identical to Eq. 7. Therefore our main theorem still holds under more general cases.

Appendix B Extension: Color Spurious Correlation

To further validate our results beyond background and gender spurious (environmental) features, we provide additional experimental results on the ColorMNIST dataset, as shown in Figure 5.

Evaluation Task 3: ColorMNIST.

We construct a variant of MNIST [lecun1998gradient], which composes colored backgrounds on digit images. In this dataset, E = {red, purple, green, pink} denotes the background color and we use Y = {0, 1} as in-distribution classes. The correlation between the background color e and the digit y is explicitly controlled, with r ranging from 0.25 to 0.45. That is, r denotes the probability P(e = red | y = 0) = P(e = purple | y = 0) = P(e = green | y = 1) = P(e = pink | y = 1), while 0.5 − r = P(e = green | y = 0) = P(e = pink | y = 0) = P(e = red | y = 1) = P(e = purple | y = 1). Note that the maximum correlation r (reported in Table 4) is 0.45. As ColorMNIST is relatively simpler compared to Waterbirds and CelebA, further increasing the correlation results in less interesting environments where the learner can easily pick up the contextual information. For spurious OOD, we use digits {5, …} with background colors red and green, which contain environmental features overlapping with the training data. For non-spurious OOD, following common practice [MSP], we use the Textures [cimpoi2014describing], LSUN [lsun] and iSUN [xu2015turkergaze] datasets. We train ResNet-18 [he2016deep], which achieves 99.9% accuracy on the in-distribution test set. The OOD detection performance is shown in Table 4.
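The label-conditional color assignment implied by the probabilities above can be sketched as follows. This is a hypothetical helper (`sample_color` is not from the paper's code, and the real dataset construction additionally composes the sampled color onto the digit image, which we omit):

```python
import random

def sample_color(y, r, rng=random):
    """Sample a background color for digit label y under correlation r:
    P(e=red|y=0) = P(e=purple|y=0) = r, P(e=green|y=0) = P(e=pink|y=0) = 0.5 - r,
    and symmetrically for y = 1."""
    assert 0.0 < r <= 0.5
    majority = ("red", "purple") if y == 0 else ("green", "pink")
    minority = ("green", "pink") if y == 0 else ("red", "purple")
    # With probability 2r pick one of the two majority colors (each with
    # probability r overall); otherwise pick a minority color (each 0.5 - r).
    pool = majority if rng.random() < 2.0 * r else minority
    return pool[int(rng.random() < 0.5)]
```

At r = 0.45 each majority color appears behind its class 45% of the time, so the background is highly predictive of the label; at r = 0.25 all four colors are equally likely and the correlation vanishes.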
