for a classifier in an ensemble is typically of the form $w_k = \log\frac{1 - \epsilon_k}{\epsilon_k}$, where $\epsilon_k$ is some expected measure of loss for the $k$th classifier. How do the expert weights affect the loss when being tested on an unknown distribution?

Using Domain Adaptation to Clarify the Bound

Combining the works by Ben-David et al. (2010) and Ditzler et al. (2014) gives us:

$$
\mathbb{E}_T\!\left[\ell(H, f_T)\right] \;\le\; \sum_{k=1}^{t} w_{k,t}\left( \mathbb{E}_k\!\left[\ell(h_k, f_k)\right] + \lambda_{T,k} + \frac{1}{2}\,\hat{d}_{\mathcal{H}\Delta\mathcal{H}}(U_T, U_k) + O\!\left(\sqrt{\frac{\nu \log m}{m}}\right) \right)
$$

where $\lambda_{T,k}$ is a measure of disagreement between $f_k$ and $f_T$ (a bit unfortunate, since it depends on the unknown $f_T$).

- The bound is a weighted sum of: training loss + disagreement of $f_k$ and $f_T$ + divergence of $D_k$ and $D_T$.
- $\lambda_{T,k}$ encapsulates real drift, whereas $\hat{d}_{\mathcal{H}\Delta\mathcal{H}}$ captures virtual drift.
- Moreover, existing algorithms that rely only on the loss over the most recent labelled distribution miss the other changes that can occur.

1. G. Ditzler, G. Rosen, and R. Polikar, "Domain adaptation bounds for multiple expert systems under concept drift," in International Joint Conference on Neural Networks, 2014. (Best Student Paper)

IJCNN 2016: A Study of an Incremental Spectral Meta-Learner for Nonstationary Environments
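To make the structure of the bound concrete, here is a minimal NumPy sketch that evaluates its right-hand side for a pool of experts. It is illustrative only: the function names, the normalization of the expert weights, and dropping the constants inside the $O(\cdot)$ term are assumptions, and $\lambda_{T,k}$ is not observable in practice (it depends on the unknown $f_T$), so any value plugged in here would itself be an assumption or an upper estimate.

```python
import numpy as np

def expert_weight(eps_k):
    """AdaBoost-style expert weight w_k = log((1 - eps_k) / eps_k).

    Assumes eps_k lies in (0, 0.5) so the weight is finite and positive.
    """
    return np.log((1.0 - eps_k) / eps_k)

def drift_bound(eps, lam, div_hdh, m, nu):
    """Sketch of the weighted bound above (constants inside O(.) dropped).

    eps     : per-expert training losses  E_k[l(h_k, f_k)]
    lam     : lambda_{T,k} terms (disagreement of f_k and f_T; real drift)
    div_hdh : empirical H-Delta-H divergences  d_hat(U_T, U_k) (virtual drift)
    m       : number of unlabeled samples used to estimate the divergence
    nu      : complexity (e.g., VC dimension) of the hypothesis class
    """
    eps, lam, div_hdh = map(np.asarray, (eps, lam, div_hdh))
    w = expert_weight(eps)
    w = w / w.sum()                      # convex combination of experts (assumed normalization)
    complexity = np.sqrt(nu * np.log(m) / m)
    per_expert = eps + lam + 0.5 * div_hdh + complexity
    return float(np.dot(w, per_expert))

# Hypothetical example: three experts, the first trained closest to the test distribution.
print(drift_bound(eps=[0.10, 0.15, 0.20],
                  lam=[0.05, 0.10, 0.25],
                  div_hdh=[0.10, 0.30, 0.60],
                  m=500, nu=10))
```

Even in this toy setting, an expert with low training loss but large $\lambda_{T,k}$ or large divergence contributes a large term to the bound, which is the point the slide makes against weighting experts by recent labelled loss alone.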