In J. Quiñonero-Candela, M. Sugiyama, A. Schwaighofer, & N. Lawrence (Eds.), Dataset Shift in Machine Learning (pp. 29-38). Cambridge, MA, USA: MIT Press, 2008.
This chapter shows how the problem of dataset shift has been addressed by different philosophical schools under the concept of “projectability.” When philosophers tried to formulate scientific reasoning with the resources of predicate logic and a Bayesian inductive logic, it became evident how vital background knowledge is in allowing us to project confidently into the future, or to a different place, from previous experience. To transfer expectations from one domain to another, it is important to locate robust causal mechanisms. An important debate concerning these attempts to characterize background knowledge is whether it can all be captured by probabilistic statements. Having placed the problem within this wider philosophical perspective, the chapter turns to machine learning and addresses a number of questions: Have machine learning theorists been sufficiently creative in their efforts to encode background knowledge? Have the frequentists been more imaginative than the Bayesians, or vice versa? Is the requirement to express background knowledge in a probabilistic framework too restrictive? Must relevant background knowledge be handcrafted for each application, or can it be learned?
The chapter was based on an invited talk at the NIPS Workshops, Whistler, 2006.
Last revised on May 17, 2024.