Understanding from Machine Learning Models

Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. Yet an increasing number of scientists are going in the opposite direction, using opaque machine learning models to make predictions and draw inferences, which suggests that they are opting for models with less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions about why minimal models provide understanding misguided? In this article, using the case of deep neural networks, I argue that it is not the complexity or black-box nature of a model that limits how much understanding the model provides. Rather, it is a lack of scientific and empirical evidence supporting the link between the model and the target phenomenon that primarily prohibits understanding.

1.  Understanding from Minimal and Complex Models

2.  Algorithms, Explanatory Questions, and Understanding

3.  Black Boxes

3.1.  Implementation black boxes

3.2.  Levels of implementation black boxes

4.  The Black Boxes of Deep Neural Networks

4.1.  Deep neural network structure

4.2.  Deep neural network modelling process

4.3.  Levels of deep neural network black boxes

5.  Understanding, Explanation, and Link Uncertainty

5.1.  Deep neural networks and how-possibility explanations

5.2.  Deep neural networks and link uncertainty

5.3.  Differences in understanding; differences in link uncertainty
