If our goal in Artificial Intelligence in Medicine (AIM) is to engineer systems that health-care providers will both use and that will, in the process, improve their performance, we must concentrate on developing causal theories of knowledge and problem solving. One broad direction in pursuing this goal is understanding the relationships between existing models of rationality and models of bounded rationality for similar tasks. Models of rationality are those approaches in which the optimal properties of the models are deductively provable, i.e., in which the processing is rational; representative models of rationality used in AIM are deductive logical models, statistical models such as Bayesian inference models, and decision-analytic models. Models of bounded rationality are those which neither guarantee such optimal properties nor yield to deductive correctness proofs; these models have their roots in cognitive psychology. In this article we show how the relationship between models of rationality and models of bounded rationality might be explicated in the case of abductive tasks in medicine. We do this by positioning both modeling approaches within the same framework (an abstract computational model) and interpreting in this context both computational-complexity results concerning the nature of the task and empirical results from studies of human problem-solving behavior.