Adapting an information extraction application to a new domain (e.g., new categories of narrative text) typically requires re-training the application with the new narratives. But could previous training from the original domain ease this adaptation? After developing an NLP-based application to extract congestive heart failure treatment performance measures from echocardiogram reports (i.e., the source domain), we adapted it to a large variety of clinical documents (i.e., the target domain). We wanted to reuse the machine learning models trained on the source domain and experimented with several popular domain adaptation approaches, such as reusing the predictions from the source model or applying linear interpolation. As a result, we measured higher recall and precision (92.4% and 95.3%, respectively) than when training with the target domain only.
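
As a rough illustration of the linear interpolation approach mentioned above, the sketch below combines the class probabilities of a source-domain model and a target-domain model with an interpolation weight. The toy data, the choice of classifier, and the weight value are assumptions for the example, not the paper's actual implementation.

```python
# Hypothetical sketch: linear interpolation between a source-domain and a
# target-domain classifier. The data, feature space, and alpha value are
# made up for illustration and are not the models described in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy feature matrices: many labeled source-domain examples, few target-domain ones.
X_source, y_source = rng.normal(size=(1000, 20)), rng.integers(0, 2, 1000)
X_target, y_target = rng.normal(size=(100, 20)), rng.integers(0, 2, 100)

# Train one model per domain on the same feature space.
source_model = LogisticRegression(max_iter=1000).fit(X_source, y_source)
target_model = LogisticRegression(max_iter=1000).fit(X_target, y_target)

def interpolated_proba(X, alpha=0.3):
    """Linearly interpolate class probabilities from the two models.

    alpha is the weight given to the source-domain model; in practice it
    would be tuned on held-out target-domain data.
    """
    return (alpha * source_model.predict_proba(X)
            + (1 - alpha) * target_model.predict_proba(X))

# Classify new target-domain examples with the interpolated model.
X_new = rng.normal(size=(5, 20))
print(interpolated_proba(X_new).argmax(axis=1))
```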