Repeated biomarker measurements are often taken over time to help assess the risk of disease progression and to guide clinical decision-making, such as whether to start treatment. Unfortunately, gold-standard methodologies for measuring biomarkers are often prohibitively expensive or unavailable in resource-limited settings. For example, the costs of monitoring HIV-infected subjects to decide when to start or change treatments are a significant burden for many countries, often exceeding the costs of the treatments themselves. A major issue is how to evaluate changes in the timing of key clinical decisions if a new, simpler, or less expensive technology were used instead of the gold standard. We develop a framework for addressing this problem and apply it to the monitoring of CD4 counts in HIV-infected patients. We focus on the practically important situation in which longitudinal natural history data are available for the gold standard (flow cytometry for CD4 counts), but the first data for a new technology are expected to come from a cross-sectional method comparison study, which allows estimation of the variability and systematic differences (bias) between the two technologies. In a case study, we illustrate how a combination of statistical modeling and simulation might be used to evaluate the potential impact of a new technology on treatment starting times in a population of HIV-infected subjects. This gives developers of new CD4 measurement technologies insight into what might constitute acceptable increases in variability and/or bias for novel methods. We conclude with a discussion of our findings and of statistical problems that need further work.
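To make the general approach concrete, the following is a minimal sketch, in Python, of how simulated measurement error and bias might be propagated to treatment starting times. All quantities here are illustrative assumptions, not values from the paper: the subject count, the linear square-root CD4 trajectory model, the 350 cells/μL starting threshold, the coefficients of variation for the two technologies, and the 30 cells/μL bias are placeholders that would in practice be estimated from longitudinal natural history data and a cross-sectional method comparison study.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Illustrative settings (assumptions, not estimates from real data) ---
n_subjects = 1000                          # simulated HIV-infected subjects
visit_times = np.arange(0, 10.5, 0.5)      # years; semiannual monitoring visits
threshold = 350                            # CD4 cells/uL treatment-start threshold

# True square-root CD4 trajectories: subject-specific intercept and linear decline
sqrt_cd4_0 = rng.normal(25.0, 3.0, n_subjects)   # sqrt(CD4) at baseline
slope = rng.normal(-1.0, 0.4, n_subjects)        # change in sqrt(CD4) per year
true_sqrt = sqrt_cd4_0[:, None] + slope[:, None] * visit_times[None, :]
true_cd4 = np.clip(true_sqrt, 0, None) ** 2

def measure(true_vals, cv, bias=0.0):
    """Add multiplicative measurement error with coefficient of variation `cv`
    and a constant systematic bias (cells/uL)."""
    noise = rng.normal(1.0, cv, true_vals.shape)
    return np.clip(true_vals * noise + bias, 0, None)

def first_start_time(measured_cd4, times, thr):
    """First visit at which measured CD4 falls below the threshold (NaN if never)."""
    below = measured_cd4 < thr
    start = times[below.argmax(axis=1)].astype(float)
    start[~below.any(axis=1)] = np.nan
    return start

# Gold standard (flow cytometry): assumed 15% CV, no bias
gold = measure(true_cd4, cv=0.15)

# Candidate new technology: extra variability and a systematic negative bias,
# as might be estimated from a cross-sectional method comparison study
new_tech = measure(true_cd4, cv=0.25, bias=-30.0)

start_gold = first_start_time(gold, visit_times, threshold)
start_new = first_start_time(new_tech, visit_times, threshold)

ok = ~np.isnan(start_gold) & ~np.isnan(start_new)
print(f"Mean shift in treatment start time (new - gold): "
      f"{np.mean(start_new[ok] - start_gold[ok]):.2f} years")
print(f"Proportion starting earlier with new technology: "
      f"{np.mean(start_new[ok] < start_gold[ok]):.2f}")
```

Repeating such a simulation over a grid of assumed biases and coefficients of variation gives a simple way to map how much additional error a new technology could tolerate before treatment starting times shift by a clinically meaningful amount.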