We test the robustness of a closed-loop treatment scheduling method to realistic HIV viral-load measurement error. The algorithm is designed to detect an induced viral-load minimum accurately with a reduced number of samples. Therapy must be switched at or near the viral-load minimum to achieve optimal therapeutic benefit, which decreases logarithmically with increasing viral load at the switching time. The performance of the algorithm is characterized using several metrics: the number of samples saved relative to fixed-rate sampling, the risk reduction achieved relative to that possible with frequent sampling, and the difference between the actual switching time and the theoretically optimal switching time. The algorithm is applied to simulated patient data generated from a family of data-driven patient models and corrupted by experimentally confirmed levels of log-normal noise.
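To make the evaluation setup concrete, the sketch below illustrates the two core ingredients named above: corrupting a simulated viral-load trajectory with log-normal measurement noise (additive Gaussian noise on the log10 scale) and comparing the estimated switching time under sparse vs. frequent sampling against the true minimum. The trajectory shape, the 0.2 log10 noise standard deviation, and all function names are illustrative assumptions, not the paper's patient models or parameters.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def true_log10_vl(t, nadir_time=30.0, nadir_level=1.5, curvature=0.002):
    """Illustrative viral-load trajectory (log10 copies/mL) with an induced
    minimum at `nadir_time`; a stand-in for the paper's data-driven models."""
    return nadir_level + curvature * (t - nadir_time) ** 2

def measure(t, sigma_log10=0.2):
    """Corrupt the true load with log-normal measurement error, i.e. additive
    Gaussian noise on the log10 scale (sigma_log10 is an assumed assay SD,
    not a value taken from the paper)."""
    return true_log10_vl(t) + rng.normal(0.0, sigma_log10, size=np.shape(t))

# Frequent-sampling baseline vs. a sparser fixed-rate schedule.
t_dense = np.arange(0.0, 60.0, 1.0)    # daily samples
t_sparse = np.arange(0.0, 60.0, 7.0)   # weekly samples

# Estimate the switching time as the time of the measured minimum and report
# its deviation from the true optimum (day 30 in this toy model), along with
# the sample count each schedule requires.
for label, t in [("daily", t_dense), ("weekly", t_sparse)]:
    y = measure(t)
    t_hat = t[np.argmin(y)]
    print(f"{label:6s}: {len(t):2d} samples, "
          f"estimated nadir at day {t_hat:.0f} (true optimum: day 30)")
```

A closed-loop scheduler of the kind described would aim to approach the switching-time accuracy of the dense schedule while drawing closer to the sparse schedule's sample count; the metrics in the abstract quantify exactly that trade-off.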