Limited access to expert tutors is a problem that can be addressed by recruiting tutors from different stages of medical or non-medical undergraduate and postgraduate education. To determine whether such differences in qualification affect process evaluation by participants or learning outcome (exam results), we analysed data from a 4-year prospective study of 787 3rd-year medical students (111 groups of 5-10 participants) taking an obligatory problem-based learning (PbL) course in basic pharmacology. We compared peer tutors (undergraduate medical students, ≥4th year), non-expert (junior) staff tutors (physicians, pharmacists, veterinarians, biologists, or chemists still in postgraduate education), and expert (senior) staff tutors (postgraduate education completed). PbL-related evaluation scores were highest for senior staff-led groups. Tutor performance scores of peer-led groups did not differ from those of staff-led groups, but scores of groups tutored by junior staff were lower than those of groups with senior staff tutors. Students' weekly preparation time tended to be lower in peer-led groups, whereas time spent specifically on exam preparation appeared to be higher than in staff-tutored PbL groups. Tutors' experience in coaching PbL groups was also investigated as a putative confounding variable: groups led by experienced tutors, defined as those with at least one term of previous PbL tutoring, had significantly higher evaluation scores. Interestingly, neither tutors' subject-matter expertise (peer students, junior staff, or senior staff) nor their teaching-method expertise influenced the groups' mean scores in a written exam. This indicates that the effect of tutor expertise on the learning process is not accompanied by a difference in learning outcome when only factual knowledge is assessed by traditional methods.