Recordings of transient-evoked otoacoustic emissions (TEOAEs) suffer from two main sources of contamination: random noise and the stimulus artifact. The stimulus artifact can be substantially reduced by using a derived non-linear recording paradigm. Three such paradigms are analyzed here: the level derived non-linear (LDNL), the double-evoked (DE), and the rate derived non-linear (RDNL) paradigms. While these methods successfully reduce the stimulus artifact, they increase contamination by random noise. In this study, the signal-to-noise ratios (SNRs) achievable by the three paradigms are compared within a common theoretical framework. The analysis also allows the parameters of the RDNL paradigm to be optimized for maximum SNR. Calculations based on typical parameters used in practice suggest that, when ranked by SNR for a given averaging time, RDNL performs best, followed by the LDNL and DE paradigms.
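
As a minimal illustration of the trade-off described above, consider the conventional level derived scheme, in which each stimulus train contains three clicks at level $L$ and one inverted click at level $3L$; these particular levels and epoch counts are the standard non-linear click protocol and are assumed here for illustration, not taken from this study. The derived response is

\[
  d \;=\; \sum_{i=1}^{3} p_i(L) \;-\; p(3L),
\]

where each recorded epoch $p = s_{\mathrm{lin}} + s_{\mathrm{nl}} + n$ consists of a linear stimulus artifact, a non-linear emission component, and zero-mean noise of variance $\sigma^2$. Because the artifact scales linearly with level, $s_{\mathrm{lin}}(3L) = 3\, s_{\mathrm{lin}}(L)$, it cancels in $d$, leaving

\[
  d \;=\; 3\, s_{\mathrm{nl}}(L) - s_{\mathrm{nl}}(3L) \;+\; \sum_{i=1}^{4} n_i,
  \qquad
  \operatorname{Var}\!\Big(\sum_{i=1}^{4} n_i\Big) = 4\sigma^2 ,
\]

assuming independent noise across epochs. All four epochs contribute their noise, while only the compressive residual of the emission survives the subtraction; this noise penalty is the quantity that the SNR comparison across the three paradigms evaluates.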