Gain modulation, in which the sensitivity of a neural response to one input is modified by a second input, has been studied at both the single-neuron and network levels. At the single-neuron level, gain modulation can arise if the two inputs interact directly and multiplicatively. Alternatively, the neuron can sum the inputs linearly, with gain modulation arising instead from a nonlinear input-output relationship. Although these two mechanisms can look very similar, we derive a mathematical constraint that can distinguish between them, provided sufficient data of the appropriate type are available. Previous coordinate-transformation studies have shown that artificial neurons with sigmoid transfer functions can acquire a nonlinear additive form of gain modulation through learning-driven adjustment of synaptic weights. We use the constraint derived for single-neuron studies to compare the responses of this network with those of another network model based on a biologically inspired transfer function that can support approximately multiplicative interactions.
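The two candidate mechanisms contrasted above can be sketched numerically. The snippet below is an illustrative toy model, not the constraint derived in the paper: `multiplicative` applies a direct multiplicative interaction between a driving input `x` and a modulatory input `y`, while `additive_nonlinear` sums the inputs linearly and passes them through a sigmoid transfer function, so that gain-like changes emerge from the nonlinearity alone. The function names and parameter values are assumptions chosen for demonstration.

```python
import numpy as np

def sigmoid(u):
    # Standard logistic transfer function.
    return 1.0 / (1.0 + np.exp(-u))

def multiplicative(x, y):
    # Mechanism 1: the modulatory input y directly scales
    # the response to the driving input x.
    return (1.0 + y) * sigmoid(x)

def additive_nonlinear(x, y):
    # Mechanism 2: the inputs are summed linearly; apparent gain
    # changes arise because shifting the operating point of the
    # sigmoid changes its local slope.
    return sigmoid(x + y)

# Both mechanisms alter the local slope (gain) of the response
# around x = 0 when y changes, so limited data can make them
# hard to tell apart.
for y in (0.0, 1.0):
    slope_mult = (multiplicative(0.01, y) - multiplicative(-0.01, y)) / 0.02
    slope_add = (additive_nonlinear(0.01, y) - additive_nonlinear(-0.01, y)) / 0.02
    print(f"y={y}: multiplicative slope={slope_mult:.3f}, "
          f"additive slope={slope_add:.3f}")
```

In this toy setting the multiplicative mechanism rescales the entire response curve, whereas the additive mechanism shifts it along the input axis; distinguishing the two in real data is the problem the derived constraint addresses.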