In-sensor and near-sensor computing architectures enable multiply-accumulate operations to be carried out directly at the point of sensing. In-sensor architectures offer dramatic power and speed improvements over traditional von Neumann architectures by eliminating multiple analog-to-digital conversions, data storage, and data movement operations. Current in-sensor processing approaches rely on tunable sensors or additional weighting elements to perform linear functions, such as multiply-accumulate operations, as the sensor acquires data. This work implements in-sensor computing with an oscillatory retinal neuron device that converts incident optical signals into voltage oscillations. A computing scheme based on the frequency shift of coupled oscillators is introduced that enables parallel, frequency-multiplexed, nonlinear operations on the inputs. An experimentally implemented 3 × 3 focal plane array of coupled neurons performs functions approximating edge detection, thresholding, and segmentation in parallel. Inference on handwritten digits from the MNIST database is also experimentally demonstrated, with a 3 × 3 array of coupled neurons feeding into a single-hidden-layer neural network that approximates a liquid-state machine. Finally, the equivalent energy consumption for carrying out image-processing operations, including peripherals such as the Fourier transform circuits, is projected to be below 20 fJ/OP, potentially reaching as low as 15 aJ/OP.
Keywords: in-sensor computing; negative differential resistance; oscillator; oscillatory retinal neurons; parallel computing; ultralow power computing.
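The frequency-shift computing principle mentioned in the abstract can be illustrated, purely as a conceptual sketch, with a Kuramoto-style model of coupled phase oscillators. This is an illustrative assumption, not the paper's device physics: the function `coupled_frequency`, the coupling constant `k`, and the Kuramoto coupling form are all hypothetical stand-ins. The idea shown is that input intensity detunes each oscillator's natural frequency, and coupling pulls similarly driven oscillators into a common locked frequency while a strongly detuned outlier (e.g., a pixel across an edge) keeps a distinct frequency, a nonlinear, thresholding-like response readable from the frequency spectrum.

```python
import numpy as np

def coupled_frequency(inputs, k=0.8, base=1.0, dt=1e-3, steps=20000):
    """Integrate Kuramoto-coupled phase oscillators and return each
    oscillator's mean angular frequency over the run.

    Each input (e.g., a pixel intensity) shifts one oscillator's
    natural frequency away from the shared base frequency.
    """
    w = base + np.asarray(inputs, dtype=float)  # intensity -> detuning
    theta = np.zeros_like(w)                    # phases start aligned
    total = np.zeros_like(w)                    # accumulated phase advance
    n = len(w)
    for _ in range(steps):
        # dtheta_i/dt = w_i + (k/n) * sum_j sin(theta_j - theta_i)
        pull = np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
        dtheta = w + (k / n) * pull
        theta += dtheta * dt
        total += dtheta * dt
    return total / (steps * dt)  # mean frequency of each oscillator

# Uncoupled (k=0): each oscillator keeps its own detuned frequency.
print(coupled_frequency([0.1, 0.2, 0.3], k=0.0))
# Coupled, small detuning spread: frequencies lock to a common value.
print(coupled_frequency([0.10, 0.12, 0.11], k=0.8))
# One strongly detuned "edge" pixel stays at a distinct frequency.
print(coupled_frequency([0.0, 0.0, 1.0], k=0.1))
```

In this toy model the locked-versus-unlocked outcome is a nonlinear function of the input pattern, which is the qualitative behavior the frequency-multiplexed readout exploits; the real device's oscillation dynamics and coupling mechanism differ.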