With conventional CMOS image sensor architectures plateauing around 20 Gpixel/s, i.e. 20 kfps for a megapixel sensor, alternative architectures strive to achieve higher pixel and frame rates. In applications where it is important to capture the precise timing of a relatively small number of hits/spikes, sensors are designed in which pixels trigger on these hits/spikes independently and measure their time of occurrence with a resolution of the order of 1 ns or better. When full images are required, two main alternative architectures are seen. In the first, the pixel itself is fairly conventional, but it is driven in such a way that the image is recorded only during a short time. To achieve ultra-high-speed video capability, the sensor must have a total number of pixels Ntot = Nformat x Ntime, where Nformat is the number of pixels in the image, i.e. the image format, and Ntime is the number of recorded frames. In the second architecture, frames are stored in a memory directly connected to a specific pixel. The storage can be in the pixel or in the periphery of the sensor, and it can be done either with voltage or with charge. These are the so-called framing image sensors. Framing sensors already exist, such as those used in the Shimadzu HPVX or Kirana. This paper will review where the limitations of framing sensors come from, for both the voltage and the charge architectures, and what performance can be expected from them.
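The two quantitative relations in the abstract, the readout-throughput limit on frame rate and the burst-mode pixel budget Ntot = Nformat x Ntime, can be illustrated with a minimal sketch. The specific numbers below (a 100 kpixel format storing 128 frames) are hypothetical examples, not figures from the paper:

```python
def max_frame_rate(throughput_pix_per_s: float, n_format: int) -> float:
    """Frame rate achievable at a given readout throughput (pixels/s)."""
    return throughput_pix_per_s / n_format


def total_pixels(n_format: int, n_time: int) -> int:
    """On-chip pixels needed to store n_time frames of n_format pixels each
    (the burst-mode relation Ntot = Nformat x Ntime)."""
    return n_format * n_time


# Conventional readout: 20 Gpixel/s over a 1 Mpixel format -> 20 kfps.
print(max_frame_rate(20e9, 1_000_000))  # 20000.0 fps

# Burst architecture, hypothetical numbers: 100 kpixel format, 128 frames.
print(total_pixels(100_000, 128))       # 12800000 pixels on chip
```

The sketch makes the trade-off explicit: a burst sensor pays for its frame rate with silicon area, since the total pixel count grows linearly with the number of stored frames.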
|Title|Review and future outlook on ultra-high speed CMOS image sensor|
|Affiliation|IMASENIC S.L., Barcelona, Spain|
|Session|3. Sensors I|