As cameras and sensors dive deeper into the realm of computational photography, we’re going to see sensor technology change in exciting new ways to support what has become possible with faster real-time processing.
One such technology is the Quad Bayer sensor that Sony has developed, which can already be found in the DJI Mavic Air 2. Sony has also developed a 48MP full-frame version that we should expect to see in an actual full-frame camera very soon.
The Quad Bayer sensor is a pretty cool advancement in sensor technology and it has the potential to be a true game-changer for our hybrid photo/video systems.
What Is A Quad Bayer Sensor?
It’s basically a sensor where each color filter in the Bayer pattern covers a 2×2 group of four photosites instead of one, without offsetting the RGB filter pattern itself.
What this means is each color section can function in two ways.
- As a low-megapixel sensor, where the four photosites in each quad act as one to give better performance when photon density is much lower, like in low-light situations. This improves sensitivity and efficiency.
- As a high-megapixel sensor, where each photosite pulls its own color data, which is useful in high-photon-density situations like direct sunlight.
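The two modes above can be sketched in a few lines of NumPy. This is a toy model of the readout, not Sony’s actual pipeline; the array size and values are arbitrary, and whether binning sums or averages the charge is a simplification here:

```python
import numpy as np

# Hypothetical 8x8 grid of raw photosite values; in a Quad Bayer layout,
# each 2x2 block of photosites sits under a single color filter.
rng = np.random.default_rng(0)
raw = rng.integers(0, 4096, size=(8, 8)).astype(np.float64)  # 12-bit values

def binned_readout(raw):
    """Low-light mode: combine each 2x2 quad into one pixel (quarter resolution)."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

def full_readout(raw):
    """Bright-light mode: every photosite is its own pixel (full resolution)."""
    return raw

low_light = binned_readout(raw)  # 4x4, four times the signal per pixel
high_res = full_readout(raw)     # 8x8

print(low_light.shape, high_res.shape)  # (4, 4) (8, 8)
```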
Applications For Photography
The Quad Bayer sensor is not necessarily that useful for still photography, where there is no reason not to always capture at the higher resolution and then scale the image down for better low-light performance.
This was a common misconception with the A7sII vs the A7rII, for example. People assumed the A7sII was better in low light for stills; however, if you scaled down the image from an A7rII, the results were about the same.
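That scaling argument is easy to simulate: averaging 2×2 blocks of pixels with independent noise cuts the noise standard deviation roughly in half. A minimal sketch with made-up signal and noise numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = 100.0     # true scene brightness (arbitrary units)
noise_std = 10.0   # per-pixel noise (arbitrary units)

# Simulated high-megapixel frame: each pixel = signal + independent noise.
hi_res = signal + rng.normal(0, noise_std, size=(1000, 1000))

# Downscale 2x by averaging 2x2 blocks, like scaling an A7rII file down.
scaled = hi_res.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(f"hi-res noise: {hi_res.std():.2f}")  # ~10
print(f"scaled noise: {scaled.std():.2f}")  # ~5, since averaging 4 samples halves the std
```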
I would actually be curious to see a comparison between a 48MP Quad Bayer and a standard 48MP Bayer sensor, to see whether they produce the same results at full resolution or whether the quad array creates artifacts during demosaicing.
Applications For Videography
The Quad Bayer sensor is technically a high-resolution, high-pixel-density sensor, but with intelligent processing during the analog-to-digital conversion, the sensor can play two different roles.
- Superior 4k low-light performance
- A high-pixel-density sensor that can produce 8k images
For video, a Quad Bayer sensor can be very useful because it would allow you to adjust your resolution based on your lighting situation. In bright light, you could capture 8k by using every single photosite as its own pixel; in low light, you could combine each color quad for better low-light shooting.
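A camera could even make that switch automatically. Here is a hypothetical sketch of the decision; the function name and the 0.25 threshold are invented for illustration:

```python
def choose_readout_mode(mean_luma: float, threshold: float = 0.25) -> str:
    """Pick a readout mode from average scene brightness.

    mean_luma is normalized: 0.0 = black frame, 1.0 = fully clipped.
    The threshold is a made-up value, not a real camera parameter.
    """
    return "8k full-resolution" if mean_luma >= threshold else "4k quad-binned"

print(choose_readout_mode(0.8))   # bright scene
print(choose_readout_mode(0.05))  # dim scene
```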
Until we get to using pure RAW digital signals for our video (which would take up a ton of data), this will be incredibly useful.
You could even create other configurations for widescreen applications that stretch the image by binning the sensor data in a 1×2 or 2×1 configuration. If you remember, back in the early and mid-2000s many of our digital video cameras did not use square pixels like they do today.
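That 1×2 idea can be sketched the same way: summing horizontal pairs of photosites halves the width while keeping the full height, giving non-square pixels that would be stretched back out for a widescreen delivery. Again, this is a toy model with arbitrary array sizes:

```python
import numpy as np

raw = np.arange(8 * 8, dtype=np.float64).reshape(8, 8)

# 1x2 binning: combine horizontal pairs only. The result is half as wide
# but just as tall, i.e. non-square pixels, like early-2000s DV formats.
wide = raw.reshape(8, 4, 2).sum(axis=2)

print(wide.shape)  # (8, 4)
```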
Why Do You Need Bigger Photosites For Low Light?
The reason you need bigger photosites for low light is that as the light decreases, so does the density, or amount, of photons. A bigger photosite captures a signal from a larger area, which helps maintain a usable voltage.
In daylight, there are roughly 11,400 photons in a cubic millimeter. A photon of violet light, the most energetic visible light, carries about 3.10 electron volts, while a photon of orange light carries about 2.06 eV. As photon density drops in low light, you need a much bigger surface to capture the same amount of energy.
I’m not about to do the math on photon density and ISO, but I think you get the point. Cut the light to 1/64 and you’ll need a much larger surface area to absorb the same amount of energy. Light has a sort of finite resolution to it, in the form of how much energy it can deposit when interacting with electrons.
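The per-photon numbers above follow from E = hc/λ, which in convenient units is E(eV) ≈ 1239.84 / λ(nm). A quick check of the figures, using roughly 400 nm for violet and 602 nm for orange:

```python
H_C_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon: E = hc / wavelength."""
    return H_C_EV_NM / wavelength_nm

print(f"violet (400 nm): {photon_energy_ev(400):.2f} eV")  # 3.10
print(f"orange (602 nm): {photon_energy_ev(602):.2f} eV")  # 2.06

# Cut the photon density to 1/64 and, per unit time, a surface needs
# 64x the area to absorb the same total energy.
area_factor = 1 / (1 / 64)
print(area_factor)  # 64.0
```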
This is sort of why there is little point in buying a high-megapixel camera if you’re mostly going to be shooting in low light with fast shutter speeds: you’re still limited by the density and energy of the available light, and more pixels packed into a smaller area are less efficient at capturing that energy. Having a Quad Bayer sensor gives you the best of both worlds, as long as no strange artifacts come from the 2×2 configuration of each color pixel array.
Here is a cool video you can watch if you want to learn more about photon density. https://www.youtube.com/watch?v=EB572jVnluo
I’ve been trying to figure out for some time what Sony could do that would be game-changing for the A7sIII. I think this is it.
The Potential Of A Quad Bayer Sensor In A Future Camera Like The Sony A7sIII
The whole life and soul of the A7sII was low-light performance. A Quad Bayer sensor would allow the camera to maintain its amazing 4k low-light performance by reading each quad as a single pixel, while also giving it the ability to produce an 8k image in well-lit situations by reading every photosite individually.
What else is cool about shooting 8k is that you can record with heavy chroma subsampling like 4:2:0, then effectively recover chroma resolution by scaling down in post-production. So 8k is not useless; you can always pull a better image by scaling down something larger. It would be really cool if the camera could do that in real time, some sort of option to shoot enhanced 4k with improved chroma subsampling, although I think it might take a few more years before processors could do this in real time.
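Here is a toy illustration of why the downscale helps. In 4:2:0, the chroma planes are stored at half the luma resolution in each axis, so a 2× downscale of the luma lands at the same resolution the chroma already has, giving one chroma sample per luma sample, effectively 4:4:4 (the array sizes here are small stand-ins for real 8k/4k planes):

```python
import numpy as np

# 8k-ish frame in 4:2:0: luma at full resolution, chroma at half per axis.
luma_8k = np.ones((16, 16))    # stand-in for a 7680x4320 luma plane
chroma_8k = np.ones((8, 8))    # 4:2:0 chroma plane, half resolution per axis

# Scale the luma down 2x for a 4k delivery. The chroma plane is already at
# that size, so the delivered frame has full-resolution chroma.
luma_4k = luma_8k.reshape(8, 2, 8, 2).mean(axis=(1, 3))

print(luma_4k.shape == chroma_8k.shape)  # True
```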
A Quad Bayer sensor would make the Sony A7sIII an incredibly versatile tool with some insane capabilities unmatched by the competition.
I know a lot of people are tempted by the Canon R5, but like I always say, don’t just switch to the latest new thing, just wait. Your favorite camera brand will do something cool soon enough.
You can read more about the Quad Bayer image sensor on Sony’s semiconductor website.