Almost all my technical posts are based on research and testing. This one is different: it is entirely speculation.
Yesterday I upgraded the firmware on my Sony a9 to version 5.0, which had just been made available for download by Sony. It is far more ambitious than any other Sony firmware release that I’ve ever seen. Sony usually just fixes bugs in their FW releases, occasionally introducing a new feature like their malicious-obedience uncompressed raw support. In that regard, they’ve been like Nikon, and not at all like Fuji, who routinely roll out new features via firmware upgrades.
FW 5.0 for the a9 is invasive. It wipes out all your settings, including date and time. There’s a reason for that; the menu structure is heavily reworked, and there are small improvements and new things spread throughout the new menu system. It still feels like a Sony, which in my mind is not a good thing, but it is still getting better. The custom key assignment now works like that on the GFX, and that’s a big step forward. Focus area limits can be set by focus mode, with exquisite granularity.
But the big change is in autofocus. There are new modes, including one that does a great job of tracking moving subjects, and an Eye-AF implementation that is far less cumbersome than the previous one that required assigning and then holding down a button other than the shutter release. You can pick whether the Eye-AF focuses on the right or left eye, or leave it in automatic, which was the only choice before. It’s going to take me a while to understand all the improvements, but playing around with the new autofocus makes me think that Sony has taken a camera that was already no slouch in the AF department to an entirely different level.
If you have an a6400, you’ve got a lot — maybe all, for all I know — of the new AF trickery, but if you have an a7III or a7RIII, for now at least, you’re out of luck. I’m thinking that part of the reason may be that it’s hard on those cameras to read the sensor and the PDAF pixels fast enough. I know the a6400 doesn’t have a stacked sensor, so that runs counter to my guess, but I don’t know how well the a6400 AF tracking works compared to the a9’s new abilities. To make a camera that identifies and focuses continuously and accurately on particular things in the scene — like eyes or like a moving subject — you need to read the sensor with high refresh rate and low latency. That’s something that the a9 does spectacularly — maybe uniquely — well, even before the FW upgrade.
Sony says that the new FW uses “AI-based object recognition”. I don’t know exactly what that means. Does it mean that it uses neural networks? Or does it mean that the algorithm was derived through training on real-world images, and that, like so many such programs, it is opaque in its machinations, even to its developers? Whatever it means, I’m intuiting that, like so many similar programs, the more data it gets, the better it will work, so the camera’s ability to read scene luminance (through the normal pixels) and scene depth (kind of, through the PDAF pixels) densely and rapidly will be a boon.
This puts new emphasis on stacked sensors. Sure, they’re great for high frame rates. We’ve seen in the a9 that that’s necessary for good AF, since with a MILC the AF system is blind while the actual photograph is being read off the sensor, and the AF system needs to make up for that by reading the sensor very quickly when it does get a look at the scene. But reading the scene densely enough to make area-based PDAF work and reading it well enough to recognize and track objects — even if they are momentarily obscured — is quite another thing. I’m guessing that this kind of subject recognition will become more prevalent in the future, and that it will be one more spur in the sides of the sensor manufacturers to develop more stacked sensors.
Which brings me to a question: the a9 first shipped almost two years ago. In all that time we’ve not seen another full-frame stacked sensor, right? How come? Is it because the a9 sensor is more expensive to make than originally thought? That the yields are so low that Sony doesn’t have the stomach to do more of those kinds of sensors? That those sensors are not getting rapidly cheaper to build over time? Or that they just don’t see the benefits?