As the Pro Max introduces a shift in the size of Apple’s camera sensor, PetaPixel interviewed two Apple executives who elaborated on the company’s camera development vision and design principles.
In the interview, Apple’s Product Line Manager for iPhone, Francesca Sweet, and its Vice President of Camera Software Engineering, Jon McCormack, both emphasized the company’s holistic approach to camera development. They underscored that it encompasses not only the sensor and lenses but also elements such as Apple’s A14 Bionic chip, image signal processing, and the software driving its computational photography.
Design Philosophy
Apple’s primary aim for smartphone photography is to let people live their lives and capture moments without being encumbered by technology.
Jon McCormack, elaborating on this philosophy, stated, “As photographers, we often find ourselves preoccupied with factors like ISO and subject motion. Apple’s objective is to eliminate these concerns, allowing people to stay immersed in the moment, effortlessly snap a great photo, and promptly return to their activities.”
He clarified that while more dedicated photographers may prefer to capture an image and then personalize it through editing, Apple strives to condense that process into the single, seamless act of capturing a frame. The ultimate goal is to minimize any distractions that might pull people out of the moment.
McCormack further explained, “We aim to mimic the photographer’s post-processing actions as closely as possible. When capturing a photo, there are two aspects to consider: the initial exposure and the subsequent development during post-production. While we heavily rely on computational photography for exposure, we increasingly automate post-processing tasks. The objective here is to create photographs that closely resemble real-life scenes, capturing the essence of being present in that moment.”
McCormack mentioned that Apple employs machine learning to break a scene into more digestible components. He elaborated, saying, “We handle elements like the background, foreground, eyes, lips, hair, skin, clothing, and skies separately, much like how you would apply local adjustments in Lightroom. We fine-tune parameters such as exposure, contrast, and saturation for each element and then combine them.”
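Apple has not detailed its implementation, but the approach McCormack describes maps naturally onto mask-based local adjustment. Below is a minimal sketch, not Apple’s actual pipeline: each labeled region (sky, skin, and so on) gets its own soft mask and tone settings, and the adjusted versions are blended back into one frame. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def adjust(region, exposure=1.0, saturation=1.0):
    """Apply simple exposure and saturation tweaks to an HxWx3 float image in [0, 1]."""
    out = np.clip(region * exposure, 0.0, 1.0)      # exposure gain
    luma = out.mean(axis=-1, keepdims=True)         # crude per-pixel luminance
    return np.clip(luma + (out - luma) * saturation, 0.0, 1.0)

def composite(image, masks, params):
    """Blend per-segment adjustments back into one frame.

    masks:  label -> HxWx1 soft mask in [0, 1] (e.g. 'sky', 'skin')
    params: label -> keyword settings passed to adjust()
    """
    result = np.zeros_like(image)
    weight = np.zeros(image.shape[:2] + (1,))
    for label, mask in masks.items():
        result += mask * adjust(image, **params[label])
        weight += mask
    return result / np.clip(weight, 1e-6, None)     # normalize overlapping masks

# Hypothetical usage: darken and saturate the sky, brighten skin tones.
image = np.random.rand(480, 640, 3)
masks = {"sky":  np.full((480, 640, 1), 0.5),
         "skin": np.full((480, 640, 1), 0.5)}
params = {"sky":  {"exposure": 0.9, "saturation": 1.2},
          "skin": {"exposure": 1.1, "saturation": 1.0}}
out = composite(image, masks, params)
```

In a real system the masks would come from a segmentation model rather than being hand-supplied, but the compositing step works the same way.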
McCormack explained that the advantages of this computational approach are already visible in Apple’s Smart HDR technology. He pointed out that realistic skies can be particularly challenging to render, but Smart HDR 3 isolates the sky and treats it independently, then seamlessly blends it back in to faithfully recreate the original scene.
Restaurants and bars present another demanding setting. Photographers find the mixed, dim ambient light in such places troublesome, as it can distort colors. Apple’s processing, however, can recognize the true appearance of food and adjust color and saturation to better capture its essence.
McCormack consistently emphasized the objective of faithfully replicating the real-life experience, underscoring its significance for the smartphone company.
Sweet was eager to highlight the strides Apple has made in low-light photography thanks to Night Mode, an extension of Smart HDR.
She commented, “The enhanced image fusion algorithms in the new wide camera result in reduced noise and enhanced detail. With the Pro Max, we can take this even further because the larger sensor enables us to gather more light in a shorter period, resulting in superior motion freeze capability during nighttime photography.”
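Apple has not published how its fusion algorithms work, but the statistical payoff of multi-frame capture is straightforward to demonstrate: averaging N aligned frames cuts random sensor noise by roughly a factor of √N. A minimal sketch, assuming perfectly aligned frames and simple Gaussian noise (the frame count and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate N short, noisy exposures of the same static scene.
scene = rng.random((480, 640))                    # "true" scene luminance
n_frames, sigma = 9, 0.1                          # frame count, per-frame noise
frames = [scene + rng.normal(0.0, sigma, scene.shape) for _ in range(n_frames)]

# Naive temporal fusion: average the (already aligned) frames.
fused = np.mean(frames, axis=0)

print(f"single-frame noise: {np.std(frames[0] - scene):.4f}")   # ~0.10
print(f"fused noise:        {np.std(fused - scene):.4f}")       # ~0.10 / sqrt(9) ≈ 0.033
```

A real pipeline must also align frames and reject moving subjects, which is why a larger sensor that gathers the same light in a shorter exposure helps freeze motion, as Sweet notes above.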
The New Sensor
When asked about the new sensor and the criticism that Apple took a long time to introduce a larger one, McCormack offered insight into Apple’s viewpoint.
He expressed, “Discussing a single aspect or specification of an image or camera system is no longer our primary focus. When we design a camera system, we consider all these factors comprehensively and then delve into the potential software enhancements.”
Deep Fusion significantly bolstered Apple’s noise reduction capabilities, and even though the option of a larger sensor was available, Apple’s approach was to explore all possible improvements in the overall image processing before contemplating changes to the physical components.
McCormack explained Apple’s perspective: one option is to opt for a larger sensor, despite potential form-factor challenges; the other is to consider the entire system and explore innovative ways to achieve the same objectives. He stressed that the goal isn’t merely to boast about a larger sensor, but to enhance the ability to capture beautiful photos in more conditions, which led to features like Deep Fusion, Night Mode, and temporal image signal processing.
McCormack highlighted that Apple’s holistic approach, where they develop the entire system from lenses to GPUs and CPUs, allows them to view things differently. Instead of focusing on a single hardware modification leading to magical results, they identify multiple points within the system where innovation can occur.