Environment:
- Devices: iPad Pro 11″ M4, iPad Air 11″ M3, iPad Pro 11″ Gen 2/3/4
- Language: Swift
- Framework: AVFoundation
- Front camera: UltraWide (M4/M3), TrueDepth (Gen 2–4)
- Video gravity: .resizeAspectFill
Background
I am setting an exposure point of interest using coordinates defined in captured-image pixel space.
- Input point: (1170, 1370)
- Image sizes:
  - Gen 2/3/4: 2316 × 3088
  - M3/M4: 3024 × 4032
- Preview sizes:
  - Gen 2/3/4: 834 × 1194
  - M4: 834 × 1210
  - M3: 820 × 1180
What I do
First, I convert the image pixel coordinates to preview-layer coordinates, then call captureDevicePointConverted(fromLayerPoint:):

```swift
let devicePoint = previewLayer.captureDevicePointConverted(fromLayerPoint: layerPoint)
```
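For context, this is a minimal sketch of the full pixel-to-device conversion I perform. The helper name and the aspect-fill scale/crop math are my own assumptions about how .resizeAspectFill positions the image, not documented API behavior:

```swift
import AVFoundation
import UIKit

// Hypothetical helper: map a captured-image pixel to preview-layer
// coordinates under .resizeAspectFill, then into normalized device space.
func devicePoint(forImagePoint p: CGPoint,
                 imageSize: CGSize,
                 previewLayer: AVCaptureVideoPreviewLayer) -> CGPoint {
    let layerSize = previewLayer.bounds.size
    // Assumption: .resizeAspectFill scales by the larger of the two ratios
    // and center-crops whatever overflows the layer bounds.
    let scale = max(layerSize.width / imageSize.width,
                    layerSize.height / imageSize.height)
    let cropX = (imageSize.width * scale - layerSize.width) / 2
    let cropY = (imageSize.height * scale - layerSize.height) / 2
    let layerPoint = CGPoint(x: p.x * scale - cropX,
                             y: p.y * scale - cropY)
    return previewLayer.captureDevicePointConverted(fromLayerPoint: layerPoint)
}
```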
Reading back the exposure point
After capture, I convert back:

```swift
let layerPoint = previewLayer.layerPointConverted(fromCaptureDevicePoint: exposurePoint)
```

The round-tripped layer point does not land where I expect.
Observation
It seems that captureDevicePointConverted(fromLayerPoint:) does not perform a linear mapping of the full image when using .resizeAspectFill. My understanding is that the conversion only covers the portion of the sensor visible in the layer (the center crop), so the result is shifted relative to a simple pixel / image-size scaling.
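To illustrate what I mean, here is a pure-geometry sketch (no AVFoundation) using the M4 sizes from above. The math encodes my assumption that .resizeAspectFill shows only a center crop of the sensor:

```swift
import CoreGraphics

// M4 example sizes from the Background section: image 3024 × 4032,
// preview 834 × 1210 (portrait).
let image = CGSize(width: 3024, height: 4032)
let layer = CGSize(width: 834, height: 1210)

// Assumed aspect-fill scale: the larger ratio wins, the rest is cropped.
let scale = max(layer.width / image.width, layer.height / image.height)

// Fraction of the sensor actually visible in the layer:
let visible = CGSize(width: layer.width / (image.width * scale),
                     height: layer.height / (image.height * scale))

// Under this model, the layer's top-left corner maps to the crop origin,
// not to (0, 0) of the full sensor:
let cropOrigin = CGPoint(x: (1 - visible.width) / 2,
                         y: (1 - visible.height) / 2)
// ≈ (0.04, 0) for these sizes, i.e. roughly 4% of the width is cropped
// off each side, which would explain a non-trivial offset versus
// a plain pixel / image-size normalization.
```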
Questions
- Does captureDevicePointConverted(fromLayerPoint:) account for .resizeAspectFill cropping, making it unsuitable for direct pixel mapping?
- Is it correct to compute exposure points directly using normalized coordinates (pixel / image size) instead of using the preview layer conversion?
- Is exposurePointOfInterest always expressed in full-sensor normalized coordinates (0–1), independent of preview settings?
- Does this behavior differ between the UltraWide (M3/M4) and TrueDepth (Gen 2–4) cameras?
- Is there official documentation describing the correct coordinate mapping for this scenario?
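For reference, this is what I mean by the "direct" normalized approach in the second question. The normalization itself is just pixel / image size; the note about sensor orientation is my assumption (I believe exposurePointOfInterest uses a landscape-native space with (0,0) at top-left, so a portrait image point may need its axes swapped):

```swift
import AVFoundation
import CoreGraphics

// Normalize directly by the full captured-image size, bypassing the
// preview layer entirely. Sizes and point are the M3/M4 values from above.
let imageSize = CGSize(width: 3024, height: 4032)
let pixelPoint = CGPoint(x: 1170, y: 1370)
let normalized = CGPoint(x: pixelPoint.x / imageSize.width,
                         y: pixelPoint.y / imageSize.height)

// Apply it to the device (standard lock/unlock pattern):
func setExposure(on device: AVCaptureDevice, at point: CGPoint) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    if device.isExposurePointOfInterestSupported {
        device.exposurePointOfInterest = point
        device.exposureMode = .continuousAutoExposure
    }
}
```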
