The DOF discussion has not really tempted me, for a number of reasons, maintaining mental sanity being one. With this comment, I will present a few statements that may challenge someone's deeply rooted knowledge or prejudices. I offer them as a means to broaden the understanding of the depth-of-field concept.
While I'm aware there may be details where I draw incorrect conclusions, I'm confident that most of this is true.
First, I'll deal with focal systems as opposed to afocal systems. Focal systems project a real image onto a surface, usually film or a digital sensor. The science of depth of field in focal systems is very thoroughly researched. It is not unusual for camera lenses to carry coloured lines for different apertures, to aid in setting focus at the hyperfocal distance in order to achieve maximum DOF.
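Those coloured aperture scales encode a simple calculation. Below is a minimal sketch of it; the function name and the 0.03 mm circle of confusion (the common 35 mm full-frame convention) are my assumptions, not anything from a particular lens manual:

```python
# Hyperfocal distance, thin-lens approximation:
# focus here and everything from half this distance to infinity
# is "acceptably" sharp by the chosen circle-of-confusion criterion.
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    return focal_mm**2 / (f_number * coc_mm) + focal_mm

H = hyperfocal_mm(35, 11)            # a 35 mm lens stopped down to f/11
print(round(H / 1000, 1), "m")       # roughly 3.7 m
```

Note how the result grows with the square of the focal length and shrinks with the f-number, which is why short lenses at small apertures give such generous zone-focusing ranges.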
The result is convincing when you look at a print where your eyes can roam over a frozen-in-time two-dimensional representation of a three-dimensional reality and see full edge sharpness at objects apparently considerably closer than the sharp horizon in the distance.
The actual size of the aperture (not the f-number) and the imaging scale are the only two quantifiable factors that determine the DOF, unless a large-format camera with a tilting film plane is used.
You may argue that the lens's curvature of field also could serve to increase DOF. This is not necessarily wrong, but is it DOF?
This brings us to the first point that needs clarification, or rather agreement on where to assess DOF: using test targets as close as possible to the optical axis, or targets placed on the ground plane, where closer targets (usually) appear progressively nearer the lower part of the image.
The latter can mean a huge difference in apparent DOF compared with the first method, should the lens have considerable curvature of field. I own an '80s Mamiya roof prism that is virtually progressive: the edges focus a lot closer than the center.
It is very important to be clear that real aperture size and imaging scale are the determining factors for DOF. A real-world example would be an image shot with a 100 mm f/2.8 at a certain distance and at full aperture, compared to another shot with a 50 mm f/1.4 under the same circumstances.
If prints are made, and the "50 mm" prints are printed twice as large, objects in the image will not only be identical in size but will also exhibit the same apparent DOF. Grain or digital noise will of course differ. Conversely, a print from a camera with a small sensor will exhibit a deeper DOF despite the same image scale, because of the small aperture of the small lens with its short focal length. A smaller aperture produces smaller circles of confusion, and the smaller they are, the sharper the image appears. (Here lies the only reason small binoculars can produce a greater perceived DOF, namely when the instrument's exit pupil is smaller than the observer's pupil and acts as a mask.)
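The 100 mm f/2.8 versus 50 mm f/1.4 comparison can be checked numerically: both lenses have the same physical aperture diameter (100/2.8 ≈ 50/1.4 ≈ 36 mm), and since the 50 mm print is enlarged twice as much, its circle-of-confusion budget on the negative halves. Here is a rough sketch using the standard thin-lens DOF formulas; the 0.03 mm CoC and 5 m subject distance are arbitrary choices of mine:

```python
# Near/far limits of acceptable sharpness, thin-lens approximation.
def dof_limits(f_mm, N, c_mm, s_mm):
    H = f_mm**2 / (N * c_mm) + f_mm                  # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return near, far

s = 5000.0                                  # 5 m subject distance
print(dof_limits(100, 2.8, 0.030, s))       # 100 mm negative, 0.03 mm CoC
print(dof_limits(50, 1.4, 0.015, s))        # 2x enlargement -> half the CoC
# The two near/far pairs agree to within a couple of millimetres.
```

Same physical aperture plus same final image scale gives, for all practical purposes, the same DOF, exactly as claimed above.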
End of the part dealing with focal instruments.
This second part may appear in stark contrast to the first, which deals with (largely) quantifiable factors. I claim there isn't really such a thing as depth of field. Rather, DOF is, like anything concerning vision and perception, a hallucination the majority agrees is useful.
It relies upon standardised visual acuity, and is essentially defined as a defocus too small to be detected by a normal eye. This is extremely important to keep in mind.
DOF depends (again) on image scale and the observer's visual acuity, and decreases as either increases. 'The better we see, the worse we see'.
If this seems counter-intuitive, imagine smearing vaseline on your ocular lens. General sharpness will of course plummet, but it is the central sharpness that takes the biggest hit. Outside the central macula, where visual acuity already decreases rapidly towards the peripheral retina, there is not much to lose. The difference in acuity, or focus, between the worse and the better parts is therefore smaller, which in turn means DOF is greater when VA, or the instrument's resolution, or both, are worse. This also applies to detecting resolution differences between central and peripheral parts of lenses or lens systems, i.e. across the image plane.
At the opposite end of the scale, we have an observer with extremely high visual acuity. VA is essentially the same thing as resolution, and is defined by the smallest gap between lines that can be seen. If the point of absolute focus contains the very smallest line pair this observer can see, even a minimal amount of defocus (due to distance differences) will make it undetectable, while an observer who can only resolve a coarser line pair may still detect theirs despite a slight defocus. This is because the circle of confusion, whose absolute size is determined by the aperture and the dioptric defocus, is relatively smaller compared to the coarse line pair than to the fine one.
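The relative-size argument can be made concrete with a toy sketch, under an assumed rule of thumb: a line pair stays resolvable while the blur circle is smaller than the gap. Real detection is a contrast (MTF) question, so the numbers below are arbitrary illustrations, not measurements:

```python
coc = 0.8               # blur-circle diameter at a slightly defocused target
gap_sharp_eye = 0.7     # smallest gap the high-acuity observer can resolve
gap_normal_eye = 1.5    # smallest gap the average observer can resolve

def resolvable(gap_size, blur_circle):
    # Assumed threshold rule, not an exact optical model.
    return gap_size > blur_circle

print(resolvable(gap_sharp_eye, coc))    # False: the finest detail is lost first
print(resolvable(gap_normal_eye, coc))   # True: coarse detail survives the defocus
```

The same defocus blur erases the sharp-eyed observer's finest detail while leaving the average observer's coarsest resolvable detail intact, which is the whole point of 'the better we see, the worse we see'.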
Increased magnification will reveal defocus regardless of whether using telescopes, microscopes, shortened viewing distance, zooming on a screen or printing a larger print. Even going closer to a print will show lack of focus. Naturally, the grain or pixel size puts a hard stop somewhere along the line.
A high visual acuity offers the possibility to detect defocus and otherwise subpar performance of an instrument. However, the observer may also be able to dismiss these minute differences and blend it all into a "clearly sharp enough" judgement. 'The better you see, the worse you see' holds true when critically approaching the limits, but in daily life good vision simplifies things, as there is a wide margin.
Keeping in mind the counterintuitive nature of DOF, and the implications of DOF being a perceptual rather than a technical concept, I'll now finally say a few words about apparent DOF in afocal instruments.
This subject has been treated thoroughly in many good posts, and less thoroughly in others, but I feel the two parts I mentioned above have largely been omitted.
The afocal instrument's objective lens creates a real image that could have been projected onto a surface, but instead we view it with a loupe, the eyepiece. Generally speaking, we let diverging or parallel rays enter the objective, and we collect parallel, or sometimes diverging, rays leaving the ocular lens. For objects near infinity (optically speaking), it's parallel rays in and parallel rays out, unless you're an uncorrected myope, in which case you must focus the instrument beyond infinity.
Typically, we can see at infinity because the rays are parallel, and at closer distances because accommodation refracts the diverging rays from nearby objects in the crystalline lens, not to parallel, but to converge to a real image on the retina, just as parallel rays from infinity do after entering the eye.
We cannot process converging rays from outside the eye, such as those that form a real image.
As I showed above, DOF is a dirty concept. Initially, it may seem technical, but it is entirely perceptual and, as such, entirely dependent on the observer. To my knowledge, there is no evidence that the focal ratio of an afocal instrument has any quantifiable effect on perceived DOF.
The evasive nature of DOF as a perceptual concept makes this impenetrable, in particular when weighing in the individual's continual accommodation effort; their likewise incessantly changing pupil diameter; the illumination; whether or not a big AFOV produces a smaller pupil, since a greater area of the retina gets illuminated; whether the individual has astigmatism, or a different refractive status in the right and left eye; the eye's axial length; whether or not spectacles are used, or the dioptre; the focussing technique used (approaching the focus target from near or from far away); and so on ad nauseam.
If, theoretically, photography through binoculars were performed, with all photos taken near the center of the field, and if any minuscule DOF differences could be detected that consistently followed the configurations, e.g. 10x25 vs 10x50, across several samples of each configuration, each model and each brand, it would still be quite useless: in the end, the ever-changing compound effect of our eyes, our perception, our brain, and our current physical and mental condition would trump those differences by several orders of magnitude. As far as I know, no convincing arguments for real DOF differences within a set magnification have been presented to date.
Discussing DOF usually means biting off more than we can chew. Discussing colour reproduction or rolling ball is child's play compared to depth of field.
I have no intention of getting the last word, but even this post, which cost me a Saturday evening, is no more than a ripple on a deep ocean of ignorance, and I have no expectation that the shore will appear before our eyes. Maybe I helped someone look in the right direction.
//L