The following examples show test images taken with the same high-resolution 50 mm focal length lens and the same lighting conditions on three different camera sensors. Each image is then compared to the lens's nominal, on-axis MTF curve (blue curve). Only the on-axis curve is used in this case because the region of interest where contrast was measured only covered a small portion of the center of the sensor.
Figure 6.8a shows the performance of the 50 mm lens when paired with a 1/2.5″ ON Semiconductor MT9P031 with 2,2 μm pixels at a magnification of 0,177X. Using Equation 2.4 from Section 2.2: Resolution, the sensor's Nyquist resolution (ξSensor) is 227,3 lp/mm, meaning that the smallest object the system could theoretically image at a magnification of 0,177X is 12,4 μm (using an alternate form of Equation 2.4 from Section 2.2: Resolution).
ξSensor = 1000 μm/mm / (2 × 2,2 μm) = 227,3 lp/mm          (6.9)
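The first-order relationships in Equations 6.9 and 6.10 can be sketched in a few lines of Python; the function names here are illustrative, not from the guide:

```python
def nyquist_lp_mm(pixel_um):
    """Sensor Nyquist limit: one line pair spans two pixels (Equation 6.9)."""
    return 1000.0 / (2.0 * pixel_um)

def min_feature_um(pixel_um, magnification):
    """Smallest first-order resolvable object feature (Equation 6.10)."""
    object_freq = nyquist_lp_mm(pixel_um) * magnification  # object-space lp/mm
    return 1000.0 / (2.0 * object_freq)

print(round(nyquist_lp_mm(2.2), 1))          # → 227.3 lp/mm for 2.2 um pixels
print(round(min_feature_um(2.2, 0.177), 1))  # → 12.4 um at 0.177X
```

Halving the sampling frequency (two pixels per feature instead of one) doubles the minimum feature size to 24,8 μm, the more reliable operating point discussed next.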
Keep in mind that these calculations have no contrast value associated
with them. The left side of Figure 6.8a shows the images of two
elements on a USAF 1951 target; the left image shows two pixels per
feature, and the right image shows one pixel per feature. At the Nyquist
frequency of the sensor (227 lp/mm), the system images the target with
8,8% contrast, which is below the recommended 20% minimum contrast
for a reliable imaging system. Note that by increasing the feature
size by a factor of two to 24,8 μm, the contrast is increased by nearly a
factor of three. In a practical sense, the imaging system would be much
more reliable at half the Nyquist frequency.
The conclusion that the imaging system cannot reliably image an object feature 12,4 μm in size directly opposes what the equations in Section 2.2: Resolution show, as mathematically the objects fall within the capabilities of the system. This contradiction highlights that first-order calculations and approximations are not enough to determine whether an imaging system can achieve a particular resolution. Additionally, a Nyquist frequency calculation is not a solid foundation for a system's resolution capabilities and should only be used as a guideline for a system's limitations. A contrast of 8,8% is too low to be considered reliable, since minor fluctuations in conditions could easily drive contrast down to unresolvable levels.
Figures 6.8b and 6.8c show similar images to those from the MT9P031, though the sensors used were the Sony ICX655 (3,45 μm pixels) and ON Semiconductor KAI-4021 (7,4 μm pixels). The left images in each figure show two pixels per feature, and the right images show one pixel per feature. The major difference between the three images is that all of the image contrasts for Figures 6.8b and 6.8c are above 20%, meaning (at first glance) that they would be reliable at resolving features of that size. Of course, the minimum-sized objects they can resolve are larger when compared to the 2,2 μm pixels in Figure 6.8a. However, imaging at the Nyquist frequency is still ill-advised, as slight movements of the object could shift the desired feature between two pixels, making it unresolvable. Note that as the pixel sizes increase from 2,2 μm to 3,45 μm to 7,4 μm, the respective increases in contrast from one pixel per feature to two pixels per feature become less impactful. On the ICX655 (3,45 μm pixels), the contrast changes by just under a factor of two; this effect is further diminished with the KAI-4021 (7,4 μm pixels).
An important discrepancy in Figure 6.8 is the difference between the nominal lens MTF and the real-world contrast in an actual image. The MTF curve of the lens at the top of Figure 6.8a shows that the lens should achieve approximately 24% contrast at 227 lp/mm, yet the measured contrast was only 8,8%. There are two main contributors to this difference: sensor MTF and lens tolerances. Most sensor companies do not publish MTF curves for their sensors, but those curves have the same general shape as the lens's. Since system-level MTF is the product of the MTFs of all of the components of a system, the lens and sensor MTFs must be multiplied together to reach a more accurate conclusion about the overall resolution capabilities of a system.
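As a rough illustration of this multiplication, the sketch below models the sensor MTF with the common pixel-aperture (sinc) approximation and multiplies it by the ~24% nominal lens contrast read from the curve in Figure 6.8a. The sinc model and function names are assumptions for illustration, not manufacturer data:

```python
import math

def sensor_mtf(freq_lp_mm, pixel_um):
    """Pixel-aperture (sinc) model of sensor MTF -- a first-order approximation."""
    x = freq_lp_mm * pixel_um / 1000.0  # cycles per pixel pitch
    return 1.0 if x == 0 else abs(math.sin(math.pi * x) / (math.pi * x))

def system_mtf(lens_contrast, freq_lp_mm, pixel_um):
    """System-level MTF is the product of the component MTFs."""
    return lens_contrast * sensor_mtf(freq_lp_mm, pixel_um)

# Nominal lens contrast of ~24% at 227 lp/mm on the 2.2 um pixel sensor:
print(round(system_mtf(0.24, 227, 2.2), 3))  # → 0.153
```

Even this idealized product (~15%) sits well above the measured 8,8%, consistent with the point that lens tolerances account for the remaining loss.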
[Figure 6.9 panels: 5 MP 1/2.5″ sensor, 2,2 μm pixels: 30,3% contrast (top) and 8,8% contrast (bottom); 5 MP 2/3″ sensor, 3,45 μm pixels: 48,0% and 24,6%; 4 MP 1.2″ sensor, 7,4 μm pixels: 55,3% and 33,6%.]
Figure 6.9: Images taken with the same lens and lighting conditions on three different camera sensors with three different pixel sizes. The top images are taken with four pixels per feature, and the bottom images are taken with two pixels per feature.
As mentioned above, a toleranced MTF of a lens is also a departure
from the nominal. All of these factors combine to change the expected
resolution of a system, and on its own, a lens MTF curve is not an accurate
representation of system-level resolution.
As seen in the images in Figure 6.9, the best system-level contrast is in the images taken with the larger pixels; as the pixel size decreases, the contrast drops considerably. A good best practice is to use 20% as the minimum contrast in a machine vision system, as any contrast value below that is too susceptible to fluctuations in noise from temperature variations or crosstalk in illumination. The image taken with the 50 mm lens and the 2,2 μm pixel in Figure 6.8a has a contrast of 8,8%, which is too low to rely on for object feature sizes corresponding to the 2,2 μm pixel size, because the lens is on the brink of becoming the limiting factor in the system. Sensors with pixels much smaller than 2,2 μm certainly exist and are quite popular, but much below that size it becomes nearly impossible for optics to resolve down to the individual pixel level. This means that the equations described in Section 2.2: Resolution become functionally meaningless for determining system-level resolution, and images similar to those in the aforementioned figures would be impossible to capture. However, these tiny pixels still have a use; just because the optics cannot resolve the entire pixel does not render them useless. For certain algorithms, such as blob analysis or optical character recognition (OCR), it is less about whether the lens can actually resolve down to the individual pixel level and more about how many pixels can be placed over a particular feature. With smaller pixels, subpixel interpolation can be avoided, which adds to the accuracy of any measurement. Additionally, there is less of a penalty in resolution loss when switching to a color camera with a Bayer pattern filter.
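The "pixels over a feature" idea reduces to a one-line calculation; in this sketch, the 50 μm feature size is an arbitrary illustrative value, not from the text:

```python
def pixels_across_feature(feature_um, magnification, pixel_um):
    """Number of pixels spanning a feature's image on the sensor."""
    return feature_um * magnification / pixel_um

# An illustrative 50 um object feature at the section's 0.177X magnification:
print(round(pixels_across_feature(50, 0.177, 2.2), 1))  # → 4.0 pixels (2.2 um sensor)
print(round(pixels_across_feature(50, 0.177, 7.4), 1))  # → 1.2 pixels (7.4 um sensor)
```

Placing more pixels across the same feature is what benefits blob analysis and OCR, even when the lens cannot resolve each individual pixel.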
Another important point to remember is that jumping from one pixel per feature to two pixels per feature recovers a substantial amount of contrast, particularly with smaller pixels, although halving the frequency effectively doubles the size of the minimum resolvable object. If it is absolutely necessary to view down to the single-pixel level, it is often better to double the optics' magnification and halve the FOV. This causes the feature to cover twice as many pixels and the contrast to be much higher; the downside is that less of the overall field will be visible. From the image sensor perspective, the best approach is to maintain the pixel size and double the format size of the image sensor. For example, an imaging system at 1X magnification using a 1/2″ sensor with a 2,2 μm pixel will have the same FOV and spatial resolution as a 2X magnification system using a 1″ sensor with a 2,2 μm pixel, but with the 2X system, the contrast is theoretically doubled.
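A minimal check of the FOV part of this claim, assuming the standard ~6.4 mm and ~12.8 mm horizontal active widths for the 1/2″ and 1″ formats (nominal values; actual active areas vary by sensor):

```python
def horizontal_fov_mm(sensor_width_mm, magnification):
    """Horizontal field of view = sensor width / magnification."""
    return sensor_width_mm / magnification

print(horizontal_fov_mm(6.4, 1.0))   # 1X with a 1/2" sensor → 6.4 mm
print(horizontal_fov_mm(12.8, 2.0))  # 2X with a 1" sensor  → 6.4 mm
```

Both configurations see the same 6,4 mm of the object, so switching to the larger format trades lens cost, not field coverage.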
Unfortunately, doubling the sensor size creates additional problems for lenses. One of the major cost drivers of an imaging lens is the format size for which it was designed. Designing an objective lens for a larger-format sensor takes more individual optical components; those components need to be larger, and the tolerancing of the system needs to be tighter. Continuing the example above, a lens designed for a 1″ sensor may cost five times as much as a lens designed for a 1/2″ sensor, even if it cannot hit the same pixel-limited resolution specifications.
ξObject Space = ξSensor × m = 227,3 lp/mm × 0,177 = 40,2 lp/mm → 1000 / (2 × 40,2) = 12,4 μm          (6.10)