Recognizing The Truth:
Where Camera-Based Recognition Falls Short
01 JULY, 2020
Camera-based image recognition systems are built on a simple premise: they are electronic “eyes” capable of recognizing unique items and capturing data. The unique item can be anything from a license plate number to a human face. Once recognized and captured, the resultant data can be applied to a wide variety of use cases, such as parking, traffic, or law enforcement. This simple premise is easy to grasp, and its outcome is a highly desirable one. But as with many technologies, what works in theory under controlled conditions and what works in practice in real-world situations aren’t always the same.
The real world is chaotic; situations arise that defy logic and reason. So when camera-based image recognition is applied to the rigors of everyday living, many of the deficiencies of such systems become evident. Outdoors, extreme temperatures, wind, precipitation, and more pose a constant challenge to camera visibility and overall function, whether in a single-camera or multiple-camera construct. Lighting can vary from blinding sunlight to complete darkness. Bird nests and spider webs can (and often do) appear overnight. The motion of pedestrians, vehicles, cyclists, and more clutters sightlines and obscures data capture. At its core, image recognition is essentially photography, so all the inherent limitations of photography apply at the camera level; getting a “clean shot” indoors or outdoors is often not as easy as it sounds.
Many systems rely on multiple-camera constructs to get around incidents of “occlusion”; the logic is that the more opportunities there are to read the image and capture the data, the higher the chance of doing so. The potential for more accurate tracking through multiple views and better target identification is also appealing. The disadvantages of this setup, however, including prohibitive expense, programming challenges, and privacy concerns, can often outweigh the positives. The reality is that multiple-camera systems are exponentially more expensive than single-camera setups. Once procured, calibrating a multi-camera setup to recognize spatial dimensions in a way that takes true advantage of the redundancy is another complex and ever-changing obstacle. Programming needs can be substantial during setup, and in many cases additional supporting infrastructure is necessary to make use of the resultant data.
Camera-based recognition of any kind has, for well-founded reasons, become a controversial topic that poses a host of privacy concerns, and License Plate Recognition is certainly no exception. LPR uses optical character recognition to automatically read license plate characters, ostensibly to manage parking facilities, collect tolls, control traffic, enforce laws, and more. Yet once harvested, that data must be handled, managed, and protected diligently to avoid leaks or compromises of any sort. Data protection policies are rare at the local level, making privacy an ongoing challenge.
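To make that character-reading step concrete, the short sketch below shows the kind of OCR pass an LPR system might run on an already-cropped plate image. It assumes OpenCV and Tesseract (via pytesseract) are installed, the file name is hypothetical, and real deployments layer dedicated plate detectors and region-specific character models on top of a step like this.

```python
# A minimal sketch of the OCR step in an LPR pipeline, assuming a cropped
# plate image, OpenCV, and Tesseract (via pytesseract). Illustrative only.
import cv2
import pytesseract

def read_plate(image_path: str) -> str:
    """Attempt to read license-plate characters from a cropped plate image."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Threshold to separate characters from the plate background; glare, dirt,
    # or shadows at this stage are what turn into misreads downstream.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Treat the image as a single line and restrict Tesseract to plate-style
    # characters (uppercase letters and digits).
    config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    return pytesseract.image_to_string(binary, config=config).strip()

if __name__ == "__main__":
    print(read_plate("plate_crop.jpg"))  # hypothetical cropped plate image
```

Even in this simplified form, the quality of the thresholded image drives the result, which is why the “clean shot” problem described above translates directly into accuracy problems.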
Then there are the license plates themselves. LPR systems can struggle to get a “clean read,” as dirt, moisture, sun-shielding covers, glare, and more can impact readability. Custom plates of any kind can present additional obstacles, as can vehicles that carry bicycles, trailer hitches, bumper stickers, or other plate-obscuring objects. Plate length, size, and character uniformity all vary by locale and require software to decipher and read them, assuming the LPR can get that far. It is yet another way in which accuracy is potentially tempered.
Vehicle color can pose another challenge. White vehicles, in particular, reflect additional light that can make a clean read problematic, affecting capture rate and accuracy. Last year, 38% of new vehicles purchased were white, meaning more than a third of new vehicles on the road can be problematic.
Assuming a plate number is accurately captured, what happens next? LPR systems generally employ either cloud or server processing, in which videos and/or snapshots are streamed to the cloud or an on-premises server, or on-board processing, in which the system recognizes vehicles locally and sends parking images and events to a centralized location only when a violation or similar trigger requires it. The issue with both is the high operating cost, processing power, and physical support they need in order to function optimally. To install and run the system, a human operator must first be trained; then they must train the system to account for all the unique variables their parking structure contains. This means every system must be built to suit by an internal operator, whose expertise can be variable and whose expense can be daunting.
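As a rough illustration of the on-board model, the sketch below shows the kind of compact event record a camera might transmit only when a violation is detected, rather than streaming every frame. The field names, payload shape, and uplink are illustrative assumptions, not any vendor’s actual protocol.

```python
# A minimal sketch of on-board (edge) processing: recognize locally, then send
# a small event record only on a trigger such as a violation. Hypothetical fields.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ParkingEvent:
    space_id: str
    plate: str           # result of on-board recognition
    event_type: str      # e.g. "violation"
    snapshot_ref: str    # pointer to a supporting image, fetched only if needed
    timestamp: str

def send_to_server(payload: bytes) -> None:
    # Placeholder for the uplink (cellular, Ethernet, etc.); transport is site-specific.
    print(f"uplinking {len(payload)} bytes")

def on_violation_detected(space_id: str, plate: str) -> None:
    """Transmit a compact event record instead of streaming continuous video."""
    event = ParkingEvent(
        space_id=space_id,
        plate=plate,
        event_type="violation",
        snapshot_ref=f"{space_id}/{plate}.jpg",
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    send_to_server(json.dumps(asdict(event)).encode())

if __name__ == "__main__":
    on_violation_detected("lot-7/space-12", "ABC1234")
```

The cloud/server model skips this filtering entirely and streams frames upstream, which is where the operating cost and infrastructure burden described above come from.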
Non-LPR camera-based systems present their fair share of issues in this space as well. Most require either heavy data processing at the camera to reduce bandwidth, which typically means processor-intensive computation at the edge and a line-power connection, or onerous bandwidth to stream footage back to a data center for processing. In either case, power and bandwidth accommodations must be made for the cameras to function correctly. This brings a host of service adjustments into the fray; it is never a “plug and play” scenario, and it often leads to a cascading set of tasks, costs, and responsibilities.
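A back-of-envelope comparison makes the bandwidth gap plain. The numbers below (per-camera video bitrate, event size, events per day) are assumptions chosen only to show the orders of magnitude involved, not measurements from any specific product.

```python
# Back-of-envelope bandwidth comparison under assumed numbers: streaming
# compressed video to a data center versus sending edge-processed metadata only.

STREAM_MBPS = 4.0      # assumed per-camera compressed video bitrate
EVENT_BYTES = 300      # assumed size of one edge-processed event record
EVENTS_PER_DAY = 200   # assumed events per camera per day

streaming_gb_per_day = STREAM_MBPS / 8 * 60 * 60 * 24 / 1000  # Mbps -> MB/s -> GB/day
edge_mb_per_day = EVENT_BYTES * EVENTS_PER_DAY / 1e6

print(f"Streaming to a data center: ~{streaming_gb_per_day:.0f} GB per camera per day")
print(f"Edge-processed metadata:    ~{edge_mb_per_day:.2f} MB per camera per day")
```

Under these assumptions the streaming approach moves tens of gigabytes per camera per day, while the edge approach moves a fraction of a megabyte, at the cost of line power and heavier on-camera compute.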
While reading a license plate number is widely seen as far less intrusive than identifying people by facial recognition, society’s comfort level with LPR varies. Numerous companies already offer technology to offset or disable LPR as a way to retain one’s privacy. Once collected, that personal data can linger in databases and remain unnecessarily vulnerable. Existing inconsistency and effectiveness concerns aside, the potential ethical implications of LPR are what many find most off-putting.
This is not a matter of camera-based image recognition or nothing, however. Solutions exist. Single-space sensor systems, for example, convert behavior into data in a way that not only circumvents privacy issues but is more effective across the board. Situated “in-street” below the pavement, sidewalk, or roadway, single-space sensors can measure occupancy, duration, movement, and more without compromising identity or interfering with life above street level. These small modules rely on extremely long-life batteries and communicate wirelessly with minimal bandwidth and infrastructure requirements. Weather, extreme temperatures, wind, occlusions: none of these factors interferes with a single-space sensor’s ability to provide accurate and actionable data on a daily basis. The many problems camera-based image recognition and LPR present can categorically be solved with single-space sensors.
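For a sense of scale, the sketch below packs a hypothetical occupancy event into an 11-byte payload of the sort a low-bandwidth wireless link carries easily. The field layout is an illustrative assumption, not Fybr’s actual protocol, but it shows how little data, and how little identity, such a message needs to contain.

```python
# A minimal sketch of a compact, identity-free occupancy message from an
# in-street single-space sensor. The field layout and sizes are assumptions.
import struct
import time

def encode_occupancy_event(sensor_id: int, occupied: bool, duration_s: int) -> bytes:
    """Pack an occupancy event into a fixed 11-byte payload:
    sensor id (4 bytes), occupied flag (1 byte), duration (2 bytes), timestamp (4 bytes)."""
    return struct.pack(">IBHI", sensor_id, int(occupied), min(duration_s, 0xFFFF), int(time.time()))

def decode_occupancy_event(payload: bytes) -> dict:
    sensor_id, occupied, duration_s, ts = struct.unpack(">IBHI", payload)
    return {"sensor_id": sensor_id, "occupied": bool(occupied),
            "duration_s": duration_s, "timestamp": ts}

if __name__ == "__main__":
    msg = encode_occupancy_event(sensor_id=4021, occupied=True, duration_s=95)
    print(len(msg), "bytes:", decode_occupancy_event(msg))
```

Eleven bytes per event, with no image and no personally identifying information, is a very different proposition from streaming video or storing plate reads.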
Do camera-based image recognition systems deliver on their simple premise? The unequivocal answer is: sometimes. Questions of reliability and consistency remain, as do the ethical concerns posed by this technology. When everything works, the resultant data is useful. Getting that data reliably, consistently, and ethically will remain a challenge, however, and one that will ultimately determine whether camera-based image recognition becomes more than a flawed, passing fad.