The software stack in a family car now exceeds that of many aircraft, yet a child stepping behind a bumper can still escape its digital gaze. The paradox lies less in raw computational power and more in how that power is organized, tested and constrained on real streets.
A passenger jet runs tightly scoped avionics software on certified hardware, with every sensor, actuator and data path defined by systems engineering and functional safety rules such as hazard analysis and fault tolerance. A car, by contrast, stitches together dozens of electronic control units from different suppliers, each with its own firmware, update cadence and proprietary interfaces. The result is software entropy: millions of lines of code, but fragmented, latency-prone data flows between cameras, radar and braking systems.
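Why latency-prone data flows matter can be made concrete with a back-of-the-envelope sketch. The stage names and timings below are illustrative assumptions, not measurements from any real vehicle: each hop between loosely coupled units adds delay, and the sum translates directly into metres travelled before the brakes respond.

```python
# Hypothetical sketch: delay accumulating across a fragmented ECU pipeline.
# All stage timings are illustrative assumptions, not real vehicle data.

PIPELINE_MS = {
    "camera_capture": 33,   # one frame at roughly 30 fps
    "perception_ecu": 50,   # detection inference on the supplier's unit
    "fusion_ecu": 20,       # cross-sensor association
    "gateway_bus": 10,      # hop between different suppliers' networks
    "brake_ecu": 40,        # actuation decision and hydraulic ramp-up
}

def end_to_end_latency(pipeline):
    """Total delay from photons hitting the sensor to braking onset."""
    return sum(pipeline.values())

def distance_during_delay(latency_ms, speed_kmh):
    """Metres travelled during the software delay alone."""
    speed_ms = speed_kmh / 3.6
    return speed_ms * (latency_ms / 1000)

total = end_to_end_latency(PIPELINE_MS)
print(f"end-to-end latency: {total} ms")
print(f"distance at 30 km/h before braking: "
      f"{distance_during_delay(total, 30):.2f} m")
```

Even under these charitable assumptions, the car covers more than a metre at residential speed before any braking begins, which is roughly the width of a child standing behind a bumper.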
Detecting a child is also harder than it sounds. Short-range sensors operate close to the ground, where clutter, occlusion and adverse weather degrade signal-to-noise ratios and confuse machine learning models trained on idealized datasets. To avoid false positives and nuisance braking, many systems bias toward inaction unless confidence thresholds are met, an implicit trade-off that favors driver comfort over edge-case safety. Regulatory frameworks still lag this reality: they focus on component compliance rather than end-to-end perception performance in dense, messy environments, where one small figure can still fall through the gaps of a very large codebase.
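The threshold bias described above can be sketched in a few lines. The cutoff value and the detection scores here are illustrative assumptions, not figures from any production system: the point is that a single scalar gate forces a trade between missing a low-confidence child and braking for debris.

```python
# Hypothetical sketch: confidence-gated braking.
# The threshold and all scores below are illustrative assumptions.

def should_brake(confidence, threshold=0.9):
    """Act only when the perception stack is sufficiently sure."""
    return confidence >= threshold

# A small, partly occluded figure in rain tends to score low against
# models trained on clearer, idealized scenes.
detections = {
    "adult_on_crosswalk": 0.97,
    "child_behind_bumper": 0.62,  # clutter and occlusion depress the score
    "plastic_bag": 0.55,
}

for label, score in detections.items():
    action = "BRAKE" if should_brake(score) else "ignore"
    print(f"{label:22s} {score:.2f} -> {action}")
```

Lowering the threshold to 0.6 would catch the hypothetical child, but it would also brake for the bag; that is the nuisance-braking trade-off that pushes manufacturers toward inaction at the margins.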