A modern car now carries more processing power than the guidance computer that first steered humans toward the Moon, yet a child stepping off a curb can still confuse it. The paradox sits at the heart of autonomous driving and exposes a gap between raw computation and reliable perception.
The original lunar guidance systems solved a narrow problem: apply Newtonian mechanics and control theory to a spacecraft in near-vacuum, where the variables were few and the noise was limited. Today’s vehicles run dense stacks of computer vision and sensor fusion on edge computing hardware, parsing video, lidar returns, radar echoes and map data in real time. The entropy of the environment explodes at a city intersection, where behavior, lighting, weather and road markings shift faster than any training set can capture.
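To make the fusion step concrete, here is a minimal sketch of how independent range estimates from different sensors might be combined by inverse-variance weighting, a one-shot, static form of Kalman-style fusion. The sensor readings, their variances and the function name are illustrative assumptions; a production stack would run a full multi-object tracking filter rather than this single fusion step.

```python
import numpy as np

def fuse_estimates(means, variances):
    """Fuse independent Gaussian estimates of the same quantity
    by inverse-variance weighting (a one-step, static Kalman fusion)."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances                 # more precise sensors count for more
    fused_var = 1.0 / weights.sum()
    fused_mean = fused_var * (weights * means).sum()
    return fused_mean, fused_var

# Hypothetical readings: distance to the same obstacle, in metres.
lidar_m, lidar_var = 14.2, 0.05    # lidar: precise in clear weather
radar_m, radar_var = 13.8, 0.40    # radar: noisier range, robust to rain
camera_m, camera_var = 15.1, 1.50  # vision depth: least certain of the three

dist, var = fuse_estimates([lidar_m, radar_m, camera_m],
                           [lidar_var, radar_var, camera_var])
print(f"fused distance = {dist:.2f} m (variance {var:.3f})")
```

The design point the sketch illustrates is that fusion buys precision only when each sensor's uncertainty is well characterised; when weather or lighting quietly inflates a sensor's true error beyond its assumed variance, the fused estimate inherits that overconfidence.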
Pedestrian intent is not a clean variable in a differential equation; it is a moving target inferred from posture, gaze and context, then squeezed through probabilistic models and safety constraints. Even with powerful GPUs and deep neural networks, rare edge cases, each individually improbable, dominate the residual risk. The result is a system that can simulate a Moon trajectory with ease, yet still hesitates before a single uncertain human on the crosswalk.
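That kind of inference can be sketched as a naive Bayes-style belief update over a binary "will cross" hypothesis, with a conservative threshold standing in for the safety constraint. The prior, the cue likelihoods, the threshold and the function name below are all illustrative assumptions, not a description of any real planner.

```python
def update_crossing_belief(prior, p_cue_if_crossing, p_cue_if_not):
    """One Bayes update of P(pedestrian will cross) given a new observed cue
    (e.g. head turned toward traffic, a step toward the curb)."""
    numerator = p_cue_if_crossing * prior
    denominator = numerator + p_cue_if_not * (1.0 - prior)
    return numerator / denominator

# Illustrative numbers: prior from context (crosswalk, school zone), then
# two cues observed in successive frames. All values are assumptions.
belief = 0.10                                        # base rate at this crossing
belief = update_crossing_belief(belief, 0.8, 0.3)    # gaze turns toward traffic
belief = update_crossing_belief(belief, 0.9, 0.2)    # weight shifts to front foot

BRAKE_THRESHOLD = 0.3  # safety constraint: act well before certainty
if belief > BRAKE_THRESHOLD:
    print(f"P(cross) = {belief:.2f} -> slow down")
```

The threshold sits deliberately low because the cost of missing a crossing dwarfs the cost of an unnecessary slowdown, which is exactly how rare, high-severity events come to dominate the risk budget even when the average-case perception is excellent.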