A blocky horizon of oversized pixels can feel more absorbing than a meticulously rendered 3D valley. The paradox rests less on resolution and more on how the brain allocates attention. Sparse visual input reduces sensory entropy and frees neural resources, letting the brain simulate detail, emotion and narrative on its own.
Psychologists describe this as a trade‑off between bottom‑up sensation and top‑down inference. High‑fidelity 3D scenes saturate low‑level visual processing with texture maps, shaders and motion blur, leaving less bandwidth for introspection and memory retrieval. Pixel landscapes, by contrast, operate with deliberate under‑sampling: flat colors and chunky silhouettes invite the viewer to fill gaps using episodic memory and mental imagery, increasing subjective presence.
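The "deliberate under-sampling" of a pixel landscape can be sketched as block averaging: collapse an image into coarse tiles, then blow the tiles back up into chunky, flat-colored pixels. The function and array below are illustrative inventions, not taken from any rendering library; a minimal sketch of the idea:

```python
import numpy as np

def pixelate(img: np.ndarray, block: int) -> np.ndarray:
    """Down-sample by averaging non-overlapping block x block tiles,
    then repeat each averaged value to restore the original size.
    The result keeps only coarse structure: flat colors, chunky edges."""
    h, w = img.shape[:2]
    # Crop so both dimensions divide evenly into blocks.
    h, w = h - h % block, w - w % block
    img = img[:h, :w]
    # Average each tile in one reshape.
    small = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    # Expand the tiles back into oversized pixels.
    return np.repeat(np.repeat(small, block, axis=0), block, axis=1)

# A 4x4 gradient collapses into four flat 2x2 tiles.
demo = np.arange(16, dtype=float).reshape(4, 4)
out = pixelate(demo, 2)
```

Everything the averaging throws away is exactly what the viewer's memory and imagery are invited to supply.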
The effect resembles lossy audio compression, whose psychoacoustic model discards frequencies the auditory system would mask or fill in anyway. In vision, the visual cortex and hippocampus collaborate in a similar predictive coding loop, effectively hallucinating context over minimal cues. Low‑resolution worlds turn that loop into a feature, not a limitation, aligning aesthetic restraint with the brain’s own generative machinery.
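A predictive coding loop of this kind can be caricatured in a few lines: an internal estimate is nudged toward sparse observations by a bottom-up error signal, while a top-down smoothness prior fills in the unobserved gaps. The function name, learning rate and ramp signal below are all hypothetical choices for illustration, not a model from the literature:

```python
import numpy as np

def fill_in(cues: np.ndarray, observed: np.ndarray,
            steps: int = 500, lr: float = 0.2) -> np.ndarray:
    """Toy predictive-coding relaxation: where a cue exists, the estimate
    is pulled toward it (bottom-up error); where input is missing, the
    estimate is pulled toward its neighbours' mean (top-down prior)."""
    est = np.zeros_like(cues, dtype=float)
    for _ in range(steps):
        # Bottom-up: prediction error, defined only at observed samples.
        err = np.where(observed, cues - est, 0.0)
        # Top-down: each missing value drifts toward its neighbours.
        neigh = (np.roll(est, 1) + np.roll(est, -1)) / 2
        est += lr * (err + np.where(observed, 0.0, neigh - est))
    return est

# A ramp sampled at only three points; the loop "hallucinates" the rest.
true = np.linspace(0.0, 1.0, 9)
mask = np.zeros(9, dtype=bool)
mask[[0, 4, 8]] = True
cues = np.where(mask, true, 0.0)
recon = fill_in(cues, mask)
```

The reconstruction interpolates smoothly between the three cues, which is the point of the analogy: give the loop less data and it does not stall, it generates.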