A single digital snowflake decides how real an imaginary yeti feels on screen. In modern animation, every clump of fur, each footprint, and every burst of powder is driven by the same physical laws that govern real hair and snow, only translated into code.
The process starts with hair simulation, where thousands of guide strands follow equations for elasticity and damping, close cousins of what a physics textbook calls Hooke's law and viscous drag. Solvers approximate how the guides bend, collide, and tangle, then interpolate millions of final hairs between them. Around the creature, a separate snow simulation treats the flakes as a granular material, governed by friction coefficients and cohesion, so footprints compress layers, shed loose grains, and leave edges that crumble over time.
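To make the spring-and-damper idea concrete, here is a minimal Python sketch of a single guide strand modeled as a mass-spring chain; the stiffness, damping, mass, and segment-length values are illustrative assumptions, not constants from any production solver.

```python
# One guide strand as a chain of particles joined by damped springs,
# integrated with semi-implicit Euler. All constants are illustrative.
import numpy as np

N = 20              # particles along the strand
REST = 0.05         # rest length of each segment (m)
K = 500.0           # Hooke's-law stiffness (N/m)
C = 0.8             # viscous damping coefficient (N*s/m)
MASS = 0.001        # per-particle mass (kg)
GRAVITY = np.array([0.0, -9.81, 0.0])
DT = 1.0 / 1200.0   # small substep keeps the stiff explicit update stable

pos = np.zeros((N, 3))
pos[:, 1] = -REST * np.arange(N)   # strand hangs straight down from the root
vel = np.zeros((N, 3))

def step(pos, vel):
    force = np.tile(GRAVITY * MASS, (N, 1))
    for i in range(N - 1):
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        d_hat = d / length
        # Hooke's law on the segment: F = k * (|d| - rest) along the segment,
        # plus viscous drag proportional to the particles' relative velocity.
        stretch = K * (length - REST) * d_hat
        drag = C * (vel[i + 1] - vel[i])
        force[i] += stretch + drag
        force[i + 1] -= stretch + drag
    # Semi-implicit Euler: update velocity first, then position.
    vel = vel + (force / MASS) * DT
    pos = pos + vel * DT
    # The root particle stays pinned to the scalp (here: the origin).
    pos[0] = np.zeros(3)
    vel[0] = np.zeros(3)
    return pos, vel

for _ in range(1200):   # one second of simulated time
    pos, vel = step(pos, vel)
```

Production solvers typically swap the explicit update for implicit integration so that very stiff strands stay stable at larger time steps, but the forces being balanced are the same.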
Lighting then unifies everything through physically based rendering, which relies on energy conservation and a bidirectional reflectance distribution function to decide how light scatters in fur and snow. Ray tracing sends virtual photons through the scene, allowing subsurface scattering in snow and multiple internal bounces inside translucent hairs. When the yeti stomps, rigid body dynamics move chunks of ice, fluid solvers handle kicked-up powder, and all these systems feed into the same shading and light transport model, so the imaginary creature sits inside a coherent, measurable version of the natural world.
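As a small illustration of what energy conservation means for a BRDF, the sketch below shades an energy-conserving Lambertian surface under a single directional light; the snow albedo, normal, and light direction are assumed values chosen for the example, not taken from any particular renderer.

```python
# Energy-conserving Lambertian shading for one directional light.
# Albedo and light setup are illustrative assumptions.
import numpy as np

def lambertian_brdf(albedo):
    # A Lambertian BRDF is constant: albedo / pi. The division by pi is the
    # energy-conservation constraint: integrating the BRDF times the cosine
    # term over the hemisphere returns exactly `albedo`, so the surface
    # never reflects more light than it receives.
    return albedo / np.pi

def shade(normal, light_dir, light_radiance, albedo):
    # Rendering-equation integrand for one light:
    # L_out = BRDF * L_in * cos(theta), clamped below the horizon.
    cos_theta = max(np.dot(normal, light_dir), 0.0)
    return lambertian_brdf(albedo) * light_radiance * cos_theta

# Fresh snow reflects most visible light, so its albedo sits close to 1.
snow_albedo = np.array([0.90, 0.90, 0.95])
n = np.array([0.0, 1.0, 0.0])                    # surface normal, pointing up
l = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)     # unit light direction
print(shade(n, l, light_radiance=1.0, albedo=snow_albedo))
```

A renderer evaluates something like this at every ray hit; snow and fur swap in more elaborate scattering functions, but the same conservation rule keeps the whole frame's light budget physically plausible.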