Making a Monster

How Stink Studios created John Lewis ‘Moz the Monster — Monster Maker’

Stink Studios
10 min read · Feb 15, 2018

Matt Greenhalgh, Technical Director

Moz the Monster

Technical background

Bringing Moz the Monster, the star of the John Lewis 2017 Christmas campaign, to life in a web page presented Stink Studios with a number of technical challenges:

  • First and foremost, we had to recreate the look and behaviour of the character as closely as possible.
  • Secondly, we wanted to allow children to create their own Monster and to be able to change its size, shape, colour and other aspects of its appearance.
  • Finally, we wanted to create an experience that could scale seamlessly from mid-range mobile devices all the way to high-powered desktops, taking advantage of the additional processing power to improve the look of the monster as it scaled.

Below we look at how we approached these objectives in more detail…

Real-time fur

One of the immediate challenges in trying to recreate Moz the Monster is that he is furry. Not just a consistent, uniform fur but a matted, haphazard fur. He has a brown patch on his belly and varying shades of grey elsewhere. His fur differs in length from area to area but is long enough in places to be influenced by momentum and inertia.

Fur rendering is a well-established technique in offline rendering. Generally it uses particle strands to define length, style and direction for regions of hair, then fills in the gaps with many child strands that interpolate values between the parent strands. While this creates realistic-looking fur with a great deal of control, it also generates many tens of thousands of strands and is therefore too expensive for real-time rendering.

An alternative technique was developed for real-time use and is presented in this NVIDIA White Paper. One characteristic of the technique is that it creates many shell copies of the base mesh: the greater the number of shells and the smaller the gaps between them, the finer and more realistic the fur. We elected to create a base mesh for Moz the Monster with ~5,000 vertices and allow for up to 15 shell layers, for a maximum total vertex count of 80,000. These numbers should render comfortably both at the low end on older mobile devices and at the highest quality on high-end GPUs with all shell layers switched on.

Early fur tests were… early
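The heart of the shell technique is easy to sketch in Three.js. The snippet below is a minimal illustration rather than our production shader: it assumes a scene, the monster's baseGeometry and a furMaskTexture already exist, and simply pushes each shell copy of the mesh outwards along its vertex normals, thinning the alpha towards the tips.

```javascript
// Minimal shell-fur sketch. `scene`, `baseGeometry` and `furMaskTexture`
// are assumed to exist; values are illustrative, not production settings.
const SHELL_COUNT = 15;   // up to 15 layers, as in the final experience
const FUR_LENGTH = 0.05;  // world-space fur length

function makeShellMaterial(layer) {
  return new THREE.ShaderMaterial({
    transparent: true,
    uniforms: {
      shellOffset: { value: (layer / SHELL_COUNT) * FUR_LENGTH },
      shellAlpha:  { value: 1.0 - layer / SHELL_COUNT },
      furMask:     { value: furMaskTexture },
    },
    vertexShader: `
      uniform float shellOffset;
      varying vec2 vUv;
      void main() {
        vUv = uv;
        // Push this shell's copy of the mesh outwards along the normal
        vec3 displaced = position + normal * shellOffset;
        gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
      }`,
    fragmentShader: `
      uniform float shellAlpha;
      uniform sampler2D furMask;
      varying vec2 vUv;
      void main() {
        // Keep only the texels the mask marks as hair strands
        if (texture2D(furMask, vUv).r < 0.5) discard;
        gl_FragColor = vec4(vec3(0.5), shellAlpha);
      }`,
  });
}

// Layer 0 is the opaque base mesh; layers 1..15 form the fur volume
for (let layer = 1; layer <= SHELL_COUNT; layer++) {
  scene.add(new THREE.Mesh(baseGeometry, makeShellMaterial(layer)));
}
```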

PBR and custom fur maps

Having implemented the basic fur rendering technique, we needed to adapt it to create the specific look of the monster's fur. We were using Three.js as our interface to WebGL to render the monster in the browser. Three.js supports PBR textures through its MeshStandardMaterial shader pipeline, so we knew we could go some way towards the look of the fur by creating a BaseColor image to impart its grey and brown colouring, supported by Roughness and Normal maps to give the correct response to surrounding lights and provide a base level of detail for the underlying mesh. We complemented these with Ambient Occlusion and Metalness maps, the first to provide additional shading in occluded areas and the second to subtly change the look of the monster's teeth, eyes and nails. These maps were based on a sculpt created in Pixologic ZBrush that was then painted in Allegorithmic Substance Painter.

8K Texture incorporating BaseColor, Normal, MetallicRoughnessAO, and custom ‘TransFur’ maps
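Wiring those maps into MeshStandardMaterial is straightforward; a sketch with illustrative file names follows. Conveniently, Three.js reads ambient occlusion from the red channel, roughness from the green and metalness from the blue, so a single channel-packed texture can feed all three slots.

```javascript
// Sketch of the PBR setup; file names are illustrative.
const loader = new THREE.TextureLoader();
const packed = loader.load('moz_ao_rough_metal.jpg'); // AO=R, Rough=G, Metal=B

const material = new THREE.MeshStandardMaterial({
  map:          loader.load('moz_basecolor.jpg'), // grey/brown fur colour
  normalMap:    loader.load('moz_normal.jpg'),    // base-level surface detail
  aoMap:        packed,                           // shading in occluded areas
  roughnessMap: packed,                           // lighting response
  metalnessMap: packed,                           // teeth, eyes and nails
});

// Note: aoMap is sampled from the mesh's second UV set, so the geometry
// also needs a 'uv2' attribute (usually a copy of 'uv').
```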

To provide the kind of control over the look of the fur that we needed, we created three additional maps. The first was a simple mask that assisted during the fur painting component of the experience by identifying which regions were and weren't fur. The second was a Fur Length map that indicated how long strands of fur should be at any particular point and prevented Moz the Monster from having hairy eyeballs — a disconcerting look. Finally we created a 'Mottled' map that imparted subtle shifts in hue to the base colour of the fur to keep it looking organic, even during the interactive painting process.
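The Fur Length map slots naturally into the shell vertex shader sketched earlier: it simply scales the per-shell offset, so masked-out regions like the eyes never leave the surface. (This relies on vertex texture fetch, which almost all WebGL devices support.)

```javascript
// Revised shell vertex shader (sketch): the authored fur-length map scales
// the displacement, so a black texel means no fur at all.
const vertexShader = `
  uniform float shellOffset;
  uniform sampler2D furLengthMap; // 0 = bald (eyeballs!), 1 = full length
  varying vec2 vUv;
  void main() {
    vUv = uv;
    float len = texture2D(furLengthMap, uv).r;
    vec3 displaced = position + normal * shellOffset * len;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
  }`;
```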

With these maps layered together we had all we needed to render Moz the Monster with fur. But to bring out the highlights in the strands and give the fur a volumetric feel, we complemented the ambient light and the obvious spotlight above Moz with two small environment maps: one 16-bit HDR cubemap to provide soft rim lighting on the fur and one tiny 8-bit greyscale map to provide additional catchlight reflections in his eyes.

As a final enhancement, enabled only on high-end devices, we implemented a physics response to the movement of the mesh such that individual fur layers receive a per-vertex force impulse from the underlying bone movement. The monster's fur can thus be seen reacting to world-space movement as he animates.
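Our production version worked per vertex from the bone transforms, but the idea can be conveyed with a much-simplified, whole-mesh sketch: measure the world-space velocity each frame and feed a damped lag vector into the shell shaders (a furLag uniform is assumed here), with outer layers trailing more than inner ones.

```javascript
// Simplified fur inertia sketch: outer shells trail the body's movement.
// Assumes each shell material declares a `furLag` vec3 uniform.
const prevPos = new THREE.Vector3();
const velocity = new THREE.Vector3();
const lag = new THREE.Vector3();

function updateFurPhysics(mesh, shells, dt) {
  const pos = mesh.getWorldPosition(new THREE.Vector3());
  velocity.subVectors(pos, prevPos).divideScalar(Math.max(dt, 1e-6));
  prevPos.copy(pos);

  // Push the lag against the direction of travel, then let it relax;
  // the 0.02 impulse scale and 0.05 decay rate are illustrative values
  lag.addScaledVector(velocity, -0.02 * dt);
  lag.multiplyScalar(Math.pow(0.05, dt));

  shells.forEach((shell, i) => {
    // Outer layers (higher i) receive a larger share of the offset
    shell.material.uniforms.furLag.value
      .copy(lag)
      .multiplyScalar((i + 1) / shells.length);
  });
}
```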

Deformable Rig and Morph Targets

The monster was originally modelled and animated in Lightwave. Partly due to workflow requirements, and partly due to the team's skillset, we also used Blender as a gateway tool to combine items, add data textures and manage export to Three.js.

Animating Moz the Monster was a fairly simple exercise in exporting a skinned mesh with underlying armature-based animation. We created eight additional blend shapes/morph targets (the Blender and Three.js vernacular respectively) to support conventional things like eye blinks, as well as more advanced features like adding extra teeth to the monster's mouth and supporting a thin as well as a fat version. On their own, these features didn't introduce too many complexities, but a key creative requirement was that children should be able to create monsters of different shapes and sizes. We were faced with a choice: create many meshes and armatures but have a limited set of animations for each, or create one mesh and armature and spend the animation effort on creating lots of interesting animations, at the expense of a more limited scope to deform that base mesh.

We opted for the latter option as it offered us the best chance to bring Moz the Monster to life but with the possibility to scale his limbs and apply morph target deformation to create shape variances.

Scalable limbs definitely pushed the mesh's integrity to its limits. Bone scale in Three.js cascades down through the bone hierarchy, so changes made to a shoulder bone have to be inverted in the upper and lower arm bones and all the bones of the hand. With multiple compound scalings applied across the armature this, needless to say, became quite complicated.
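The basic bookkeeping looks like the sketch below (bone names are hypothetical): scale a bone up, then counter-scale its direct children by the inverse so the rest of the chain keeps a net scale of 1. Once several bones along the same chain are scaled independently, these inversions compound, which is where the complexity crept in.

```javascript
// Counter-scaling sketch: the child subtree inherits s, so setting the
// children to 1/s gives them a net scale of s * (1/s) = 1.
function setBoneScale(skeleton, boneName, s) {
  const bone = skeleton.getBoneByName(boneName);
  if (!bone) return;
  bone.scale.setScalar(s);
  bone.children.forEach((child) => {
    if (child.isBone) child.scale.setScalar(1 / s);
  });
}

setBoneScale(mesh.skeleton, 'shoulder_L', 1.4); // hypothetical bone name
```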

Additionally, bone scaling in Three.js applies uniformly along all three local axes, so while we could make legs and arms larger we couldn't readily make them long and thin. To achieve this desired effect we created thin-limb morph targets which selectively counteracted the lateral scaling applied to the bones.
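Driving the counteracting shape directly from the scale factor keeps the two in step; a sketch, with a hypothetical 'thinArms' shape key:

```javascript
// Blend the thin-limb morph in proportion to the bone scale, cancelling
// the unwanted lateral thickening. 'thinArms' is a hypothetical key name.
const idx = mesh.morphTargetDictionary['thinArms'];
const influence = Math.min(Math.max((armScale - 1) / 0.5, 0), 1);
mesh.morphTargetInfluences[idx] = influence;
```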

Real-time dynamic texture painting

Another key part of the experience required us to give children the opportunity to paint colours and patterns on their monster. This meant raycasting against the monster mesh to convert Mouse/Touch input into picking coordinates on its UV map. With these in hand we were then able to add colour into an empty texture using a brush mask. This texture was then composited in the fragment shader with the other maps, the AO and our custom Mottled map in particular, to retain the organic, patchy look of Moz the Monster.

Custom fur texture painted over the original BaseColor texture
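A minimal version of the painting loop, using a 2D canvas as the paint surface and a plain round brush in place of our brush mask, looks like this:

```javascript
// Dynamic texture painting sketch: raycast the pointer, read the UV at the
// hit point, stamp a brush into a canvas, and upload it as a CanvasTexture.
const canvas = document.createElement('canvas');
canvas.width = canvas.height = 1024;
const ctx = canvas.getContext('2d');
const paintTexture = new THREE.CanvasTexture(canvas);

const raycaster = new THREE.Raycaster();

function paint(pointerNdc, camera, mesh, brushColor) {
  raycaster.setFromCamera(pointerNdc, camera);
  const hit = raycaster.intersectObject(mesh)[0];
  if (!hit || !hit.uv) return;

  // Convert the UV coordinate to pixel space (canvas Y runs downwards)
  const x = hit.uv.x * canvas.width;
  const y = (1 - hit.uv.y) * canvas.height;

  ctx.fillStyle = brushColor;
  ctx.beginPath();
  ctx.arc(x, y, 24, 0, Math.PI * 2); // simple round brush, 24px radius
  ctx.fill();

  paintTexture.needsUpdate = true; // re-upload the canvas to the GPU
}
```

The resulting paintTexture is then handed to the fragment shader as one more layer in the composite.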

Binary glTF

In addition to the fairly sizeable textures, we knew the basic mesh and animation data for the monster could quickly add up to problematically large file sizes. We did some early research on the formats available to provide data interchange between our 3D modelling packages and Three.js, and glTF was an early favourite. This relatively new format promised small file sizes with support for all of the features we were looking for, and there was both an exporter available for Blender and an importer available for Three.js. Early tests of skinned mesh animation and morph target support looked good, so we pressed on with it into production.

Midway through the project, however, we encountered significant challenges obtaining error-free exports from Blender. It seems the exporter causes animation discrepancies on export if the layer order in the Dope Sheet doesn't exactly match the blend shape order in the Shape Keys window.

F-Curve order must match Shape Key order

Additionally, the glTF exporter didn't, at the time of development, support independently addressable animations within a single glTF file. We didn't want to save out each animation as a separate file, as this would have required the mesh data to be included with every export, which is very inefficient. Instead we settled on adding the animations to a single timeline, with each one starting at a multiple of 500 frames. This meant we could trigger individual animations by jumping the playback position to a frame looked up in a table. Not ideal, but effective enough.
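In practice that amounts to a small lookup table and a playhead jump; a sketch, with illustrative clip names and an assumed 30fps export rate:

```javascript
// One long exported clip; each animation starts at a multiple of 500 frames.
const FPS = 30; // assumed export frame rate
const startFrames = { idle: 0, wave: 500, jump: 1000 }; // illustrative names

const mixer = new THREE.AnimationMixer(mesh);
const action = mixer.clipAction(masterClip); // the single exported clip
action.play();

function playAnimation(name, durationFrames) {
  action.paused = false;
  action.time = startFrames[name] / FPS; // jump the playhead to the slot
  // Stop before running into the next 500-frame slot
  setTimeout(() => { action.paused = true; }, (durationFrames / FPS) * 1000);
}
```

An alternative with the same exported data is to carve the master clip apart up front using THREE.AnimationUtils.subclip.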

The exported animations also suffered from a strange repositioning of the mesh registration to the geometry centre, such that many animations exhibited moonwalking or ice-skating problems, the feet no longer being grounded at the world origin. Baking keyframes for the main mesh proved effective but, due to the many empty frames between each animation, increased the file size substantially. We finally solved the problem by parenting the armature to an Empty and keyframing only the Empty's position at the world origin. This resulted in a solid, fixed position on export and no increase in file size.

A glance at this fact sheet will show that glTF is an amazing format with lots of promise, and its creators were very responsive to the issues we raised during development. At the time of writing (December 2017) the Blender exporter is still in development, however. We'd use it again on future projects, as the file size savings over text formats are well worthwhile. But we'd definitely devote additional R&D time to fully testing the specific features we wanted to use before committing to their implementation.

Optimisation for mobile devices

Say the words 'real-time fur' and GPUs everywhere quiver. The technique is fundamentally intensive for any device. Our early proof-of-concept work had demonstrated that we could get close to 30fps on iPhone 6 generation devices as long as we kept the shell count low. We ultimately decided to do some behind-the-scenes profiling of the host device to determine which WebGL features it supported. We then grouped devices into bands — low, medium and high — and progressively enabled more detailed fur rendering and lighting complexity with each step up in band.

As we finalised our specification, a couple of issues came to light. One immediately obvious issue was that we had inadvertently stepped beyond eight textures to define the monster's various parameters, and many mobile and tablet devices have a limit of eight texture units per shader. We had to find a way to pack our textures into fewer units, but we had already packed our Ambient Occlusion, Roughness and Metallic maps into separate R, G and B channels, and done the same for our custom Transparency/Fur texture. The solution was to combine the BaseColor, Normal, MetallicRoughnessAO and TransFur maps into a single massive 8K texture, one in each quadrant. We could then use UV references in the ranges 0 to 0.5 and 0.5 to 1 to address the correct texture. Or we could have, if Three.js had allowed us to do this, which, off the shelf, it doesn't. One of our WebGL devs spent a particularly heroic train journey home customising the Three.js Standard shader to accommodate these custom UV references.
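One way to express that kind of patch against a current Three.js build is via onBeforeCompile, replacing the Standard shader's texture lookup chunks. A simplified sketch for the BaseColor quadrant (the real chunk also performs a colour-space conversion, and every other map needs the same treatment for its own quadrant):

```javascript
// Quadrant addressing sketch: remap the BaseColor lookup into the
// lower-left quarter of the single 8K atlas.
material.onBeforeCompile = (shader) => {
  shader.fragmentShader = shader.fragmentShader.replace(
    '#include <map_fragment>',
    `
      vec2 atlasUv = vUv * 0.5;                // [0,1] -> [0,0.5]
      diffuseColor *= texture2D( map, atlasUv );
    `
  );
};
```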

A late-breaking but effective optimisation technique involved dynamically adjusting the ratio of display pixels to physical pixels on devices with a device pixel ratio greater than 1. With many recent smartphones supporting a device pixel ratio of 3 or more, there was quite a lot of scope for effective optimisation while keeping the image looking crisp. We monitored the device frame rate at runtime and scaled back the pixel ratio used for rendering the monster until a stable minimum of 30fps was achieved. Because the changes were ramped smoothly over time, the increasing pixellation was relatively unnoticeable.
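A minimal version of this adaptive-resolution loop might look as follows; the thresholds and step size are illustrative, and a production version would average the frame rate over many frames rather than reacting to a single one:

```javascript
// Adaptive resolution sketch: ramp the render pixel ratio down when the
// frame rate dips below 30fps, and back up when there is headroom.
let pixelRatio = Math.min(window.devicePixelRatio, 3);
let lastTime = performance.now();

function adaptResolution(renderer) {
  const now = performance.now();
  const fps = 1000 / (now - lastTime);
  lastTime = now;

  if (fps < 30 && pixelRatio > 1) {
    pixelRatio -= 0.05; // ramp down gently to avoid visible pops
  } else if (fps > 55 && pixelRatio < window.devicePixelRatio) {
    pixelRatio += 0.05;
  }

  renderer.setPixelRatio(pixelRatio); // resizes the drawing buffer
}
```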

All told, mobile WebGL performance is now a realistic proposition and, with some planning upfront, an adaptive mobile experience can be created that is largely consistent with the desktop experience rather than being a dedicated 2D-only fallback.

Conclusion

The project represented a significant ambition. Challenging features like real-time fur and armature deformation have not often been attempted in a web browser, but modern devices and advances in software tools and supporting technologies like glTF make this kind of work more of a possibility. We look forward to creating more projects like this in the future and hope you enjoyed the experience as much as we enjoyed creating it.

You can read more about the background to the project, along with how we built an augmented reality version of Moz the Monster, on our case study page:

https://www.stinkstudios.com/work/john-lewis-moz-the-monster-monster-maker

