Each year, at the onset of Ramadan and for the festival of Eid al-Fitr at its conclusion, people send friends and families themed messages as greetings. To help them celebrate these occasions in 2018, Google Brand Studio conceived the ‘Qalam’ project: a series of artworks by renowned regional artists, created in VR, that could be customised and shared with friends and family. Google approached Stink Studios to build the project and help bring the experience to life.
The project began with a series of art capture sessions held in Dubai, London and Turkey. Here, a total of nine artists — whose backgrounds ranged from graffiti art through traditional calligraphy to tattooing — created individual artworks in Tilt Brush, Google's VR painting application. These intricate and beautiful artworks then needed to be recreated and made available on mobile platforms for people to customise and share.
This is where the challenge for Stink Studios began. These 3D artworks were composed of:
- Up to half a million vertices
- Files up to 42MB in size
- Complex, custom shaders for unusual brush strokes like Fire, Electricity and ‘Chromatic Wave’
With close to 50 artworks to convert and make available in a streamlined mobile experience, we faced significant challenges on multiple fronts.
Anatomy of a Tilt Brush brush stroke
Tilt Brush is an amazing VR painting application which, conveniently for us, offers an export path from its proprietary .tilt file format to the common .fbx format as well as Base64 encoded JSON.
Opening up the .fbx file revealed that the brush stroke geometry is exported with the vertices ordered in the sequence they were created. Brush colour is stored in the vertex colours, and the particular brush textures are cleverly created by UV unwrapping the length of the geometry against compact brush texture map tiles that define the shape of the stroke's start and end. This information gave us confidence that we had almost all the elements needed to recreate the brush strokes outside of Tilt Brush.
However, one component was missing: the specific shader implementation that gave each brush its particular material properties. The look of the brush shaders was a key characteristic of the Tilt Brush art but, since this information was absent from the .fbx file, it was going to be very difficult to recreate.
Recreating Brush Shaders in WebGL
Ultimately we intended to deliver the artwork customisation experience as a web page, using WebGL with THREE.js to display the models and a React front end providing the UI layer. This meant we had to find a way to recreate the particular look of the brush strokes in WebGL shaders.
The good news was that Google had made a Unity SDK available that could parse the Base64-encoded JSON export format and contained faithful recreations of the Tilt Brush shaders in its source. The bad news was that these shaders were written in Unity's HLSL, and so required manual conversion to GLSL, the shader language used by WebGL.
There are 36 brush types available in Tilt Brush, and while some of them are simple diffuse-shaded geometry, others are composed of complex animated particle effects. Each brush had to be recreated by hand. Exploring the ingenious, but often perplexing, ways in which the Tilt Brush authors had created their shaders was an instructive exercise. Why use a vec2 to define your UV coordinates when you can use a vec4 and pack a particle lifetime attribute into the same data structure? Uncovering and translating these kinds of intricacies took considerable development time.
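To illustrate that vec4 packing trick, here is a hypothetical unpacking helper. The exact attribute layout (timing data in the zw components) is an assumption for illustration, not the actual Tilt Brush vertex format:

```javascript
// Hypothetical layout: each vertex stores a 4-component "UV" where xy are
// the real texture coordinates and zw smuggle per-particle timing data.
// In GLSL you would declare `attribute vec4 a_uv;` and read a_uv.zw
// directly in the vertex shader.
function unpackUv4(uvArray, vertexIndex) {
  const o = vertexIndex * 4; // 4 floats per vertex
  return {
    u: uvArray[o],
    v: uvArray[o + 1],
    spawnTime: uvArray[o + 2],
    lifetime: uvArray[o + 3],
  };
}
```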
Having recreated the look of the brush shaders, the next big challenge we faced — and by big I mean HUGE — was file size. The raw JSON exports from Tilt Brush ranged anywhere from 1–2MB to just over 40MB, and there were around 50 artworks in total that we needed to make available through the website. Expecting users to wait for, and pay for, ~1GB of data over a mobile connection wasn't exactly reasonable. We set to work looking at ways to reduce this massive payload.
Shared Geometry, GLB and Draco compression
The first thing we looked at was the source JSON file. By parsing the Base64 encoded data we were able to see that the brush geometries could be combined into single geometries for brush strokes that shared the same brush type.
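That merging step can be sketched as a small pure function. The stroke shape and field names here are illustrative, not the actual Tilt Brush JSON schema:

```javascript
// Combine all strokes that share a brush type into one flat vertex list,
// so each brush type can be uploaded as a single geometry and drawn in
// one draw call instead of one per stroke.
function mergeStrokesByBrush(strokes) {
  const byBrush = new Map();
  for (const { brush, positions } of strokes) {
    if (!byBrush.has(brush)) byBrush.set(brush, []);
    byBrush.get(brush).push(...positions);
  }
  return byBrush; // Map<brushType, flatPositionArray>
}
```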
The next thing we looked at was a more compact data format for encoding the 3D information, suited to transfer on the web. We'd had good experience with gLTF, and specifically its binary version, GLB, from our work with John Lewis on the Moz the Monster website. So, we re-encoded the .json source to .glb using a custom browser-based encoder.
We were also aware of Google’s own Draco compression technology, which was developed specifically to provide significant compression savings to 3D geometry data. Unfortunately, at the time of development, no official specification for encoding gLTF with Draco compression existed but we were able to build on the work of Don McCurdy at Google and specifically this fork of his three-gltf-viewer in order to apply the Draco compression.
One challenge we faced in this process was that some brushes used the vertex normal data structure to encode data that had nothing to do with vertex normals, but instead described animated particle behaviour. The Draco encoding tool quite reasonably assumes you want your normals normalised, and does this for you as part of the encoding process, which corrupted all of this non-standard particle data. We found we could maintain the integrity of the data by copying the normal information over to the tangent data structure, which isn't normalised. However, the THREE.js gLTF decoder ignores tangent information, as tangents are now calculated in real time by its Standard shader. We had to write a custom version of the THREE.js gLTF decoder that read the tangent information and copied it back into the normal array, consistent with the brush shaders' expectations.
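The workaround amounts to a pair of attribute swaps either side of the Draco round trip. A minimal sketch using plain-object attribute maps for brevity (the real code operated on gLTF/THREE.js buffer attributes):

```javascript
// Before Draco encoding: move the abused "normal" data (really particle
// positions) into the tangent attribute, which the encoder does not
// normalise, so the values survive untouched.
function protectNormals(attributes) {
  attributes.tangent = attributes.normal;
  delete attributes.normal;
  return attributes;
}

// After decoding (in a patched gLTF loader): copy the data back so the
// brush shaders find it where they expect it.
function restoreNormals(attributes) {
  attributes.normal = attributes.tangent;
  delete attributes.tangent;
  return attributes;
}
```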
GZip and MIME Type trickery
The combination of GLB with Draco compression brought significant file savings. Many of our model files were 10% of their original size. But we still had a non-trivial initial payload of 11MB.
Transparent GZip encoding is typically applied only to text-based MIME types, so to take advantage of it we encoded our raw binary assets as Base64, which maps binary data to a safe set of text characters and let us fetch the files with responseType = 'text'. The Base64 encoding added around 33% to each file's size, but the GZip compression more than made up for it, giving a net saving of between 10% and 80%.
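On the client, the Base64 text response then has to be decoded back into a binary buffer before it can be handed to the GLB parser. A minimal sketch of that step:

```javascript
// Decode a Base64 string (fetched with responseType = 'text', so the
// server's transparent GZip encoding applied) back into an ArrayBuffer.
function base64ToArrayBuffer(base64) {
  const binary = atob(base64); // browser global (also in modern Node)
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return bytes.buffer;
}
```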
Although we’d met with some considerable success reducing the size of individual files, the net load for the initial page view was still larger than we would like.
Our initial assumptions for the site loading approach had been based on a small number of artwork models and supporting camera and texture assets. Under these circumstances, downloading assets up front for a subsequently responsive navigation experience seemed like the right trade-off. As the artist VR capture sessions got underway it became apparent that we were going to have many high-quality assets to choose from. Rather than artificially limit our audience’s opportunity to view and select these artworks, we elected to include them in the pool of available assets. However, with each artwork addition came a small increment in the initial download time, until we got to the point where it was unreasonably lengthy.
We discussed implementing a background loading queue that would simply fire off HTTP requests for all the artwork assets until the entire site had downloaded. However, this felt irresponsible: for users on a paid-for mobile data plan, downloading tens of megabytes of data at their cost was both presumptuous and selfish.
Our solution was to only load the initial homepage ‘Qalam’ artwork and the first artwork in the subsequent chronological sequence. As users navigated through the site we would preload the next artwork in the sequence in the background while they were viewing and customising the current view. This only applied if they were using the chronological arrow navigation. If they used the grid view we would use just-in-time loading for the artwork selected. This solution allowed for quick access to the initial homepage with responsive navigation and without passing the cost on to the end user.
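The preload policy described above can be sketched as a small pure function (the names and data shapes are illustrative, not from the production code):

```javascript
// Decide which artwork, if any, to fetch in the background while the user
// views artworks[currentIndex] via the chronological arrow navigation.
// Grid-view selections bypass this and are loaded just-in-time instead.
function nextToPreload(artworks, currentIndex, loadedIds) {
  const next = artworks[(currentIndex + 1) % artworks.length];
  return loadedIds.has(next.id) ? null : next;
}
```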
Web Workers & Transferable Objects
Draco compression had been instrumental in reducing file sizes and download times. However, it proved something of a double-edged sword, introducing a new overhead in the asset display pipeline: geometry decompression and upload to GPU memory. This task was now the single biggest contributor to the perceived 'download' time and, worse still, it introduced a small but significant lock-up of the browser while the geometry was decompressed.
Our solution was to move the decompression task off the main thread using Web Workers, which required modifying the THREE.js Draco decoder to support this bespoke functionality. This change stopped the main thread from being blocked during decompression, but we immediately hit another problem: the decompressed buffer now lived in a different thread from the main thread that required it, and copying that much data between threads created an even bigger lock-up. We solved this by using Transferable Objects, which change the buffer's owner by reference rather than copying the data between threads.
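The key to the zero-copy hand-off is the transfer list: ownership of the buffer moves to the receiving thread, and the sender's copy is detached. A minimal demonstration of the semantics, using structuredClone as a stand-in (it follows the same transfer rules as worker.postMessage(message, transferList)):

```javascript
// Move an ArrayBuffer to a new owner without copying its contents.
// In a worker this would be postMessage(result, [result.buffer]) from the
// Draco decoder thread back to the main thread; the effect on the sender's
// buffer is the same: it is detached and its byteLength drops to 0.
function transferBuffer(buffer) {
  return structuredClone(buffer, { transfer: [buffer] });
}
```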
Our files had been on quite a journey! From the 42MB JSON source, through geometry re-encoding, GLB conversion, Draco compression, Base64 encoding and GZip compression, we were able to get the same file down to 1.9MB without any visible compression artefacts. The brush strokes were near-perfect recreations of their Tilt Brush counterparts, and we hadn't had to compromise on the artists' ideas as we brought them to mobile devices.
Our experience with Google, bringing the world of VR to a wider audience, has been both instructive and fun. We look forward to new ways of engaging audiences with this exciting medium again soon.
You can read more about the background to the project on our Case Study page.