The origins

This project began in 2015, immediately following the Unexpected Gifts project. At that time, my goal was to alternate between animated projects and still image projects, believing that creating a still image would require less production time.

This project was delayed by intensive professional 3D work, such as the Christmas in Alsace series and other Versus II-related projects, including short corporate films, interviews, and more.

I also felt more comfortable working on lighter subjects that required less time to create.

As a result, there were pauses during production. Work resumed from May to September 2020 and continued from August 2022 to June 2024.

What began as a simple artwork evolved into a journal reflecting my feelings over these past years.

Naturally, over such an extended production period, I had to adapt my workflows. The project initially started with 3ds Max and Mental Ray, then evolved to 3ds Max, ZBrush, and Arnold, and eventually transitioned to Blender and Cycles. This was an unusual process for a hermit like me.

I can now share this artwork and discuss the entire creative process behind it.

The initial concept, as usual, was to create a scene with characters to enhance my modeling skills for organic models. In 2015, I felt more comfortable directly modeling anything using poly editing. It wasn’t until a few years later that I began a digital sculpt routine to push myself out of my comfort zone.

In the first draft, the couple was already present, but I felt the main character’s emotions weren’t conveyed strongly enough. This is why I began the modeling process by focusing more on props, allowing my mind some space to think about what was missing while still making progress on the models.

Indeed, the very first idea was to create a scene with organic models, clothing, photorealistic assets, and a crazy amount of detail, while learning more about lighting and scenery. The real meaning of the artwork only emerged a couple of years ago.

As for the title, it went from “Time travellers” to “Ashita e” (I listened a lot to the Fushigi no umi no Nadia original soundtrack, composed by Sagisu Shiro, while sculpting), along with many other ideas. It took some time and a lot of listening to music to finally settle on one; the final title was found the day I released the artwork.

MODELLING

 

 

CLOTHES

As with the Alice project, I started by creating props. When it came to the shoe model and all the clothing, prior to using Substance Designer, I meticulously crafted each texture by hand, channel by channel. Painting the scales was time-consuming, but it resulted in textures that closely resemble the original pair of shoes the model is based on.

 

However, drawing from my experience with previous projects, I was eager to explore modeling cloth using the 3ds Max Cloth modifier and numerous height textures generated from pictures in Bitmap to Material software.

Many of the details on the trousers were derived from ambient occlusion baked from splines, which helped position the seams accurately on the UVs.

As mentioned earlier, the overall cloth deformation primarily relied on the 3ds Max Cloth modifier. Once the character was posed, I could manipulate polygon positions, pinching and adjusting areas as needed to achieve the desired look.

 

The process for the underwear was similar, with the exception of the laces. The texture for the lace was derived from a real lace piece purchased at a clothing store and then 2D scanned to create alpha and normal maps from it.

Creating simple models, such as the tissue box, actually required considerable time to achieve the desired look. Again, the cloth modifier in 3ds Max played a key role in achieving the desired appearance. Of course, with today’s technology, tools like ZBrush cloth brushes could have expedited this process even further.

One of the current challenges in creating realistic 3D objects, specifically in mainstream 3D software, is achieving lifelike 3D paper balls. Although there are only a few paper balls and wrinkled papers in the scene, I aimed to refine the processes I had used in previous projects, such as my fully CGI desk in the Unexpected Gifts project.

The secret lies in the topology of the meshes used in the simulation. For this project, I created approximately 100 different models of paper.

What about some rest?

Managing the bed sheets in the scene was particularly challenging. Once again, I employed the 3ds Max cloth modifier to establish the basic shape of the duvet. I extensively adjusted animated values within the modifier, keyframing rotations of textures during the simulation to achieve a realistic appearance.

Then, the details were added using alphas projected in ZBrush. These alphas were created in two different ways:

  • Within ZBrush, by using cloth brushes on simple planes and utilizing the “Grab Doc Alpha” option.
  • Generated from photographs using standard photography techniques, then processed inside Substance Designer.

The Man

 

After completing the cloth processes, I shifted my focus to the characters themselves:

  • First: I refined the original boxy head of the male character through digital sculpting, transforming it from a standard face to a more stylized appearance.
  • Second: I applied a similar process to the female character, along with a UDIM workflow for her textures.
  • Third: I chose to incorporate hands into the scene to enhance the overall message conveyed by this artwork.

All the models utilized skin pore alphas purchased from Texturing XYZ and Flipped Normals to incorporate fine details.

The challenging part was seamlessly integrating the random topology of the rock on top of the head with the symmetrical topology of the face.

The eyes were created using a straightforward process: two modified spheres for the cornea and main part. All additional details were added using sculpt from ZBrush and textures from Substance Designer.

The woman

This character also started as a polygon-edited model, then moved into ZBrush to achieve a more organic look. She went through multiple iterations to reach the final shape, and as I’m still studying anatomy, she ended up being the most complex character to create in this scene.

Proportion adjustments

Hands

How to manage such models? I’ve experimented with two different approaches:

  • Directly in 3ds Max, using the path follow modifier
  • By creating an insert mesh brush in ZBrush

Both of these workflows share a common requirement: different hand poses.

Initially, setting up the first workflow was straightforward: I rigged the main hand model with a CAT rig and posed the fingers. To maintain control over the model’s volume, I used a Skin Morph modifier to correct deformations at joints, preventing a “gummy” appearance in the fingers.

With and without Skin Morph to correct the joint shape
Early animation test

3ds Max workflow

For the path follow setup, the hands were connected to a long cylindrical geometry simulating the continuation of the arms, which then followed splines for movement.

ZBrush workflow

Using the generated hand models from the initial setup in ZBrush, I divided the models into three separate polygroups. I then utilized curve options to create a library of arms within a custom Insert Multi Mesh brush.

This setup offers the advantage of allowing the arms to follow and interact with the surface of scene objects. Ultimately, I opted for the path follow option for this project.

Props

 

Here, the workflow has been relatively straightforward. Most of the meshes were created using poly editing combined with an OpenSubdiv or subdivision surface approach, aiming to keep the models as low-poly as possible to maintain smooth performance while setting up the final scene.

The only exception was the cardboard with duct tape around the window, which was fully modeled in ZBrush using ZModeler tools.

 

This approach allowed me to quickly generate different shapes and configurations, adding flexibility and realism to the design.

TEXTURING

As mentioned earlier, although the duvet already had a substantial amount of detail from the modeling phase, the final realistic touch was achieved through a procedural material, giving it an authentic duvet appearance. About 80% of the material work was done in Substance Designer, with the remaining details added in Substance Painter.

Some of the materials for the models were quick and easy to set up, while others, like the stone walls and frames in the scene, required a more complex approach. Each of these materials began with a basic tile sampler as the foundation, and then went through a series of transformations using various warp and slope blur nodes. These techniques introduced irregularities and signs of wear, giving the stones an aged, realistic appearance.

The wooden frame of the windows is my favorite material from this project. By setting the UV orientation correctly in 3ds Max before exporting the model to Substance Designer, I was able to achieve a blend of stylized and realistic wood textures. I also experimented with UDIMs, assigning each material (wood, metal, glass, and handle) to its own UV tile. This approach ensured the highest possible resolution in the final files.

Turntables

The most intricate texturing and material setup was for the hands and arms positioned at the bottom of the images.

As mentioned previously, the model consists of two meshes, each with its own UV space and channels. Maintaining the original UVs for the hands was crucial, as all the intricate details were baked from a high-definition model sculpted in ZBrush.

The material setup involved connecting two materials together using a mask or ramp texture to create a smooth transition between two sets of textures. Setting up the Arnold material in 3ds Max was straightforward, but in Blender, I had to refresh my approach and devise a workflow using weight painting and vertex groups. Ultimately, both software environments gave satisfactory results.
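To give a concrete idea of the Blender side of that setup, here is a minimal sketch of a mask-driven blend between two shader sets; the vertex group name "arm_blend" and the node layout are illustrative assumptions, not the project's actual material:

```python
import bpy

# Minimal sketch: blend two Principled BSDFs (hand textures vs arm textures)
# with a weight-painted vertex group read through an Attribute node.
mat = bpy.data.materials.new("HandArmBlend")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

out = nodes["Material Output"]
hand_bsdf = nodes["Principled BSDF"]              # reuse the default BSDF for the hand texture set
arm_bsdf = nodes.new("ShaderNodeBsdfPrincipled")  # second BSDF for the arm texture set

attr = nodes.new("ShaderNodeAttribute")
attr.attribute_name = "arm_blend"                 # weight-painted vertex group (illustrative name)

mix = nodes.new("ShaderNodeMixShader")
links.new(attr.outputs["Fac"], mix.inputs["Fac"])
links.new(hand_bsdf.outputs["BSDF"], mix.inputs[1])
links.new(arm_bsdf.outputs["BSDF"], mix.inputs[2])
links.new(mix.outputs["Shader"], out.inputs["Surface"])
```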

The details on the arms were achieved using a triplanar projection of a bump map, ensuring consistent and realistic surface textures across the model.

Before proceeding with color and roughness texturing in Substance Painter, I encountered a significant challenge with baking details from the high-poly models.

Typically, I used Substance Designer for baking details, which worked well for the male character. However, the female character, being a UDIM model, posed difficulties in achieving accurate interpretation and resolution of the height map, especially when considering the rendering process with Catmull Clark subdivision.

The solution was to utilize ZBrush’s Multi Map Exporter options. Its internal computations generate flawless normal and displacement maps, even for UDIM models, ensuring the integrity and quality needed for the project.

A quick note about the clothing material for this character: the opacity/cutout is only applied to the lace parts. What appears to be skin seen through the cloth meshes is a visual trick, created with a mixture of masks, falloff, and facing-ratio maps to give this illusion.
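For illustration only, here is one way such a facing-ratio trick could be wired in Blender; the skin tone value, the painted mask, and the node choices are assumptions rather than the project's actual shader:

```python
import bpy

# Sketch of the "skin through the lace" illusion: no real transparency, just a
# facing-ratio value multiplied by a painted mask, used to mix a skin-toned
# shader over the cloth shader.
mat = bpy.data.materials.new("LaceFakeSkin")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

out = nodes["Material Output"]
cloth_bsdf = nodes["Principled BSDF"]                # lace/cloth shader
skin_bsdf = nodes.new("ShaderNodeBsdfPrincipled")    # flat skin-toned shader
skin_bsdf.inputs["Base Color"].default_value = (0.8, 0.55, 0.45, 1.0)

facing = nodes.new("ShaderNodeLayerWeight")          # facing ratio / falloff
mask_tex = nodes.new("ShaderNodeTexImage")           # painted mask of the see-through areas

mul = nodes.new("ShaderNodeMath")
mul.operation = 'MULTIPLY'
# Depending on the desired falloff, the Facing output may need to be inverted (1 - Facing).
links.new(facing.outputs["Facing"], mul.inputs[0])
links.new(mask_tex.outputs["Color"], mul.inputs[1])

mix = nodes.new("ShaderNodeMixShader")
links.new(mul.outputs["Value"], mix.inputs["Fac"])
links.new(cloth_bsdf.outputs["BSDF"], mix.inputs[1])
links.new(skin_bsdf.outputs["BSDF"], mix.inputs[2])
links.new(mix.outputs["Shader"], out.inputs["Surface"])
```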

HAIR – part I

What a challenge indeed. Once again, things evolved over time. Initially designed with a common short to mid-length hairstyle, the female character in the final scene now features very long hair.

Initial tests using Ornatrix in 3ds Max yielded promising results. However, as I transitioned to Blender to complete the project (further explained in the RENDER section of this article), I had to manage this intricate step using Geometry nodes.

In Blender, I found that some nodes were analogous to Ornatrix modifiers, allowing me to almost replicate the workflow I had developed in Max.

The initial phase involved drawing shapes and refining them to match the preliminary design. Subsequently, starting from the scalp of the head, I incorporated long hairs with numerous segments and positioned them on the characters and the bed. Afterward, it was a matter of implementing the appropriate nodes such as clump and frizz. The crucial addition was the Shrinkwrap modifier, essential for ensuring the hair adhered correctly and avoided intersecting with the scene. While not flawless, any problematic areas were adjusted during compositing.

LIGHTING & RENDERING

The mood evolved throughout the project until the very end, when I opted to completely overhaul the lighting setup to convey a deeper sense of duality in the final render. The camera position also evolved over time.

Despite starting the project with Mental Ray, transitioning to Arnold was straightforward thanks to my familiarity with the render engine from previous projects. However, the decision to shift the entire project from Max to Blender, and ultimately render everything in Cycles, came after facing persistent challenges. By the end of 2023, despite optimizing every aspect of the project, Arnold struggled to handle my scene, even on the CPU.

During that period, I had been using an aging processor and only 32GB of RAM for several years. Concurrently, I was also learning Blender for teaching purposes, particularly camera tracking. Out of curiosity, I attempted to import my entire Max scene into Blender, since Cycles' internal computation differs from Arnold's. After conducting tests, I was pleasantly surprised to find everything stable.

How to optimize a scene

1/ Convert textures

Using JPEGs instead of TGA or PNG files reduces the time needed to load textures when you launch a render. The same goes for using 8-bit instead of 16-bit images. Of course, you can keep 16 or 32 bits for your normal and displacement maps if you want fine details and want to avoid a stair-stepping look.

Converting your image texture files to TX files will also reduce loading times when renders are launched; a small batch-conversion sketch follows the figure caption below.

PNG textures took 4min03 vs JPEG textures took 3min16 to load and render
3ds Max – Arnold – CPU
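As a rough illustration of this kind of batch conversion (a sketch under stated assumptions, not the actual script used on the project), the following Python snippet converts color textures to 8-bit JPEGs with Pillow while leaving normal and displacement maps untouched; the folder names, suffix conventions, and JPEG quality are all assumptions:

```python
from pathlib import Path
from PIL import Image

SRC = Path("textures")            # hypothetical source folder
DST = Path("textures_optimized")
DST.mkdir(exist_ok=True)

for src in SRC.iterdir():
    if src.suffix.lower() not in {".png", ".tga", ".tif", ".tiff"}:
        continue
    # Keep normal and displacement maps at 16/32 bits to avoid stair stepping.
    if any(tag in src.stem.lower() for tag in ("normal", "nrm", "disp", "height")):
        continue
    Image.open(src).convert("RGB").save(DST / (src.stem + ".jpg"), "JPEG", quality=90)

# For even faster loading, the converted files can then be turned into tiled,
# mip-mapped .tx files with the maketx tool bundled with Arnold / OpenImageIO, e.g.:
#   maketx textures_optimized/wall_basecolor.jpg -o textures_optimized/wall_basecolor.tx
```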

2/ Normal VS Displacement

Keep medium/fine details for normal maps only

Unless you have extreme close-up shots of a detailed mesh, keep the low and medium details in your displacement maps and the high/fine details in your normal maps: you won't need to subdivide your meshes as much at render time. Alternatively, subdivide your model selectively to get higher polygon density in the areas with detailed displacement. You can also add extremely fine details with a 2D bump map (a node sketch of this split follows below).

Stair stepping with the 16-bit displacement map (left) vs smooth results with the 32-bit displacement map (right)

Be smart when using these texture maps to avoid exaggerated results
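As a sketch of how that split can be wired in a Cycles material (the map assignments and values are illustrative, not taken from the project files), the low/medium-frequency map goes through a Displacement node while the very fine 2D detail feeds a Bump node on the shader's normal input:

```python
import bpy

mat = bpy.data.materials.new("DisplacedProp")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
bsdf = nodes["Principled BSDF"]
out = nodes["Material Output"]

# Low/medium frequency detail: displacement map -> Displacement node -> output.
disp_tex = nodes.new("ShaderNodeTexImage")   # assign a 16/32-bit displacement map here (Non-Color)
disp = nodes.new("ShaderNodeDisplacement")
disp.inputs["Scale"].default_value = 0.01
links.new(disp_tex.outputs["Color"], disp.inputs["Height"])
links.new(disp.outputs["Displacement"], out.inputs["Displacement"])

# High frequency detail: fine 2D bump map -> Bump node -> BSDF normal input.
bump_tex = nodes.new("ShaderNodeTexImage")   # assign the fine-detail bump map here (Non-Color)
bump = nodes.new("ShaderNodeBump")
bump.inputs["Strength"].default_value = 0.3
links.new(bump_tex.outputs["Color"], bump.inputs["Height"])
links.new(bump.outputs["Normal"], bsdf.inputs["Normal"])
```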

3/ Decimation VS Subdivision

On static meshes (those not deformed by a skin modifier), do you really need to subdivide your mesh 7 or 8 times at render time? I don't, so here is the destructive workflow I used on most of my static meshes:

Using a shoe from my scene as an example, I manually subdivided the mesh with edge loops to get homogeneous topology across the surfaces, added a few subdivisions with TurboSmooth or OpenSubdiv (Subdivision Surface in Blender), then imported the mesh into ZBrush with GoZ and subdivided it a couple more times. I applied my displacement map and finally used the Decimation Master plugin to drastically reduce the polygon count while keeping enough density to preserve the displaced effect. Of course, you must keep the UVs during this process to be able to reuse your existing base color, roughness, and normal maps.

This way, the displacement is physically baked into the mesh, and the decimation step gives you a much lighter model. For all the hands in my scene, the subdivision process pushed the polygon count up to 47.5 million; the decimated result was reduced to 2.8 million polygons. The missing details come back at render time through the roughness and normal maps.

Rendered in 3ds Max and Arnold – CPU mode

4/ UV Optimization

A brief word about unwrapping. Since my project is a still image, I focused on the parts that are visible to the camera, meaning the non-visible parts get smaller UV space; this way, I could get more detail into my textures.

5/ Mesh Optimization

The more modifiers you add on top of a mesh, the more time it takes to evaluate at render time. Try to collapse/apply the modifiers on your meshes as much as possible; this is the last step before rendering the final scene. In 3ds Max, if you have a non-animated Noise modifier applied to a mesh, collapse it to an Editable Poly, and do the same for unwrapping, and so on (a small collapse sketch follows the figure captions below).

Also, if some of your meshes go out of frame and don't have a particular impact on what the camera sees, such as a cast shadow or a light bounce, reduce the poly count by deleting the non-visible polygons. This is what I did on the hands, walls, and floor meshes because of the decimation process used earlier. If you want to keep the non-visible meshes, drastically lower their poly count while keeping the basic volume.

All that won’t be seen in frame is either deleted (arms) or decimated (walls)

The non-visible part of the bed (covered by the blanket) was cut away and heavily decimated
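Here is a minimal Blender sketch of that final collapse step, assuming the current selection only contains static, non-rigged meshes (in 3ds Max, the equivalent is simply collapsing the stack to an Editable Poly):

```python
import bpy

# Apply (collapse) every modifier on the selected static meshes so the render
# engine does not have to evaluate the whole stack again at render time.
# Note: meshes with shared (multi-user) data need to be made single-user first.
for obj in bpy.context.selected_objects:
    if obj.type != 'MESH':
        continue
    bpy.context.view_layer.objects.active = obj
    for mod in list(obj.modifiers):
        bpy.ops.object.modifier_apply(modifier=mod.name)
```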

6/ Results

Less time is spent loading before the render actually starts, because there are fewer subdivision processes on each mesh. Textures load faster (and, depending on your specs, you run out of memory less often), which means overall shorter render times.

A few months ago, after upgrading to a more recent processor, more RAM, and a powerful GPU, rendering speeds improved significantly.

Now that my current machine could efficiently compute my scene in Blender, the question arose of returning to Max and Arnold because of the initial Ornatrix hair setup. I tested the original Max scene that had previously crashed, and surprisingly, it worked without issues. However, spending another month to complete the entire scene in Max just to achieve the desired look for the hair wasn't feasible. The initial hair system created with Ornatrix wasn't compatible with the complexity of my scene.

HAIR – part II (Kodawari)

Or maybe it is feasible? While editing this article, I started an R&D process on a paid job involving animated hair. Long story short: I discovered a way to get exactly the main hair volume and shapes I had in mind by using the Hair Strips workflow. Unfortunately, combining a Push Away From Surface modifier with a Hair Strips object eventually makes the software unstable and unresponsive. The secret is to bake the original Hair Strips mesh to an Ox Baked Hair object; from there, there are no more incompatibility issues.

What are one or two more weeks of work on a project that already took so much time to create?

 

Long hair brushing step by step

 

This process involves some destructive steps, since the hair is baked at certain points.

Step 1: Create the main hair volume using a few splines, drawn by hand and adjusted to sit close to the scene elements. A Hair & Fur modifier is then added to fill in hairs between the splines, which are turned into guides. The hair count is set to a fairly low amount, then all of the hairs are converted to splines.

Step 2: Add an Ox Hair From Shapes modifier to the converted splines from step 1. Add a bunch of Ox modifiers, plus the Gravity and Push Away From Surface combo, on top of the stack, then convert the hair to guides and bake them.

Step 3: Add a few more Ox modifiers, like Strand Multiplier, to get wider strands here and there. If the second Strand Gravity + Push Away From Surface modifier combo makes the software unstable, bake the hair from the curve modifier to an Ox Baked Hair object; it should work, and you'll be able to finish your style.

Step 4: Convert the hair to guides and add an Ox Edit Guides modifier to adjust problematic strands. Indeed, the Push Away From Surface modifier can give unwanted results on some hairs. To partially fix this, use closed geometry as the collision object.

Of course, you need to be very precise before baking, and duplicate your hair setups as backups, since this workflow is destructive.

Step 5: Export the various hair groups.

Step 6: (optional) Since all the lighting and materials were already configured in Blender, I was able to export everything as Alembic. After importing them back into Blender, some adjustments were needed to convert the splines to shapes. This allowed the use of the Geometry Node system for final touches like hair duplication, frizz (again), shrinkwrap, trimming, and set curve profile nodes.

Naturally, some of the curves had to be manually edited to resolve collision issues in certain areas, which turned out to be the most time-consuming part of the hair setup.

From there, I could confidently proceed to one of the final stages of the project: fine-tuning everything in post-production.

 

COMPOSITING

Coming from a VFX background, I enjoyed experimenting with various color corrections and effects to achieve the final look early in production.

However, as the project progressed and the mood evolved, it became a rewarding experience to explore individually coloring different light passes to dramatically alter the scene’s appearance.

After introducing additional light sources to enhance contrast within the room, I implemented a technique of selectively masking passes to control where light illuminated specific areas such as faces and eyes, or to remove light from undesired areas. For instance, on the female character’s face, the light from the left window was drawing too much attention. By blending the masked light path with illumination from the computer monitor, I achieved smoother shadows on her face.
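To make that recombination concrete, here is a tiny sketch of the idea, assuming the individual light passes have already been loaded as linear float arrays; the pass names, mask, tint, and gain values are purely illustrative:

```python
import numpy as np

def add_light(base, light_pass, mask, tint=(1.0, 1.0, 1.0), gain=1.0):
    """Selectively re-add a light pass where the mask allows it.

    base, light_pass: HxWx3 float arrays (linear light AOVs from the render).
    mask: HxW float array, 1.0 where this light should contribute.
    """
    tint = np.asarray(tint, dtype=np.float32)
    return base + light_pass * mask[..., None] * tint * gain

# Illustrative usage: keep the window light everywhere except the face,
# then bring back a softer, cooler fill from the monitor light pass there.
# comp = add_light(comp, window_light, 1.0 - face_mask)
# comp = add_light(comp, monitor_light, face_mask, tint=(0.9, 0.95, 1.1), gain=0.6)
```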

Additional passes like Z-depth, AO, and direct/indirect diffuse helped in the process of enhancing the overall image:

The final touches

At the end of production, I generated several render previews to be adjusted in post. The primary adjustments focused on the color of the bed sheets, opting for darker tones to make the new version of the long white hairs stand out more.

Once everything in the previews began to look right, I proceeded to launch the final 3D render.

Spending an extended period on a large-scale project like this allows you to rethink approaches from past projects. Should everything ideally be perfected directly in the 3D scene? Certainly, as it greatly eases the post-production process. However, occasionally you can elevate the final appearance further by delicately adjusting small areas of light or repositioning elements for better composition, all in 2D, without needing to wait another day for a render.

To achieve greater control over the hair’s appearance, a custom mask was created specifically to adjust the brightness of the strands. This mask was incredibly useful in precisely managing how the glow softly highlights the right side of the artwork.

Compositing process on an earlier version of the final render

COMMUNICATION

This phase began early in production: crafting the making-of article/book, sharing progress on personal blogs and social media platforms (including Discord servers and specialized forums), and reaching out to magazines to propose articles on the artwork and creative process. Additionally, I offered some of my 3D creations on various online markets. Un Avion en papier dans la nuit is my first project oriented in this manner, with most props now available for purchase and certain materials offered freely to the community.

Here are some of the props you can freely explore. The available models are listed just below:

Assets from the store

Free assets

CONCLUSION

I hope this article has provided you with valuable processes and concepts that you can apply to your own projects. As for me, moving forward, I will be focusing on animation and creating artworks with shorter production times. See you on my next project!

Nicolas Brunet – October 2024 – All rights reserved