Making Good with Ray Tracing Part 2 – Biased or Unbiased…


Rendering engineers have two broad approaches to computing ray tracing: biased and unbiased. Deciding between speed and efficiency on one hand and physically accurate effects on the other is one of the critical trade-offs in any render. So, what is biased rendering and what is unbiased rendering?

Exterior Visuals and Interior Visuals

From an artist's point of view, setting aside the computing jargon: if you are rendering light in a scene that is 'out there in nature', you may want light rays to behave as closely to nature as possible, so unbiased path tracing is the best option. As the name implies, there is no 'biased' selection of, say, the number of rays, their direction or their frequency. Countless rays shoot out randomly in all directions, bounce, reflect or refract, and are re-bounced or re-transmitted, just like light in nature. Think of how 'Light Cache' is used for 'Global Illumination': the goal is a general, even dispersal of light across the scene.

However, for interior visualization the artist may wish to play with the lights and the mood of the scene, so biased path tracing works best: it gives you some control over the number of rays and the direction they are shot in, e.g. rays are shot toward the light source (and can be restricted from shooting anywhere else). You could say the light then acts quite specifically, and skillful artists can use such short cuts to achieve photo-realism or add mood to a scene.
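
To make the distinction concrete, here is a tiny Python sketch of the two ways a tracer can choose a ray direction at a shading point. This is an illustration only, not finalRender code; the point light position and the helper names are made up for the example.

```python
import math
import random

LIGHT_POS = (0.0, 4.0, 0.0)   # a single point light, purely illustrative

def random_direction():
    """Unbiased flavour: pick a direction uniformly at random (sphere sampling)."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def direction_towards_light(point):
    """Biased-style shortcut: aim the ray straight at the known light."""
    d = tuple(l - p for l, p in zip(LIGHT_POS, point))
    length = math.sqrt(sum(c * c for c in d))
    return tuple(c / length for c in d)

# An unbiased tracer fires huge numbers of random_direction() rays and lets
# them find the light by chance; a biased tracer spends its ray budget on
# direction_towards_light() and similar educated guesses.
```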

finalRender uses a biasHybrid™ approach that allows for both biased and unbiased ray tracing controls. The difference between path tracing and ray tracing? That will be another blog article.

biasHybrid™ and Caustic Effects in Rendering

Caustics simulate how light reflects and refracts off one or more objects when they are illuminated. In visualizations with one or more light sources and objects of varying translucence, caustic effects are critical to making the scene photo-realistic. In finalRender biasHybrid, caustics are computed with unbiased path tracing, which makes them trickier, so speed and accuracy had to be balanced; the enhanced fR-AreaLight helps create well-defined caustic effects.

finalRender Copyright 2019

Light rays are followed along their paths and eventually they start to 'build up', or focus, in some areas more than in others. This effect creates those recognizable light patterns we know as caustics.

light rays build up – focus – reflect/refract

In the simulation above, an area light was used to create the well-defined caustic effects. The image actually shows only the indirect illumination (GI) – just the caustic effects.

Without and With Caustic Effects

This is normally known as 'surface caustics' in CGI. Surface caustics are a natural phenomenon produced by one or more focal points formed from many light rays. Mother Nature doesn't differentiate between caustics created by reflection and those created by refraction; the cause is always the same. Light rays that hit a reflective surface bounce off in a specific way and are directed toward a focal point somewhere near the object. Likewise, transparent objects bend light rays so that some of them converge on one focal point, creating a caustic light effect on other surfaces.

finalRender supports all possible types of caustic effects: those from light rays reflecting off a surface such as metal or refracting through a surface such as glass, and those generated through the use of Volume Lights (called volume caustics). And to push realism even further, the underlying functions are sampled many times over – a technique known as Monte Carlo sampling.
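
The Monte Carlo idea itself fits in a few lines. The generic sketch below (not finalRender code) estimates an integral by averaging random samples of a function; the more samples you take, the closer the estimate gets, which is exactly why more render passes mean less noise.

```python
import math
import random

def estimate_integral(f, samples=10_000):
    """Monte Carlo estimate of the integral of f over [0, 1]."""
    total = sum(f(random.random()) for _ in range(samples))
    return total / samples

# Example: the integral of sin(pi * x) over [0, 1] is 2/pi, roughly 0.6366.
print(estimate_integral(lambda x: math.sin(math.pi * x)))
```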

Learn more @ https://vimeopro.com/cebasvt/finalrender-quick-tutorials

What is OpenVDB ?


‘ OpenVDB is an open source C++ software library comprising a novel hierarchical data structure and a large suite of tools for the efficient storage and manipulation of sparse volumetric data discretized on three-dimensional grids. It was developed by DreamWorks Animation for use in volumetric applications typically encountered in feature film production and it is currently maintained and developed by the Academy Software Foundation (ASWF). The library was primarily developed by Ken Museth, Peter Cucka, Mihai Aldén and David Hill.’ See Openvdb.org

The Hollywood industry has always been about diversity, greater openness and greater accessibility to larger audiences. The same vision infuses the technology behind film production, animation and post-production, especially when it comes to programming and handling images and large volumes of graphic data.

Today, software engineers play a major role in advancing movie making. OpenVDB was first developed by DreamWorks Animation in 2012/13 and further enhanced by ILM. DreamWorks essentially opened up a new era of volumetric storage and processing, and since then OpenVDB has burst into mainstream production, used successfully in major animated features such as 'Puss in Boots' (2011), 'Rise of the Guardians' (2012) and 'How to Train Your Dragon' (2014/2019), among many others. OpenVDB is used in the VFX department of the animation/film production pipeline, especially for volumetric effects such as destruction, explosions, fire, fluids, clouds and smoke, and sometimes even for character effects (CFX) like hair and cloth – anything that wholly or partly involves volumetric data.

For more on the history of Hollywood’s push towards open source, read this article:
https://www.zdnet.com/article/hollywood-goes-open-source/

A dictionary definition of VDB file:
A VDB file is a volume database file created by OpenVDB, an open source C++ software library used to create special effects in DreamWorks animated films. It contains data used to simulate volumetric effects, such as clouds and smoke, or to represent a surface, such as water.

Open VDB Overview

Why VDB?

Extracted from the lecture video above by Ken Museth (the founder of OpenVDB):

VDB has low overhead, is highly adaptive and processes data fast. VDB is also somewhat paradoxical: it is unbounded (the volume is not confined to a 'box'), yet it can take in more data without a corresponding jump in memory.

As a hierarchical 'tree' data structure, VDB is built to stay shallow (fewer levels), achieving speed without using too much memory. See "VDB: High-Resolution Sparse Volumes with Dynamic Topology" (Ken Museth, Chair of the TSC for OpenVDB).

Highlights of movies using VDB since 2012

OpenVDB's hierarchical data structure essentially allows leap-frogging over empty space in the tree (a 3D image of a bunch of flowers, say, contains a lot of empty space between and around the petals), so processing can jump straight to the voxels that actually contribute to the image. OpenVDB ships with more than 70 tools for manipulating and rasterizing volumes at different levels.

from https://youtu.be/7hUH92xwODg
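
To get a feel for why that sparsity matters, here is a deliberately simplified Python sketch. It uses a plain dictionary rather than the real VDB B+tree-like hierarchy, so treat it as the concept only: nothing is stored for empty space, and processing touches only the voxels that actually carry data.

```python
# A toy sparse volume: store only the voxels that carry data.
# The real OpenVDB tree adds a shallow hierarchy of nodes on top of this
# idea so that whole empty regions can be skipped in a single step.
sparse_volume = {}          # (x, y, z) -> density
BACKGROUND = 0.0            # value assumed everywhere nothing was stored

def set_voxel(x, y, z, density):
    sparse_volume[(x, y, z)] = density

def get_voxel(x, y, z):
    return sparse_volume.get((x, y, z), BACKGROUND)

# Write a tiny puff of "smoke"; the surrounding 512^3 of empty air
# costs no memory at all.
for i in range(8):
    set_voxel(100 + i, 50, 50, 0.9)

# Processing visits only the 8 active voxels, not ~134 million empty ones.
for (x, y, z), density in sparse_volume.items():
    print(x, y, z, density)
```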

VDB can convert volume data back into polygons with high efficiency, at the full multi-threaded speed of your processor – a critical function for many animation projects – and it handles particles as well. There are also tools for handling fractures, fragmentation and deformation, which are normally very difficult to do.

Note that the OpenVDB GitHub repository is hosted by the ASWF at github.com/AcademySoftwareFoundation/openvdb.

Besides Cloud Modelling..

… and DreamWorks Animation did an amazing job using VDB for the clouds in 'How to Train Your Dragon'.

VDB handles volumetric data efficiently for liquids, raster primitives, level sets, grid analysis, particle-to-level-set conversion, flow fields, morphological operations, advection (the transport of a substance or quantity by bulk motion), adaptive/masked meshing and distributed filtering, as well as the manipulation of secondary elements and vector fields. Refer to https://www.openvdb.org/forum/

END
– this writer supports Extinction Rebellion.

Making Good with Ray Tracing Part 1


Glass Spheres
GPU Unbiased GI Caustics Fun

The phrase 'ray tracing' often comes hand-in-hand with the latest news about rendering techniques, so today I hope to bring some clarity to this terminology for myself and, hopefully, for the reader as well.

An excellent rendering engine creates a photo-realistic image from 3D geometry. This means accurately calculating the light sources and their effects on objects, mimicking light particles hitting, reflecting off and transmitting through surfaces.

Light Sources and Effects

This may be 'old' knowledge for expert CG artists, but for beginning 3D artists to see how their light source(s) and surfaces interact to achieve a certain ambiance, it is important to know what ray tracing actually traces.

In a 3D setup you can have one light source or many, but let's start with a single light source, say a bulb above a table with some objects on it. Ray tracing traces a light path from your eye through a pixel on the view plane (the computer screen) to the surfaces and objects, and then on to the light source and back, all within the 3D scene. The ray tracing algorithm first has to model the scene and its geometric objects.

One key factor to note is whether your rendering engine uses forward ray tracing, backward ray tracing or hybrid ray tracing. You are right to think that the hybrid algorithm, as usual, is the more advanced and most accurate tracer, but that comes at the expense of speed, since more calculations are involved. Hybrid tracing balances forward and backward rays, weighted more towards backward ray tracing.

Ray tracing works in completely the opposite way to our real vision. When we see things with our naked eyes, light from a source bounces off objects and hits the cornea. The forward ray tracing algorithm does, in fact, mimic this. The more efficient backward tracing, however, works the other way around: it charts a light path from the eye through a point on the view plane (the computer screen) out to the object(s), with the object closest to the view plane seen first. To put it simply, ray tracing for rendering traces only the objects that are meant to be seen on your screen.
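
Here is a minimal Python sketch of that backward idea: shoot one ray from the eye through a pixel on the view plane and keep the closest intersection. It is generic, textbook-style ray tracing, not any particular renderer's code, and the little sphere scene is invented purely for illustration.

```python
import math

# Scene: a few spheres, each (center, radius). Purely illustrative values.
SPHERES = [((0.0, 0.0, -5.0), 1.0), ((1.5, 0.0, -7.0), 1.0)]

def ray_sphere(origin, direction, center, radius):
    """Return the distance to the nearest hit, or None if the ray misses."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c          # direction is assumed normalized (a = 1)
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def trace_pixel(px, py, width, height):
    """Backward tracing: eye -> pixel on the view plane -> closest object."""
    eye = (0.0, 0.0, 0.0)
    # Map the pixel to a point on a view plane one unit in front of the eye.
    x = (px + 0.5) / width * 2.0 - 1.0
    y = 1.0 - (py + 0.5) / height * 2.0
    length = math.sqrt(x * x + y * y + 1.0)
    direction = (x / length, y / length, -1.0 / length)
    hits = [t for s in SPHERES if (t := ray_sphere(eye, direction, *s)) is not None]
    return min(hits) if hits else None   # the closest surface is what the pixel "sees"
```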

The algorithm of pure forward tracing is seldom used, since it is difficult to ensure that every ray of light leaving the objects will reach the viewer's eye, although it produces better color calculations. So hybrid ray tracing must still sample a selective number of forward rays in order to achieve accurate color tones for the objects and surfaces. Backward ray tracing does most of the job and also gives the artist more fine-tuning power to achieve a certain look and feel by modifying the light source and the object positioning.

3dlink.it
MarcoLazzarini_day n night kitchen corridor 3dlink.it

Paths of Light Travel

For discussion's sake, there are only two main paths light can take when it hits a surface: the ray is either reflected (bounces off) or transmitted (passes through the object).

And for reflections, there are only two types of reflected light: 'specular', where one ray comes in and one ray bounces out, or 'diffuse', where one ray comes in and many scattered rays bounce outwards. Specular reflection, as you may have guessed, occurs when the surface of an object is smooth. Diffuse reflection occurs when the surface is rough, so the incoming light hits it at varying angles and bounces back in more than one direction.

”  The reflection of light can be roughly categorized into two types of reflection: specular reflection is defined as light reflected from a smooth surface at a definite angle, and diffuse reflection, which is produced by rough surfaces that tend to reflect light in all directions… There are far more occurrences of diffuse reflection than specular reflection in our everyday environment. ” See diagram @ http://micro.magnet.fsu.edu/primer/java/reflection/specular/
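
In code, the two reflection types differ only in how the outgoing direction is chosen. A small, generic sketch using simple vector math (an illustration, not renderer code):

```python
import math
import random

def reflect_specular(incoming, normal):
    """Mirror reflection: one ray in, exactly one ray out (r = d - 2(d.n)n)."""
    d = sum(i * n for i, n in zip(incoming, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incoming, normal))

def reflect_diffuse(normal):
    """Rough surface: one ray in, a random direction out of the upper hemisphere."""
    while True:
        v = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
        if sum(c * c for c in v) <= 1.0:
            break
    length = math.sqrt(sum(c * c for c in v))
    v = tuple(c / length for c in v)
    # Flip the sample if it points into the surface instead of away from it.
    if sum(a * b for a, b in zip(v, normal)) < 0.0:
        v = tuple(-c for c in v)
    return v
```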

So this allows you to tweak your rendered image depending on the surface roughness or material textures you choose in a 3D environment for a particular effect. Again, knowing the differences between material/texture types and how they interact with the light source(s) gives you the power to render a medley of scenes from the same 3D model. You can access helpful rendering tutorials at https://vimeopro.com/cebasvt/finalrender-quick-tutorials

Ray Tracing and Rendering Effects

3D artists working on interior design, architectural visualization or animated characters and environments will always be concerned with the creative look and feel. For this, you may wish to learn more about the main kinds of rendering effects achieved with ray tracing:

  • Shadowing
  • Reflection
  • Transparency
  • Refraction

Referencing www.cebas.com/manual/finalRender, there is now finer tuning of Global Illumination and ray tracing effects, combined with Scene Converters (if you are coming from a different renderer). Besides supporting all standard lights in 3ds Max, cebas Visual Technology has recently combined spectral wavelength hybrid rendering (unbiased) with a unique Light Cache (biased) method, giving biasHybrid™.

The list is self-explanatory, as we encounter these four main light effects every day in our environment. Ray tracing and Global Illumination are givens in a renderer. You get a different look and feel each time by adjusting the material type and placing light sources or Area Lights. Adding Motion Blur or Depth of Field changes the creative scene again. More on this later. Follow our blog at http://www.cebas.org/blog/ and join in on the cebasWall.

Frame Buffer – finalRender: see tutorial by Edwin Braun @ https://youtu.be/W7shVvWJa6U

I will not go into the details of how rendering each of these desired effects will affect your scene, as there is plenty of information online. Try this link first: https://cs.stanford.edu/people/eroberts/courses/soco/projects/1997-98/ray-tracing/effects.html

Because of the complex calculations involved, ray tracing has been better suited to production work where long render times are tolerated, such as film and television, whereas game rendering prioritizes speed and has traditionally avoided ray tracing.

For those who wish to delve deeper into the world of ray tracing, see http://www.cs.cornell.edu/courses/cs4620/2013fa/lectures/22mcrt.pdf

References:
https://computergraphics.stackexchange.com/questions/337/radiosity-vs-ray-tracing
https://cs.stanford.edu/people/eroberts/courses/soco/projects/1997-98/ray-tracing/intro.html
https://www.cg.tuwien.ac.at/research/rendering/rays-radio/
https://www.designnews.com/design-hardware-software/why-every-engineer-needs-know-about-ray-tracing/154993415159316
https://www.scratchapixel.com/lessons/3d-basic-rendering/global-illumination-path-tracing 
https://chunky.llbit.se/path_tracing.html


Physical Rendering with Random Walk Sub-Surface Scattering


What is Random Walk?

If you're dealing with dynamic particles, whether in the form of light waves or actual visible particles such as smoke or particles diffusing in water, you have probably heard of the 'random walk' (a phrase from probability theory, whose foundations trace back to the French mathematician Pierre de Fermat). A random walk (RW) can happen in a 1D, 2D or 3D environment. For rendering objects and spaces, the 2D and 3D random walks are the more useful references – think of gas particles bouncing around an enclosed tank – or, more precisely, of how RW can help simulate realistic Sub-Surface Scattering (SSS).

At its base, the 'random walk' and its probability are not difficult concepts to grasp. Basically, the best way to depict how light particles move and spread through a material is to program a random, non-linear motion (in 2D or 3D). The material's density – from liquid to semi-solid to translucent solid – defines how light scatters and/or how a color diffuses, so RW becomes an algorithm for shaders in rendering. This rendering algorithm gives the artist greater refinement in creating an image that is almost life-like, or photo-realistic.

Mathematically, a random walk in one dimension is a sum of (+1)/(−1) steps: after N steps the position is x_N = s_1 + s_2 + … + s_N, where each step s_i is equally likely to be +1 or −1, so the average position stays at zero while the mean squared distance from the start grows as ⟨x_N²⟩ = N.

The same idea extends to the X-Y-Z axes, with each step moving the walker by one unit in a random direction.

Reference: http://www.mit.edu/~kardar/teaching/projects/chemotaxis%28AndreaSchmidt%29/more_random.htm
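
A minimal simulation makes both cases concrete (plain Python, purely illustrative): the 1D walker sums ±1 steps, and the 3D walker steps one unit along a randomly chosen axis.

```python
import random

def random_walk_1d(steps):
    """Sum of +1/-1 steps; the return value is the end position x_N."""
    return sum(random.choice((-1, 1)) for _ in range(steps))

def random_walk_3d(steps):
    """Each step moves one unit along a randomly chosen axis (+ or -)."""
    position = [0, 0, 0]
    for _ in range(steps):
        axis = random.randrange(3)
        position[axis] += random.choice((-1, 1))
    return tuple(position)

# The mean squared end distance grows linearly with the number of steps.
walks = [random_walk_1d(1000) ** 2 for _ in range(5000)]
print(sum(walks) / len(walks))   # close to 1000
```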

Random Walk in SubSurface Scattering

For new learners, the best way to see with your own eyes how natural sub-surface scattering behaves is to shine a light from under your palm and through your fingers. You will see an orange-reddish hue coming through your fingertips and along the edges of your skin. Sub-Surface Scattering, or SSS, occurs in most translucent and organic materials, such as wax, anything with a skin surface, marble and even silky cloth. You can understand it better if you think of SSS as being about how objects absorb light and re-emit it, rather than purely reflecting it. And that is what makes it so tricky to compute: how does a material absorb light particles, diffuse their motion and re-emit them?

Reflection is a simpler process to render. With translucent objects, by contrast – think of a glass statue – refraction, reflection and SSS all act at the same time on how light disperses, so it can be pretty tricky to render an accurate depiction of light passing through such objects. And if you have a glass full of marbles, there is even greater complexity in getting things physically accurate. Rendering engines also have to take on the important challenge of processing speed: yes, sub-surface scattering can seriously slow down the rendering process, and time equates to money.

Nowadays, render engines' SSS shaders usually fall into either a 'fast' or a 'physical' mode. Fast SSS is what you find in game engines like Unity, where processing speed matters more than physical accuracy as long as the image quality is presentable; in fast mode, light maps are used to fake the scattering effect. Physical SSS rendering, on the other hand, is computationally expensive: the engine has to combine physically correct light transport, ray tracing and global illumination to simulate absorption versus scattering. finalRender, for example, is a physical rendering engine that combines all of these with the random walk approach to enhance the look and quality of sub-surface scattering. You control the absorption of light based on its individual spectral wavelength, and then speed the whole process up with the biasHybrid™ Light Cache and hybrid GPU+CPU rendering.

Random walk is perfect for computing the scattering of light photons as well as the splitting of rays into multiple rays. Depending on the material, splitting and scattering events can number in the tens of thousands. The speed side of the equation is resolved with finalRender's Adaptive Sampling.
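
Conceptually, a random-walk SSS shader follows a photon inside the material step by step: pick a random travel distance, scatter in a random direction, lose a little energy, and repeat until the photon exits or is absorbed. The sketch below is a heavily simplified illustration of that idea, with made-up material values and isotropic scattering; it is not finalRender's actual shader.

```python
import math
import random

def random_direction():
    """Uniform random direction on the unit sphere."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def random_walk_sss(scatter=2.0, absorb=0.2, max_bounces=64):
    """Walk one photon through a slab of material that fills z < 0.

    scatter/absorb are per-unit-length coefficients (illustrative numbers).
    Returns the surviving energy if the photon exits, or 0 if it is absorbed."""
    sigma_t = scatter + absorb
    position = (0.0, 0.0, -1e-4)     # start just below the surface
    energy = 1.0
    for _ in range(max_bounces):
        distance = -math.log(1.0 - random.random()) / sigma_t  # free path length
        direction = random_direction()
        position = tuple(p + distance * d for p, d in zip(position, direction))
        if position[2] >= 0.0:       # walked back out through the surface
            return energy
        energy *= scatter / sigma_t  # the absorbed fraction is lost each event
        if energy < 1e-3:
            return 0.0
    return 0.0

# Averaging many such walks estimates how much light the material re-emits.
print(sum(random_walk_sss() for _ in range(20000)) / 20000)
```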

The Gold Standard Stanford Dragon Sub-Surface Scattering test:
finalRender biasHybrid ™ Random Walk:

A standard way of testing how well a rendering engine handles SSS is to compare image quality under forward-scattering and back-scattering light sources, i.e. with the light entering the object from the front and from the back, respectively.

Great Adjustability with Cool Speed

In real nature, everything is exposed to countless particles of light bouncing around, and our watchful eyes eventually register that as 'natural' – a beautiful thing to behold, bathed in light and with parts mysteriously covered in shadow. When an image is rendered as art, however, a style is achieved through how the artist chooses to depict those lights and shadows, standing out from or blending into their 2D/3D environment. That control makes images more manageable and opens up greater perspectives. Hence, physically accurate random walk sub-surface scattering is your best bet.

by Cedar Thokme, Social Media www.cebas.com

Physically Accurate Dispersion in Rendering Colors


By Edwin Braun

Computer generated images have become indistinguishable from real world images. Modern rendering applications like finalRender use the power of GPU and CPU rendering to recreate reality in all of its beauty. The technology behind all of this is called trueHybrid™; it is the manifestation of an artist's dream, leveraging all of the power offered by the CPU and GPU.

Physically Based Spectral Wavelength Rendering

finalRender treats colors totally differently from what you might know from other rendering applications on the market. Colors in finalRender are treated as a combination of individual wavelengths, just like in the real world. An overlay of multiple frequencies of light results in the final colors we all see around us. Take a rainbow, for example: water drops disperse (or split) the sunlight into its individual components, and this dispersion of light presents us with that awe-inspiring rainbow of colors – right there in the sky!

I took dew (water droplets) as an example of why such effects, even when they are subtle, are so important in creating photo-real images. A dispersion effect (a wavelength-dependent refractive index) inside the water drops causes different colors to refract at different angles, splitting white light into a spectrum.
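
The 'wavelength-dependent refractive index' is easy to illustrate. The generic sketch below (textbook physics, not finalRender code; the Cauchy coefficients are rough approximate values for water) computes a slightly different refraction angle for a red, a green and a blue wavelength – which is all dispersion really is:

```python
import math

def refractive_index_water(wavelength_nm):
    """Cauchy approximation n(lambda) = A + B / lambda^2 (approximate coefficients)."""
    A, B = 1.324, 3.1e3   # B in nm^2
    return A + B / (wavelength_nm ** 2)

def refraction_angle(incidence_deg, wavelength_nm):
    """Snell's law, entering the drop from air (n of air is ~1)."""
    n = refractive_index_water(wavelength_nm)
    sin_t = math.sin(math.radians(incidence_deg)) / n
    return math.degrees(math.asin(sin_t))

for name, wavelength in (("red", 650), ("green", 550), ("blue", 450)):
    print(name, round(refraction_angle(60.0, wavelength), 3))
# Each wavelength bends by a slightly different amount, so white light splits.
```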

Here is my 3D scene fully rendered in 3ds Max and finalRender.

color and rendering dispersion closeup

Have a close look: the no-dispersion image shows flat, un-sparkly water drops. The drops are missing the life and brilliance you would expect from a dispersive medium. Water drops are like little diamonds – they sparkle and shine! Luckily for me, dispersion is a simple one-parameter control in finalRender, which lets me turn it on and off and render the same situation multiple times. Being a physically accurate spectral renderer, finalRender is able to recreate the real-world dispersion effect without any extreme calculation overhead – thanks to trueHybrid™ and the added power of NVIDIA GPUs.

Below is an HD image without any dispersion effects turned on.

spider_web_no_dispersion

And here is the same image but with dispersion added to the water droplets.

spider_web_dispersion

Even though it is a subtle effect, it is essential to the final quality and realism of the image. Physically based spectral wavelength rendering is the only accurate and correct method to recreate optical effects like dispersion, diffraction or interference, including radiation-based color effects (black body radiation).

Do you remember? Some years ago, dispersion was all the hype in the rendering market!

I have been in this industry long enough – I started some 20 years ago – and I remember all the hype we went through while offering our groundbreaking rendering application, finalRender Stage-0, to 3ds Max users. At some point, one renderer manufacturer (no longer in business) came up with the idea of adding dispersion to their rendering solution. What an outcry it was! Suddenly everyone wanted dispersion in their renderer, so all of the developers got to work and added the feature. Funny fact: hardly anyone ever used it from then on. Beyond the marketing material each developer created, dispersion rarely appeared in actual renderings. You might ask why. It was roughly 20 years ago, and rendering such effects, even with brutal tricks, took ages and was simply not practicable. Also, do not forget that all those renderers offering dispersion effects back then (and today) were based on a simple RGB color model, and that color model is just dead wrong – it has nothing to do with the real-world effect of dispersion.

Try it out with the cebas finalRender free trial (which comes with finalToon), or if you are ready for accurate dispersion color rendering, get it at cebas.com/finalRender.

Adaptive Sampling in the Age of Unbiased Rendering


by Edwin Braun

Physically accurate, spectral-based rendering, as offered by finalRender, uses unbiased rendering methods to create realistic and physically accurate results. No other rendering method is able to recreate physically correct real-world effects like diffraction, interference and dispersion, or even calculate radiation-based effects.

Unbiased rendering, while physically accurate, suffers from higher levels of noise in the final image when compared to old-school biased rendering approaches. In simple terms, unbiased rendering can be described like this:

As long as you keep shooting random rays at a scene, the rendering will converge to what we see in the real world. However, to get an exact, error-free result, as we see it in Mother Nature, an unlimited number of rays would be needed.

I am sure we all agree that 'unlimited' is not realistic to achieve within a normal human lifetime. Thanks to modern GPU and CPU advancements, the number of rays we can use in our renderings is steadily increasing, and so is the scientific progress on the algorithms and theories behind unbiased rendering. Adaptive sampling is meant to help accelerate the rendering process by controlling when and where calculations happen at render time. It is the go-to method for getting more speed or higher quality out of a renderer. However, such speed-ups usually come at a cost: generally speaking, adaptive sampling always bears the risk of introducing bias into the rendering calculation, and bias means the willful introduction of error in exchange for a possible increase in rendering speed.

unbiased_sampling

Edwin Braun’s article on unbiased_sampling with finalRender

Great care was taken to bring adaptive sampling methods to finalRender without introducing any bias into the rendering. finalRender stays true to its physically accurate, unbiased approach to spectral-based rendering, even when adaptive sampling is turned on. This is possible because of how finalRender interprets adaptive sampling: calculated pixels are never changed or interpolated. The only thing finalRender does in its adaptive process is skip processing for pixels that fall within a threshold. To overcome the bias pitfall, finalRender can recalculate any pixel after several passes – even pixels that were skipped before – so they can fall back outside the threshold and be processed again for some passes until they are judged to be fully converged. finalRender works within a complex 'feedback loop' to generate the most realistic and physically correct rendering result in the least amount of time.
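
A rough sketch of the idea just described: skip pixels that look converged, but keep pulling skipped pixels back in for periodic re-checks so no pixel is ever frozen with an error. This is a simplified illustration of the concept, not finalRender's actual scheduler; the noise estimate, thresholds and pass counts are invented for the example.

```python
import random

THRESHOLD = 0.01       # illustrative noise threshold
RECHECK_EVERY = 8      # re-examine "converged" pixels every few passes

def render_sample(pixel):
    """Stand-in for one more path-traced sample of this pixel."""
    return random.random()

def noise_estimate(samples):
    """Very crude variance-of-the-mean estimate."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return var / n

pixels = {p: [] for p in range(16)}      # pixel id -> list of samples
skipped = set()

for pass_index in range(1, 257):
    recheck = (pass_index % RECHECK_EVERY == 0)
    for pixel, samples in pixels.items():
        if pixel in skipped and not recheck:
            continue                      # adaptive part: save the work
        samples.append(render_sample(pixel))
        if len(samples) > 16 and noise_estimate(samples) < THRESHOLD:
            skipped.add(pixel)            # looks converged, skip it for now
        else:
            skipped.discard(pixel)        # the re-check pulled it back in

# Existing samples are never altered or interpolated, only added to,
# which is how this style of adaptivity avoids introducing bias.
```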

adaptive_sampling 50% speedup: finalRender

Ready for finalRender trueHybrid™? Want to know more?
Write to cebas Visual Technology by using this form or info@cebas.com

Follow finalRender on facebook

Read the next cebas blog article


 

Does Artificial Intelligence really help in getting the grain out of our images?


If it is challenging enough for a layperson to figure out what is meant by trending technology such as 'neural network technology' in AI (Artificial Intelligence), try a simpler idea: how about we just get rid of that grainy 'noise' in our renderings? We demand photo-realism. This alone has baffled many software engineers for a long time.

Besides, how do you develop art-directed technology that keeps artists and visualizers from tipping over into frustration at the trade-off of more speed with less quality versus more quality with less speed? And there is the demand for greater flexibility: how do you achieve a picture-perfect result with simple, fast controls, yet not take away the creative versatility that personalizes every rendered work of art?

cebas Visual Technology has been in the business of creative visual effects for more than a quarter of a century now – sometimes eureka!, at other times groping in the dark for that unique piece of software engineering. Now, following last September's release of the new finalRender trueHybrid™, cebas will deliver, with its upcoming subscription drop, an enhanced finalRender featuring an AI-Denoiser based on NVIDIA's OptiX 5.0 platform. This AI-trained denoising solution, tested by NVIDIA on tens of thousands of 3D scenes, offers real-time denoise functionality with outstanding quality.

NVIDIA has spent a lot of resources and time developing this new technology, which is aimed at helping other software developers tackle one of the most pressing issues in building a modern rendering system. Physically accurate unbiased renderers like finalRender have to deal with noise while rendering an image. True unbiased renderers are prone to more noise than old-school biased renderers, one reason being the methods used to follow light rays in unbiased path tracing.

We could explore noise in general, and the reasons for it, in an endless wall of technical descriptions and white papers – it could easily fill a whole book, if not several. Don't worry, we are not going to do that here.

NVIDIA did much of the heavy lifting for developers and delivered a remarkable tool for us to integrate and enhance – which we did. finalRender Subscription Drop 1 will incorporate this new AI-Denoiser technology, fully integrated into finalRender. Early testing already shows promising speed gains by reducing the number of render passes (samples) needed to get a clean image. Depending on the scene, we are seeing speed-up factors ranging from 2 to 5 times, or even 10 times faster in some situations!

Video: New finalRender AI-Denoiser incorporating NVIDIA OptiX 5.0 Update Coming Soon for Subscribers

finalRender trueHybrid rendering, denoiser image

finalRender trueHybrid rendering in 45 seconds, test image 1

About this Image:
The PC workplace image was rendered with only 100 samples. This was done to illustrate the power and strength of the AI-Denoiser integrated in finalRender. The zoomed-in keyboard section (pixel blow-up) should resemble a flat, shiny surface illuminated by an area light. Without the AI-Denoiser, noise is clearly visible in the image; the same picture with AI-Denoising active shows the noise fully removed. Other remarkable areas to watch are the soft shadows of the cup and the pen.

As this example image shows, AI denoising is pretty impressive, and it does not really eat up render time at all: in consecutive passes, the AI-Denoiser delivers its results in the millisecond range! Our integration of the AI-Denoiser into finalRender is fully transparent and adds no complexity at all for the artist. Following our philosophy – one button is enough – you simply turn it on or off. No complex features or controls are needed; all the denoising magic is done by the artificial intelligence.

So far our testing has not revealed any major headaches beyond the known issues – or better, the technical challenges – when using transparent objects (glass) or extreme contrast values. Purely artificial image setups (no noise at all, large all-black areas) will also startle our integrated artificial intelligence, which is somewhat expected: you can't tell an AI to look for noise and create photo-real images without supplying it the minimum – a real image.

We will soon release a video recording of the AI-Denoiser working live in finalRender. Stay tuned for more information, and follow cebas updates on facebook.com/rendersoftware and/or facebook.com/cebasVT.

Click to finalRender features intro videos.

finalRender trueHybrid rendering denoiser test image 2

Important Note: This blog describes forward-looking releases and technical advancements. It is not a description of a shipping product; rather, it describes a product in foreseeable development.

END

 

 

 

Get ready to launch – something transformatively destructive coming this December 11, 2014!





REGISTER NOW FOR THE NEXT INTAKE MARCH 30 , 2015 !

Quote from FX Creative Destruction Genius, Allan McKay:

“I honestly really like the new pricing plan for thinkingParticles 6, even though technically it’s probably more cost affective to pay full price in the long term – subscription is definitely the future – and people are more happy to pay with instalments.

My TDTransformation course for instance, I know a lot of people after I started doing instalments for courses, they wished everyone else would (provide subscription), because its easier to handle – plus more attractive to them.”

AllanMcKayR2D2   …you’re not alone.

Inspiration is the heart of cebas CG software. Quotes from CG / FX Artists 2009-2014





high 5 from tumbler

cebas inspirational quotes with a high-5


This week, we decided to look back at our many testimonials and put together a collection of quotable quotes that have something real to say about cebas products, to share with all our friends and visitors (more to come).
_____________________________________________________________________________

“Thinking particles was at the heart of our destruction pipeline. ”

Sam Khorshid (FX artist of 2012-fame)

_____________________________________________________________________________

“We were able to design custom written thinkingParticle Nodes through the provided SDK which opened a whole new level for customizing our internal pipeline.”

Mohsen Mousavi (FX artist of 2012-fame)

_____________________________________________________________________________

"it was the deadline.. we had lots of shots with 2500 boats and a crew of 8 to 12 peoples on each boat with masts and flags and water surfaces and interaction with explosions and debris and fragmentations and all that had to be done in 8 weeks! "

Rif Dagher (VFX Artist of Red Cliff-fame commenting on the speed of thinkingParticles)

_____________________________________________________________________________

“All of my abilities are self-taught because I have never been able to work through tutorials. I think I am a little bit too impatient so I just use trial and error.”

Janne Hellmann (specialty: 3D watch design, on becoming a 3D artist)

_____________________________________________________________________________

“I struggled in the beginning to get the feel of the original artwork into the 3D, and after failed attempts I came across finalToon. The system was easy to learn, and I was thrilled with the amount of control and variation. I was easily able to replicate the original designs, staying true to the art direction.”

Joel Furtado (specialty cartoon characters)

_____________________________________________________________________________

“finalRender allowed me to have both high detail GI that did not flicker. If you want flicker free GI, you usually need to use soft low detail illumination. finalRender did a great job at having both tight details and flicker free GI and it was fast.”

André Cantarel (specialty: flying machines)

_____________________________________________________________________________

“finalRender is a great time saver when you are not sure, what your client will ask you to change, at the last minute.  With a 3 day deadline, everything which saves time, is a life saver.”

Guillaume Gaillard (The Simpson jingle-fame)

 

cebas.org/blog 

Art + Technology 2014 fast forward (p1)


“YOU CAN GET DIGITAL TECHNOLOGY THAT ALMOST IS FILM QUALITY, AND GO MAKE LITTLE FILMS AND DO EVERYTHING YOU CAN TO FIND A LITTLE UNDERSTANDING OF YOUR OWN VOICE AND IT WILL GROW – DON’T TAKE NO FOR AN ANSWER – TAKE EVERY OPPORTUNITY YOU CAN TO DO SOMETHING.”

                                                 JON VOIGHT

 




Well said. This is exactly the stated mission of cebas Visual Technology. We all know Jon Voight, who starred in the very first Transformers movie. And we all know his award-winning daughter, Angelina Jolie. They are both seen as socially conscious and responsible actors of our time (spanning two generations). Jon Voight's quote speaks to our era of digital technology and the works of film art.

Movie VFX

Special effects started with the very first avant-garde black-and-white motion picture, 'The Horse in Motion', way back in 1878. Then came a spice of cartoon technology in 1932 with Disney's Technicolor 'Flowers and Trees', followed by a dose of technological destruction in 1933 with 'Deluge' (RKO). A bigger push came with MGM's 'Audioscopiks' (1935), the first 3D film nominated for an Oscar (Best Short Subject, Novelty category), and MGM's first film to be shot in 3D.

What came after that was certainly suggestive of our times: H.G. Wells' novel 'The Shape of Things to Come', adapted for the screen by producer Alexander Korda in 1936 – a science-fiction classic about time travel from 1936 to 2036 A.D. (Note: we are still in 2014!) The VFX was primitive by comparison, but nonetheless a breakthrough for its time.

thingstocome11

Screen shot of movie: the shape of things to come.

Fast-forwarding… we arrive at 'A.I. Artificial Intelligence' in 2001, the 'Matrix' sequels in 2003, the Marvel superheroes of 2002–2006, James Cameron's 'Avatar' in 2009 and 'Harry Potter' in 2010 – and the rest is history.

The merger of technology and art

What a fusion of creativity came with the advent of CG graphics and 3D animation – something few could have foreseen in its rapid takeover of the worlds of film, photography, art and design, even medical science. There are no boundaries whatsoever on how much creativity CG VFX software will eventually bring forth. It is like ultimate time travel married with the boundlessness of the galaxies.

And if, like me, you're caught up in trying to figure out which niche technology tunnel to climb into for that light at the end: well, I momentarily disturbed our co-founder, Edwin Braun, from his tax forms to ask him about this, so we could offer something to our fellow artist trainees (okay, we are the technology providers, but we are often swept along with the arts and entertainment tide, so fellows we are). First, let us route it this way.

From the standpoint of production, the main areas of 2D and 3D CG work are movies, games, architecture, industry (engineering/printing), medical illustration, education, advertising and telecommunications (mobile/smartphone apps). One might enter based on vocational interest. Or cut the crap and broad-base it by remembering that everything in CG starts with 3D skills.

Says Edwin Braun: the foundation of all CG work is being handy, effective and productive in 3D. Regardless of the tool you select, an artist starts by knowing how to create or integrate 3D models with textures and materials, and then figures out the camera, the lights and the movements (rigging) – or the creative destruction. Artists then move into one of these specialized fields and, as they work their way through, a special love for some software eventually develops. Or one can choose to be a generalist.

 

Art + Technology 2014 fast-forward (p2)


Cinematic proportions

If you love impactful visual effects of cinematic proportions, then you will be the thinkingParticles VFX artist: the explosions, the rains of fire and fumes, the '2012'-style earth-shattering, the hurricanes, the superhero/Transformer bashes and the great floods. The jumping-off point here is the platform you are familiar with, and there are dozens of them out there. cebas works with 3ds Max, and finalRender works on both Max and Maya. 'Max vs Maya: a friendly comparison' is a good foundational read for the novice.

VFX thinkingParticles on the set AMBITION Show Reel 2014 from Rigel Bowen on Vimeo.

Here is a helpful excerpt from the aforementioned article on Max/Maya:

“Different industries seem to have their own preferences for 3DS Max or Maya. Within the film industry, most artists prefer to use Maya because of the advanced animation and effects tools. However, 3DS Max is most often used for architects and engineers because of its ability to interact with AutoCad and other Autodesk design tools. Interestingly enough, it has been discovered that both programs are used equally within the gaming industry, although game developers have strong opinions favoring one or the other. ”

Truth be told, cebas' well-used VFX software thinkingParticles (now on version 6) has always been one of the favorite tools supporting visual effects film art and animation in Hollywood and beyond. You simply need to check out our portfolio and the many interviews with acclaimed VFX artists on the cebas Testimonials page to find this out.

(Fabian Buckreus’ dancing balloons with thinkingParticles spline and soft body action:)

And another excerpt…

“While 3DS Max and Maya are difficult programs to master, one has a more forgiving learning curve than the other. 3DS Max is considered to be more intuitive than Maya, making it the more user-friendly of the two. The basic functions of 3DS Max can be learned in under two months and if you’re familiar with simpler modeling programs already, you will feel comfortable with how the program works. Maya, however, is far more unconventional and presents a very steep learning curve.”
….see p 3

Art + Technology 2014 fast-forwarding… (p3)


cebas VFX plugins

were built on 3ds Max, attuned to our founders' and developers' vision of serving the larger industry of art and design, inclusive of architecture and landscape. The idea is to build digital technology that creates more space for the artist's imagination to flourish instead of bogging it down with technicalities. The worth of a piece of software is how much it enriches the experience of digital technology users.

cebas Visual Technology's perspective is that new and innovative companies in film, music, video games and architecture will eventually find our renderers (finalRender, moskitoRender, finalToon) to be the VFX software with the fewest system complications and the fewest creative restrictions. In the end, as we say, art + technology = art-directed, versus technology-directed. For certain industries such as defence and heavy industry, where art is not the crux of the matter, we have to view it objectively as technology + art = technology-directed and less art-directed. This is a clear-cut distinction, and it is another decision marker for aspiring 3D workers: the field you enter shapes your experience as a user more than your area of specialization does. So your choice as a 3D artist boils down to whether you prefer art direction or technology direction. Achieving a nice balance of both is not easy, due to the highly specialized skills involved.

Cebas-MesysMedia9

finalRender shades and lighting

In part 4, we will blog about how digital technology has changed and enhanced the sphere of architecture and landscape design – something an aspiring 3D student might also want to investigate when developing their skill set and career, as this is yet another groundbreaking field of visual technology that you may wish to be caught in the act of creating.

20 Times Faster ? Shut the F* Up!

By Edwin Braun

With the integration of NVIDIA's AI denoiser, finalRender brings you more of the latest and most advanced rendering technology to leverage. The AI-Denoiser is a GPU-only image processing effect able to create remarkable denoising results, even when there is little to nothing to work with in an image! The process is highly optimized and built specifically to take advantage of the latest GPU architectures offered by NVIDIA. Denoising an image takes only a few milliseconds per pass, which is why finalRender calls the AI-Denoiser after every rendering pass (starting with the 10th pass). When the AI-Denoiser is activated within finalRender, it delivers the benefit of 'seeing' a photo-real result right from the get-go. A clear image can be seen even with a low number of passes (e.g. 20), which makes it the perfect tool for ActiveShade renderings, where quickly adjusting lights and materials is crucial.

There are some known restrictions with the AI-Denoiser :

  • it needs a rather modern GPU to work with (Kepler, Maxwell, Pascal)
  • memory consumption is increased
  • plain black backgrounds will show artifacts (clouding)
  • results created with the AI-Denoiser will be clamped images
  • to use it interactively in Active Shade it has to be first switched on before the rendering starts.

How does the AI-Denoiser help me in gaining rendering speed?

When rendering with an unbiased path tracing renderer, most of the time is spent reducing the noise in the image. Due to the very nature of unbiased rendering, random rays are sent into the scene to collect information about lights and material properties. If enough random rays are sent, the result slowly converges to a photorealistic image that represents reality like no other rendering technology can. This is where the AI-Denoiser comes into play: the sooner the noise disappears, the sooner the final image can be seen and used.
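
Schematically, the integration behaves like a progressive loop in which the denoiser runs on the accumulated result after each pass once a minimum pass count is reached. The Python below illustrates the flow only; render_pass, accumulate and ai_denoise are hypothetical placeholders standing in for the real renderer and the OptiX denoiser calls.

```python
MIN_PASSES_BEFORE_DENOISE = 10   # per the article, denoising starts at pass 10

def progressive_render(scene, total_passes, render_pass, accumulate, ai_denoise):
    """Illustrative flow: accumulate passes, denoise the running result."""
    image = None
    for pass_index in range(1, total_passes + 1):
        new_samples = render_pass(scene, pass_index)    # one unbiased pass
        image = accumulate(image, new_samples)           # running average
        if pass_index >= MIN_PASSES_BEFORE_DENOISE:
            yield ai_denoise(image)   # milliseconds on the GPU: a clean early preview
        else:
            yield image               # early passes are shown as-is
```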

Let's have a look at some rendering examples:

finalRender 20X faster

Video https://vimeo.com/262110398

In the illustration above, you can see how much of an impact the AI-Denoiser can have on render times. This scene is a benchmark created by Edwin Braun to test finalRender's ability to render complexly lit scenes. The scene uses no lights of any kind; the only source of light is an HDRI environment map. Such setups are problematic for any kind of renderer: the illumination has to be sampled from an HDR image, which is by definition high in dynamic range. A dark pixel in such an image can easily sit right beside an ultra-bright pixel, which causes extra noise due to the high dynamic range packed into small areas of the picture.

In this sample scene, the illumination from behind (from the back of the room) is created by light bouncing off the back and side walls, which adds another source of noise to the mix. As if this weren't enough to kill a renderer, most of the materials sport blurry transmissive or reflective properties. The front glass of the clock, for example, uses a multi-layer refraction map with varying levels of blurriness. All those materials create even more noise on their surfaces as well as in their surroundings. You could say this scene is a noise generator by intention.

One image was rendered 'brute force': achieving a clean result with no visible noise took 12 hours. The other image was done in 35 minutes (20 times faster) and is nearly indistinguishable from the first; only a few minor differences can be spotted in the AI-Denoiser result. To make the difference a bit easier to see, below is a differential image of the two renders; all non-black pixels indicate differences between the images.

view with only human eye

Where do I find the AI-Denoiser in finalRender ?

The AI-Denoiser can be turned on and off in the Render Dialog (for both ActiveShade and Production rendering), and it can also be activated as a post effect in the finalRender frame buffer. When using the AI-Denoiser as a post effect, you can control how much of the original image is blended with the denoised image. Keep in mind that the post effect might not show the same denoising quality as the render-time effect: render-time denoising can access more data (normals, albedo), which can sometimes be an advantage.
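
The blend control amounts to a simple per-pixel mix between the noisy original and the denoised result, something like the following generic one-liner (an illustration of the idea, not the actual frame-buffer code):

```python
def blend_denoised(original, denoised, amount):
    """amount = 0.0 keeps the original, 1.0 uses the fully denoised image."""
    return [(1.0 - amount) * o + amount * d for o, d in zip(original, denoised)]
```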

This article describes features of finalRender Subscription Drop 1, released in March 2018. Get your free trial at http://www.cebas.com/finalRender

finalRender – the only Camera you will ever need!

The NEW finalRender was launched just this past September, 2017. There is a tough act to beat in the flooded market of renderers, that's true, but cebas is confident we are on the right track. The development team is working hard on the next updates, which include the state-of-the-art GPU AI denoiser (NVIDIA OptiX) and the latest hybrid technology that will put our users in good stead in the long run.

Uniqueness: talking about a TRUE HYBRID
– GPU and CPU rendering at the same time: what is unique about cebas' trademarked trueHybrid approach? It is a single application in which the CPU and GPU render at the same time, with no difference between the two – the GPU is treated exactly like the CPU, and their processing power is combined simultaneously;

– this is not a GPU-only approach: finalRender behaves like a GPU renderer even on the CPU, with the added power, all the latest materials and full flexibility available. That is a significant difference if you compare renderers – finalRender is the only trueHybrid on the market today.

– finalRender is about a balance of speed plus additional flexibility when working with your render elements; yes, some other renderers may be faster, but fR development is aimed at quality and flexibility for the user.

Features Video: https://vimeo.com/240749822

finalRender trueHybrid 100 percent – test statistics: GPU/CPU speed performance

AS +CPU, SO +GPU… ALL Render Elements Working: No Restrictions!
– finalRender supports everything on CPU plus GPU in a single application – an important distinction, unlike renderers that offer hybrid rendering with restrictions.

FUTURE UPGRADES IN GPU
– finalRender is also designed to take full advantage of future GPU developments, so you will not need to buy a new renderer as system architectures keep changing rapidly and big-data handling moves to the GPU.
See the AI-Denoiser blog post.

ACHIEVING THE ONE BUTTON DREAM RENDERER
– with this new launch, cebas aims at flexibility for artists while keeping things as simple as they can get, with a one-button-solution approach;

– as few buttons as possible with flexible controls is all you need to learn, so CG artists can concentrate on the aesthetic presentation of their images instead of a mind-boggling UI. It's about achieving greater art direction versus technological tweaking. You can concentrate on the more important things – lights, shadows, materials, scene setups – and just press Render.

finalRender is achieving the render artist's one-button dream

BEAUTY OF UNBIASED & RANDOM RAYS
– random rays are shot into the scene without any directional preference: based on physically accurate theory, if you shoot enough random rays, the rendered scene will come to resemble what Mother Nature does. finalRender is developing this technology and will enhance it further.

RENDER ELEMENTS: FRAME BUFFER
– 3ds Max's ActiveShade mode gives you a few render elements on screen, and they are complicated to work with. fR improves on this: instead of confusing pop-up windows and render elements dumped onto your hard drive taking up space, you now get all the layers and data elements properly organized and individually displayed, ready to work on.

– the finalRender FrameBuffer lets you turn that dumping off and provides well-organized tree sorting of the elements (see the menu item Enable TreeMapping), and there is a saved history of your renderings. You can save and rework every stage of rendering that you have tweaked.

fR trueHybrid FrameBuffer, Render Elements

See more tutorials at Vimeo.com/cebasvt

– with the FrameBuffer, you can adjust your HDR rendering (see the menu item Photographic Raw);

– artists can develop the rendering in real time right on the finalRender screen, with less hassle going back and forth to Photoshop – especially useful for those who do not have a Photoshop pipeline;

– you can adjust all the combinations – Temperature / Tint / Exposure / Contrast / Highlights / Midtones / Shadows / Black / White / Vibrance / Saturation / Vignetting / Hot Pixel – from this one fR-FrameBuffer button alone; everything is resettable as well, for modifications while rendering in real time.

PER-RENDER-ELEMENT FINE-TUNING
– right in 3ds Max, in the fR-FrameBuffer, you can select fR-Diffuse, fR-Specular, fR-Lighting, fR-GI, fR-Reflection, fR-Refraction or fR-Background: a huge amount of control on your side. It's control, and it's flexible power!

– better still, this is a non-destructive way of working, and you can always return to your original;

– you tend to see a lot of noise in the fR-Lighting and fR-GI elements, so finalRender lets you work on those elements directly: the new finalRender FrameBuffer for render elements gives you raw image adjustment with real-time feedback and no need for Photoshop, so you don't have to maintain a separate PS pipeline;

– try the fR-Lighting element and adjust the denoiser; Edwin demos the same with fR-GI. The good thing is that these two example render elements are only part of the composition, so you have the flexibility to make in-depth image changes;

– as promised, these detail adjustments are non-destructive to your original.

COMPOSE LAYERS (fR-FrameBuffer)
– enabling Compose Layers brings your detail adjustments of the render elements together into a new image that can be saved;

– in other words, you need not switch to Photoshop layers to see how things stack up and what the final result is – you get it right in 3ds Max and the fR-FrameBuffer on your work screen;

– and whatever you see in the FrameBuffer, you can now save. fR is fully integrated into 3ds Max, so you can work on everything in Max.

The best is yet to come. Subscribe at cebas.com/finalRender


 

 

 

 

Rendering Technology and the Cyclops Wave!

It's been quite a quarter for rendering software: first, Chaos Group and Render Legion (Corona) merged in August. Rumours abound, but we are not at liberty to comment on the whys and what-nexts. We do know that there are users who went with Corona precisely because they didn't want to be on V-Ray. I'll just highlight two telling comments,

“When I first heard this news, I admit that I was worried.  I don’t trust very large and dominant CG companies with small, innovative, up-and-coming companies.  Still, i’ll have to wait and see.  I quite like Corona.  I’m not sure exactly how VRay could benefit from their technology, since they have stuff that does a similar thing already.  Then there’s the new fR which is (finally) coming out soon.  Perhaps they are worried about the intensifying competition from all sides?” (Telemachus, the expert from forums.cgsociety.org)

And here's another comment, from senior member 'C G' on the CGArchitect.org forum: "Competition is good and has certainly proven to benefit the community in this case. As Corona gained a solid foothold with some great innovation and awesome value for money, Vray suddenly had to lift their game and started an upgrade sequence that provided a benefit to their customers in turn. …."

Only time will tell; it is too early to say, and too many factors have not yet come into full play. By the way, thanks to Telemachus for mentioning yours truly, cebas finalRender – 'finally' or not, we are still small and innovative, and most of all independent and free. We stand by our trademark. Yes, we took precious time, and we apologize to our users, but the technological horizon is like surfing the lip of a cyclops wave! cebas had to surf well and make sure we served up something that would stay afloat for a long, long time. So we now have trueHybrid finalRender in the race.

Foretelling the future of rendering: on November 19, one year after its relaunch, NVIDIA announced the demise of Mental Ray. cebas learned about it in the news. The reaction was not as dramatic as the one to the Corona takeover. We sense that some users are still grappling with the changes – some are angry about NVIDIA's decision to end Mental Ray. As a software developer, we at cebas know very well, and understand, the reasoning behind why Mental Ray is no longer an option.

We came to a similar conclusion when we looked at the old finalRender – we also decided to 'kill' it, to make room for something better and more innovative. We had to accept that CPU-only rendering – the old way – is dead and cannot deliver the future.

Looking back to 2001, when we first introduced finalRender Stage-0: cebas was the company, if you recall and if you were in the business then, that actually started the rendering revolution in the industry. We gave the market its first affordable GI rendering solution for 3ds Max, and the only one that worked inside the 3ds Max Scanline renderer. See https://www.turbosquid.com/Index.cfm/View/FSBNCG

Then came our finalRender Stage-1 Release 1 – a fully integrated rendering system for 3ds Max – and again, it was unmatched by anyone at the time. The technology and methods we created back then still offer some of the most advanced capabilities you can get out there. It remains the flag-bearer, the reference renderer, for many developers who copied and modified every single feature we had invented back then.

To be forthright, NVIDIA even poached the engineer who was responsible for our GI engine development back then. I guess they had to get serious with Mental Ray or they would lose the battle on the rendering field. But the lip of the cyclops wave was too thick 🙂

Old-style biased CPU rendering solutions have exhausted their potential and are starting to hit a wall. As NVIDIA put it when explaining why Mental Ray was discontinued: "To bring AI and further GPU acceleration to graphics, …"

The future is trueHybrid™ CPU and GPU rendering combined with 'enhanced' unbiased rendering technologies. NVIDIA realized this as well, and so did we at cebas Visual Technology! Our new finalRender trueHybrid represents a new generation of rendering system, so advanced that it will become exponentially better over the coming years.

Every developer of old-style, biased CPU rendering will eventually come to the same conclusion: the future is 'enhanced' unbiased CPU+GPU rendering, done the way we have done it. So you don't have to worry that you are jumping on a bandwagon – it is much, much more sophisticated than that, and cebas has your back covered.

Our new and powerful finalRender delivers trueHybrid at its best, and it does so in a way no other renderer for 3ds Max does – physically based spectral wavelength rendering, the only known method in science for reproducing real-world lighting phenomena.

Be brave and make the leap into a new rendering generation. finalRender will not let you down; it supports most of the Mental Ray materials and features, so the transition is painless and joyful.

Email 3dgallery@cebas.com for more info.