The previous reply covered this pretty well, but I'll respond to the particulars here and follow with some required reading.
Good morning friends,
Yesterday we ran our first rendering. It took 8 hours at an SL of 16. The result could definitely use improvement. We contacted Jeremy Hill in the Maxwell for Rhino section and he gave us some tips. As this is pretty heavy, obscure stuff, a lot of it is going over our heads. Since it is general material that doesn’t really belong in the Rhino plugin section, we are following up out here in the general section, hoping to generate a dialogue that can illuminate the issues that follow. The quotes are all Jeremy’s:
When we asked Jeremy why the rendering took so long and why it is still grainy, he responded:
“If the image is grainy, it means that it needs to render longer, and/or that your scene and/or materials are poorly-designed and are slowing down the progress unnecessarily… firstly, do not ask Maxwell to render things which do not matter. If you draw some geometry twenty miles away from your scene, Maxwell is going to render it, even if you can't see it at all -- Maxwell is an unbiased renderer, so it does not attempt to make guesses about your intentions.”
Okay, is Jeremy saying that Maxwell renders (or processes) geometry that is off camera? If we have a model of a three room space and we have pointed our camera to one corner of one room, and set Maxwell to render it, does it render the information in all three rooms even though the camera can’t see it? Does it process the information behind the camera – even though it can’t see it? If so, how should we handle it? Should we delete the other spaces, hide them, or put them on another layer and turn them off?
You can choose to leave the geometry there, but it will take longer to render because, in a nutshell, what Maxwell is calculating is the bouncing (and transmission/bending) of light, whether on camera or not -- if you give the light a more complicated path to follow, it will take longer to resolve. The basic concept is that if you simplify the path of the light, it will resolve quicker... and it is particularly pointless to let Maxwell waste processing cycles on geometry/materials/emitters that do not affect what is seen on camera. So my advice is to (do whatever Rhino does to) "hide" the non-relevant portions before rendering any particular camera.
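To make that point concrete, here is a toy back-of-the-envelope cost model. The function name and all the numbers are made up for illustration, and Maxwell's real tracer uses acceleration structures, so the true scaling is not this simple -- but the intuition holds: off-camera surfaces add work to every ray at every bounce.

```python
# Toy cost model (hypothetical numbers, NOT Maxwell's actual internals):
# every ray is tested against the scene's surfaces at every bounce,
# whether those surfaces end up on camera or not.
def trace_cost(num_rays, avg_bounces, num_surfaces):
    """Rough count of ray/surface intersection tests."""
    return num_rays * avg_bounces * num_surfaces

full_scene = trace_cost(1_000_000, 8, 300)  # all three rooms present
trimmed = trace_cost(1_000_000, 8, 100)     # two off-camera rooms hidden
print(full_scene // trimmed)  # -> 3: a third of the intersection work
```

Hiding the two unused rooms before rendering the corner shot cuts the per-ray workload for every single sample, not just the first pass.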
It is also worth saying that SL 16 may only be the beginning, depending on what type of materials you are using -- dielectrics (glass, water, etc.), displacement, and SSS/thinSSS can all lead to longer render times... I tend to render to at least SL 20 for most of my scenes, and much higher for tough-to-render materials. Render time is irrelevant because it will vary completely depending on what you give to Maxwell... you are in control of render time, not Maxwell, because you can make different decisions, whereas Maxwell is a fixed variable.
Poor material settings will also lead to longer render times... as a general rule of thumb, I would make sure the saturation and brightness values of any color-based material do not exceed 225 (for Reflectance 0), and I would avoid excessive use of the "Additive" blending mode. I would also check my material image maps and make sure that none of the images assigned to Reflectance 0 have any values exceeding the 225 limit either.
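That 225 rule of thumb is easy to check programmatically. Here is a minimal sketch (the function names are ours, and the 225/255 cap is simply Jeremy's rule, not a Maxwell constant) using Python's standard colorsys module to test and clamp an 8-bit RGB color intended for Reflectance 0:

```python
import colorsys

LIMIT = 225 / 255.0  # rule of thumb: keep brightness and saturation <= 225

def check_reflectance_color(r, g, b):
    """Return (ok, (h, s, v)) for an 8-bit RGB color meant for Reflectance 0."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (s <= LIMIT and v <= LIMIT), (h, s, v)

def clamp_reflectance_color(r, g, b):
    """Scale saturation and brightness down to the 225 limit if exceeded."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    s, v = min(s, LIMIT), min(v, LIMIT)
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)

print(check_reflectance_color(255, 255, 255)[0])  # False -- pure 255 white breaks the rule
print(clamp_reflectance_color(255, 255, 255))     # (225, 225, 225)
```

The same check could be run over the pixels of a texture map before assigning it to Reflectance 0.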
Materials are a complicated subject, and one that takes time to master and understand -- this will be an ongoing learning process for you, and you will have to resign yourself to running many hundreds of tests to learn how to reach the best solution and to build the experience necessary to get the best possible results in the least amount of time.
“Secondly, build materials which mimic materials you might find in the real world. In the real world, you will never find the color [255, 255, 255]; at best when you see something white, you are looking at something more like 220-230.”
Is Jeremy really saying to avoid white because it takes longer to render, or because it won’t look right?
We are working in spectral color in Maxwell (not RGB) -- if you design something in a graphics application and print it out, the "white" portions will simply be paper... in that instance the paper becomes "white" (although the true value of paper rarely exceeds RGB 225), and all the colors printed upon that paper will be skewed downward in brightness to fit the new limit of brightness that the paper surface presents... this is a type of colorspace gamut, a very complicated subject and one with which you can wrestle for years (I have).
The bottom line is that Maxwell does not put that limitation on you, but for the sake of speed and realism it is best to obey the laws of physics and not exceed the brightness or saturation of objects in the real world. Again, going back to graphics applications: they may show bright and saturated colors because they are not bound by the same physical limitations (when choosing a color, basically RGB vs. CMYK), but printers cannot print certain colors and will adjust them down in brightness and saturation to fit within the physical limitations of paper and ink.
“You will also never find a perfectly reflective surface in the real world; so don't ask Maxwell to render one. Just think about a ray of light hitting a surface and bouncing back -- what would the world look like if no energy were lost (converted to other forms, that is) when light bounced around? Well, what's your Maxwell world going to look like if you tell it that such materials exist?”
Again, how do we avoid “perfectly reflective surfaces”? How do we spot them and how do we tone them down?
Increase the roughness parameter slightly -- although I am not one to harp on this issue, because I think in a lot of instances a perfectly smooth surface with a bit of bump/normal map/displacement is better than a modification of roughness (which sometimes looks "fake")... regardless, this has not been a huge issue with render times for me.
“Lastly, you can help out the calculation a bit sometimes; think about a light ray trapped in a room -- it will bounce around until all of its energy is lost. As I wrote above, Maxwell takes your scene as you give it, so it's not going to do anything tricky to try to help here, but nothing is stopping you from helping; open up a wall behind the camera if it won't adversely affect the image; by doing so, you can let some of the less interesting light rays escape out into nowhere, and therefore cease to represent a calculation cost.”
What does this mean? How is light trapped in a room? Should we really “open up a wall behind the camera?”
Light is basically energy -- and some of that energy is absorbed by each surface it hits (unless you set Reflectance 0 to 255 and have a perfectly smooth surface)... the remaining light energy will continue to bounce until it is completely exhausted. This exhaustion of the light energy is the completion of the render in Maxwell -- noise is simply the result of light energy still bouncing around. If you open a wall, the weakened light can escape into nothingness and will no longer need to be calculated, thus less noise.
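A simplified scalar model makes the "exhaustion" idea tangible. Assuming each bounce keeps a fixed fraction of the ray's energy (the albedo -- a toy of ours, not how Maxwell actually integrates), the number of bounces before a ray goes dark grows quickly as reflectance approaches 255, which is why near-white sealed rooms are slow and an opened wall helps:

```python
import math

def bounces_until_dark(albedo, threshold=0.001):
    """Bounces before a unit-energy ray drops below `threshold`,
    assuming each hit keeps `albedo` of the energy (toy scalar model)."""
    return math.ceil(math.log(threshold) / math.log(albedo))

print(bounces_until_dark(225 / 255))  # walls capped at 225 -> 56 bounces
print(bounces_until_dark(250 / 255))  # nearly "perfect" white -> 349 bounces
```

An opened wall effectively gives rays headed that way an albedo of zero: they escape and terminate immediately instead of ricocheting hundreds of times.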
“It sounds to me like a combination of the two then: an image which needs to render longer, which is then being stretched quite a bit beyond its native resolution for full-screen display. There is no conclusive answer to the question of how long a render needs to be calculated; it is completely scene-dependent. I could build you one scene which would be finished in two minutes, and I could build you another which would not be complete within the span of your lifetime. Maxwell is not a 'this region is done, move to the next' type of renderer -- it is rather, as they say, a light simulator, and it will keep on simulating more and more light interactions the longer you let it work. As to the question of scene size (MB), Maxwell hardly cares at all about complexity of geometry; what matters is how difficult it is to calculate light rays, and that depends on the makeup of the scene, and on the materials used.”
What does this mean? Any help we can get would be appreciated.
This is partially a reference to how other render apps calculate -- the bottom line here is you want to think "reality" in all aspects of Maxwell scene design: scale of geometry, complexity of geometry, material parameters, and camera parameters. But the important thing is that when I say "reality" I'm talking physics and math. To that end, here are some links to read in your off time:
http://en.wikipedia.org/wiki/Light
http://en.wikipedia.org/wiki/Spectral_color
http://en.wikipedia.org/wiki/White
http://en.wikipedia.org/wiki/Unbiased_rendering
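One way to see "it will keep on simulating more and more light interactions the longer you let it work": in any unbiased Monte Carlo estimate, noise falls off roughly as 1/sqrt(samples), so halving the visible grain costs about four times the samples (and time). A toy demonstration in plain Python -- nothing Maxwell-specific, just the general statistics behind why grain fades slowly:

```python
import random
import statistics

def noisy_estimate(num_samples, rng):
    """Monte Carlo estimate of a quantity whose true value is 0.5."""
    return sum(rng.random() for _ in range(num_samples)) / num_samples

rng = random.Random(0)
for n in (100, 400, 1600):
    # average error over 200 independent "renders" of the same pixel
    err = statistics.mean(abs(noisy_estimate(n, rng) - 0.5) for _ in range(200))
    print(n, round(err, 4))  # each 4x in samples roughly halves the error
```

This is why the jump from SL 16 to SL 20 is not a small ask: cleaning up the last bit of grain is always the expensive part.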
There is much, much more -- it may take years to fully master all the elements... to that end, my advice is that you still have to produce in the meantime, so I would not worry about being perfect on the first project (or first ten projects) you do. Do what you have to do to get the work out the door, and make note of the parts you need to study and test later.
Also, be prepared to use the services of a render farm for complex renders with lots of emitters -- the average desktop machine may take ages to resolve what a render farm can produce in an hour.