


Lighting, BRDFs, and Shadows

An Exploration of Factoring the Interaction between Surface Shaders and Lights

September, 2009

Introduction

This document discusses the interaction between lights and the surface (BRDF) via a series of evolving code snippets, each adding a level of sophistication to the last. Along the way we will discuss some of the advantages, disadvantages, and design choices that may present themselves as you design a set of shaders for your production.


RSL vs. RSL 2.0

In the old world (before the introduction of RSL 2.0), shaders were basically this:

surface mysurface() {
    //...
    illuminance(..) {
        //.. accumulate using L and Cl
    }

}

Message passing could be used to grab additional information from the light inside the illuminance loop, allowing things like __nondiffuse and __nonspecular to be used in the surface's BRDF calculation, but generally the accumulation of the BRDF was done in the surface shader.
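For reference, a typical old-style loop of this kind might look like the following sketch, which uses lightsource() to fetch __nonspecular (the shader body and roughness value are illustrative, not from the original):

```
surface mysurface() {
    normal Nn = normalize(N);
    vector V = -normalize(I);
    color C = 0;
    illuminance(P, Nn, PI/2) {
        // ask the light for __nonspecular; leave the
        // default of 0 if the light doesn't provide it
        float nonspec = 0;
        lightsource("__nonspecular", nonspec);
        C += (1-nonspec) * Cl *
            specularbrdf(normalize(L), Nn, V, 0.1);
    }
    Ci = C;
}
```

Note that lightsource() must search the light's parameter list on every call, a cost we will return to below.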

In the new world, RSL 2.0 allows many more options.

Recall that in RSL 2.0, lighting is performed using a light with the light() method. Its job is to produce L and Cl (and optionally more information, if required). The surface shader will invoke the lights by getting a list - using getlights(), optionally with category matching - and subsequently invoking the light() method on each. The resulting L and Cl values are then used in a BRDF calculation, which yields a resultant color for the light.

When considering how to factor the interaction between lights and surfaces, there are some interesting considerations:

The interaction between a light and a surface - as embodied by the BRDF calculation - may be performed in a number of different locations, each implying different performance and factoring characteristics for your shaders.

The natural RSL 2.0 conversion of our initial example is the following:

shader lights[] = getlights();
uniform float i,nlights = arraylength(lights);
for(i=0; i<nlights; i+=1) {
    vector L;
    color Cl;
    lights[i]->light(L,Cl);
    //.. accumulate using L,Cl
}

Pros and Cons

Pros
This is a very simple port of an "old-style" shader to RSL 2.0.
Cons
It's still "old-style" RSL; shadowing must be in the light, and message passing is inflexible.

This implies that shadowing is being done inside the light() method and that the accumulation is done in the surface, but that doesn't have to be the case. Let's assume that there is more information we need from the light - for example, __nondiffuse and __nonspecular (traditionally retrieved using the lightsource() shadeop). One option might be to extract these using getvar:

shader lights[] = getlights();
uniform float i,nlights = arraylength(lights);
for(i=0; i<nlights; i+=1) {
    vector L;
    color Cl;
    lights[i]->light(L,Cl);
    float nd,ns;
    lights[i]->getvar("__nondiffuse",nd);
    lights[i]->getvar("__nonspecular",ns);
    //.. accumulate using ns,nd,L,Cl
}

Pros and Cons

Pros
This is probably what is already written in shaders that haven't been updated to RSL 2.0, meaning less work to be done.
Cons
This still relies on "old-style" RSL, which is less efficient than passing structs, and can be unwieldy.

Lights With Structs

Neither of the preceding examples is very efficient. Realize that, due to the dynamic nature of shaders, we do not know whether or not a variable exists. The same efficiency concerns exist as they did for old-style message passing - namely that we must search for the relevant shader parameter or member each time getvar is called. Also, simply using the -> syntax is equivalent to getvar() in terms of efficiency. It would be better to rewrite your light to support passing back the __nondiffuse and __nonspecular variables:

// surface
shader lights[] = getlights();
uniform float i,nlights = arraylength(lights);
for(i=0; i<nlights; i+=1) {
    vector L;
    color Cl;
    float nd,ns;
    lights[i]->light(L,Cl,nd,ns);
    //.. accumulate using ns,nd,L,Cl
}


// light
class mylight() {
    public void light(output vector L; output color Cl;
        output float nondiffuse, nonspecular) {
        // ...
    }
}

Pros and Cons

Pros
This is a simple conversion to RSL 2.0.
Cons
It's still less efficient than passing structs, still unwieldy.

As the number of things we might like to get from the light increases, it becomes increasingly awkward to manage the parameter list of the light() method. This is where structs play an important role. Not only can we group together results that we want the light to produce, we can also isolate individual lights from changes to that parameter list by using sensible default values for the struct members. For instance, if we added a member that supported ultraviolet contribution from a light, we might just default it to zero. Lights that don't know about contributing ultraviolet wouldn't need to be modified. Basically, the struct allows us to pass a number of values, whilst keeping the code maintainable and clean.

// lightResult.h
struct lightResult{
    varying vector L = 0;
    varying color Cl = 0;
    varying float nondiffuse = 0;
    varying float nonspecular = 0;
};

// surface
shader lights[] = getlights();
uniform float i,nlights = arraylength(lights);
for(i=0; i<nlights; i+=1) {
    vector L;
    color Cl;
    lightResult lr;
    lights[i]->light(L,Cl,lr);
    //.. accumulate using lr's result
}

// light
class mylight() {
    public void light(output vector L; output color Cl;
        output lightResult lr) {
        // ...
    }
}
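As a concrete sketch of the default-value isolation described above, the hypothetical ultraviolet member might be added like so - lights that never write it simply leave the default, and continue to work unchanged:

```
// lightResult.h
struct lightResult{
    varying vector L = 0;
    varying color Cl = 0;
    varying float nondiffuse = 0;
    varying float nonspecular = 0;
    // new member: lights that don't know about
    // ultraviolet need not be modified
    varying float ultraviolet = 0;
};
```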

Pros and Cons

Pros
Passing data grouped together in struct is efficient, easy to maintain.
Cons
It's incompatible with illuminance() and doesn't support multisampling.

Note that whilst a light must have a light() method to be considered a light - and it is the presence of the standard light() method that permits a light's use with illuminance and the other old-style lighting shadeops - there is no need to utilize the light() method if you're gathering light yourself using your own for loop. In fact, it may be neater not to do so:

// lightResult.h
struct lightResult{
    varying vector L = 0;
    varying color Cl = 0;
    varying float nondiffuse = 0;
    varying float nonspecular = 0;
};

// surface
shader lights[] = getlights();
uniform float i,nlights = arraylength(lights);
for(i=0; i<nlights; i+=1) {
    lightResult lr;
    lights[i]->getlighting(lr);
    //.. accumulate using lr's result
}


// light
class mylight() {
    public void light(output vector L; output color Cl) {
        // make the light old-style compatible
        lightResult lr;
        getlighting(lr);
        L = lr->L;
        Cl = lr->Cl;
    }
    public void getlighting(output lightResult lr) {
        // ...
    }
}

Pros and Cons

Pros
Passing data grouped together in a struct is efficient, easy to maintain, compatible with illuminance(), and avoids the syntactic awkwardness of having to pass L and Cl to light when they'll be passed back in the lightResult struct.
Cons
It doesn't support multisampling.

Accumulating Inside Lights

There are two directions in which communication could occur. In the examples above we have been sending information back from the light to the surface. However, we could have chosen to send the BRDF to the light and have it accumulate there.

Suppose we have a known BRDF, perhaps represented by a struct:

/// FIXME: do real phong
struct PhongBRDF{
    varying float m_roughness;
    varying color m_color;
    varying float m_NdotL;

    public void accumulate(vector L; color Cl) {
        //...
    }
};

We could choose to express the surface-light interaction by passing the struct to the light and having it accumulate directly into the BRDF:

class myLight() {
    // note the nonstandard light method signature

    public void light(output color Cl; output vector L;
        output PhongBRDF brdf) {

        // ... compute L and Cl

        // struct member function used
        // to accumulate results directly
        brdf->accumulate(L,Cl);
    }
}

Pros and Cons

Pros
Passing data grouped together in a struct is efficient (this time passing the BRDF to the light).
Cons
Light must have full knowledge of BRDFs at compile time, so this may be inflexible. It doesn't support multisampling, is incompatible with illuminance(), and maintaining the shaders requires keeping BRDFs and lights in sync.

One positive implication of this implementation is that there is only one method call to the light, and all accumulation to the BRDF is done with efficient function calls (rather than method calls). However, we have forced the light to know about the BRDF, and in doing so we have made a light that will work only with that one BRDF. Whether or not that matters for you will depend on how rigidly the set of BRDFs is predefined for your workflow. Regardless, the coupling between the light and the BRDF will certainly require the light to be recompiled any time the BRDF implementation is updated. Depending on how you like to work, this may be more of a problem than the decision to use a single BRDF.


Multiple BRDFs

Often there is a need to use multiple BRDFs inside lights, or to represent lobes of the surface-light interaction. In that case it isn't feasible to have one light with explicit knowledge of the BRDF. In order to contribute light to multiple lobes in the above scenario, each light would need to be duplicated such that one version exists per lobe that it must interact with, or it would need to have multiple methods for each BRDF and, ideally, implement some sort of caching scheme so that shared portions of computation were not wasted. This is clearly a bit much.

It is possible to extend this methodology to cope with multiple BRDFs - doing so is simply a case of redefining the BRDF to include others:

struct multibrdf : PhongBRDF, LambertBRDF{
    public void accumulate(vector L; color Cl) {
        PhongBRDF::accumulate(L,Cl);
        LambertBRDF::accumulate(L,Cl);
    }
};

but, regardless, the light must have full knowledge of the BRDF/all the BRDFs when it is compiled.

It may, however, be better to do your accumulation for multiple BRDFs in the surface shader. The preceding examples mostly dealt with a single BRDF. We have examined BRDF accumulation in the light, and the drawbacks of accumulating in the light. So, how would we accumulate to multiple BRDFs in the surface? Accumulating multiple BRDFs that are unknown to the light requires an extra level of indirection/dynamism. Method calls provide such a mechanism - but where should such a method call go?

We could perform method calls from the light to the surface shader/coshader that implements the BRDF in order to interface the light with multiple BRDFs:

class myLight() {
    public void light(output color Cl; output vector L;
        shader brdfAccumulator) {

        brdfAccumulator->accumulate(Cl,L);
    }
}

class mySurface() {
    //...

    public void accumulate(vector L; color Cl) {
        for( ... brdfs ... ) {
            //.. do the accumulation using L, Cl
        }
    }
}

Pros and Cons

Pros
This approach uses structs for efficient communication and supports multiple BRDFs that the light does not need to be aware of.
Cons
It uses an unnecessary method call to perform accumulation and isn't compatible with illuminance().

Rather than using an additional method call, however, this can also be done using the methodology we were originally following:

// lightResult.h
struct lightResult{
    varying vector L = 0;
    varying color Cl = 0;
    varying float nondiffuse = 0;
    varying float nonspecular = 0;
};

// surface
shader lights[] = getlights();
uniform float i,nlights = arraylength(lights);
for(i=0; i<nlights; i+=1) {
    lightResult lr;
    lights[i]->getlighting(lr);

    // NOTE: hand-waving loop here, see below
    // ... for each brdf ... {
        //.. accumulate using lr's result
    // }
}


// light
class mylight() {
    public void light(output vector L; output color Cl) {
        // make the light old-style compatible
        lightResult lr;
        getlighting(lr);
        L = lr->L;
        Cl = lr->Cl;
    }
    public void getlighting(output lightResult lr) {
        // ...
    }
}

Pros and Cons

Pros
This approach uses structs for efficient communication, supports multiple BRDFs that the light does not need to be aware of, and provides increased flexibility.
Cons
This is somewhat more verbose than the original light.

Note that we have intentionally been rather vague about the iteration over BRDFs - there are basically two choices here, because heterogeneous containers of structs are not currently supported (this will be addressed in a future release). In other words, currently one cannot write this:

// This does not currently work
struct brdf{
};
struct brdf1 : brdf { };
struct brdf2 : brdf { };

brdf1 b1;
brdf2 b2;
brdf brdfs[] = {b1,b2};

because we don't permit containers of mixed type (despite the BRDFs sharing a common base, they are different structs).

This means that iterating over BRDF structs is not as convenient as it might be. However, this is probably not as restrictive as it might at first seem. If you require dynamic selection from a set of BRDFs, then most likely each will be in a coshader that you dynamically choose at rendertime. In that situation an array of shaders that implement a known method will work well.

The iteration becomes:

shader brdfs[] = getshaders("category","brdflobes");
uniform float i,n = arraylength(brdfs);
for(i=0; i<n; i+=1) {
    brdfs[i]->accumulate(lr);
}

When hand constructing shaders (or when using a template generation mechanism like Slim), mixing a set of BRDFs becomes as simple as accumulating to each:

Brdf1 b1;
Brdf2 b2;
b1->accumulate(lr);
b2->accumulate(lr);

Invoking Lighting from Multiple Lobes

Either all lights are invoked and the results are handed to the BRDF lobes for accumulation, or each lobe selects the lights it wants to run and accumulates the results at that time. If special BRDF/lobe-light links are needed (e.g. separate lights that contribute only specular, or only diffuse), then that information would either need to be communicated back in the lightResult struct, or category matching would have to be employed so that the BRDF chooses only the relevant lights. This means that a light may be invoked more than once. An alternate approach is to implement light caching. Let's look at each of these approaches:

If we want to have lights contribute to only specific lobes, we can do so by passing back information to that effect in the lightResult struct:

// lightResult.h
struct lightResult{
    varying vector L = 0;
    varying color Cl = 0;
    varying float nondiffuse = 0;
    varying float nonspecular = 0;
};

// fragment which performs accumulation

if (lr->nondiffuse == 0) {
    // accumulate
}
// otherwise skip light

Pros and Cons

Pros
This uses structs for efficient communication and supports multiple BRDFs that the light does not have to be aware of.
Cons
The BRDFs have to be aware of special properties to avoid accumulating certain types of light.

The implication is that each lobe must be aware of this and must be coded to ignore lights with the relevant properties (or to only look for lights with specific properties). The same mechanism might be implemented with a string that is matched.
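For instance, the struct could carry a string that each lobe matches against - a sketch using a hypothetical lobes member (match() is the standard RSL regular-expression function):

```
struct lightResult{
    varying vector L = 0;
    varying color Cl = 0;
    // hypothetical: names the lobes this light contributes to
    uniform string lobes = "diffuse,specular";
};

// in the diffuse lobe's accumulation fragment
if (match("diffuse", lr->lobes) != 0) {
    // accumulate
}
// otherwise skip light
```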

Simplistic categories may also be effectively used to select lights. This may be done with:

shader lights[] = getlights("category","categoryspec");

If lights only ever appear in one category, then grabbing a list of lights per category of BRDF is perfectly reasonable. The set of BRDFs would accumulate the results from the lights of the relevant category. If multiple such light selections are required, a given light may appear in more than one light list, which may lead to it being invoked multiple times. If the light does not depend on surface parameters then the result should be the same for both invocations, which means that invoking it multiple times might be rather a waste.

To mitigate this, you might wish to implement light caching in your lights. Note that this is only relevant if the same light can be invoked multiple times on the same surface points. Light caching does imply an overhead in terms of memory and speed. If it is possible to lock down the categories or use an alternate mechanism for linking lights to specific BRDF lobes, it may be more efficient to do so.

shader diffuselights[] = getlights("category","diffuselight");
shader specularlights[] = getlights("category","specularlight");

// depending on category conventions,
// a light can appear in both lists

Light caching works by the light remembering the last result it passed back:

class mycachedlight() {
    uniform float m_cacheValid = 0;
    varying point m_lastP;
    lightResult m_lightCache;

    public void getlighting(output lightResult lr){
        if (m_cacheValid == 0 || m_lastP != Ps) {
            // compute lr
            m_lightCache = lr;
            m_lastP = Ps;
            m_cacheValid = 1;
        } else {
            lr = m_lightCache;
        }
    }
}

Pros and Cons

Pros
This approach permits a light to be invoked multiple times without recomputing it.
Cons
It is less efficient than invoking each light only once.

Provided that the number of types of light (categories) can be decided upon, and the BRDFs can be aware of these types, it may be perfectly possible to avoid implementing light caching. If that is not possible, light caching provides a mechanism which prevents having to recalculate the light for multiple lobes. For most lights, recomputing the light is much more expensive than the overhead of the caching implementation.

Light caching may also complicate interaction of your lights when re-rendering.


Multisampling and Area Light Support

Our examples up to this point have had only a few items being passed back from the light in the lightResult struct. Now, let's imagine we want to support pseudo area lights with multiple samples. Instead of passing back a single L and Cl, we can pass back arrays of L and Cl values:

struct lightingContribution{
    varying vector L[];
    varying color Cl[];
};

class myLight() {
    public void getlighting(output lightingContribution lc) {
        //...
        for( ... generate samples ... ) {
            vector _L = ...
            color _Cl = ...
            push(lc->L,_L);
            push(lc->Cl,_Cl);
        }
    }
}

class mySurface() {
    //...

    public void lighting(output color Ci, Oi) {
        for ( ... lights ... ) {
            lightingContribution lc;
            lights[i]->getlighting(lc);

            // ... for each brdf ... {
                // ... accumulate using all the samples
                // ie. brdf->accumulate(lc);
            // }
        }
    }
}

Shadowing

Shadowing has implicitly been performed inside the light in the examples so far, but that doesn't have to be the case. There are some advantages in performing shadowing on the surface (or at least with knowledge of the BRDF), but as ever, there are complexity tradeoffs that must be considered.

Suppose for a light we pass back a shadow map as part of the lighting results struct...

struct lightingContribution{
    varying vector L = 0;
    varying color Cl = 0;
    uniform string shadowmap = "";
};

class myLight(string shadowmap = "") {
    public void getlighting(output lightingContribution lc) {
        //...
        lc->L = ...
        lc->Cl = ...
        lc->shadowmap = shadowmap;
    }
}

class mySurface() {
    //...

    public void lighting(output color Ci, Oi) {
        for ( ... lights ... ) {
            lightingContribution lc;
            lights[i]->getlighting(lc);

            // check if any point is in light
            uniform float needsShadow = 0;
            if (lc->L . N > 0) {
                needsShadow = 1;
            }

            // only perform shadowing if needed
            varying color shadowedCl = lc->Cl;

            if (needsShadow != 0 && lc->shadowmap != "") {
                shadowedCl *=
                    (1-shadow(lc->shadowmap,P));

            }
            // perform brdf accumulation using
            // shadowedCl
        }
    }
}

Pros and Cons

Pros
This can help perform more optimal shadowing calculations.
Cons
It requires a fixed style of shadowing.

This example gives us an unshadowed Cl, which is often desirable as an AOV. More importantly, it spares us from performing shadowing calculations where we know (via knowledge of the BRDF) that no light can be contributed to the surface anyway. Such optimizations may take much more complex forms than the one shown here (which will work only in single-sided shading mode).

It should be noted that the example above uses an idiom known as the Apodaca Device. The purpose of the device is to check whether a given condition holds true for any of the points being shaded. The Apodaca Device is discussed in more detail in Using an Apodaca Device.

There are downsides to the shadowing approach described above - namely that we have limited ourselves to a single type of shadowing via shadow maps. Of course, we could extend the struct to support multiple shadowing styles (ray tracing and shadow maps, et cetera).

struct lightingContribution{
    varying vector L = 0;
    varying color Cl = 0;
    uniform float shadowstyle = 0;
        // 0 = shadow map
        // 1 = raytrace
        // 2 = use shader
    uniform string shadowmap = "";
    uniform shader blocker = null;
};

Note that we have included a third option, which is to defer the shadowing calculation to a known method on a specified shader. That way the light could perform the shadowing itself:

public void getlighting(output lightingContribution lc) {
    // ...
    lc->blocker = this;
}

public color performShadowing(point P) {
    // ...
}

You may even find it appealing to permit additional blockers to be added to the surface rather than the light. For example, using getshaders("category","blocker") might allow you to call additional blocker/shadowing modules, permitting, for example, ground planes to receive shadows, but the hero character to be unshadowed.
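A sketch of gathering such blocker coshaders on the surface side - assuming each implements a performShadowing() method returning a transmission color, as above - might look like this:

```
// collect every coshader registered as a blocker
shader blockers[] = getshaders("category","blocker");
uniform float i, nblockers = arraylength(blockers);

varying color shadowedCl = lc->Cl;
for (i = 0; i < nblockers; i += 1) {
    // each blocker attenuates the light's contribution
    shadowedCl *= blockers[i]->performShadowing(P);
}
```

Whether a given surface runs this loop at all is then simply a matter of which blockers (if any) are bound to it.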


Special lights

Lights we have discussed so far have been the traditional sort, representing physical lightsources. However, light shaders may also be used to contribute reflections, environment maps, and global illumination/indirect light to a surface. In fact, there are some advantages to doing this. The major advantage is that putting ray tracing and so on in special lights allows you to leverage re-rendering technologies. This way ray tracing results don't have to be updated every time you move a light, which would be the case if tracing is done from the surface. There are also potential advantages in terms of workflow if tracing results need to be tuned during lighting, but you don't want surface properties to be editable.

If you take this sort of approach, it is likely that these special lights will need special treatment. For example, shadowing is already taken into account in a traced reflection. You may also want to communicate trace directions to a light in order to control how it operates.

Here is a simple trace light that interacts with a BRDF:

class mytracelight(string __category = "tracelight", float samples = 16,
    float brightness=1) {

    public color getlighting(vector dir,float samplebudget, float angle) {
        float smp = max(samples*samplebudget,1);
        color hitc = 0,res = 0;
        gather ("illuminance",Ps,dir,angle,smp,"surface:Ci",hitc) {
            res += hitc;
        }
        res /= smp;
        return res*brightness;
    }
}

// surface
class mysurface() {

    //...

    public void lighting(output color Ci,Oi) {

        //...
        vector R,T;
        float Kr,Kt;
        fresnel(I,N,eta,Kr,Kt,R,T);

        float angle = ...
        float samplebudget = ...

        uniform float performtrace = 0;
        if (Kr > 0) {
            performtrace = 1;
        }

        if (performtrace) {
            shader tracelights[] = getlights("category","tracelight");
            uniform float n=arraylength(tracelights),i;
            for(i=0;i<n;i+=1) {
                Ci += Kr *
                    tracelights[i]->getlighting(T,samplebudget,angle);
            }
        }
    }

    //...
}

Using this methodology, we can tune the ray-traced reflection brightness during re-rendering. Note that the getlights() call will not return any lights when the currently edited light is not the tracelight. This means that ray tracing will not be recomputed when a standard light is being edited, which may greatly improve interactivity.

Additionally, placing brightness tweak control in the light permits a lighter to alter trace reflections while only editing the light rig.

The same approach applies to global illumination via indirectdiffuse().
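A sketch of such an indirect light, using the same category approach (the class and parameter names here are illustrative assumptions):

```
class myindirectlight(string __category = "indirectlight",
    float samples = 64, float intensity = 1) {

    public color getlighting() {
        normal Nn = normalize(N);
        // gather indirect diffuse illumination arriving at Ps;
        // shadowing is inherently accounted for by the tracing
        return intensity * indirectdiffuse(Ps, Nn, samples);
    }
}
```

As with the trace light, the intensity tweak lives in the light, so indirect contribution can be rebalanced during re-rendering without touching the surface.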

Note: Calculation of the sample budget and so on is discussed in more detail in the ray tracing section of the RSL 2.0 Shading Guidelines.


Conclusion

We have presented a number of different ways that the interaction between lights and the surfaces you write might be factored. There are certainly many more techniques that are possible, but we hope we have given a flavor for some of the design decisions that should be considered, and the implications of each design choice.

Please consult the appendix below, for a more fully elaborated shader set.


Appendix A: An Example Shader Set

This set of shaders shows use of a lighting results struct (lightStruct in the example) to pass back results from a light to the surface. It uses a half-angle phong BRDF in a struct (shadingResult in the example).

The example area light source shader and BRDF support multisampling. Shadowing is performed on the surface side and is optimized such that only areas which have a lighting contribution undergo shadow lookups.


The lighting results struct is shown below:

// lightStruct.h

#ifndef lightStruct_h
#define lightStruct_h

struct lightStruct {
        varying vector L[] = {};
        varying vector Ln[] = {};
        varying vector H[] = {};
        varying color Cl[] = {};
        uniform float numSamples = 0;
        uniform float iNumSamples = 0;
        uniform float initializedH = 0;

        public void initialize(uniform float samples) {
                numSamples = samples;
                iNumSamples = 1/samples;
                resize(L,numSamples);
                resize(Ln,numSamples);
                resize(H,numSamples);
                resize(Cl,numSamples);
        }
        public void prepH(varying vector V) {
                if (initializedH == 1) {
                        return;
                }
                initializedH = 1;

                uniform float i;
                for(i = 0; i < numSamples; i+=1) {
                        H[i] = normalize(Ln[i] + V);
                }
        }
};

#endif

The BRDF is implemented in a shadingResult struct.

// shadingResult.h

#ifndef shadingResult_h
#define shadingResult_h

struct shadingResult {
        varying color diffuseResult = 0;
        varying color specularResult = 0;

        varying color diffuseAOV = 0;
        varying color specularAOV = 0;

        public void merge( shadingResult sr ) {
                diffuseResult += sr->diffuseResult;
                specularResult += sr->specularResult;
        }

        public color sum() {
                diffuseAOV = diffuseResult;
                specularAOV = specularResult;
                return diffuseResult + specularResult;
        }
        public void applyDirectShadow(float shadowVal) {
                diffuseResult *= shadowVal;
                specularResult *= shadowVal;
        }
        public float needsShadow() {
                if ( diffuseResult[0] > 0 ||
                         diffuseResult[1] > 0 ||
                         diffuseResult[2] > 0 ||
                         specularResult[0] > 0 ||
                         specularResult[1] > 0 ||
                         specularResult[2] > 0) {
                        return 1;
                }
                return 0;
        }
};
#endif

The surface shader is shown below, and performs the BRDF accumulation, shadowing and output of final color / AOVs.

// example.sl

#include "lightStruct.h"
#include "shadingResult.h"


class example(
                float Kd = .8;
                float Ks = .2;
                float roughness = .01;
                uniform string diffuseTextureMap = "";
                uniform string specularTextureMap = "";
                color specColor = (1,1,1);
                color diffColor = (1,1,0);
                output varying color specAOV = 0;
                output varying color diffAOV = 0;
                ) {

        varying color m_diffuseColor = 0;
        varying color m_specularColor = 0;
        varying filterregion m_fr;
        shadingResult m_sr;
        varying vector m_Nn = 0;
        varying vector m_In = 0;
        varying vector m_V = 0;

        varying color tmpColor = 0;

        public void doSpec(output lightStruct ls; output shadingResult sr) {
                color specResult = 0;
                uniform float i;
                ls->prepH(m_V);
                for (i = 0; i < ls->numSamples; i+=1) {
                         specResult += pow(max(0,ls->H[i].m_Nn), 1/roughness) *
                                 ls->Cl[i];
                }
                specResult *= ls->iNumSamples;
                specResult *= m_specularColor;
                sr->specularResult += specResult;
        }

        public void doDiff(lightStruct ls; output shadingResult sr) {
                color diffResult = 0;
                uniform float i;

                for (i = 0; i < ls->numSamples; i+=1) {
                        diffResult += max(0,m_Nn.ls->Ln[i]) * ls->Cl[i];
                }
                diffResult *= ls->iNumSamples;
                diffResult *= m_diffuseColor;
                sr->diffuseResult += diffResult;

        }

        public void construct() {
                // This is the place to do computations that will be done only
                // when the shader is initialized
        }

        public void begin() {
                //This is run before each grid is executed
                m_fr->calculate2d(s,t);

                m_Nn = normalize(N);
                m_In = normalize(I);
                m_V = -m_In;
        }

        public void prelighting(output color Ci, Oi) {
                // do texture lookups in prelighting so that
                // the maps do not have to be accessed in the
                // lighting method

                if (specularTextureMap != "") {
                        m_specularColor = texture(specularTextureMap, m_fr);
                } else {
                        m_specularColor = specColor;
                }
                if (diffuseTextureMap != "") {
                        m_diffuseColor = texture(diffuseTextureMap, m_fr);
                } else {
                        m_diffuseColor = diffColor;
                }
        }

        public void lighting(output color Ci, Oi) {
                uniform float lightCount = 0;
                uniform float i;
                shader lights[] = getlights("category","rsl2");
                lightCount = arraylength(lights);
                lightStruct ls;
                for (i = 0; i < lightCount; i+=1) {
                        lights[i]->getLight(ls);
                        doDiff(ls,m_sr);
                        doSpec(ls,m_sr);
                        if( m_sr->needsShadow() == 1) {
                                float tmpShad = lights[i]->getShadow();
                                m_sr->applyDirectShadow(tmpShad);
                        }
                }
        }

        public void postlighting(output color Ci, Oi) {
           Ci = m_sr->sum();
           specAOV = m_sr->specularAOV;
           diffAOV = m_sr->diffuseAOV;
           Oi = 1;
        }
}

The area light shown here outputs a number of samples, contributing light from a number of directions.

// tAreaLights.sl
#include "lightStruct.h"

class tAreaLight(
                float intensity = 1;
                color lightColor = (1,1,1);
                point center = point "shader" (0,0,0); // center of rectangle
                vector udir = vector "shader" (1,0,0); // axis of rectangle
                vector vdir = vector "shader" (0,1,0); // axis of rectangle

                uniform float width = 10;
                uniform float height = 10;
                uniform float samples = 16;
                uniform string __category = "rsl2";
                ) {

        uniform float computed = 0;
        uniform float shadowComputed = 0;
        varying float shadowVal = 1;
        lightStruct m_ls;

        public void light(output vector L; output color Cl;) {
                // This is a dummy method so that prman
                // treats this class as a light
                L = 0;
                Cl = 0;
        }

        public void getLight(output lightStruct ls;) {
                varying point samplepos;
                computed = 1;
                uniform float i;
                varying float rnd1, rnd2;
                m_ls->initialize(samples);
                for (i = 0; i < samples; i += 1) {
                        rnd1 = 2 * random() - 1;
                        rnd2 = 2 * random() - 1;
                        samplepos = center +
                                rnd1 * udir * (width*0.5) +
                                rnd2 * vdir * (height*0.5);
                        m_ls->L[i] = samplepos - Ps;
                        m_ls->Ln[i] = normalize(m_ls->L[i]);
                        m_ls->Cl[i] = lightColor * intensity ;
                }
                ls = m_ls;
        }

        public float getShadow() {
                if(shadowComputed == 1) {
                        return shadowVal;
                }
                normal Nn = normalize(N);
                shadowComputed = 1;
                varying color tmpColor = 0;
                uniform float i;
                for(i = 0; i < samples; i+=1) {
                        if((Nn.m_ls->Ln[i]) > 0) {
                                tmpColor += transmission(Ps, Ps+m_ls->L[i],
                                                "hitmode", "primitive");
                        }
                        else {
                                tmpColor += color(1);
                        }
                }
                tmpColor /= samples;
                shadowVal = clamp((tmpColor[0] + tmpColor[1] + tmpColor[2])/3,0,1);
                return shadowVal;
        }
}



Pixar Animation Studios
Copyright© Pixar. All rights reserved.
Pixar® and RenderMan® are registered trademarks of Pixar.
All other trademarks are the properties of their respective holders.