Wednesday, June 27, 2007

10 Cities I Want to Visit

1st LA

2nd NY

3rd Paris

4th Berlin

5th Hong Kong

6th Tokyo

7th Beijing

8th Shanghai

9th Sydney

10th London

Matching real world with CGI

"How we can composite the 3d object with an still image or a video footages with exactly the same lighting and composition"


I think ambient occlusion is a good solution for achieving this particular goal. Matching a real scene's lighting setup with a CG lighting setup normally requires an immense amount of planning and R&D, and matching models requires just as much planning again.

Now the industry is shifting towards IBMR (Image-Based Modeling and Rendering) for the same. The IBMR method is pretty simple in concept: it takes a set of photographs as reference and, with complex algorithms, generates the final image. This technology is still at a primitive stage, though it is gaining popularity today. IBMR methods give more importance to lighting than to the actual shape of the model, and there is a lot more happening in IBMR which helps the artist match real-world lighting in the CG world. Almost all major VFX blockbusters in Hollywood use IBMR in their pipeline.

Ref: http://www.debevec.org/Items/NewScientist/

http://research.microsoft.com/china/papers/Review_Image_Rendering.pdf

So lighting, as you know, is very important in geometry-based modeling when it comes to compositing models into a CG scene or real footage. Standard CG lighting tends to look darker than the real world and doesn't show the highlights clearly, so with a normal lighting setup our models will look flat (they'll have that soft plastic feel). At SIGGRAPH 2002 the good guys at ILM presented a technique called ambient occlusion. With this technique we get the occlusion as a separate attribute of the scene or object, and we can adjust its level in post according to our needs.

Here you can see the advantages very well!

Ref: http://www-viz.tamu.edu/students/bmoyer/617/ambocc/
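To make the idea concrete, here is a minimal sketch (Python with NumPy; the scene, names and numbers are all made up by me, this is not ILM's implementation) of what an ambient occlusion pass computes: for each surface point, shoot random rays over the hemisphere and count how many escape without hitting anything.

```python
import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    """Standard quadratic ray-sphere intersection test (direction is unit length)."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    t = (-b - np.sqrt(disc)) / 2.0
    return t > 1e-6  # hit must be in front of the ray origin

def ambient_occlusion(point, normal, occluders, n_samples=256, seed=1):
    """Fraction of the hemisphere above `point` that is NOT blocked (1 = fully open)."""
    rng = np.random.default_rng(seed)
    unblocked = 0
    for _ in range(n_samples):
        # Random direction, flipped into the hemisphere around the normal.
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        if np.dot(d, normal) < 0:
            d = -d
        if not any(ray_hits_sphere(point, d, c, r) for c, r in occluders):
            unblocked += 1
    return unblocked / n_samples

# A ground point directly under a sphere is darker than an open point far away.
sphere = (np.array([0.0, 2.0, 0.0]), 1.0)   # (center, radius)
up = np.array([0.0, 1.0, 0.0])
under  = ambient_occlusion(np.array([0.0, 0.0, 0.0]), up, [sphere])
open_p = ambient_occlusion(np.array([20.0, 0.0, 0.0]), up, [sphere])
print(under < open_p)  # the occluded point receives less ambient light
```

A renderer does this per shading point and bakes the result into a grayscale pass, which the compositor can then multiply over the beauty render and grade independently.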

In the lighting context, as you may know, one of the new and exciting technologies is HDRI (High Dynamic Range Imaging). It is basically a technique where the full ratio between the brightest and darkest regions of a scene is stored in one single image file. This information is very useful for faking real-world lighting in a 3D app, and even After Effects 7 supports HDR images. There are a lot of movies which have used this technique, for example Spider-Man 2, Batman Begins, Troy and many more.

Ref: http://en.wikipedia.org/wiki/HDRI

http://www.debevec.org/Research/HDR/
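To see what "the ratio between brightest and darkest regions stored in one file" means in practice, here is a toy sketch (Python/NumPy, with synthetic numbers of my own) comparing clipped 8-bit-style storage with float HDR storage:

```python
import numpy as np

# A tiny synthetic "light probe": mostly dim sky with one very bright sun pixel.
radiance = np.full((4, 4), 0.05)   # linear radiance values
radiance[1, 2] = 5000.0            # the sun: 100,000x brighter than the sky

# 8-bit-style LDR storage clips anything above 1.0 -> the true ratio is destroyed.
ldr = np.clip(radiance, 0.0, 1.0)
ldr_ratio = ldr.max() / ldr.min()

# Float (HDR) storage keeps the real brightest:darkest ratio in one file.
hdr_ratio = radiance.max() / radiance.min()

print(f"LDR ratio: {ldr_ratio:.0f}:1")   # 20:1
print(f"HDR ratio: {hdr_ratio:.0f}:1")   # 100000:1
```

This is exactly why an HDR probe can drive CG lighting: the sun pixel still carries enough energy to cast hard shadows, where a clipped image would light everything with a flat grey glow.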

Studying real-world lighting is a really challenging job. For the Matrix sequels, technical directors hired a $700,000 machine, actually used by the US defense industry, to calculate how real-world light acts on Agent Smith's highly anisotropic silky cloth material in order to recreate the same in CG. They calculated the BRDF (Bidirectional Reflectance Distribution Function). The BRDF is calculated because the same model or scene reacts differently to light sources from different positions and angles.

BRDF: it gives the reflectance of a target as a function of illumination geometry and viewing geometry. The BRDF depends on wavelength and is determined by the structural and optical properties of the surface, such as shadow-casting, multiple scattering, mutual shadowing, transmission, reflection, absorption and emission by surface elements, facet orientation distribution and facet density.

For more information

Ref: http://www-modis.bu.edu/brdf/brdfexpl.html
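As a rough illustration of what a BRDF is, here is a tiny sketch using the simple Phong model (my own toy choice, nothing like the measured cloth BRDF from the Matrix work): the same surface point returns a very different reflectance depending on where the light sits relative to the viewer.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_brdf(light_dir, view_dir, normal, kd=0.5, ks=0.5, shininess=32):
    """Phong-style BRDF: a flat diffuse term plus a specular lobe.
    The value depends on BOTH the light direction and the view direction."""
    diffuse = kd / np.pi
    # Mirror the light direction about the surface normal.
    reflect = 2.0 * np.dot(normal, light_dir) * normal - light_dir
    spec_angle = max(np.dot(normalize(reflect), view_dir), 0.0)
    specular = ks * (shininess + 2) / (2 * np.pi) * spec_angle ** shininess
    return diffuse + specular

n = np.array([0.0, 1.0, 0.0])
view = normalize(np.array([1.0, 1.0, 0.0]))

# Light at the mirror direction of the view -> strong highlight...
aligned = phong_brdf(normalize(np.array([-1.0, 1.0, 0.0])), view, n)
# ...versus a light directly overhead: same surface, different response.
overhead = phong_brdf(np.array([0.0, 1.0, 0.0]), view, n)
print(aligned > overhead)  # True
```

A measured BRDF replaces this little formula with a big table of captured values over all those light/view angle pairs, which is what the expensive machine was acquiring.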

Another example: for creating Doc Ock's CG skin in Spider-Man 2, Sony Imageworks used the Light Stage system, which captures the reflectance field of a human face photographically. The resulting data was used to relight the model under different lighting conditions and from various light positions and angles, and the same technique was used for creating various HDR environment maps.

After lighting, the next important factor is depth, i.e. calculating three-dimensional information from two-dimensional images. For this purpose a technique called photogrammetry is used: different points on an object are determined by measurements made in two or more photographic images taken from different positions. The Hollywood movie Fight Club is a perfect example of this technique; it enabled the high-speed, photo-realistic camera movement down the side of, around and through buildings, as evinced at the film's beginning and also in the 'galaxy of trash' pull-back. Finally, photogrammetry and HDR are very closely related subjects, and Pixel Liberation Front (PLF) used the photogrammetry side of the initial research very effectively in films such as Fight Club.

Ref: http://www.univie.ac.at/Luftbildarchiv/wgv/intro.htm
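The simplest case of photogrammetry is a rectified stereo pair, where depth falls straight out of the pixel shift (disparity) of a point matched in both images; the camera numbers below are hypothetical, just to show the arithmetic:

```python
# Depth from two photographs: the same point measured in two cameras a known
# baseline apart. Disparity (pixel shift) gives depth via Z = f * B / d.
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Triangulate depth from matched x-coordinates in a rectified stereo pair."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("matched point must appear further left in the right image")
    return focal_px * baseline_m / disparity

# Hypothetical rig: 1000 px focal length, cameras 0.5 m apart.
# A feature at pixel x=620 in the left image and x=600 in the right:
z = stereo_depth(620, 600, focal_px=1000, baseline_m=0.5)
print(z)  # 25.0 metres away
```

Real photogrammetry generalises this to many uncalibrated photos and thousands of matched points, but the core idea is the same triangulation.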


Optical Flow

Optical flow is used in most compositing packages, from After Effects to Shake, Flame, Nuke, Fusion and Toxik.

It's a technology which tracks every pixel in your footage from frame to frame and records this movement of pixels in the form of vectors. By analysing and studying this information we are able to track and matchmove, and it has various other uses as well.

For example, it forms the basis of MPEG compression technology, and it can do wonders: it has been used in various complex techniques, including Universal Capture in the Matrix sequels. It's also closely related to mocap.

In future, optical flow may be used to track full human motion on our PCs using simple compositing software. (http://www.cs.brown.edu/people/black/)
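For intuition, here is a toy sketch of the brute-force version of the idea (block matching, in Python/NumPy; no real package does it this crudely): find where the patch around a pixel moved between two frames, which gives that pixel's motion vector.

```python
import numpy as np

def block_match(prev, curr, y, x, patch=3, search=5):
    """Find where the patch around (y, x) in `prev` moved to in `curr` by
    exhaustively comparing sums of squared differences in a small window."""
    p = patch // 2
    template = prev[y - p:y + p + 1, x - p:x + p + 1]
    best, best_vec = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            cand = curr[yy - p:yy + p + 1, xx - p:xx + p + 1]
            if cand.shape != template.shape:
                continue  # skip windows that fall off the frame edge
            ssd = np.sum((cand - template) ** 2)
            if ssd < best:
                best, best_vec = ssd, (dy, dx)
    return best_vec  # the motion vector (dy, dx) for this pixel

# A bright dot moves 2 pixels right and 1 pixel down between frames.
frame1 = np.zeros((20, 20)); frame1[8, 8] = 1.0
frame2 = np.zeros((20, 20)); frame2[9, 10] = 1.0
print(block_match(frame1, frame2, 8, 8))  # (1, 2)
```

Doing this for every pixel yields the dense vector field that retiming, matchmoving and MPEG motion compensation are built on; production algorithms just get there far more cleverly.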

links:

http://www.cs.brown.edu/people/black/
http://www.cs.otago.ac.nz/research/vision/Research/OpticalFlow/opticalflow.html
http://petewarden.com/notes/archives/2005/05/gpu_optical_flo.html

(simpler explanation)
http://www.cs.umd.edu/users/ogale/research/research.html
http://www.fxguide.com/article333.html

Reflection Occlusion

It's difficult to occlude reflections, and there may even be self-occlusion if the object itself has a reflective surface, so we use a technique called reflection occlusion. This technique was first developed during Speed 2 and enlisted into full-time service on Star Wars: Episode I by ILM.

Reflection occlusion is calculated by shooting a number of rays in the reflection direction: if a ray hits an object, the pixel will be black, and if it doesn't, it will be white. We could simulate this in a 3D app; for example, we could add a base color (white) to objects, and wherever the reflection is blocked the color won't be white, so finally we get a reflection occlusion pass. And I guess in a real shot we could just take the environment map and use it as the reflection pass.


This occlusion pass is multiplied with the reflection pass in compositing software, so we get accurate depth in the reflections.

This technique is basically similar to ambient occlusion approach.

So our final output will be something like this:

Ambient Occlusion + (Environment Map × Reflection Occlusion)

We could even use an HDR image as our environment map for more flexible and better results. Using reflection occlusion we can get some pretty realistic output!
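As a tiny compositing sketch of that recipe (NumPy arrays standing in for rendered passes; the pixel values are made up, and I've multiplied the ambient term by a diffuse pass the way an AO pass is normally applied):

```python
import numpy as np

# Tiny 2x2 "passes" standing in for rendered layers (linear 0-1 values).
ambient_occ    = np.array([[1.0, 0.8], [0.6, 1.0]])  # 1 = hemisphere fully open
diffuse        = np.array([[0.5, 0.5], [0.5, 0.5]])
env_reflection = np.array([[0.9, 0.9], [0.9, 0.9]])  # environment-map reflection pass
reflection_occ = np.array([[1.0, 0.0], [0.5, 1.0]])  # 0 = reflection fully blocked

# Occluded ambient term plus the environment reflection gated by
# the reflection-occlusion pass.
final = diffuse * ambient_occ + env_reflection * reflection_occ
print(final)
```

Note the pixel where the reflection-occlusion pass is 0: the environment reflection vanishes there entirely, which is exactly the "depth in the reflections" the pass buys you.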

Future of Compositors

Evolution is a slow process, not a sudden change, and there are various factors which affect its development from one stage to another, so we don't have to be too concerned about it.
And I believe a good craftsman with a polished skill set won't get lost in the future. If he is good at what he does and does it perfectly, then no one can take his place.

A perfect example would be the senior VFX supervisors at ILM: they all started with opticals and are now doing the same job in a digital environment. They adapted easily. It all depends upon your passion and your know-how in the craft. Compositors have to do what they are assigned to do, and I don't think the work of a compositor will change in the future. Maybe he or she will face more challenges and difficulties, but on the other hand technology is making things simpler and easier.

A production pipeline is all about specialization! A pipeline exists in production because jobs must be broken down among people who are good at doing a particular job. And finally, one person can't do every job from pre-production to post-production.

There is no need for compositors to feel insecure, because this field is getting more exciting day by day. But we must be ready to learn new stuff and broaden our know-how. These days I feel a sound knowledge of 3D is a must, because more and more compositing software supports real 3D environments and objects inside the composition.

I think, in future, compositors will be doing more of the work which is assigned to lighting, texturing and rendering artists now! So who is going to suffer in the end, compositors or 3D people?
Well, there will be a need for specialized people now and in the future, until technology makes things child's play! And finally, as long as there is some art in our job we will all be unique, just because everybody's style and execution differ…

Crowd Manipulation with Compositing

Classic crowd-replication techniques:

80 to 100 extras were photographed in separate passes, then duplicated in post-production. An immense amount of roto/tracking was done to match the separate crowds into the final composite, and in some cases detailed color correction was done on each group of the crowd to get a better result. Another classic method is to shoot different blocks of the crowd in different positions with a blue screen behind or alongside them and composite them onto a clean plate, but this method has a lot of limitations: the camera must be locked off and the whole composition must be small.

Example of this can be found here: http://www.bftr.com/Pages/movies/vendetta_prisonhi.mov

Shadow-matting and color correction are the common challenges faced in crowd replication when it is done with these classic techniques.

In simple words: shoot smaller crowds and composite them all together with their digital extras (their own duplicates).
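In code form, the classic trick boils down to something like this toy sketch (NumPy; random pixels standing in for a filmed crowd plate, and a per-copy gain standing in for the color correction):

```python
import numpy as np

rng = np.random.default_rng(0)

# A small "crowd plate": one filmed pass of extras, here just random pixels.
plate = rng.uniform(0.2, 0.8, size=(10, 10))

# Replicate the plate across the empty stands, applying a per-copy colour
# correction (a random gain) so the duplicates don't read as obvious clones.
stands = np.zeros((10, 40))
for i in range(4):
    gain = rng.uniform(0.85, 1.15)
    stands[:, i * 10:(i + 1) * 10] = np.clip(plate * gain, 0.0, 1.0)

print(stands.shape)  # (10, 40): four colour-corrected copies of one plate
```

The real job is of course the roto, tracking and shadow-matting around each copy; this only shows the duplicate-and-grade core of the idea.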

But these days the industry is moving more towards AI-based crowds which can think and interact with the environment around them, and the only solution for that is to think 3D! In the beginning, particles were used for generating crowds, but this approach proved difficult due to computational limits, so studios began to develop huge scripts to drive the animation, which later gave birth to software called Massive, very well known for its crowd work on LOTR! There are other proprietary tools and plug-ins for crowd replication today.

Check these links for more information:

http://www.digitalanimators.com/2002/07_jul/features/digitalcrowds.htm

http://www.animallogic.com/commercials/nintendo/index.html

http://www.creativematch.co.uk/viewnews/?89690

http://www.dneg.co.uk/production_history/projects/to_kill_a_king/to_kill_a_king.html

And finally check this sexy crowd ;)

http://www.beam.tv/beamreels/reel_player.php?reel=ZGszxVyWzX&reel_file=KFxnMHmjGP

Man, that's a hell of a lot of bikini girls ;-)

Cheers,
Rahul.

Reinterpretation of 300 Oracle sequence in Neo-surrealism



This piece of art was created for a module assignment in my degree. The assignment was to interpret an artwork, so I decided to interpret 300's Oracle dance sequence for my project. There was actually a competition going on for the same thing on vfxtalk.com, so it was a good coincidence for me, and it ended up pretty well.

I received some nice constructive criticism as well as appreciation for a good try. Later I uploaded the video to YouTube and, I don't know how, it now has 1000+ views... ;) I'm still unclear what the reason is. It doesn't matter whether people hate it or love it; they are watching it, which makes me happy :)

Welcome Note

Hey people,

Thanks for visiting my blog. This blog will generally be about my thoughts on life, art and anything which I consider worth posting in this space.

I think a major part of this blog will be about visual effects techniques, since that's what I do for a living. I will be posting more about techniques and methodology rather than inspiring works or reviews.


Anyway, that's all I have to say about this blog for now.

You stay beautiful,
Rahul