General Specialist

2006-10-30

Greenscreen and Bluescreen Checklist

Shooting for greenscreen or bluescreen? Here's a list of hard-earned experiences from the shoots where I've been vfx supervisor. I don't claim to be a chroma expert, so please post a comment if you have more tips to add to the list!

UPDATE: I've added some info on depth-of-field and motion blur as point number 2.

1. Keep it Blurry in Camera
Turn off all in-camera sharpening! This might make your director of photography (DOP) nervous and it will certainly make it harder for her/him to focus. On Sony cameras there are usually two settings that need to be turned off: Detail and Skin Detail.

By default, all cameras apply a sharpening filter as a post-process before each frame is committed to tape/disk/memory card. While this makes the image look better, it also makes it much more difficult to get a good, clean edge between your foreground and your chroma screen. Digital sharpening works by finding adjacent pixels of different lightness values and then increasing the difference, in effect creating a border with much higher contrast. Notice also how in-camera sharpening brings out noise and imperfections in the chroma screen.

So shoot without sharpening and add it in post instead!
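If you're curious what that sharpening actually does to a keying edge, here's a minimal sketch in Python/NumPy (purely illustrative; the sigma and amount values are made up): a basic unsharp mask applied to a hard edge between a bright foreground and a darker screen.

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(luma, sigma=1.5, amount=1.0):
    """Sharpen a 2-D luminance image by boosting the difference
    between each pixel and a blurred copy of itself."""
    blurred = gaussian_filter(luma, sigma)
    detail = luma - blurred              # high-frequency detail: edges and noise
    return np.clip(luma + amount * detail, 0.0, 1.0)

# A hard edge between a bright foreground (0.8) and a darker screen (0.3):
edge = np.concatenate([np.full(8, 0.8), np.full(8, 0.3)])
print(unsharp_mask(edge[np.newaxis, :])[0].round(2))
# The sharpened values overshoot on both sides of the edge (a halo), which is
# exactly the kind of high-contrast border that makes a clean key harder to pull.

The same overshoot is what drags the chroma screen's noise and imperfections up into visibility.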



2. Keep it Sharp on Stage
While you don't want the camera to add artificial sharpening, you still want to keep everything in the foreground as sharp and correctly focused as you can. A blurred chroma screen in the background will only help to make it look more evenly lit and textured, but you want to avoid having to key a blurred foreground and trying to separate it from the chroma.

If the blur comes from too slow a shutter speed or too shallow a depth of field, you'll have to tweak the keyer and possibly sacrifice other parts just to manage the fuzzy edges. A blurred edge between foreground and background means that you will have to compromise between the edge and despill settings, and quite possibly keyframe these settings to compensate for different levels of blur in different parts of the clip.

Instead, add motion blur in post by using optical flow technologies such as ReelSmart Motion Blur and add depth-of-field by layering chroma clips and post-blurring them.
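As a rough illustration of that workflow (a sketch only, with NumPy/SciPy and random arrays standing in for real plates and a real keyer's output):

import numpy as np
from scipy.ndimage import gaussian_filter

def over(fg_rgb, alpha, bg_rgb):
    """Standard 'over' composite with a straight (unpremultiplied) foreground."""
    a = alpha[..., np.newaxis]
    return fg_rgb * a + bg_rgb * (1.0 - a)

def defocus(rgb, sigma):
    """Approximate a defocused plate with a per-channel Gaussian blur."""
    return np.stack([gaussian_filter(rgb[..., c], sigma) for c in range(3)], axis=-1)

# fg_rgb and fg_alpha would come from the keyer; bg_rgb is the new background plate.
fg_rgb   = np.random.rand(540, 960, 3)
fg_alpha = np.random.rand(540, 960)
bg_rgb   = np.random.rand(540, 960, 3)

# Shoot sharp, then decide in post how out of focus the background should be.
comp = over(fg_rgb, fg_alpha, defocus(bg_rgb, sigma=6.0))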



3. Resolution and Framing
You want to shoot with as high a resolution as you can afford, to make sure you keep your options open when you get to postproduction. Even if you're finishing in SD, try to capture in HD or even on 16 mm or 35 mm film. The more detail you capture, the cleaner the key you'll be able to pull. You can always scale down, but you can't get back image data that you never captured...

Keep a constant lookout for how the DOP frames the action. Since you'll be working with the shots in post, you can disregard the safe areas that are normally cut off by monitors and TV sets - that's 10% more image data to use!

I've found that I often have to keep pushing for tighter framing of each and every shot. To make sure that you and the DOP see the entire image, set the camera viewfinder and the preview monitors so that they are underscanned.

Even if you're shooting for a 16:9 production, you'll most likely want to set the camera to a 4:3 aspect ratio, unless you're shooting something that will fill the entire frame horizontally. Otherwise you'll be sacrificing horizontal resolution, making for rougher key edges.

Another way to squeeze the maximum amount of resolution from your cameras is to tilt them 90 degrees for shots of standing people.



Here's an example of three Sony Digibeta cameras with two of them tilted 90 degrees to capture standing people at maximum resolution.




4. Blue or Green?
What you are trying to achieve is to provide your keyer with a color channel that is as distinct as possible. Since human skin tones and lips tend to be red, that leaves blue and green. So which one to choose? That depends on a couple of things...

Green chroma screens have become more and more popular in recent years, largely because green provides a brighter color channel that tends to have less noise than the blue channel. The relative brightness of green makes it a bad choice for shooting blonde hair though, which is a lot easier to key against blue backgrounds.

Bluescreen has some distinct advantages. When you can't avoid a lot of spill (for example when you have to put the foreground very close to the chroma material), you can take advantage of the fact that we tend to find blue casts less disturbing than people walking around looking seasick with green faces. Also, when shooting for something that will be composited onto outdoor backgrounds or water, a slight bluish cast won't be a problem.

So if you are shooting a blonde with jeans, you'll have to settle for a compromise!
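To make the "distinct color channel" idea concrete, here is a rough Python/NumPy sketch of a classic green-difference matte plus a naive despill. This is a teaching example only, not how Keylight or any other production keyer actually works, and the gain value is arbitrary.

import numpy as np

def green_difference_matte(rgb, gain=2.5):
    """rgb: float array (h, w, 3) in 0..1. Returns a 0..1 screen matte
    where 1.0 means pure screen and 0.0 means solid foreground."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    screen = g - np.maximum(r, b)      # how far green rises above the other channels
    return np.clip(screen * gain, 0.0, 1.0)

def despill_green(rgb):
    """Naive green despill: never let green exceed the average of red and blue."""
    out = rgb.copy()
    out[..., 1] = np.minimum(out[..., 1], (out[..., 0] + out[..., 2]) / 2.0)
    return out

The same logic with green and blue swapped gives you a bluescreen matte, which is why the choice really comes down to which channel is cleanest and most distinct against your particular foreground.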



5. Don't Depend on the Crew's Imagination
Bring good storyboards that can be shown to the entire crew, both before the shoot (so that they can bring the correct gear) and during the shoot. Depending on the complexity of the shot you might need animatics, but at least bring sketches or printouts.

Talk to the crew so that they understand how stuff will be used in post. For example, I have had instances where cameramen have cut off talents' feet even though I've tried to explain that we needed the whole body.



6. Don't Depend on the Talent's Imagination
If talents are supposed to look at things that will be added in post, make sure they have something (that can be keyed out later) to look at and interact with during the shoot.




7. Get Good Clothes
Make sure you avoid greens, browns and khaki for greenscreen shoots, and jeans and other blue clothes for bluescreen. This is not something you can decide on location; it must be planned beforehand.



8. Get Good Props
Make sure you can dull down shiny stuff so that it doesn't reflect the chroma color.

The choice of a shiny metal briefcase in the example above was a particularly bad one, considering it had to be rotoscoped in all the shots. The earring was taken care of with an Inside Mask in Keylight.



9. Match the Lighting As If Your Sleep Depends On It
There's no substitute for good lighting and gaffers who can match foreground and background. You can fix almost anything in post-production, but relighting is among the hardest and least successful things to spend your nights on. Nothing screams fake as much as wrong lighting!



10. Preview Directly On Set
Don't underestimate the value of being able to compare a roughly keyed-out foreground against the background that it will be composited onto. Not only is the immediate feedback important for the talent, it is also invaluable when it comes to matching the lighting and perspectives.

If you can't use a real-time keyer with a feed from the camera, like in the image above, at least bring a laptop and a digital still camera and do a quick key until the lighting matches perfectly.



11. Go Easy on the Tracking Markers
If you use tracking markers, make sure you have a sufficient number in each shot, without having so many that you'll have to paint them out in post. Try using markers in almost the same color as the screen, for example chroma tape, so that you can remove them with a second keying pass.

The large number of markers in the example above comes from the fact that they were to be used for a tight head-shot during a 30-minute interview where the subject didn't want anyone except the interviewer and the DOP present. Therefore we had to make sure we had at least some markers visible at all times.



12. Avoid Unnecessary Spill
Keep the foreground as far away from the chroma screens as possible, since you'll have less spill to deal with. Make sure that all parts of the floor that might reflect chroma color onto the foreground are covered by non-reflective material such as black cloth.

It's up to you to keep each setup as far away from the chroma screen as possible, as people seem to be attracted to the big wall of color. It is also your job to check that the entire foreground has chroma behind it during the entire take. This is why rehearsals are so important: they give you the chance to spot potential problems that will force a setup adjustment.



13. Keep It Clean
Strive to keep the chroma screen as spotless as possible, and stop people from walking on it unnecessarily.



14. Get to Know Chroma Sampling and Codecs
Since chroma keying works on the principle of isolating one color, you would think that cameras would give you as much color data as possible. Unfortunately, in many circumstances they don't, especially when it comes to video. I won't get too geeky here, but you need to understand how digital video is stored.

The human eye is much more sensitive to the luminance/lightness of what we see than to the color of the world around us. That's why all but a few super-high-end cameras and formats immediately throw away at least half of the color information that is captured. This is bad news for keying, since the less color information you have, the harder it is to accurately isolate a color.







If at all possible, you want to capture a 4:4:4 image without any color compression, and then keep that color resolution intact by using an appropriate codec, at least until you have passed the keying stage. You should also strive to retain more than 8 bits per channel where the system captures it, such as the 10-bit color depth of DigiBeta.

Trying to key off DV footage is even harder, since the DV codec only stores a quarter of the color data, using 4:1:1 chroma subsampling. If you have no choice but to key from DV footage, try blurring the U and V channels before pulling a key, or use a keyer that does this automatically, such as dvMatte from dvGarage.
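Here's a small Python/NumPy sketch of what 4:1:1 subsampling does to the chroma channels, and of the blur-the-chroma workaround. The BT.601 conversion is standard; the function names and the blur amount are just illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter

def rgb_to_ycbcr(rgb):
    """BT.601 conversion from float RGB (h, w, 3) to luma plus two chroma planes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) * 0.564
    cr = (r - y) * 0.713
    return y, cb, cr

def subsample_411(chroma):
    """Keep only every 4th chroma sample horizontally and repeat it,
    which is roughly what the DV codec does to Cb and Cr."""
    return np.repeat(chroma[:, ::4], 4, axis=1)[:, :chroma.shape[1]]

def soften_chroma(chroma, sigma=1.5):
    """Blur the blocky chroma so the keyer sees smooth gradients instead of
    4-pixel-wide stair-steps along the foreground edge."""
    return gaussian_filter(chroma, sigma)

# On decoded DV footage the chroma is already blocky, so you would simply run
# cb and cr through soften_chroma() before keying; subsample_411() is only here
# to simulate what the codec threw away in the first place.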

Here's an example of the low horizontal color resolution of DV footage:




15. Frames Rather than Fields
Try to shoot progressive rather than interlaced, to avoid having to de-interlace the footage. If possible, shoot at double the frame rate in progressive if you need to go to interlaced later. Avoiding interlacing not only gives you a cleaner edge and saves time on de-interlacing, it also gives you twice the vertical resolution per captured image, which might come in handy if you have to up-res in post.

Interlacing is an evil compression technique that severely limits your options, and you should always try to avoid it, instead adding it at final output.
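For the curious, here's a tiny NumPy sketch of why a single field only carries half the vertical resolution, and why weaving fields back together breaks down as soon as anything moves. Illustrative only, not part of any real de-interlacer.

import numpy as np

def split_fields(frame):
    """frame: (h, w[, 3]) array. Returns (upper_field, lower_field), each with
    half the lines; in a real interlaced camera they are exposed 1/50 or 1/60 s apart."""
    return frame[0::2], frame[1::2]

def weave(upper, lower):
    """Re-interleave two fields into a full frame. Fine for static shots,
    but it produces comb artifacts as soon as anything moves between fields."""
    frame = np.empty((upper.shape[0] * 2,) + upper.shape[1:], dtype=upper.dtype)
    frame[0::2] = upper
    frame[1::2] = lower
    return frame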



16. BYOC: Bring Your Own Camera
Take lots of reference shots of locations, the lighting setup and other stuff that will help when you crawl back into your dark post-dungeon. You can never have too many reference shots!



Final Words
Avoid the temptation to think that problems on set can be "fixed in post." Everything that can be done in front of the camera should be done on set. Make sure the time allocated for postproduction is used to enhance the final outcome instead of fixing mistakes made during the shoot.

Also, be prepared to pull several keys and to use garbage mattes and core mattes. Remember: you are trying to extract the edges; everything else can be matted or roto'd!

- Jonas


2006-10-27

New AE Plugin: ZbornToy

Update: This plugin takes a while to figure out, so I asked the creator a couple of questions on the Adobe Forums. I've added his answers to three of my questions at the end of this post. Also, there are now some sample AE projects to get you started with the demo.

Here's a fresh new way to composite externally rendered 3D images in After Effects. The ZbornToy plug-in takes grayscale depth maps and magically lets you keep tweaking and changing many parameters from within AE.

In some ways, the technique is similar to Walker Effects' Channel Lighting, but with ZbornToy, not only can you change the lighting afterwards, you can also render with background refractions, cast caustic reflections onto other layers, and much more.
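To give a feel for the general idea such plugins build on (this is not ZbornToy's actual algorithm, just a hedged Python/NumPy sketch): derive surface normals from the grayscale depth map, then shade them with a light direction you can keep changing at comp time.

import numpy as np

def normals_from_depth(depth, strength=4.0):
    """depth: (h, w) float array, black = back, white = front.
    Returns unit surface normals estimated from the depth gradients."""
    dzdy, dzdx = np.gradient(depth * strength)
    n = np.stack([-dzdx, -dzdy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def relight(depth, light_dir=(0.5, -0.5, 1.0)):
    """Diffuse-only relighting of the depth map under a directional light."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(normals_from_depth(depth) @ l, 0.0, 1.0)   # per-pixel N·L shading

Change light_dir and you have, in effect, relit the render without going back to the 3D package, which is the core of the appeal.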


According to this discussion thread on ZBrushCentral, the rendering is super-quick as well.

Check out the ZbornToy gallery, and post a comment when you've tried the demo; I'm way too busy at the moment!

**************************
From Adobe Forums:
1. What passes do you need to render from your 3D software?

It all depends on what it is you want to do with it. ZbT does ultimately want at least a depth sequence (Z-buffer). It should go from black (back) to white (front)! From this it will create normals and is able to render 2.5D shadows, an occlusion approximation and all the transparency stuff. BUT it is still just 2.5D (relief). You can additionally, if you want, save out a surface normal sequence. Using it will skip a whole computation section of the plugin and speed things up again. It also adds a more accurate element, because ZbT has to recreate the normals from pixel to pixel, while the 3D application knows the normal before rendering it! If you want more complex shadows, you might want to render out a shadow pass as well. You can render out a surface color pass. You can render out texture passes (glossiness, translucency). But you should first be familiar with how the shading functions, because it is a real surprise to some artists. Glossiness is such a major player here that it nearly removes the need for specularity, reflectivity and a few more complex controls for diffusion. You just define the glossiness and have everything you want activated (like reflections of the environment turned on). A glossiness of 0 totally diffuses both specular highlights and reflections (and more). A glossiness of 1 gives you a sharp highlight and reflections, as if the surface was very glossy (perfectly glossy). It's logical. So even where you would need more complex passes in other packages, here you often just need to render out a glossiness pass. That's...about...I think that's it. I mean, you could come up with things you might need, which is part of the fun of it all, but that's really pretty much it.

2. Are those passes "standard" in all the major 3D packages?
Yes, they are. People just don't entirely understand the nature of depth images. And nowadays it just has to be 16-bit or floats. Depth images are really a render of the actual geometry. Whatever subdivision level you have used, however fine the polygons are on the geometry, that is how fine it will be in the depth image. So if you were just to use the ZbornToy on it, it would really be like a flat shader! It will also make the shadows accordingly! Now, because you do not need to render textures and complicated materials, which take 3D render engines minutes and potentially hours, you should truly invest the little extra in more subdivisions, which should not take the renderer much longer. I've worked on messiah:Studio and believed and still believe in the power of the package, because it has such a brilliant implementation of subdivision. I haven't been used to long render times, thanks to it, but now I'm of course entirely messed up. ZbT uses 1 second, 2 seconds, maybe 10 seconds for truly complicated stuff at 1K or higher. Maybe even 15 seconds here and there. Anyway...back to the question. It doesn't have to be pixel displacement, although that's nice (hehe), but it should be high enough. For the rest you can really use normal sequences to get fine and smooth details. If the difference between the geometry density and what the normals suggest is too big, it will show and it will most likely not be as pretty. When you encounter such a thing, increase subdivisions!

3. To what problem is ZbornToy a solution? Just tweaking lights during comping, plus quick refractions?
Ah yeah...to what ARTISTIC problem is the computer a solution?
Really, I mean, really, just think about it. Think about what a master painter would have thought about. You may come to a better answer to that question than I could. And I believe that answer is partially individual to everyone out there.
I could try even harder and squeeeez my brains to say something like: Time! (because it's a big time saver, for a whole bunch of rendering scenarios!). I could say ...eehhhtweaking lights during comping, plus quick refractions.... (hihihi..sorry). But this is nothing, really, if you begin to figure out the things you can do. That's all.


- Jonas


2006-10-11

Free Download: Beautiful Earth Animation Project

Here's a fully animatable Earth project complete with water reflections and moving clouds. If you move the Sun to the back of the planet, you'll even see the night lights of the major cities!

Requires Adobe After Effects 7.0 Professional.



Download the project file first and then download NASA's free textures:
Night texture (land_ocean_ice_lights_2048.tif)
Day texture (land_ocean_ice_2048.tif)
Cloud texture (cloud_combined_2048.tif)

Post a comment if you have any questions or just to let me know what you've used it for (I'm curious.)

Update: I've relinked the NASA textures since they had been changed.
- Jonas


2006-10-04

Write Your Own Plugin, The Way It Should Be

Here's a quick video tutorial showing how easy it is to create your own plugin in Final Cut Pro, created by Shane Ross at Little Frog in High Def.

After Effects' alternative, animation presets, can only span a single layer, and it's hard to build a user-friendly interface for an animation preset since there aren't any listbox or radio-button effects you can use to expose tweakable parameters to the user. The other AE option is JavaScript-based scripting, but that too is relegated to a few geeks, since scripts are so invisible in the AE GUI and so hard to write: you can't click around and record your actions like you can in Maya or even Photoshop.

In Final Cut Pro there's a framework called FXScript which lets you script plugins and automate the program, much like Maya's MEL. Both FCP and Maya let you record and then edit the scripts that the program generates, whereas AE forces you to start with a blank document, making the barrier to writing and tweaking your own scripts much higher.

Also missing from AE is a Maya-like shelf or palette in the GUI where you can quickly access your favorite scripts.

- Jonas


"It's Not HD" - First Moving Sample From the Red Camera

I've got a notoriously cranky colleague who always finds something to complain about when it comes to HD. I've been trying to discuss the HVX200 and other cameras that we have, but last time he claimed that:
None of today's cameras are HD!

When I asked what he meant, since many professional and even some prosumer cameras now have true HD capture sensors, the reason was apparently that he thought they all use too much compression. I bit my tongue, considering he was recently instrumental in buying over 30 DVCPRO25 and DVCPRO50 cameras (which, by his own reasoning, couldn't even be called SD, since apparently it's all about the compression and not the resolution...)

I won't even mention what the same colleague said about the Red camera, but so far his statement applies to their first test footage, which has just been posted as a torrent. It's a 15-second clip at 106 MB: 8-bit 24p QuickTime at 1024 by 512 resolution, compressed with the Motion JPEG A codec. Go get the torrent and keep it seeding until we get higher resolutions to marvel at!

- Jonas
