/home/jeff/blog

Sucking away valuable moments of your life ...

Checking for Deprecated Wordpress Functions

One of the major pains of WordPress development (and one of the reasons this blog isn’t hosted on WordPress anymore) is its quickly changing API.

I’ve encountered issues where plugins have suddenly (and quietly) stopped functioning because a deprecated function call was removed from the WordPress API. I’m sharing my “solution” to this issue: a script (which can be integrated into a CI system) that scans your plugin and/or theme code and gives you a list of the deprecated functions you’re using, as well as where they appear in your code.
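My script isn’t reproduced here, but the core idea fits in a short Python sketch; the deprecated-function set below is a tiny illustrative stand-in for the real list, which lives in WordPress’s wp-includes/deprecated.php:

```python
import os
import re

# Stand-in set for illustration; pull the real list from
# wp-includes/deprecated.php for your target WordPress version.
DEPRECATED = {"get_settings", "wp_specialchars", "attribute_escape"}

def scan_tree(root):
    """Yield (path, line_number, function) for each deprecated call found."""
    pattern = re.compile(r"\b(" + "|".join(DEPRECATED) + r")\s*\(")
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".php"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as fh:
                for lineno, line in enumerate(fh, 1):
                    for match in pattern.finditer(line):
                        yield path, lineno, match.group(1)

if __name__ == "__main__":
    # Hypothetical path; point this at your own plugin/theme tree.
    for path, lineno, func in scan_tree("wp-content/plugins"):
        print(f"{path}:{lineno}: deprecated call {func}()")
```

A CI job can simply fail the build if the scan produces any output.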

Building Ganglia for OpenBSD 4.3

I recently had to build a modern version of the Ganglia monitoring system for an OpenBSD 4.3 firewall, which hadn’t been upgraded to a modern version of OpenBSD in quite some time. I documented the process, which I’m sharing here.

Preparing for and Recovering From Disaster

One of the greatest nightmares of digital cinematography and photography is the specter of data loss. The notion that your carefully planned shots or footage could disappear in a single instant can be earth-shattering, since reshooting may be impossible (or prohibitively expensive). The best defense is to be prepared: not only for the possibility that you may lose data, but also by safeguarding against that possibility through preventive measures.

Preparation

There are a number of things you can do to try to protect against data loss, both before and after shooting has taken place. Here are a few of them:

Don’t use dodgy or off-brand memory cards. This may seem pretty obvious to those who have been down this road, but really crappy cards tend to come from manufacturers with dodgy QA processes, so although you may get a good card, you’re just as likely to get a dud – which could take your data with it. I’ve found that “Amazon Basics” and “KomputerBay” cards have failed me pretty regularly, and even “Transcend” cards have been dodgy on occasion. I tend to stick with “SanDisk” branded cards when I use SD media, and there are a few decent manufacturers of CF media (Lexar, SanDisk, etc.) which, in my experience, produce quality cards.

Storage media doesn’t last forever. Every piece of flash media has a certain number of R/W cycles before it becomes unstable and/or unusable. After a period of time, you might want to start regularly replacing your media cards to avoid the possibility that errors will begin to occur. It’s also a good idea to low-level format the cards in between uses, which supposedly increases their lifespan.

A single physical copy of your product is a bad idea. If you just move files off to your laptop’s hard drive, you’re practically inviting something to destroy your media. A single SSD or spinning platter in a laptop is a prime target for an accident to wipe out your data. Ideally, a RAID (Redundant Array of Inexpensive Disks) setup provides a good tradeoff between inexpensive media and redundant storage. I built a pretty inexpensive one with the following components:

A single physical location is a bad idea. Consider off-site backup. If you don’t have the money to store your data in S3 or a similar service, consider using something like BitTorrent Sync, with a friend or two providing remote backup locations. If this isn’t feasible, periodically backing up to DLT or another tape format, then storing that tape off-site, may be useful. If you’re wondering why you need off-site backup, just remember that a single fire or natural disaster can destroy all of your hard work…

Camera-based writing solutions. Certain cameras (the Canon EOS 5D Mark III, for example) have multiple media card slots and can write multiple copies of the same media. This can help circumvent the tragic circumstance (to which I have fallen victim) of completing a shoot, only to find the captured data unreadable later on. To understand this, read more here. It should be noted that there are some performance limitations to this, but if you’re not shooting RAW video and your camera body can handle it, you should seriously consider it.

Recovery

There is a pantheon of free and open-source software suites which provide recovery of lost files, deleted files, destroyed partitions, etc. The true nightmare scenario would involve a piece of damaged media, from which the data cannot be extracted – but never assume this unless you have exhausted all other avenues of recovery.

  • PhotoRec/TestDisk (Open source, Linux/Windows/Mac/DOS/etc.). This is one of my favorites, although it may require some fairly in-depth technical expertise to fully exploit its potential.
  • Recuva (Freeware, Windows). Recovers deleted files.
  • Hiren’s Boot CD (Freeware, boot disc). Hiren’s is a classic recovery and utility boot disk which can be booted on any Intel-based computer. The download is free, and it has a very comprehensive suite of recovery tools. If you don’t have a copy of this disc hanging around your studio or house – why not?
  • Wondershare Photo Recovery (Freeware, Windows/Mac). A semi-commercial digital media file recovery suite.
  • iCare Recovery Free (Trialware, Windows). The trial has a 2 GB maximum recovery limit; beyond that, a license has to be purchased.
  • saveimg (Open source, Linux/Mac). Extracts JPEG images from raw disk devices. Requires some expert knowledge to use properly.
  • Foremost (Open source, Linux/Mac). A digital forensics tool that recovers files based on headers, footers, and internal data structures.
  • SanDisk RescuePRO (Trialware, Windows). SanDisk’s recommended recovery software.
  • ddrescue (Open source, Linux/Mac). A data recovery tool that copies data from one file or block device (hard disk, CD-ROM, etc.) to another, trying hard to rescue data in case of read errors.
  • Stellar Photo Recovery (Trialware, Windows/Mac). Another digital photo recovery suite; the trial shows what can be recovered, with previews.
  • PC Inspector (Freeware, Windows).
  • DataRescue (Mac).
  • Camera Salvage (Freeware, Windows).
  • PhotoRescue (Windows/Mac).
  • CF Card Recovery (Trialware, Windows/Mac).
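Carving tools like saveimg and Foremost generally work by scanning the raw bytes of a device image for known file signatures. A minimal Python sketch of the idea, assuming the JPEGs are stored contiguously (i.e., the filesystem wasn’t fragmented, which is the common case on a freshly formatted card):

```python
def carve_jpegs(data):
    """Return (start, end) byte ranges of JPEGs found in a raw byte string.

    Scans for the JPEG start-of-image (SOI) and end-of-image (EOI)
    markers; assumes each image is stored contiguously on disk.
    """
    SOI, EOI = b"\xff\xd8\xff", b"\xff\xd9"
    images, pos = [], 0
    while True:
        start = data.find(SOI, pos)
        if start == -1:
            break
        end = data.find(EOI, start)
        if end == -1:
            break  # truncated image at end of media; skip it
        images.append((start, end + len(EOI)))
        pos = end + len(EOI)
    return images
```

Run something like this against an image of the card (made with ddrescue, for instance), never against the failing media itself – every read of damaged media risks further damage.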

Hopefully, this will either make it easier to avoid potential data loss, or at least help to minimize the impact if and when you do end up losing data.

As always, good luck!

Composing for Aspect Ratios

Aspect ratios, simply put (for those who are unaware), are the ratios between the width and height of a single frame of video. Television used a 4:3 ratio (4 units of width to 3 units of height) until the popularization of “HD” television, which uses a “widescreen” 16:9 ratio. I’m not going to go through the entire history of aspect ratios in cinema, as there is a great retrospective available on Vimeo.

Most DSLR cinematography is done, by default, with a 16:9 ratio, as the maximum capture size for their video is generally 1920x1080, abbreviated as 1080p (to additionally indicate progressive, rather than interlaced, scan). It is relatively easy to enforce a “wider” aspect ratio by dropping lines at the top and bottom of the frame, which most watch-at-home film viewers will identify as “black bars” at the top and bottom of the frame. This does, however, introduce an interesting issue – that of composing inside that frame. That ends up looking something like this:

Black bars for different aspect ratios
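The height of those bars follows directly from the source frame and the target ratio; a minimal sketch in Python:

```python
def letterbox_bars(width, height, target_ratio):
    """Height (in lines) of each black bar when cropping to a wider ratio."""
    cropped_height = width / target_ratio
    if cropped_height >= height:
        return 0  # target is not wider than the source; no bars needed
    return round((height - cropped_height) / 2)

# A 2.35:1 crop of a 1920x1080 frame keeps about 817 lines,
# leaving bars of roughly 131 lines each at top and bottom.
print(letterbox_bars(1920, 1080, 2.35))
```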

Standard compositional rules generally make use of the two frame diagonals (top left to bottom right, and bottom left to top right) and both horizontal and vertical thirds. Most compositional choices for “balanced” frames rely on these invisible divisions to create a pleasing image, even though the rules are really guidelines, not absolutes. If you, as a cinematographer, decide to use an alternate aspect ratio, you cannot properly compose using the thirds or diagonals indicated by a common 16:9 guide or viewfinder; the result will be vastly skewed compared to the original on-camera image. I’ve seen DSLR cinematography forums suggest everything from using a black marker to denote the frame, to covering the segments outside the final crop with black tape. A more “modern” solution for Canon DSLR owners is using Magic Lantern cropmarks. If you’re shooting with another DSLR brand, you may be at the mercy of the firmware – or of any of the aforementioned hacks.

NOTE: I’d like to mention, at this point, that there are other compositional rules, including the golden ratio/mean/spiral/rectangle, so please don’t take the rule of thirds and diagonals to be the only important and pleasing ways to frame an image…

Besides attempting to find a way to follow common compositional rules, there is another side to aspect ratio: selecting the appropriate one for the story you are attempting to tell. Apart from the 16:9 ratio being the “standard format” for HDTV broadcasts and web series, it is much taller (i.e., it offers more headroom in shots) than CinemaScope (2.35:1) or some of the other aspect ratios. Depending on the types of shots you are using to compose your film, it may be advantageous to have the wider, yet thinner, view of the world which comes with “shorter” aspect ratios. Again, there is no “one size fits all” for aspect ratios, and the choice should be a conscious decision for the material you are filming.

An added advantage of using a shorter aspect ratio is that, because the camera body records the entire frame regardless of the target aspect ratio, additional “wiggle room” for recomposition is available in the editing and post-production process. If something isn’t framed exactly the way it should be, this can be corrected later by adjusting the vertical positioning of the frame within the crop. Most (if not all) NLE software offers vertical positioning, including Adobe Premiere, Sony Vegas, and Final Cut Pro, so you shouldn’t have any issues doing this. While I’m aware that the ideal cinematographer does everything in-camera (as I have vociferously advocated in the past) and does not rely on post-production to fix mistakes, this does allow for that possibility.

Aspect ratios additionally carry some subconscious baggage with them, due to the associations we have made with previous material filmed using them, much in the same way we associate film grain with high-end Hollywood pictures, even when, in many cases, that film grain was added back to the digital footage in post-production.

The quick takeaway is that you shouldn’t embark on a project without having made a conscious decision about the aspect ratio you’re using for that project, considered the ramifications and technical concerns of that decision (including making sure you can properly frame and compose shots in that ratio), and storyboarded and/or prepared your material with your chosen ratio in mind. It is yet another variable which can be tweaked to make an already good project that much better, and like all power, it comes with responsibility. So choose – but choose wisely, and as always, good luck!

On Documentary Filming

I recently had the pleasure of shooting for a documentary down in Birmingham, AL, about one of the people involved in the 16th Street Baptist Church Bombing in 1963. For those who don’t know, that event was one of the most important parts of the Civil Rights movement in the 1960s. The 50th anniversary of the bombing was this September (2013), so I had gone with a small film crew to cover the event, and interview some of the people involved.

I had never shot documentary footage, apart from some small controlled one-on-one interviews, so this was an interesting learning experience for me. I’d like to share some of the more important things which I learned over the course of my experience down there.

Batteries. You’re going to run out of them. I ran through four batteries over the course of a filming day – which I thought would be enough, but weren’t. If you have one thing which requires a strange and unique battery, you should have at least one spare, even if it’s new. Someone is invariably going to leave something on, chew up the battery, and leave you short a vital piece of equipment. If you’re shooting with a DSLR, try to get a multi-battery grip, so that you don’t have to change batteries as often.

One Shot. You only have one shot to get event coverage correctly, which means that, ideally, you should be covering from more than one angle, to provide not only options in editing, but some sort of redundancy. Otherwise, you can easily be left with a serious dearth of footage for what could be a vital part.

Focus and Calibration. Get there early, get set up, get everything metered and working properly before things start moving. You really can’t easily recalibrate during taping without potentially ruining footage. Did I forget to mention that you only get one shot with event footage?

Depth of Field: More is Better. You’re not shooting an art piece, you’re trying to cover something which probably only happens once, so you shouldn’t be shooting at f/2.8 or something ridiculous like that. Yes, you can sometimes take advantage of the hyperfocal distance of a lens to keep far-away objects in focus – but you should, most likely, find a fairly closed aperture to work with. This can be challenging in low light scenarios, where you’ll be fighting the specter of sensor noise at extremely high ISO levels when you close the iris more than a little.

Positivity. You need to be happy and in a good place when shooting people and interviews, especially in a one-on-one setting. Other people take certain cues from your body language and demeanor, and hostility will produce hostility. If at all possible, try to work on documentary projects which you like, or identify with.

Lens Changes. If you can avoid it, don’t. Wear cargo pants or a vest with lens pockets, much like most photographers, if you absolutely must change lenses. If you’re shooting a long distance, go with something like the Canon EF 70-200mm f/2.8L IS II USM lens, or if you are dealing with shorter distances, the Canon EF 24-70mm f/2.8L II USM lens. Carrying a few primes around will work well for static interview shots, but they’re going to be a real pain when you’re dealing with moving targets, no matter how quick you are with focus peaking assistance. If you’re shooting with a shoulder rig, you’re going to want an IS/VR lens, if you have one available – it cuts down on a lot of shudder and shake.

Bring a Spare. Got one lav mic? You might want to bring a second one. If there’s a small piece of equipment or accessory which, if missing, could screw you, it’s going to go missing. Especially when you’re far from home base and without the benefit of an equipment store.

You’re Going To Mess Up. This is pretty obvious – no matter how careful you are, something is going to get messed up. The trick is to try to make everything redundant enough that it doesn’t matter.

It’s Not About Your Equipment. Someone there is going to have a better camera, better lenses, and better everything than you have. Go far enough, and your rig is going to look bad to someone. Just remember, you’re there for a reason, so know your equipment, and use it as well as you can.

Rocking Out. If you’re interviewing someone who is rocking back and forth, bring it to their attention. It creates massive focusing, framing, and perspective issues. Don’t be intimidated; they can rock as much as they’d like, just as soon as you’re done filming.

Network. If you’re looking for an interview with someone, talking to some of the people around them may lead you to another interviewee who may produce even better results. There’s no cost associated with being polite.

Respect the Eyeline. For static interviews, don’t shoot up or down on someone’s eye-line unless you are trying to indicate something about that person. You should be even with their eye-line on initial shot setup. This is also a good idea when doing non-static interviews, when possible.

Make Two Copies. When you transfer the data off of your memory cards, make two copies, and if possible, don’t transport them together. Redundancy is the name of the game here, since even the best footage means diddly-squat if no one ever gets to see it.
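A sketch of what that transfer step can look like, with each copy verified by checksum so silent corruption is caught immediately (the paths in the usage example are hypothetical):

```python
import hashlib
import os
import shutil

def sha256(path, chunk=1 << 20):
    """Hash a file incrementally, so large video files don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for block in iter(lambda: fh.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def copy_verified(src, destinations):
    """Copy src into each destination directory, verifying every copy."""
    want = sha256(src)
    for dest_dir in destinations:
        os.makedirs(dest_dir, exist_ok=True)
        dest = os.path.join(dest_dir, os.path.basename(src))
        shutil.copy2(src, dest)
        if sha256(dest) != want:
            raise IOError(f"checksum mismatch copying {src} -> {dest}")
```

Usage might look like `copy_verified("/media/card/DCIM/IMG_0001.CR2", ["/backup/a", "/backup/b"])` – the two backup targets should ideally be separate physical drives.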

Bring a Sound Guy. I know, you don’t think you need one – you’ve got a lav mic, right? A sound engineer is half of your production, since you’re in charge of the visuals. Also, boom mic operators generally should be fairly competent sound engineers, to avoid issues later on in the post-production process – when possible.

Don’t Go For The Cheaper Chicken. When booking accommodations, don’t go for the least expensive hotel – there’s a good chance that it’s in a dodgy part of town, and that extra 10 dollars a night might buy you not only a bit of security, but might buy you a better night’s sleep. Protip: the cheapest hotel is generally right next to a) train tracks with 24-hour train service, b) an open-air coal mine, c) a condemned house with mysterious block parties at all hours of the night, or d) all of the above…

Get Stills. If you’re doing a documentary with a bunch of Ken Burns effect style picture montages, you should try to either take stills yourself, or have another camera operator capture some for you. If you can, attempt to take pictures of landmarks or buildings when they are not occupied, unless it specifically works in your storytelling to have them occupied or surrounded.

There are many, many more things to think about when it comes to filming documentaries and documentary footage – but hopefully, these bullet points will help someone else avoid some of the pitfalls I experienced on my first documentary shoot. Good luck!

Are You a Cinematographer or a Camera Operator?

There’s only a slight difference in the textbook definition between a cinematographer and a camera operator. Besides the slight variance in responsibilities (a cinematographer/DP can be responsible for several camera operators), there are some additional skills and aptitudes which play into the decision to try to be a cinematographer.

None of this is meant to downplay the skill and experience which make a great camera operator. Knowing your equipment, being able to choose the proper lenses and camera settings, being able to operate that equipment, being able to interpret the direction of both the director and the cinematographer, and being able to hold together a cohesive shot – these are all the hallmarks of a great camera operator.

At a much, much more basic level, there’s an art to cinematography, which transcends the physical manipulations of the equipment involved, and becomes more about creating a sort of separate reality; that of capturing a little slice of time and space, motion and stillness, light and dark, vibrant colors and subtle undertones. That’s the part of this that, to me, offers the most promising creativity and the most fulfilling, yet burdening, tasks.

It’s very easy to get lost in the minutiae of camera settings adjustments, equipment acquisition, troubleshooting and stabilization, among other demons – but the art is the derivative of those functions.

To properly illustrate this, I think of a concert I attended quite a few years ago. The guitarist, Vernon Reid, was (and still is) an amazing technical instrumentalist. I watched him dance over the semi-circle of effects pedals and boxes, his hands moving seemingly effortlessly over the six strings of his instrument. It struck me as astounding. We are all, in some way, capable of recognizing genius and immense skill, even if we cannot reproduce it or explain it in purely quantifiable terms. In that way, the body of his technical skills simply became a foundation for the art made by way of them. I wish I could explain it better – I only know that I could recognize it when I saw it. The man himself, with whom I had the pleasure of speaking for a few brief moments, was very humble and very “zen” about his skill and his art.

Due to my particular circumstances, I’ve been able to pick and choose projects of late, based specifically on their relative merits to me. As such, I haven’t had much experience acting strictly as a “camera operator”. There may be some hubris on my part, born of getting to perform the artistic duties of a cinematographer on the vast majority of the projects I’ve been involved with, but it has given me an affinity for the creative aspects of the job. For as long as I’ve been involved with cinematography, I have always assumed that being a “camera operator” is a step towards being a cinematographer: a way of honing the technical foundation of the craft so that the art can come later. It seemed like an “apprenticeship”, much as the old focus pullers became second-unit cameramen, eventually moving up to first units, then perhaps aspiring to become cinematographers one day…

The landscape, mostly due to the explosion of low-cost cinematography (fueled primarily by DSLR cinematography), has been shifting away from that old model. Most projects have one, perhaps two camera operators – and the primary camera operator is usually the cinematographer/DP. Many projects don’t even have a second camera operator – it simply isn’t required.

Ultimately, it depends on what your plans and/or revenue model are.

  • If you want to learn more about how to get your equipment to perform the way you want it to perform, you may do well working as a camera operator with a seasoned cinematographer, rather than holding out for a DP position on a project.
  • If you want to make a living behind a camera, you can’t really differentiate between cinematography and camera operator work – at least as long as you want to keep a steady revenue stream up. Even the best cinematographers can moonlight as strict “camera operators” when the pay is right.
  • If you want to work in a lower pressure environment, you might want to be a camera operator. Cinematographers are responsible for everything visual which goes on in a project, and it can be a lot of stress.
  • If you feel passionate enough about a project to want to be involved at any cost, you might sign on as a camera operator.

Even though there’s a lot of competition for certain spots on certain projects, we’re all on the same team; we all (for the most part) are interested in shooting great footage, and many of us are concerned about creating enduring works of art, wherever possible. We all have great potential for growth, so whatever your career and job choice, good luck!

Prime Lenses and Proper Depth of Field

Depth of field is a massively misunderstood “side effect” of iris size, and an ultimately useful storytelling tool (when used properly).

DSLR cinematography has spawned a large group of cinematographers dealing with largely light-insensitive camera sensors (usually producing fairly unacceptable noise at sensitivities greater than ISO 800). This has created a need for “fast” lenses – lenses with very large maximum apertures, featuring f-stop ratings like f/2.0, f/1.8, and even f/1.4 (there are even some f/0.7 lenses out there, but I’m sticking to the realm of affordable DSLR cinematography at the moment). Using firmware like Magic Lantern allows us to use focus peaking to exploit manual-focus and non-EF-mount lenses (for those Canon-philes among us), bringing down the effective cost of shooting “fast” glass. I have some M42 “Pentax screw-mount” lenses which open to f/2.8 – acquired for less than 10 USD each, plus a 7 USD M42-to-EF adapter ring. If you’re interested in more information on focus peaking, check out how it works.

So, you’ve beaten high-cost DSLR and cinema camera body manufacturers by using “fast” lenses, right? Well, not so fast, there… You have to consider the side effect of fast lenses and wide apertures: the shallow depth of field. It’s both a blessing and a curse; we tend to associate shallow depth of field images and video with expensive equipment (rightfully so, in most cases) and a more personal type of image, but those lenses are not universally usable “wide open”. Many shots come off looking amateurish and ill-composed, when half of the subjects are out of focus, because they’ve exceeded the edges of the “sharp” area in the focus range.

(There’s also lack of lens sharpness and chromatic aberration to consider, both of which are generally present, to some degree, at the widest aperture settings of most lenses. That’s a very intensely technical discussion, however, and will probably be reserved for a future posting.)

When considering the depth of field you want, consider the subject (or literal focus) of the shot, and figure out how much of the shot needs to be in focus to properly tell your story. The depth of field is very important for exposing the mise-en-scène in the way which best represents both the explicit visuals and the external representation of internal character development and exposition. If your character needs to “pop” out of the scenery, a narrow DOF is perfect – but if they are to appear as a figurative cog in the machinery of the world, you’re probably going to want the widest DOF (approaching infinity) you can get. When widening the DOF like this, you’ll most likely have to compensate for the decreased light hitting your camera sensor by boosting the ambient lighting or adding additional lighting.

There are a number of tools for figuring out the proper DOF for a certain shot, based on the camera/sensor, lens focal length, and distance to primary focal point.
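These calculators generally implement the standard hyperfocal-distance approximation, which can be sketched in a few lines of Python. The circle-of-confusion value here is an assumption (~0.03 mm is a commonly quoted figure for full-frame sensors):

```python
def depth_of_field(focal_mm, f_stop, subject_mm, coc_mm=0.03):
    """Near/far limits of acceptable focus (thin-lens approximation).

    coc_mm is the circle of confusion; 0.03 mm is a commonly quoted
    value for full-frame sensors (smaller for crop sensors).
    """
    hyperfocal = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm
    near = hyperfocal * subject_mm / (hyperfocal + subject_mm - focal_mm)
    if subject_mm >= hyperfocal:
        far = float("inf")  # everything out to infinity is acceptably sharp
    else:
        far = hyperfocal * subject_mm / (hyperfocal - (subject_mm - focal_mm))
    return near, far

# A 50mm lens at f/2.0 focused at 3m keeps well under half a metre
# of the scene acceptably sharp.
near, far = depth_of_field(50, 2.0, 3000)
print(round(far - near))  # total depth of field in millimetres
```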

There are more of them, but if you specify the variables, you can adjust the aperture f-stop to the widest setting which properly fits the scene and visuals which you’re attempting to achieve.

Like anything else, this isn’t strictly an instructional A-to-B guide on how to achieve stunning visuals; it’s simply an attempt to share some of what I’ve learned, and to help share the knowledge and tools to exploit your own inner artistic vision. Good luck shooting!

To Post or Not to Post

Even though a good portion of the work is complete when you press the shutter button or stop rolling – the planning and execution are done – there’s still a final step (or series of steps) to bring that artistic effort to a presentable format.

Many people shoot with default (or very close to default) settings on their camera or videocamera, and do simple “post production” by simply cropping or cutting whatever comes out of their camera body. This is a very “purist” way of going about shooting, but can also rob your end product of some of the polish and slight adjustments which can take good pictures or footage and make them into something extraordinary.

All of that being said, there’s a limit to the types and amounts of post-production which can and should be used, much in the way that overproducing a musical album can result in a homogeneous and lifeless product, robbed of the “soul” and variances which make it unique. You are the only person who can decide how much, if any, post-processing is right for your project or other artistic endeavor.

RAW versus JPEG/H.264

With DSLR cameras being used for both photography and cinematography, there are places where the “processing” aspect can be pushed off into the post production process.

For photography, RAW-format pictures can allow recovery past some of the previously accepted limits of what would normally be considered lost or unusable. Effectively, RAW lets you make the decisions which are normally pushed off onto your camera’s processor (the DIGIC processor(s), for all of you Canon users). It adds a step to the post-processing workflow, in that you’ll need a piece of software like Adobe Lightroom for Windows/Mac users or, if you’re a Linux user like me, Raw Studio. It also adds a significant amount of time to the photographic post-processing workflow, since there are a number of additional parameters which can be adjusted – adjustments which would normally result in image degradation, but which can be made to RAW images with little or no deleterious effect. If you’re a Magic Lantern user, you can also experiment with “dual ISO” images, which can achieve enormous dynamic range through sensor tricks – but which add another step before RAW processing: running a CR2HDR binary. (If you’re interested in trying this out, but don’t want to compile a CR2HDR binary for Windows or Linux, I have binaries available here.)

For cinematography, there’s a lot more to think about. Magic Lantern users with Canon 5D Mark III bodies (and some others, as well) have the option of shooting RAW video with some very fast CF cards. This adds a substantial number of post-processing steps before your NLE (non-linear editor) of choice enters the picture. A pretty wide swath of applications now support the Magic Lantern RAW format, including RAWanizer, but your best bet is to get involved with the Magic Lantern forum if you want to begin working with a RAW workflow. The other serious disadvantage is storage space. Standard H.264 video, which is what most DSLR camera bodies shoot, has issues with digital artifacts in certain scenarios, as well as some dynamic range issues (some of which can be mitigated by using flat color profiles like Cinestyle, but I’ll leave that for later) – but it produces fairly compact files, even at 1080p/24p (1920x1080, 24 fps progressive) resolution. RAW data can take upwards of a gigabyte of space every 15-20 seconds. Your storage media and backup solutions have to be able to deal with that, so consider it seriously before deciding to move a project to that format.
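The arithmetic behind that figure is a straightforward back-of-the-envelope calculation (Magic Lantern records 14-bit sensor data; actual recording resolutions and overheads vary, which is where the 15-20 second figure comes from):

```python
def raw_video_rate(width, height, bit_depth=14, fps=24):
    """Approximate uncompressed RAW data rate in megabytes per second."""
    bytes_per_frame = width * height * bit_depth / 8
    return bytes_per_frame * fps / 1e6

# Full 1920x1080 at 14-bit, 24fps is roughly 87 MB/s, i.e. about one
# gigabyte every 11-12 seconds before any container overhead.
print(round(raw_video_rate(1920, 1080)))
```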

White Balance

It used to be accepted that white balancing (determining where “true white” lies in an image, compensating for varying light color temperatures) had to be done “in camera”, before shots were taken. There are now a number of tools which allow it to be done in post-processing.

Photographers can do this when shooting with RAW camera images very simply, as most RAW image processors have a simple white balancing tool. It’s still possible to do this with processed JPEG images, but with the potential for losing some image data. Depending on how comfortable you are with straying from the “truest possible image”, you can decide where you feel comfortable dealing with white balancing.

Cinematographers have a pretty vast array of white balancing and color correction plugins available for their NLE of choice. I’m most familiar with Adobe Premiere, so I’d mention Fast Color Corrector and Red Giant Colorista II, both of which offer pretty decent white balancing capabilities, among other things. If you’re concerned about render speed, Fast Color Corrector seems to win out against Colorista II, but Colorista II has some additional parameters and capabilities which far outstrip what FCC has to offer. There’s also software like Adobe Speedgrade and Blackmagic Design Davinci Resolve, both of which offer white balancing, color correction, and color grading capabilities. There are a few threads comparing the two of them.
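None of these tools publish their exact algorithms, but the basic idea behind automatic white balancing can be illustrated with the classic “gray world” heuristic: assume the scene averages out to neutral gray, and scale each channel accordingly. A toy Python sketch (real plugins are considerably more sophisticated):

```python
def gray_world_balance(pixels):
    """Gray-world white balance on a list of (r, g, b) tuples.

    Scales each channel so its mean matches the overall mean,
    on the assumption that the scene averages to neutral gray.
    Returns corrected pixels as float tuples.
    """
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    target = sum(means) / 3
    gains = [target / m for m in means]
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]

# A warm (reddish) cast gets pulled back toward neutral:
corrected = gray_world_balance([(200, 150, 100), (220, 170, 120)])
```

After correction, the per-channel means are equal, which is exactly the “neutral average” assumption made explicit.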

The important distinction is whether or not you want to spend the time to properly white balance your images or footage before you shoot. It’s not always conducive to your shooting workflow to do so, and if this is a factor, you should seriously consider some of the post-processing options. It’s obviously to your greatest advantage to bring the most accurate input images/footage into your post-processing workflow, but sometimes it’s not pragmatic to take a completely purist attitude in this regard.

Exposure

We’ve all been bitten by changing light conditions, unexpected events, or incorrect metering producing images or footage with exposure problems. If you’re shooting RAW (or using Magic Lantern’s Dual ISO hack), this is an ideal place to use post-processing to correct these issues.

We would all like to properly expose everything (unless we have a particular artistic reason for doing otherwise – in the art of photography or cinematography there are exceptions to every rule, rules were made to be broken, and so on), but taking a post-processing detour to correct such things is generally preferable to re-arranging a shoot.

Special Effects

I’m not a big fan of post-processing special effects, mainly because I am a proponent of the idea that it takes some of the artistry out of pulling off those same effects “in camera”. I do understand that there are certain effects which would be either very difficult or downright impossible to achieve without the use of computer generated effects – but there are a lot of things which can be accomplished without that.

Lighting is something I’ve seen done in post, which generally baffles me, since it’s pretty simple to execute properly in camera, given a little time and effort. Rescuing images or footage with lighting problems might be a good use for it, but it’s probably not something I would recommend for everyday editing and processing.

For example, having someone “eaten” by light can be accomplished for a cinematographer, simply by adjusting the fill light to blow out the whites, then opening the iris of the lens (using a focus pulling kit, or just a spare hand, for the less equipment-heavy among us) to allow the fill light to “eat” the remainder of the image. That can be done without any specific post-processing.

Flat Color Profiles

This is pretty much for cinematographers, since photographers could simply take RAW photos – and cinematographers shooting RAW should probably ignore this section, as well.

There are quite a few “flat color profiles” available for DSLR (and other) bodies, such as the aforementioned Technicolor Cinestyle profile and the Flaat Picture Style. These work by effectively increasing the dynamic range being captured by the camera by storing the values differently than the stock picture profile.

I highly recommend shooting with one, if you’re not going to make the leap to shoot RAW. It does add the additional step in post-processing of having to adjust the blacks/whites and/or applying a corrective LUT (look up table) to adjust the end product to look less “washed out”. There are LUTs and plugins available for virtually every NLE out there right now – and I suggest learning how to do this. The improvement in the end product is substantial for relatively little time investment.
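As a rough illustration of what that corrective step does, here’s a hedged sketch in Python. The logistic S-curve here is purely an illustrative stand-in – real corrective LUTs ship as measured lookup tables (e.g. .cube files) matched to a specific profile like Cinestyle – but the mechanics of building and applying a 1-D contrast LUT to “un-flatten” midtones are the same.

```python
import math

def build_lut(size=256, strength=6.0):
    """Build a 1-D lookup table applying a logistic S-curve: midtone
    contrast is raised, which is roughly what a corrective LUT does to
    'washed out' flat-profile footage. (Illustrative stand-in only --
    real corrective LUTs are measured tables, not analytic curves.)"""
    lo = 1 / (1 + math.exp(strength * 0.5))    # curve value at input 0
    hi = 1 / (1 + math.exp(-strength * 0.5))   # curve value at input 1
    lut = []
    for i in range(size):
        x = i / (size - 1)
        y = 1 / (1 + math.exp(-strength * (x - 0.5)))
        lut.append(round(255 * (y - lo) / (hi - lo)))  # rescale to 0..255
    return lut

lut = build_lut()
# Applying the LUT is just a table lookup per 8-bit channel value:
flat_pixel = (110, 128, 146)                 # low-contrast midtones
graded = tuple(lut[v] for v in flat_pixel)
print(graded)                                # spread widens around middle gray
```

The lookup itself is trivially cheap, which is why NLEs can apply LUTs in real time during playback.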

Audio Post

This is another section just for cinematographers.

The audio which is recorded with footage is generally “reference audio”, which is meant to be replaced by actual audio, generally recorded with something like a Tascam DR-40 or Zoom H4n external recorder and a shotgun microphone. It’s far outside the scope of this to explain the majority of these technologies and techniques, but suffice it to say that they produce far cleaner and more usable recordings than on-camera audio (even when using augmented on-camera microphones, etc). If you can afford it, use external audio, and have an experienced audio engineer perform some basic post-production on your audio.

If you can’t afford it, or don’t have an audio engineer, there are a few basic steps to get cleaner audio.

1) Don’t use the microphone built into the camera. It sounds pretty awful, and is noisy. Get a lav microphone for interviews, or a directional hot-shoe mounted external microphone for anything else.

2) Adjust the audio levels manually. Automatic level adjustment is going to be terrible. It’s meant to avoid clipping, but it tends to make the noise floor (the level of background noise) jump around as the camera readjusts the audio recording level, so it may make your life significantly more difficult when it comes time to adjust the footage later. Adjust it to the loudest sound you’re going to be recording – then perhaps a click lower, for safety.

3) Respect the inverse square law. For those unfamiliar, sound intensity falls off with the square of your distance from the source – the same inverse square law which is used to figure light source brightness, working out to roughly a 6 dB drop for every doubling of distance. The closer you are to a recorded sound, the cleaner and better the source audio will be. Every object which a sound hits will produce a “reflection” (much as light does). If that reflection is as loud as or louder than the source sound, you’re going to get a pretty terrible sounding audio recording.

4) Get the cleanest audio possible. Try to cut down on external noise sources. Never assume that you can clean something later on; you should be trying to get clean audio. If something has to be re-taken to avoid a lawnmower being run in the background, then you should seriously consider doing it. It’ll save you from pulling out your hair.
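The fall-off described in point 3 can be put into numbers with a small sketch in Python. The free-field point-source model here is an idealization – real rooms add reflections on top of it – but it shows why halving the mic-to-subject distance matters so much.

```python
import math

def spl_drop_db(d1, d2):
    """Change in sound pressure level (dB) when the microphone moves
    from distance d1 to distance d2 from a point source in free field
    (idealized model: no room reflections)."""
    return 20 * math.log10(d1 / d2)

# Doubling your distance from the source costs about 6 dB of level:
print(round(spl_drop_db(1.0, 2.0), 1))   # -6.0
# Halving the mic-to-subject distance buys the same 6 dB back:
print(round(spl_drop_db(2.0, 1.0), 1))   # 6.0
```

Six decibels of extra signal over the (constant) room noise is often the difference between usable dialogue and a muddy recording, which is why boom operators work the microphone as close to the frame line as possible.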

I’m sure that this isn’t a comprehensive list, but hopefully this will help everyone get started with deciding what, if anything, to relegate to their post-processing workflow. Good luck!

Skill in the Age of Instagram

| Comments

I had started a post a few months ago, which I had tentatively titled “Approximating Skill”. It was a fairly scathing indictment of what I had referred to as the “Instagram generation”.

As anyone who has been following my recent posts here knows, I attach a lot of importance to the notion of “skill”, especially in relation to art forms where there are popular misconceptions regarding the ability to “buy” your way into a particular skill-set.

In “Skill vs Gear”, I had tried to explain that, no matter how many toys or additional pieces were added into any mix, the most important variable is the skill of the operator/artist in control of the production. It seems like a fairly simple thing to understand, but in the muddied world of extensive post-production and devalued skill-sets, that common understanding is constantly called into question.

Take Instagram, for example. It’s a fairly simple concept, being a marriage between Twitter-style hashtags (don’t get me started on that – we should just call it U+0023 and get it over with), social networking, and cameraphone photography. (I happen to particularly loathe cameraphone photography, except for the ubiquitous and universally available nature of cameraphones.) The issues, as far as I’m concerned, begin to arise in the dual areas of devaluation and lack of appreciation.

Devaluation. This comes into play when every Instagram (or insert other type of skill-sapping or replacing app here) user begins to assume that they, yes they, are capable of producing professional results. Malcolm Gladwell famously claimed that it takes 10,000 hours of practice to master a given skill-set. I prefer to refer to this chart (ignore some of the strange and inappropriate labels – the sentiment is the important part). As perceived self-skill increases, valuation of others’ skill-sets decreases. This explains much of the reluctance to hire professional photographers and cinematographers/videographers, or the tendency to value their skills so little that an insulting amount of compensation is considered “reasonable” for their time and effort.

Lack of appreciation. Autotune and Instagram are both partially to blame for this – but it seems as though actual skill is barely recognizable anymore, to a vast swath of the population, for any given skill-set. I do understand that the Dunning-Kruger Effect means that any person is going to assess their own skill-set incorrectly. This, however, reaches far beyond that conclusion. After having been subjected to hundreds, if not thousands, of blown-out and ill-colored photos, and to far too many songs which have been pitch corrected beyond human recognition, the latest generation has mostly lost the ability to differentiate between “faked” output and true skill. I recently had an entire photoshoot (which I dutifully handed over to the photographed party) blown out, miscolored, and made to look like a bunch of terrible Instagram photos – all, supposedly, in the name of appealing to the younger generation, as they “appreciate” things that look like that. As far as faked output goes, it’s not limited to amateurs; big budget Hollywood studios have been exploiting basic digital correction to make everything look the same – the Instagramification of movies (that article also points out the annoying “shaky cam” stuff that I had been discussing in my earlier post). In the end, skills fall along the standard/normal distribution curve, even if the skills represented by the vertical axis may be logarithmic in scale.

The Hipster Effect. I realize that I only counted two areas earlier in this post, but this would lack something if I didn’t mention the effect that the “Hipster” culture has had on perceived skills. Rather than reiterate every terrible thing about Hipster culture, I’ll let this guy do it for me. I don’t know if I can explain it better than this: there is no such thing as ironically bad music or photography. The misconception that there is has very deeply damaged the ability of the average hipster to properly appreciate music and photography, since they are searching for ironically bad things rather than attempting to appreciate skill. Crummy audio recordings, “vintage looks”, and other gimmicks serve to hide, rather than accentuate, the skills involved in the production of the underlying art.

Processing/Post-Production. When you do not require initial skill to create something, but can merely “shop” your way out of any situation, the underlying skill is devalued, and sometimes forgotten. I’ve been trying to emphasize the value of learning to create effects and the vast majority of the cinematic treatment of the mise-en-scène, rather than simply using a series of post-production tools to bend the footage to my will. This has more to do with the propensity for forgetting basic cinematic technique which comes with foregoing the steps involved with figuring out how to do this work on set. The real damnation seems to come when there is no actual knowledge of the underpinnings and workings of the art, and a novice decides to use post-processing exclusive of skill. This is the disease which is Instagram, Autotune, and Photoshop, in a nutshell.

The only way to fight this is to not concede defeat. If we are actually artists instead of simple camera jockeys, why shouldn’t we be pushing for our craft and skill-set to receive the recognition which it deserves? I understand that some ground, in terms of general appreciation, has already been lost – but hopefully we can keep the situation from getting much worse through education and perseverance.

The Age of DSLR Cinematography

| Comments

DSLR cinematography (the practice of cinematography using relatively inexpensive DSLR camera bodies, which were originally purposed for still photography) has been enjoying a sort of miniature renaissance over the last few years.

I was pleasantly surprised to find out that Shane Carruth’s new movie, Upstream Color, was shot entirely on a DSLR body. The information I’ve been reading indicates that he used a Panasonic GH2 DSLR body, with a few lenses, including the Rokinon 85mm f/1.4 (which I highly recommend). It scored at Sundance, and if it hadn’t been for a few “behind the scenes” photos, it might not have been quite that apparent.

Of course, Carruth also ended up using some Voigtlander glass, which is by no stretch of the imagination cheap/affordable for cash-strapped indie filmmakers. There’s an interesting write-up on the whole thing at EOSHD.

There’s a reason why I bring up “Upstream Color” – and it’s a point that I’ve been trying to make, off and on, for the past year or so. That film grossed over 300k dollars at the box office, and was shot on a small, inexpensive camera body. Still, it produced stunning visuals and did not seem to suffer from most of the “fatal flaws” that most seem to ascribe to DSLR cinematography, in general. The point, boiled down to its most essential component, is that the most important piece of equipment you’ve got is between your forehead and your nose. You can’t buy your way into it, and you can’t just assume that if you own it, you can shoot as well as cinematographer X who has the current successful movie on the big screen.

Whether or not you’re a big fan of “Act of Valor” or the “Fast and Furious” franchise, you can still appreciate Shane Hurlbut’s extensive use of Canon EOS 5D mk III bodies. He has had a series of posts on his blog praising not only the relative inexpensive nature of the 5D bodies, but also their versatility.

For every person who says “I shoot on the RED ONE” and produces sub-standard output, or bemoans not being able to afford an ARRI which would “really make a beautiful movie” – I call foul on that entire argument. Even contending with a more limited dynamic range (a common DSLR problem), a top resolution of 1080p for shooting, and mostly commodity lenses, Carruth managed to produce something beautiful. It’s not down to the equipment you buy, folks, it’s how you use it.

Good luck.