Rawfa wrote:
Regarding video, it would have been a smart move from Sony to have chosen a touchscreen LCD that allowed you to choose focus by tapping.
This is THE key advantage of the Panasonic range: it works, and it's an 'eased' movement of the focus point, too (starts slow, speeds up, then slows as it finds it; looks very natural).
My approach to video is completely different to Rawfa's; a bit of background first. The way Rawfa shot that lovely video (recording live sound via the Røde and having only one camera) makes a lot of work in post. One-camera shoots are traditional Hollywood, but are much more work than the method I use. As well, Hollywood records second-system sound: sound is recorded separately (the setup varies), then synced in post-production.
As well, it's clear that R. was 1–3 m away from the voice and the drum, and you can capture decent audio at those distances (but the on-camera mic means the perspective of the sound changes as the camera position changes). Humans are very sensitive to this; we will tolerate pretty much any visual chopping and changing (think of any MTV clip) but are disturbed by soundtrack changes.
As he said:
But in this particular case the guy called me for a photo shoot and I ended up making him a music video with LIVE sound...which was kind of hell, as each time I shot a different angle the music had a different duration (from 3 to 5 minutes). Editing was much harder than with the artist playing along to a master track in the background.
That’s one way.
What I do is record sound in the actual take I am recording vision for as well, on a separate recorder (I use a number of small, broadcast-standard recorders), and get the sound into the recorder via lavalier mics (those small ones you see newsreaders use, pinned to lapels) or via the recorders’ own mics (one of my recorders, the $179 Zoom H2n, records variable-width mid-side audio; others use a conventional X-Y pattern; more on this later). Because sound and vision are already in sync, and the sound is the actual sound, I have no problems in post.
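For the curious, the mid-side technique I mentioned decodes to ordinary left/right stereo with simple sums and differences, which is why the stereo width is adjustable after the fact. A minimal sketch (the function name and toy samples are mine, purely for illustration):

```python
# Mid-side (M/S) stereo decode.
# M = a mic facing the source, S = a figure-8 mic facing sideways.
# Changing the stereo width is just scaling the S signal.
def ms_to_lr(mid, side, width=1.0):
    """Decode M/S sample lists to (left, right); width scales the spread."""
    left = [m + width * s for m, s in zip(mid, side)]
    right = [m - width * s for m, s in zip(mid, side)]
    return left, right

# A pure mid signal (no side) decodes to identical L and R, i.e. mono.
L, R = ms_to_lr([0.5, 0.5], [0.0, 0.0])  # both [0.5, 0.5]
```

Setting `width=0` collapses the image to mono; values above 1 widen it. This is the whole trick behind the H2n's "variable width" control.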
As an aside, in Guy’s runway case, I would take a feed from the emcee’s audio, or record the live audio from the audience’s perspective (assuming the live audio is good; often it’s not); you can always get a feed from whoever is doing the sound for the show. I would record this audio with a recorder plugged into the desk itself; this is what the audience is hearing, after all.
Back to my approach: unlike Rawfa, I shoot multicam and use an old-fashioned slate (clapper board) at the beginning of the recording; this has many advantages. Assuming I am recording “live” (actual sound and vision recorded simultaneously) I have no post problems at all: I bring the sound and all cameras’ vision into FCPX, sync on the slate, then simply, while watching all cameras’ angles simultaneously, decide which angle I want the audience to “see” at any time. All this is non-destructive, and all of it can be changed.
If you are recording at an event, and cannot use a slate, there will always be sound and vision that can be manually synced (the sound of a drum beat, and its vision, for example). I prefer the slate simply for speed in post.
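That manual sync (line up the drum hit you hear with the drum hit you see) can also be automated by cross-correlating the two audio tracks, which is what tools like PluralEyes and FCPX's automatic sync do under the hood. A brute-force sketch of the idea, with toy sample data of my own invention:

```python
# Sketch: find the offset between two recordings of the same event
# (e.g. a clap or a drum hit) by cross-correlating their waveforms.
# Brute force for clarity; real tools use FFT-based correlation.
def best_offset(camera_audio, recorder_audio, max_lag):
    """Return the lag (in samples) that best aligns the two tracks."""
    def corr(lag):
        return sum(a * b for a, b in zip(camera_audio, recorder_audio[lag:]))
    return max(range(max_lag), key=corr)

# Toy example: the recorder track is the camera track delayed by 3 samples,
# so trimming 3 samples off the recorder's start brings them into sync.
cam = [0, 0, 0, 1, 0, 0, 0, 0]
rec = [0, 0, 0, 0, 0, 0, 1, 0]
lag = best_offset(cam, rec, max_lag=6)  # → 3
```

The slate still wins for speed: a sharp transient at a known moment gives the correlation (or your eye) one unambiguous spike to lock onto.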
This is the merest intro to shooting video. The biggest learning curve for the stills pro learning video is not the angles or the lighting; you have all that. It’s what the audience needs to see to tell the story you want to tell, and how to get the best realistic sound (sound that is perceived as real in relation to the vision you are showing). Sound differs from images in a fundamental way: it follows the inverse square law. Double the distance from the source and the audio is one quarter the intensity at the mic, and we are very sensitive to the stereo ‘image’ we are hearing. Getting the sound ‘right’ is the key to good video, yet almost without exception, beginning directors focus on image.
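The arithmetic behind that claim, for anyone who wants it (function names are mine, for illustration only):

```python
# Inverse square law: sound intensity falls with the square of distance.
# Doubling the mic-to-source distance quarters the intensity (about -6 dB),
# which is why moving the camera mic even slightly changes the sound so much.
import math

def relative_intensity(d1, d2):
    """Intensity at distance d2 relative to distance d1, same source."""
    return (d1 / d2) ** 2

def db_change(d1, d2):
    """The same change expressed in decibels."""
    return 10 * math.log10(relative_intensity(d1, d2))

relative_intensity(1, 2)       # → 0.25 (one quarter)
round(db_change(1, 2), 1)      # → -6.0 dB
```

A 6 dB drop per doubling of distance is why a lavalier pinned 20 cm from the mouth beats any camera mic two metres away.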
The learning curve for video is in the post-production editing programs; FCPX is an amazing program (I have been using FCP since FCP2) but there was a big learning curve moving from FCP Studio (FCP7) to FCPX. Learning how to edit convincingly is the hardest part of the additional skill set for stills photographers moving to video, IMHO.
Briefly: I use a Panasonic GX1, G6, and GX7, and an Oly E-M5 (the latter is my ‘steadycam’: I attach a monopod, hold it loosely, use a relatively wide-angle lens (usu. 34mm EFOV) and move like a ninja, and the footage is excellent and cuts perfectly with the rest). I use the other cameras (usually two others, sometimes three) on fixed tripods.
If I had shot Rawfa's video, one camera would be the front-angle 'wide' shot (musician in context), another on the closeup, one over the shoulder on the hands (high angle, longer lens), one side angle (standard height, the "viewer's" perspective), and the moving ninja one. Notice that's five angles, but only four cameras? I would have started all cameras and the audio recording, clapped the slate, and signalled to the musician to play, from the beginning of the piece to the end. I would repeat as necessary, shooting the reverse angle and the other angles Rawfa used on a second or third take, because these angles do not require strict sync (so the altered duration of each piece he mentioned above would not be a problem).
Whew: too long already, but perhaps you will get a feel for the immense potential complexity of adding video to the repertoire.