To embed a Flickr Slideshow in WordPress, see: http://wpbtips.wordpress.com/2010/07/17/gigya-2-flickr-slideshows/
I’ve recently spent a few hours working on quite a challenging technical issue for some friends who are planning a round-the-world sailing trip.
They’d like to send back 30-second “video postcards” from their trip via satellite phone, using less than one “unit”. These videos are to be uploaded to YouTube for distribution. So let’s crunch the numbers.
I looked up the unit cost for both Inmarsat and Iridium. Both quoted prices for units of 1 MB (megabyte). Video bitrates are usually quoted in kilobits or megabits per second.
1 byte = 8 bits, so the unit size is 8 Mb, or 8,000 kb.
The video length is 30 secs, so we have an available bitrate of 8000/30 ≈ 267 kb/s to be shared between audio and video. For reference, AVC-Intra (a pro camera codec) has a video bitrate of about 113,000 kb/s, Freeview HD averages around 8,000 kb/s, and even a high-quality MPEG-1 Audio Layer 3 (MP3) audio file may be 256 kb/s.
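The budget arithmetic can be sketched as a quick shell calculation (the 64 kb/s audio figure is the one settled on later; integer division rounds the 267 down to 266):

```shell
# Bitrate budget: one 1 MB satellite unit spread over a 30-second clip.
UNIT_KBITS=8000                          # 1 MB = 8 Mb = 8,000 kb
DURATION=30                              # clip length in seconds
TOTAL_KBPS=$((UNIT_KBITS / DURATION))    # overall budget, ~266 kb/s
AUDIO_KBPS=64                            # AAC allowance chosen later
VIDEO_KBPS=$((TOTAL_KBPS - AUDIO_KBPS))  # what's left for video
echo "total ${TOTAL_KBPS} kb/s, video ${VIDEO_KBPS} kb/s"
```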
So we need a very drastic encoder!
We don’t want vendor lock-in, so we need to use open-standard codecs. Advanced Video Coding (AVC, aka H.264) and Advanced Audio Coding (AAC) fit the bill nicely. Both of these are available in FFmbc. Most non-linear editors produce one track per audio channel – so we also need to create a stereo file from the two mono tracks.
First, this isn’t going to be HD. Reduce the raster size using a decent filter like Lanczos. Make sure the reduction is an integer division of the original, e.g. 1920×1080 to 480×270.
AVC only sends a full image once every x frames; by increasing x to 250 we drastically reduce the bitrate.
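To put numbers on that (assuming the 25 fps footage used for the test sequence later), a GOP length of 250 frames means a full frame goes out only once every 10 seconds:

```shell
# How often a full (I) frame is sent with a GOP of 250 at 25 fps.
FPS=25
GOP=250                              # x: frames between full images
CLIP=30                              # clip length in seconds
echo "one full frame every $((GOP / FPS)) seconds"
echo "a ${CLIP}s clip contains roughly $((CLIP * FPS / GOP)) full frames"
```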
The intermediate frames are built by referencing other frames. By increasing the number of frames that can be referenced, we get a better picture.
By throwing the kitchen sink at this, algorithm-wise, we can get even more gains: we allow the referencing algorithm its most exhaustive search, choose the most processor-intensive maths, and so on.
Audio is important – poor audio can ruin a video – so I started off looking at the audio. We can drop the sampling frequency to 32 kHz. This keeps voices intact and only removes the top harmonics of music. We can then encode it at 64 kb/s.
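As a sanity check on those audio numbers, 64 kb/s over 30 seconds is a modest slice of the unit:

```shell
# Audio cost at 64 kb/s for a 30 s clip, against a 1 MB (1000 kB) unit.
AUDIO_KBPS=64
DURATION=30
AUDIO_KBITS=$((AUDIO_KBPS * DURATION))   # 1920 kb
AUDIO_KBYTES=$((AUDIO_KBITS / 8))        # 240 kB
echo "audio: ${AUDIO_KBYTES} kB of the 1000 kB unit"
```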
This leaves us with a fixed video bitrate. I’ve never managed to get FFmbc to match the video bitrate to the requested bitrate, so I experimented with the input value to get an overall file size of 0.97 MB, just less than 1 unit.
#!/bin/bash
INPUT=$1
OUTPUT=$2
ffmbc -y -i "$INPUT" \
  -vf scale=480:270:0 -sws_flags lanczos \
  -map_audio_channel 0:1:0:0:1:0 -map_audio_channel 0:2:0:0:1:1 \
  -vcodec libx264 -vb 245k -maxrate 255k -minrate 235k -bufsize 1500k \
  -g 250 -bf 4 -refs 6 -partitions all \
  -me umh -me_range 128 -subme 8 -trellis 2 \
  -pix_fmt yuv420p -timecode 10:00:00:00 \
  -acodec libfaac -ac 2 -ar 32000 -ab 64k \
  -f mov "$OUTPUT"
The Test Sequence.
Test sequences need to match the usage. This is for talking heads and the odd landscape. To test the software, I shot a quick 35-second sequence consisting of:
- cutaway with detailed bridge and reflective building
- cutaway with detailed brickwork and reflective building
- piece to camera with shallow depth of field
- street scene with movement
- digital zoom into map
This was shot on a Canon 700D (1920×1080p25), which is probably representative of the type of camera in use. To model the audio, I added a rights-free instrumental track and a female voice reading Conrad.
I think it works quite well. YouTube accepts the file, and transcodes it to a very respectable video. The only sequence that doesn’t work is water – which I wasn’t expecting to work.
First, start Lightworks and create a new project; give it a memorable name and choose the same frame rate that you used in camera.
You’ll then be asked to import some files. Navigate to where you’ve stored your files and choose which ones to import. If the files are in a format that Lightworks understands, you can create a link, or you can transcode them to an edit format.
Click import and a “bin” is created (the terminology comes from 35mm editing, where you had bins of film). At this point, if you have lots of clips, you’ll probably want to change the names and add descriptions. A second bin is needed: click the ringed button, type in a name for the new bin and press return. The title will go green.
Double click on the first clip you imported and a viewing window appears. Play through the clip until you reach the point where you want to start using the video and press the ‘i’ key. Find the end of the video you want to use and press the ‘o’ key. Then click the marked button.
A sub clip appears in the workspace. Drag it into the new bin you created. Repeat for all other clips. Arrange in the order you want.
Click on the cogs on the bin title bar, select make edit. An edit appears on the workspace.
Skip this bit if you’re not interested in more advanced editing! To adjust colours, add more audio, titles, L-cuts, fades etc., double click on the edit and use the icons on the right of the viewer to open the timeline view.
Click on the export icon on the toolbar. Choose an output type (DVD, YouTube, MOV etc.) and a destination and away you go.
I’ve been building some nice kit to do some light painting in February with my photography club. The equipment is all bought from eBay and requires only a slight modification to be used in lightpainting. The Twitter link above shows some examples of lightpainting.
Colour Painting Torch
The easiest to make. You will need:
- CREE Torch – a low power, high brightness LED torch – cost about £5 with batteries.
- Elastic Band
- Coloured cellophane (Quality Street wrappers)
Use the elastic band to fix the cellophane over the torch. “Paint” buildings, models, etc.
The light orbs that you see in lightpainting photos are made using battery powered fairy lights. You will need:
- Battery operated fairy lights (£2 from eBay)
- Electricians tape
- 2 small lengths of choc block (or soldering iron or tape – any method that can electrically join two wires)
- 4 foot of 2 core cable (bell wire, speaker wire, mains flex …)
Bunch all the LEDs together and tape them. Cut the wire between the LED bunch and the battery pack and extend it using the 2-core cable. Beware that LEDs are polarised – you need to connect + to + and – to –. If it isn’t working, try swapping the connections around.
The long streaks of light are created using a light bar. For this, a strip of RGB LEDs is needed.
- A kit containing an RGB LED strip and Controller – you do not need a power supply (about £8)
- Electricians tape
- A garden cane
- A 9v battery clip
Connect the 9v battery clip to the short power flylead that came in the kit (red to red and black to black). Connect the flylead to the controller and the controller to the LED strip. Connect a 9v battery. You should now be able to control colour using the remote control.
Stick the LED strip and the control box to the garden cane and you have a light bar.
First off you need an intervalometer (if your camera doesn’t have one built in) and a tripod.
The intervalometer instructs your camera to take a photo every x seconds. For a 10-second clip at 25 fps, you need 250 images – so if you take a shot every 10 secs you’ll need about 42 mins of shooting.
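Those numbers fall straight out of the frame rate and the shooting interval:

```shell
# Stills needed for a 10 s timelapse at 25 fps, one shot every 10 s.
FPS=25
CLIP_SECS=10
INTERVAL_SECS=10
FRAMES=$((FPS * CLIP_SECS))                  # 250 stills
SHOOT_MINS=$((FRAMES * INTERVAL_SECS / 60))  # ~41 minutes of shooting
echo "${FRAMES} frames, about ${SHOOT_MINS} minutes"
```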
I set the camera to Medium JPEG as this creates 3088×2056 images. This allows you to crop to 2880×1620, which scales easily to 1920×1080.
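The crop works because 2880×1620 is both 16:9 and exactly 1.5× the size of 1920×1080 – a quick check:

```shell
# 2880x1620 is 16:9 (2880*9 == 1620*16) and exactly 1.5x 1080p.
[ $((2880 * 9)) -eq $((1620 * 16)) ] && echo "aspect ratio: 16:9"
[ $((2880 * 2)) -eq $((1920 * 3)) ] && [ $((1620 * 2)) -eq $((1080 * 3)) ] \
  && echo "scale factor: exactly 1.5x"
```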
The images need to have exactly the same settings or a significant flicker will occur, so the ISO, shutter speed and aperture need to be set manually and left fixed. Also, a DSLR opens the aperture when autofocusing, focuses, then closes it again – so manual focus is needed. When framing, remember you’ll lose about 5% of the image on each edge.
Once you have the images, check for camera-shake. Use the FFmpeg deshake filter to fix it.
ffmpeg -r 25 -pattern_type glob -i '*.JPG' \
  -vf deshake=-1:-1:-1:-1:48:48:0:4:64:0,crop=2880:1620:100:100,scale=1920:1080 \
  -vcodec libx264 -b 10M -bt 100k -pix_fmt yuv420p -r 25 -an -f mov timelapse.mov
This creates a YouTube-ready video file. Watch it at 720p or higher.
Does colour make photography more life-like? 22 Words investigates.
Fantastic blog post from Alexey Kljatov from Moscow about a rig that he’s built for photographing snowflakes. It’s based on a reversed Zenit lens (other cheap Soviet lenses are available) and a glass plate. The snow sits on the backlit glass and the lens rig is placed over the top. The colour is added later to the almost monochromatic image. Unfortunately, I don’t have room to store a snowflake photography kit on the off chance that it snows.
First, you need to know a friendly guard to let you in at 7am. Before it opens to the public, they’re happy for you to use a tripod, without which this shot wouldn’t be possible.
In order to get a focussed image from the foreground to the background, a small aperture is needed in the camera lens. This image is taken at about f/18.
In order to minimize noise in the image a low ISO value of 100 is used.
The camera is mounted on a tripod placed in the centre of the staircase (there’s a bannister you can use to line up the shot). A spirit level is then used to level the camera and lens.
The dynamic range of the camera (the range of brightnesses it can record from the darkest shadows to the brightest highlights) is less than the range present in the room. To fix this I took 3 exposures, of 1, 3 and 6 seconds. I then used Luminance HDR to align and blend the 3 images into one image with a higher dynamic range.
The final problem to overcome was lighting. The top of the image is lit by daylight and the bottom half by tungsten lamps. These are different colours of light – daylight is quite blue, tungsten is quite orange. I could have adjusted the white balance of each shot (tungsten shadows, daylight highlights), but I couldn’t be bothered, so instead I used DxO FilmPack to convert the oddly coloured picture to look like it had been shot on Ilford F Pan 25 film.
The first app is still in beta (i.e. it isn’t yet bug-free enough for a full release) but allows you to connect your camera to your tablet via a USB On-The-Go adaptor and USB cable, to control the camera via a live viewfinder view, and to review images taken on the camera.
It’s brilliant for doing those tricky things that are hard on a camera: zooming in to check that the part of the image you want in focus is in focus, and checking that the brightness range of the scene fits within the limits of the camera (i.e. the histogram) without losing data to clipped highlights or crushed shadows. It can also be used with the DSLR in video mode, where it’s great for doing focus pulls – moving quickly from one focus point to another.
The Photographer’s Ephemeris
I like taking sunsets and shots of the moon over buildings. This App allows you to position a pin and will show you the directions and times of sunrise, sunset, moon rise and moon set. You can then move the pin around to find exactly where you need to set up to capture the shot you want.
I got introduced to this at a work shoot in Geneva. It’s a really easy way to get property and actor releases. You fill in the details of the location on your tablet, and get them to “sign” the screen. This app then creates a PDF file with the legal release to allow you to use the images commercially. The text it contains is approved by Getty, iStockphoto etc. but can be changed to your own liking.
Please be aware that in the UK at least, you don’t legally need a release if you’re shooting in public (there is no legal expectation of privacy from photography in public), and the legal requirement is dubious at best for private situations. However, arse-covering means most agencies/employers etc. will want you to get a signed release.
Nice little GUI for doing some video compositing work, based on the OpenFX standard.
The video is in French, but it’s quite easy to follow from the screenshots they use.
PS Anyone know how they’re creating that screen grab video in Ubuntu?