Panopticon

Randall Maas 8/6/2014 8:17:27 PM

I have a small pile of "interesting ideas" that I'd like to try someday, time and motivation permitting.

One of these fun ideas would be to create a panopticon using a bunch of cameras. For example, using several cameras watching the yard (i.e., to monitor my robot), and combining all of them into a single "panoramic" image of the whole yard. Or creating a ring of cameras pointing outward, maybe mounted on a car or balloon, to capture wide horizons.

Broadly speaking, there are two kinds of panopticon here. One is a bird's-eye image, looking down on the ideal map. The other is a single point of view, looking out from a center. The latter is the one that is more interesting to me at the moment.

The Conceptual design

The conceptual design is moderately simple:

  1. Image Registration: Figure out which parts of the two images line up; this ranges from the simple (as is used with star charts) to using FFTs (see the sketch after this list). Doing it by hand or brute force is seldom much fun.
  2. Figure out how to warp camera images so that they have similar perspectives. This is a bit trickier, as it's "arbitrary" - lots of different approaches will work. So it's a matter of picking a suitable one.
  3. Figure out the intermediate pixel sizing for the target image and construct a buffer to hold the target image.
  4. Figure out the "location" in the big buffer for each image. I'd say pick four "corners" of the camera image, and the respective points in the big buffer.
  5. Warp each image to its portion of the big buffer. Stretch goal: give priority to the image that retains the most detail in that region.
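
Step 1 already has a ready-made building block in OpenCV: phase correlation, which uses the FFT to find the translation that best lines two images up. Here is a minimal sketch, assuming two same-sized camera frames and placeholder file names:

    # Minimal sketch of step 1 (image registration) using OpenCV's FFT-based
    # phase correlation. It only recovers a translation between two views,
    # which is enough to line up fixed, roughly-parallel cameras.
    import cv2
    import numpy as np

    def estimate_shift(img_a, img_b):
        """Estimate the (x, y) shift that best aligns img_b with img_a."""
        # Phase correlation wants single-channel floating-point images.
        gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY).astype(np.float32)
        gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY).astype(np.float32)
        (dx, dy), response = cv2.phaseCorrelate(gray_a, gray_b)
        return (dx, dy), response   # response is roughly a confidence score

    if __name__ == "__main__":
        a = cv2.imread("camera0.png")   # placeholder file names
        b = cv2.imread("camera1.png")
        (dx, dy), conf = estimate_shift(a, b)
        print("camera1 is offset by (%.1f, %.1f) px, confidence %.2f" % (dx, dy, conf))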

All approaches will distort the image, and this is no different. It's just a simple start. The advantage here is that most of this processing isn't needed for every frame. Since the cameras are mounted, steps 1-4 need only be done every (say) few seconds or minutes. Each frame still needs to have its pixels copied to the output buffer; the sketch below shows that split.
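
To make that concrete, here is a sketch of steps 2-5 using OpenCV's perspective transform. The corner coordinates and buffer size are made-up placeholders; the point is that the transform is computed once in the setup phase, and only the warp runs for every frame:

    # Sketch of steps 2-5, assuming fixed cameras. The corners below would
    # really come from the registration step (or from clicking them by hand).
    import cv2
    import numpy as np

    # --- setup phase: redo only every few seconds/minutes --------------------
    cam_corners = np.float32([[0, 0], [639, 0], [639, 479], [0, 479]])        # step 4: corners
    buf_corners = np.float32([[100, 50], [820, 80], [800, 560], [120, 540]])  # in camera & buffer
    H = cv2.getPerspectiveTransform(cam_corners, buf_corners)                 # step 2: the warp
    BUF_SIZE = (1600, 900)                                                    # step 3: (w, h)

    # --- per-frame phase: just copy/warp the pixels into the output buffer ---
    def render_frame(frame, buffer):
        # Warp this camera's frame into its portion of the big buffer (step 5).
        warped = cv2.warpPerspective(frame, H, BUF_SIZE)
        mask = cv2.warpPerspective(np.ones(frame.shape[:2], np.uint8), H, BUF_SIZE)
        buffer[mask > 0] = warped[mask > 0]
        return buffer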

Stretching it to get better detail

I mentioned in step 5 giving priority to the most detailed image... What does that mean? One camera might capture part of the scene in sharp detail while another has it out of focus, or too small. So it's better to use the camera that has the higher quality image (at least, higher quality in that area).

And how to do it? Well, it's conceptually simple:

  1. Convert the image to grayscale. This is also a necessary step in the image registration process.
  2. Blur it slightly, to reduce noise.
  3. Perform edge detection. Like the FFT, this is another common tool, typically done with convolution filters.
  4. Blur some more, and
  5. Count the number of pixels in the region above a threshold

The count, relative to the size of the area in the source image, is a measure of how detailed that area is. Then it's a matter of picking the higher-scoring one.
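
Here is a sketch of that scoring, as one possible translation of the five steps into OpenCV (using a Laplacian convolution as the edge detector; the blur sizes and the threshold are placeholder values to tune):

    import cv2
    import numpy as np

    def detail_score(region_bgr, threshold=40):
        gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)   # step 1: grayscale
        gray = cv2.GaussianBlur(gray, (3, 3), 0)              # step 2: slight blur
        edges = np.abs(cv2.Laplacian(gray, cv2.CV_64F))       # step 3: edge detection
        edges = cv2.GaussianBlur(edges, (5, 5), 0)            # step 4: blur some more
        count = np.count_nonzero(edges > threshold)           # step 5: count above threshold
        return count / float(edges.size)   # normalize by the region's area

    # Pick whichever camera's view of a region scores higher.
    def pick_more_detailed(region_a, region_b):
        return region_a if detail_score(region_a) >= detail_score(region_b) else region_b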

Tools that do this for you

The fun part is that the tools to do this are now very common. There are kits, like OpenCV, with all the building blocks. More importantly, the Windows, OS X, and iOS environments include very expansive image and video manipulation frameworks. These frameworks are likely to, over time, include the above as either standard examples or prepackaged recipes.
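
For example, recent OpenCV builds already ship a prepackaged recipe for this: a Stitcher class that bundles registration, warping, and blending. A minimal sketch (the file names are placeholders, and the exact constructor name has shifted a bit between OpenCV versions):

    import cv2

    images = [cv2.imread(name) for name in ("cam0.png", "cam1.png", "cam2.png")]
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(images)
    if status == cv2.Stitcher_OK:
        cv2.imwrite("panorama.png", panorama)
    else:
        print("stitching failed with status", status)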

The trick is to recognize the other applications for these examples. These really are just other names for, or variations of, the same process: