I've been working on an Android app for work, and some ideas were thrown around about overlaying camera guides on an object in real time. The idea was that the guides would automatically adjust to the size of the image we were working with. I wasn't totally sold on the idea, but I went ahead and investigated it anyway.
Some time ago, I used OpenCV in conjunction with Qt to build a webcam app that did face/person/dog detection, so I figured I could probably use OpenCV in some form or another for the edge detection (detect the edges, and you can place the guides at the corners where the edges intersect). I was pretty happy to see that a quick Google search pointed me to the Android port of OpenCV:
Not only is there a port, but it comes with a pretty cool sample app that has a ton of examples already built. Literally, all I had to do was modify the sample app and I'd have a pretty good proof of concept. I'm always amazed at the amount of free information and tools available to people these days.
After reading the OpenCV docs for a bit, I found some information on feature detection, and specifically a function called HoughLines, which finds lines in an image:
A little more digging, and I found a tutorial on how to use the function:
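For a rough idea of what HoughLines is doing under the hood: every edge pixel votes for all the (rho, theta) line parameterizations that pass through it, and bins in the vote table that collect many votes correspond to detected lines. Here's a toy sketch of that voting scheme in plain Java (just an illustration of the idea, not OpenCV's actual implementation; the 9x9 image and the vertical line at x = 4 are made up for the example):

```java
import java.util.HashMap;
import java.util.Map;

public class HoughDemo {
    // Vote over (rho, theta) bins for a fake edge map: a vertical line
    // of nine edge pixels at x = 4 in a 9x9 image.
    static Map<String, Integer> buildVotes() {
        Map<String, Integer> votes = new HashMap<>();
        for (int y = 0; y < 9; y++) {
            int x = 4;
            for (int thetaDeg = 0; thetaDeg < 180; thetaDeg++) {
                double theta = Math.toRadians(thetaDeg);
                // Normal-form line equation: rho = x*cos(theta) + y*sin(theta)
                long rho = Math.round(x * Math.cos(theta) + y * Math.sin(theta));
                votes.merge(rho + "," + thetaDeg, 1, Integer::sum);
            }
        }
        return votes;
    }

    public static void main(String[] args) {
        // The vertical line shows up as the bin (rho = 4, theta = 0)
        // collecting a vote from every one of the nine edge pixels.
        System.out.println(buildVotes().get("4,0"));  // prints 9
    }
}
```

OpenCV's version does the same thing on a real edge image, with a threshold on the vote count deciding what counts as a line.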
So my work would only consist of porting the C++ example code to Java. I'm probably smart enough to do that... :)
Anyway, I ended up with the following Java code (this code relies on the OpenCV sample app):
Imgproc.Canny(mRgba, mIntermediateMat, 80, 100);
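The rest of the port follows the tutorial's C++ fairly directly. A sketch of what that looks like against the OpenCV Java bindings (mRgba and mIntermediateMat are the sample app's frame Mats; the vote threshold of 100 is a starting guess to tune, and note the API moved around between versions — 2.4 draws with Core.line and packs results one line per column, while 3.x+ uses Imgproc.line and one line per row):

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

// Called from the sample app's frame callback: mRgba is the current
// camera frame, mIntermediateMat a scratch Mat reused between frames.
void drawGuides(Mat mRgba, Mat mIntermediateMat) {
    Imgproc.Canny(mRgba, mIntermediateMat, 80, 100);

    // rho resolution 1 px, theta resolution 1 degree, and at least
    // 100 accumulator votes before something counts as a line.
    Mat lines = new Mat();
    Imgproc.HoughLines(mIntermediateMat, lines, 1, Math.PI / 180, 100);

    // Each result is (rho, theta); convert to two far-apart points and
    // draw the line over the frame, as in the C++ tutorial.
    for (int i = 0; i < lines.rows(); i++) {
        double[] line = lines.get(i, 0);
        double rho = line[0], theta = line[1];
        double a = Math.cos(theta), b = Math.sin(theta);
        double x0 = a * rho, y0 = b * rho;
        Point pt1 = new Point(x0 + 1000 * (-b), y0 + 1000 * a);
        Point pt2 = new Point(x0 - 1000 * (-b), y0 - 1000 * a);
        Core.line(mRgba, pt1, pt2, new Scalar(255, 0, 0), 3);
    }
}
```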
Here's a screen capture from my GS3 while doing real time edge detection:
In the end, the code didn't perform well enough for my liking, and I think there's still a lot of (non-trivial) work to be done to make this work in non-ideal conditions (for example, where the background isn't in stark contrast to the paper). Nonetheless, I learned some stuff from the exercise, and who knows, we may do a version that works on the static, captured image for cropping or something.