
Android and OpenCV

I've been working on an Android app for work, and some ideas were thrown around about placing camera guides in real time over an object.  The idea was that the guides would automatically adjust to the size of the object we were working with.  I wasn't totally sold on the idea, but I went ahead and investigated it anyway.  

Some time ago, I used OpenCV in conjunction with Qt to build a webcam app that did face/person/dog detection, so I had an idea that I could probably use OpenCV in some form or another for the edge detection (I figured: detect the edges, and you can place the guides at the corners where the edges intersect).  I was pretty happy to see that a quick Google search pointed me to the Android port of OpenCV:


Not only is there a port, but it comes with a pretty cool sample app that has a ton of examples already built.  Literally, all I had to do was modify the sample app and I'd have a pretty good proof of concept.  I'm always amazed at the amount of free information and tools available to people these days.

After reading the OpenCV docs for a bit, I found some information on feature detection, and specifically a function called HoughLines, which finds lines in an image:


A little more digging, and I found a tutorial on how to use the function:


So my work would only consist of porting the C++ example code to Java.  I'm probably smart enough to do that... :)

Anyway, I ended up with the following Java code (this code relies on the OpenCV sample app):

            // Detect edges with Canny, then find straight lines with the
            // standard Hough transform on the edge map.
            Imgproc.Canny(mRgba, mIntermediateMat, 80, 100);
            Imgproc.cvtColor(mIntermediateMat, mRgbaInnerWindow, Imgproc.COLOR_GRAY2BGRA, 4);

            // rho resolution = 1 px, theta resolution = 1 degree, threshold = 150 votes
            Mat lines = new Mat();
            Imgproc.HoughLines(mIntermediateMat, lines, 1, Math.PI / 180, 150);

            Point pt1 = new Point();
            Point pt2 = new Point();
            for (int i = 0; i < lines.cols(); i++) {
                double[] data = lines.get(0, i);
                double rho = data[0];
                double theta = data[1];
                // Convert the (rho, theta) normal form into two endpoints
                // far enough apart to span the frame when drawn
                double a = Math.cos(theta);
                double b = Math.sin(theta);
                double x0 = a * rho;
                double y0 = b * rho;
                pt1.x = Math.round(x0 + 1000 * (-b));
                pt1.y = Math.round(y0 + 1000 * a);
                pt2.x = Math.round(x0 - 1000 * (-b));
                pt2.y = Math.round(y0 - 1000 * a);
                Core.line(mRgba, pt1, pt2, mColorsRGB[1], 3);
            }
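
Since HoughLines returns each line as a (rho, theta) pair, the conversion to drawable endpoints in the loop above can be pulled out into a small helper.  This is just the same math in plain Java (the class and method names here are hypothetical, not part of the OpenCV API):

```java
// Hypothetical helper: converts a Hough (rho, theta) pair into two
// endpoints spaced 1000 px apart along the line, suitable for drawing.
final class HoughMath {
    static long[] toEndpoints(double rho, double theta) {
        double a = Math.cos(theta);   // components of the line's unit normal
        double b = Math.sin(theta);
        double x0 = a * rho;          // closest point on the line to the origin
        double y0 = b * rho;
        // Step along the line direction (-b, a) in both directions
        return new long[] {
            Math.round(x0 + 1000 * (-b)), Math.round(y0 + 1000 * a),   // pt1
            Math.round(x0 - 1000 * (-b)), Math.round(y0 - 1000 * a)    // pt2
        };
    }
}
```

For a sanity check: rho = 5, theta = 0 describes the vertical line x = 5, and the helper produces endpoints (5, 1000) and (5, -1000).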


Here's a screen capture from my GS3 while doing real time edge detection:


 Edge detection on Android using OpenCV


In the end, the code didn't perform well enough for my liking, and I think there's still a lot of (non-trivial) work to be done to make this work in non-ideal conditions (for example, where the background isn't in stark contrast to the paper).  Nonetheless, I learned some things from the exercise, and who knows, we may do a version that works on the static, captured image for cropping or something.
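
If we ever do that static-image cropping version, the step I originally had in mind (placing guides at the corners where edges intersect) boils down to intersecting pairs of detected lines.  A Hough line in normal form satisfies x*cos(theta) + y*sin(theta) = rho, so two lines intersect where the resulting 2x2 system is solved.  Here's a plain-Java sketch of that math (hypothetical helper, not from the sample app):

```java
// Hypothetical helper: intersects two lines given in Hough normal form,
//   x*cos(theta) + y*sin(theta) = rho.
// Returns {x, y}, or null if the lines are (nearly) parallel.
final class LineIntersect {
    static double[] intersect(double rho1, double theta1,
                              double rho2, double theta2) {
        double c1 = Math.cos(theta1), s1 = Math.sin(theta1);
        double c2 = Math.cos(theta2), s2 = Math.sin(theta2);
        double det = c1 * s2 - s1 * c2;        // = sin(theta2 - theta1)
        if (Math.abs(det) < 1e-9) return null; // parallel lines: no corner
        // Cramer's rule on the 2x2 system
        double x = (rho1 * s2 - s1 * rho2) / det;
        double y = (c1 * rho2 - rho1 * c2) / det;
        return new double[] { x, y };
    }
}
```

For example, the vertical line x = 5 (rho = 5, theta = 0) and the horizontal line y = 3 (rho = 3, theta = pi/2) intersect at (5, 3), which is exactly the kind of corner you'd anchor a crop guide to.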



Reader Comments (5)

Hello, please can you explain what the mIntermediateMat value is?

November 17, 2013 | Unregistered Commentersereen shalaby

I don't know off the top of my head. You should have a look at the Android OpenCV sample app that ships with the Android OpenCV SDK.

This code comes directly from that application, so you can get a better idea of what this variable is used for there. Also, have a look at the signature for the Canny function.

December 18, 2013 | Registered CommenterRobert Tadlock

Can you post the entire source code project in a zip, pretty please...?

I have tried to put your code in OpenCV Tutorial 1, in the onCameraFrame() function of Tutorial1Activity.java, but it didn't work.

I am like many people. If we could find a good starting point that at least drew a line, we would be energized!

thanks for posting.

August 10, 2014 | Unregistered Commenterjim


It's a very interesting blog you have here. I am a computer vision student and we are designing an app for the blind which involves the use of Hough lines!
Your blog is what I was looking for. I am gonna try it out, but I had a doubt before I could jump in.
What is the data type of mColorsRGB[1]?
Why are you passing 1 to the array?

I'll give it a shot!

November 13, 2014 | Unregistered CommenterChristopher

Hey Chris,

I wish I could help you out here, but I've moved on from this company and don't have the source code to look at to remind me what I was doing. Hopefully you've figured it out by now, but if I remember correctly, mColorsRGB was defined in the sample Android CV app, so have a look there to see what the data type was. The sample apps can be found here: http://opencv.org/platforms/android/opencv4android-samples.html


December 25, 2014 | Registered CommenterRobert Tadlock
