Hi everyone, just wondering if anyone is interested in collaborating on some of the computer vision issues around homemade pick-and-place machines? I'm considering building one myself, and thought this might be a good part to experiment with first.
I have had a go at detecting strips of components, detecting whether a component is present in its pocket, and locating the round sprocket holes for indexing the strip.
I have also experimented with pad detection on the PCB. This turns out to be a bit difficult because the silkscreen is a similar colour to the pads. I'm considering using polarised light to get round this.
Two of the source photos (seg-orig, pcb_sm) are from Outguessing the machine, which I have used to compare results. The others were captured with a Logitech webcam on my desk. I don't have proper lighting set up (just room light); hopefully sorting that out will improve results.
cool, could you give more infomation
Sure. At the moment I am doing all my prototyping in a piece of software called RoboRealm, a Windows GUI-based computer vision tool; a free trial is available from their website. The reason I use this over OpenCV is that it's much faster to develop in. I may port the final solution to OpenCV once I'm happy with the result.
The webcam I am using is a Logitech Webcam Pro 9000, which is a bit of a mixed blessing, as its autofocus will not work on close-up objects, so I have to keep the camera about an inch away from the target.
On PCB recognition, there are several issues to overcome. I am trying to avoid just using red lights, as I want to support multiple resist colours. The problem I have is telling pads apart from silkscreen, which appear at similar intensities under the current lighting scheme. Two potential solutions spring to mind, IR lighting or polarising filters, but I haven't tested either yet.
With parts in cut strip, I am trying to locate the round sprocket holes so a CNC head can index the strip. This works quite well with a classical image-processing approach: I run a Canny edge detector, then a Hough transform to locate circles in the edge image. The other test I want to achieve is telling whether a given pocket has a component in it or not (to catch failure-to-pick errors).
Using a blob detector on an automatically thresholded image, I can identify the pockets, component pads, and round holes. It should be possible to classify the blobs by roundness and inclusions to detect the three states reliably, but I haven't done this yet.
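Classifying blobs by roundness, as suggested above, usually reduces to the circularity metric 4πA/P², which is 1.0 for a perfect circle and falls towards 0 for elongated shapes. A minimal pure-Python sketch; the 0.8 threshold and the three state names are my assumptions for illustration, not values from the post:

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2: 1.0 for a perfect circle, approaching 0 for thin shapes."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def classify_blob(area, perimeter, inclusion_count):
    """Guess what a blob on the tape is.

    inclusion_count is the number of child contours inside the blob
    (e.g. from cv2.findContours hierarchy). Thresholds are illustrative.
    """
    if inclusion_count > 0:
        return "pocket-with-component"  # the part shows up as an inclusion
    if circularity(area, perimeter) > 0.8:
        return "sprocket-hole"          # very round, solid blob
    return "empty-pocket"               # rectangular pocket, lower roundness

# A circle of radius 10: area ~314.16, perimeter ~62.83 -> circularity ~1.0
print(classify_blob(314.159, 62.832, 0))   # -> sprocket-hole
print(classify_blob(400.0, 100.0, 0))      # c ~ 0.50 -> empty-pocket
print(classify_blob(400.0, 100.0, 1))      # -> pocket-with-component
```

In practice area, perimeter, and the contour hierarchy all come straight out of the blob-detection step, so this classification is nearly free once the blobs are found.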
My next job is to get an LED ring light to remove shadowing, and a better background material for the images. I may also end up modifying the webcam optics, as at the moment most of the resolution is going to waste.
Why don't you use a CCD camera? A webcam is too slow.
Don't know if this will be any help to you, but the pick-and-place machines I have worked with mainly used "grey scale" for the component and fiducial recognition.
tivoidethuong: the webcam I am using is capable of 720p at 15 fps; most CCD capture cards I have seen do far lower resolution at 30 fps. They also tend to be more expensive once you factor in the cost of the camera and the capture card.
blighty: do the machines you use happen to have coloured lights and filters on the cameras?
Also, I would be interested to know what component teaching is required before the system can recognise the parts.
The LED array around the cameras has three sets of red LEDs arranged in layers. Which sets were lit depended on what the machine was doing: if it was just looking at an 0805, 0604, etc. it would use just one set, like a fleeting glance; if it was looking at a PLCC44 it would use all three for a better look. As for filters, they had a diffuser cover, which was just a piece of opaque plastic over the LEDs.
Teaching was done by a package library in conjunction with a component library.
Each package has its own rules. E.g. for an 0805 (I can't remember the sizes, so I will make them up), let's say you have a 100nF cap to lay. In your component library it would be:
item: 100nF cap
code: CAP318 (I remember that code as we shot billions of them)
In your package library you will have an entry for 0805. This is where you tell it what it's looking for. I don't know how you're going to do this, but this is how it was done: you would place an 0805 on the nozzle (the way it would be picked up), then using the grey scale turn down the intensity until all you could see were the bright white feet (pads) of your 0805. You would tell it that these white pads are 3mm x 2mm and 4mm apart at 90deg. Then you would have another 7 pages of stuff to fill in, but I won't go into that. If it doesn't see this, it will reject the part and go get another one. You would do this for every package type you have. Things get fun when you have to teach it what a 120-leg SMD looks like.
Hope that made sense.
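The package-library / component-library split described above can be sketched as two small tables: the component entry maps a part code to a package, and the package entry holds the vision rules (pad size, spacing) plus a tolerance. Every field name and the 20% tolerance here are invented for illustration; as noted, a real machine has another seven pages of settings per package:

```python
# Package library: vision rules per package type (dimensions in mm).
PACKAGES = {
    "0805": {"pad_w": 3.0, "pad_h": 2.0, "pad_spacing": 4.0, "tolerance": 0.20},
}

# Component library: maps a part code to its package and value.
COMPONENTS = {
    "CAP318": {"item": "100nF cap", "package": "0805"},
}

def check_part(code, measured_w, measured_h, measured_spacing):
    """Accept or reject a picked part by comparing the measured pad
    geometry against the package rules. Returns True to place the part,
    False to reject it (discard and go pick another one)."""
    rules = PACKAGES[COMPONENTS[code]["package"]]
    for measured, expected in ((measured_w, rules["pad_w"]),
                               (measured_h, rules["pad_h"]),
                               (measured_spacing, rules["pad_spacing"])):
        if abs(measured - expected) > rules["tolerance"] * expected:
            return False
    return True

print(check_part("CAP318", 3.1, 2.0, 3.9))   # close to taught values -> True
print(check_part("CAP318", 1.0, 1.0, 8.0))   # wrong geometry -> False
```

The "teach" step in the post then amounts to filling in one `PACKAGES` entry per package type by measuring a known-good part on the nozzle.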
just found this [nomedia="http://www.youtube.com/watch?v=mf_obohkygo"]YouTube - europlacer iineo smt pick and place in action[/nomedia]
At the 0:40 mark they show the screen when it's looking at the components, and at 4:00 there's a shot of one of the LED arrays for the cam.
i miss the sound of these things................. NOT!!
You can get a Cognex In-Sight vision system on eBay for $500, then train it on a fiducial or part pad and program it to output the x,y coordinate over RS-232, and bingo, you're done...
Extract-blob was nice, but it was later hard to use for finding the part centre.
So the best way was to train a "pin" find and then compute the average of all the x,y coordinates; the result will be the part centre.
Also, the blob was very sensitive to external light sources.
You only have to compute the corner pins, not all pins. Actually you only need 2 opposite corner pins but it's probably best to still use 4.
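The pin-centroid idea above is just an average: for a symmetric package, the mean of the detected pin positions is the part centre, and as noted the corner pins alone are enough. A small pure-Python sketch with invented coordinates:

```python
def part_center(pins):
    """Part centre as the mean of detected pin (x, y) positions.

    Works with all pins or, for a symmetric package, with just the
    corner pins (2 opposite corners suffice; 4 is more robust)."""
    xs = [p[0] for p in pins]
    ys = [p[1] for p in pins]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A QFP-style part whose 4 corner pins were detected around (50, 30):
corners = [(40, 20), (60, 20), (60, 40), (40, 40)]
print(part_center(corners))   # -> (50.0, 30.0)
```

Averaging also cancels out per-pin detection noise, which is part of why it beats a single blob centroid that shifts with every lighting change.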
Using a webcam is not difficult; the key is lighting.
You must have a histogram comparable to this; the background colour doesn't matter. If you don't have it, you don't have the right lighting, and that's the failure.
The second image is an illumination setup that is not perfect, but working.
A few thoughts on image processing.
Let me just start with this: it is a necessary subject to learn, however it is the application of this knowledge that is important.
If your machine cannot reliably arrive at the calculated position of a part, then you will not be able to place accurately anyway.
Recognising where the part goes on the board is not necessary, as that information comes from the PCB CAD package.
The machine I use at work uses the vision system to find the board via fiducials; it can also find the centre of parts by measuring their legs. As a note, the vision box has red, green, and blue LEDs for illumination; only in rare cases, for some BGAs, will you change the light source. The cameras used for looking at the board are monochrome with red LEDs, as you usually remove the solder mask around a fiducial.
What I believe the small PnP machine builder will need is the ability to pick the part, take it to a camera, and get placement-correction data (x,y offset and rotation) so it can then be placed. Having just finished my first machine, which has no placement correction, I can say this is my biggest problem. We all use the tools we are most familiar with; keep up the good work learning this tool.
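The placement-correction step described above (camera looks at the picked part, machine gets back an x,y offset and a rotation) can be sketched using the corner pins mentioned earlier in the thread: compare the measured pin positions against the nominal taught ones. This is an illustrative sketch, not any particular machine's algorithm; function names and coordinates are made up:

```python
import math

def placement_correction(nominal, measured):
    """Given nominal (taught) and measured pin positions for the same part,
    return (dx, dy, dtheta): translation of the part centre and rotation
    in radians, estimated from the first and last pins.

    Illustrative only: a robust version would do a least-squares fit over
    all pins instead of using a single pin-to-pin vector."""
    cx_n = sum(p[0] for p in nominal) / len(nominal)
    cy_n = sum(p[1] for p in nominal) / len(nominal)
    cx_m = sum(p[0] for p in measured) / len(measured)
    cy_m = sum(p[1] for p in measured) / len(measured)
    # Rotation: angle of the vector between two opposite pins, taught vs seen.
    ang_n = math.atan2(nominal[-1][1] - nominal[0][1], nominal[-1][0] - nominal[0][0])
    ang_m = math.atan2(measured[-1][1] - measured[0][1], measured[-1][0] - measured[0][0])
    return (cx_m - cx_n, cy_m - cy_n, ang_m - ang_n)

# Part picked 2mm right, 1mm up, and rotated 5 degrees about its own centre:
nom = [(-2.0, -1.0), (2.0, -1.0), (2.0, 1.0), (-2.0, 1.0)]
t = math.radians(5)
mea = [(x * math.cos(t) - y * math.sin(t) + 2.0,
        x * math.sin(t) + y * math.cos(t) + 1.0) for x, y in nom]
dx, dy, dth = placement_correction(nom, mea)
```

The head would then subtract (dx, dy) from the target placement position and add -dtheta to the nozzle rotation before putting the part down.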