The word "robot" originally meant a slave or forced labourer. A robot is a machine that performs a variety of tasks, either under manual external control or through intelligent automation. A manually controlled car or an ASIMO trying to kick a football are both robots. (c) 2009-2010 Electronics & Robotics Club, BITS-Pilani, Goa | Ajusal Sugathan & Utkarsh Sinha
- Robotics is a multidisciplinary field of engineering encompassing:
  - Mechanical design
  - Electronic control
  - Artificial intelligence
- It finds its uses in all aspects of our life:
  - Automated vacuum cleaners
  - Exploring the 'Red' planet
  - Setting up a human colony there :D
- A mobile robot must have a system to make it move.
- This system gives our machine the ability to move forward, move backward and take turns.
- It may also provide for climbing up and down.
- Or even flying or floating :)
- Each type of locomotion requires a different number of degrees of freedom.
- More degrees of freedom mean more actuators to use.
- Although one actuator can be used to control more than one degree of freedom.
- The kind of locomotion most frequently used in robotics at the undergraduate level.
- It involves conversion of electrical energy into mechanical energy (mostly using motors).
- The issue is to control these motors to deliver the required speed and torque.
- For constant power delivered to the motor, we have a simple equation:
  - P = τ × ω
- Note that, at constant power, torque and angular velocity are inversely proportional to each other.
- So to increase the speed we have to reduce the torque.
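The tradeoff above can be checked numerically; this is only an illustrative Python sketch (the power value and function name are example assumptions, not from the slides):

```python
def torque(power_w, omega_rad_s):
    """Torque (N*m) available at angular velocity omega for a motor
    delivering constant mechanical power P = tau * omega."""
    return power_w / omega_rad_s

# At constant power, doubling the speed halves the available torque.
p = 10.0                      # watts (example value)
t_slow = torque(p, 50.0)      # 0.2 N*m at 50 rad/s
t_fast = torque(p, 100.0)     # 0.1 N*m at 100 rad/s
```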
- The DC motors available have a very high speed of rotation, which is generally not needed.
- At high speeds, they lack torque.
- For a reduction in speed and an increase in "pulling capacity" we use pulley or gear systems.
- Simplest, easiest to implement and most widely used.
- It has a free-moving wheel in the front, accompanied by a left and a right wheel. The two wheels are separately powered.
- When the wheels move in the same direction, the machine moves in that direction.
- Turning is achieved by making the wheels oppose each other's motion, thus generating a couple.
- In-place (zero turning radius) rotation is done by turning the drive wheels at the same rate in opposite directions.
- Arbitrary motion paths can be implemented by dynamically modifying the angular velocity and/or direction of the drive wheels.
- A total of two motors is required; both are responsible for translational and rotational motion.
- Simplicity and ease of use make it the system most preferred by beginners.
- Independent drives make straight-line motion difficult: differences between the motors and in the frictional profiles of the two wheels cause the robot to move with a slight turning effect.
- This drawback must be countered with an appropriate feedback system. Suitable for human-controlled remote robots.
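The behaviour described above follows from standard differential-drive kinematics; here is a minimal Python sketch (the wheel-separation value and function name are illustrative assumptions, not from the slides):

```python
def body_motion(v_left, v_right, wheel_sep):
    """Linear and angular velocity of a differential-drive robot,
    given the two wheel speeds (same units) and the wheel separation."""
    v = (v_left + v_right) / 2.0             # forward speed
    omega = (v_right - v_left) / wheel_sep   # turning rate (rad/s)
    return v, omega

# Equal speeds -> straight line; equal and opposite -> in-place rotation.
print(body_motion(1.0, 1.0, 0.5))   # (1.0, 0.0)
print(body_motion(-1.0, 1.0, 0.5))  # (0.0, 4.0)
```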
- Uses synchronous rotation of its wheels to achieve motion and turns.
- It is made up of a system of two motors: one drives the wheels, and the other turns the wheels in a synchronous fashion.
- The wheels can be directly mechanically coupled, as they always move in the same direction at the same speed.
The direction of motion is given by the black arrow; the alignment of the machine is shown by the red arrow.
- The use of separate motors for translation and wheel turning guarantees straight-line motion without the need for dynamic feedback control.
- The system is somewhat complex to design, but once built it is much simpler to use.
- Actuators, also known as drives, are mechanisms for getting robots to move.
- Most actuators are powered by pneumatics (air pressure), hydraulics (fluid pressure), or motors (electric current).
- They are devices which transform an input signal (mainly an electrical signal) into motion.
- Widely used because of their small size and high energy output.
- Operating voltage: usually 6, 12 or 24 V.
- Speed: 1-20,000 rpm.
- Power: P = τ × ω
- The stator is the stationary outside part of a motor.
- The rotor is the inner part, which rotates.
- Red represents a magnet or winding with a north polarization.
- Green represents a magnet or winding with a south polarization.
- Opposite polarities (red and green) attract.
- Commutator contacts are brown and the brushes are dark grey.
- The stator is composed of two or more permanent-magnet pole pieces.
- The rotor is composed of windings which are connected to a mechanical commutator.
- The opposite polarities of the energized winding and the stator magnet attract, and the rotor rotates until it is aligned with the stator.
- Just as the rotor reaches alignment, the brushes move across the commutator contacts and energize the next winding.
- A yellow spark shows when the brushes switch to the next winding.
- A stepper motor is an electric motor that can divide a full rotation into a large number of steps.
- The motor's position can be controlled precisely, without any feedback mechanism.
- There are three types:
  - Permanent magnet
  - Variable reluctance
  - Hybrid
- Stepper motors work in a similar way to DC motors, but where DC motors have one electromagnetic coil to produce movement, stepper motors contain many.
- Stepper motors are controlled by turning each coil on and off in a sequence.
- Every time a new coil is energized, the motor rotates a few degrees, called the step angle.
Full step:
- A typical stepper motor has 200 rotor teeth, i.e. 200 full steps per revolution of the motor shaft.
- Dividing the 360º rotation by 200 steps gives a 1.8º full-step angle.
- Full stepping is achieved by energizing both windings while reversing the current alternately.
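The step-angle arithmetic above reduces to one line; an illustrative Python helper (the function name is ours, not from the slides):

```python
def step_angle(steps_per_rev):
    """Full-step angle in degrees for a motor with the given step count."""
    return 360.0 / steps_per_rev

print(step_angle(200))  # 1.8
```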
- Servos operate on the principle of negative feedback: the control input is compared to the measured position of the mechanical system.
- Any difference between the actual and wanted values (an "error signal") is amplified and used to drive the system in the direction necessary to reduce or eliminate the error.
- Their precise movement makes them ideal for powering legs, controlling rack-and-pinion steering, moving a sensor around, etc.
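The error-signal idea above can be sketched as a simple proportional controller; this Python snippet is illustrative only (the gain value and the names are our assumptions, not from the slides):

```python
def servo_step(target, actual, gain=0.5):
    """One update of a proportional controller: amplify the error
    and move the output toward the target."""
    error = target - actual
    return actual + gain * error

# Repeated updates drive the position toward the target (90 degrees here).
pos = 0.0
for _ in range(20):
    pos = servo_step(90.0, pos)
# pos is now very close to 90
```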
- A suitable power source is needed to run the robot.
- Mobile robots are most suitably powered by batteries.
- The weight and energy capacity of the batteries may become the determining factor in performance.
- For a manually controlled robot, you can use batteries or voltage eliminators (which convert the normal 220 V mains supply to the required DC voltage: 12 V, 24 V, etc.).
- Gears are the most common means of transmitting power in mechanical engineering.
- Gears form vital elements of mechanisms in many machines, such as vehicles, metal-cutting machine tools, rolling mills and hoists.
- In robotics they are vital for controlling actuator speeds and for exercising different degrees of freedom.
- Used to achieve torque magnification and speed reduction.
- Gears are analogous to transformers in electrical systems.
- They follow the basic equation:
  - ω1 × r1 = ω2 × r2
- Gears are very useful in transferring motion between parts of different dimensions.
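The gear equation ω1 × r1 = ω2 × r2 can be turned into a small helper; an illustrative Python sketch (the names and radii are our example assumptions):

```python
def driven_speed(omega_driver, r_driver, r_driven):
    """Angular speed of the driven gear, from omega1 * r1 = omega2 * r2."""
    return omega_driver * r_driver / r_driven

# A small gear (r = 1) driving a large gear (r = 4) reduces speed 4x,
# and (at constant power) multiplies the available torque 4x.
print(driven_speed(100.0, 1.0, 4.0))  # 25.0
```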
- A rack and pinion is an arrangement of gears that converts rotational torque into linear motion.
- The same mechanism is used to steer the wheels of a car using the steering wheel.
- In robotics it is used extensively in clamping systems.
- Pulleys allow mechanical power, torque and speed to be transmitted across axes.
- If the pulleys are of differing diameters, they give a mechanical advantage.
- In robotics they can be used for lifting loads or for speed reduction.
- They can also be used in a differential drive to interconnect wheels.
- A sprocket is a profiled wheel with teeth that meshes with a chain.
- It is similar to the system found in bicycles.
- It can transfer rotary motion between shafts in cases where gears are unsuitable.
- It can be used over a larger distance.
- Compared to pulleys it has less slippage, due to the firm meshing between the chain and the sprocket.
- For picking and placing, many mechanisms can be used:
  - Hook and pick
  - Clamp and pick
  - Slide a sheet below and pick
  - Many other ways
  - Lots of scope for innovation
- Image processing is a tool for analyzing image data in all areas of natural science.
- It is concerned with extracting data from real-world images.
- The difference from computer graphics is that computer graphics makes extensive use of primitives such as lines, triangles and points, whereas no such primitives exist in a real-world image.
- There is an increasing need to replicate human sensory organs.
- Eye (vision): the most useful and complex sensory organ.
- A camera, scanner or other image-acquisition device.
- A PC, workstation or digital signal processor for processing.
- Software to run on the hardware platform (MATLAB, OpenCV, etc.).
- An image representation used to process the image (usually a matrix) and provide spatial relationships.
- A particular color space used to represent the image.
Image Acquisition Device (e.g. CCD or CMOS camera) -> Image Processor (e.g. PC or DSP) -> Image Analysis Tool (e.g. MATLAB or OpenCV) -> Machine control of hardware through serial or parallel interfacing
- Using a camera:
  - Analog cameras
  - Digital cameras: CCD and CMOS
- The sensor captures data from a single light receptor at a time.
- CCD: Charge-Coupled Device
- CMOS: Complementary Metal-Oxide-Semiconductor sensor
- Digital cameras:
  - CCD cameras: high-quality, low-noise images; generate an analog signal converted using an ADC; consume more power.
  - CMOS cameras: lower sensitivity, poorer image quality, lower power consumption.
- Analog cameras require a frame-grabber card or TV tuner card to interface with a PC.
- There are two types of images: vector and raster.
- Vector images store curve information.
- Example: India's flag can be described as three rectangles, one circle and the spokes.
- We will not deal with vector images at all.
- Raster images are different: they are made up of several dots.
- If you think about it, your laptop's display is a raster display.
- Vector images, by contrast, are high-level abstractions.
- Vector representations are more complex and are used for specific purposes.
- Raster:
  - Matrix
- Vector:
  - Quadtrees
  - Chains
  - Pyramids
Of the four, the matrix is the most general; the other three are used for special purposes. All these representations must provide for spatial relationships.
- Computers cannot handle continuous images, only arrays of digital numbers.
- So images are represented as 2-D arrays of points (a 2-D matrix): the raster representation.
- A point on this 2-D grid (corresponding to an image-matrix element) is called a pixel (picture element).
- It represents the average irradiance over the area of the pixel.
- Pixels are the tiny dots of color you see on your screen, and the smallest possible element of any image.
- When an image is stored, the image file contains information on every single pixel in that image, i.e.:
  - Pixel location
  - Intensity
- The number of pixels used to represent the image digitally is called the resolution.
- The more pixels used, the higher the resolution.
- Higher resolution requires more processing power.
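The "image as a 2-D matrix of pixels" idea can be sketched directly; an illustrative Python snippet (the tiny 2x3 "image" is made up for the example):

```python
# A tiny 2x3 grayscale image as a 2-D array: one intensity per pixel.
image = [
    [  0, 128, 255],
    [ 64, 192,  32],
]

height = len(image)       # number of rows
width = len(image[0])     # number of columns
resolution = width * height

print(height, width, resolution)  # 2 3 6
print(image[1][2])                # pixel at row 1, column 2 -> 32
```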
- MATLAB stands for MATrix LABoratory; it is software developed by MathWorks Inc. (www.mathworks.com).
- MATLAB provides extensive library support for various domains of scientific and engineering computation and simulation.
- When you click the MATLAB icon (on your desktop or under Start > All Programs), you typically see three windows: the Command Window, the Workspace and the Command History. Snapshots of these windows are shown below.
- The Workspace window shows the variables you have defined in the current MATLAB session.
- The Command History stores the list of recently used commands for quick reference.
- In MATLAB, variables are stored as matrices (singular: matrix), whose entries can be integers, real numbers or even complex numbers.
- These matrices bear some resemblance to the array data structures used in computer programming.
- Let us start by writing simple instructions in the MATLAB Command Window.
- To define a variable, type a=4 and hit Enter:
  >> a=4
- To avoid echoing the variable, add a semicolon after the instruction:
  >> a=4;
- Similarly, to define a 2x2 matrix, the instruction in MATLAB is written as:
  >> b=[1 2; 3 4];
- If you are familiar with matrix operations, you can find the determinant or the inverse of the matrix:
  >> determin = det(b)
  >> d = inv(b)
- Images, as we have already seen, are stored as matrices.
- So now we try to see this for real in MATLAB.
- We shall also look into the basic commands provided by MATLAB's Image Processing Toolbox.
- Once you have started MATLAB, type the following in the Command Window:
  >> im=imread('sample.jpg');
- This command stores the image file 'sample.jpg' in a variable called im.
- MATLAB looks for the file in the Current Directory specified.
- Otherwise, the full path of the file must be given.
- You can display the image in another window by using the imshow command:
  >> figure, imshow(im);
- This pops up another window (called a figure window) and displays the image im.
- The imview command can also be used to view the image:
  >> imview(im);
- The difference is that with imview you can see specific pixel values just by moving the cursor over the image.
- To find the breadth and height of the image, use the size function:
  >> s=size(im);
- The size function gives the size of any array in MATLAB.
- Here we get the size of the image array.
- Now that we have our image stored in a variable, we can observe and understand the following:
  - How are pixels stored?
  - What do the values of each pixel indicate?
  - What is image resolution?
- Have a look at the values stored, say the first 10 x 10 block:
  >> im(1:10,1:10)
- Or view the pixel range 50:150 on both axes:
  >> figure, imshow(im(50:150,50:150));
- 1-bit: black or white
- 8-bit: 2^8 = 256 different shades
- 24-bit: 2^24 (about 16.7 million) different shades
- 64-bit images: high-end displays; used in HDRI, for storing extra information per pixel, etc.
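The shade counts above are just powers of two; an illustrative Python check (the function name is ours):

```python
def shades(bits):
    """Number of distinct values a pixel with the given bit depth can take."""
    return 2 ** bits

print(shades(1))   # 2
print(shades(8))   # 256
print(shades(24))  # 16777216
```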
- "Binary image" is another name for a 1-bit image.
- Each pixel is either white or black.
- Technically, this is a black & white image.
- "Grayscale" is another name for 8-bit images.
- Each pixel can be one of 256 different shades of gray.
- These images are popularly called "black & white", though that is technically wrong.
- Again, each pixel gets 8 bits.
- But each of the 256 values maps to a color in a predefined "palette".
- If required, you can have different bit depths.
- 8 bits are too few for all the different shades of color we see.
- So 24 bits are generally used for color images.
- Thus each pixel can have one of 2^24 unique colors.
- Now a new problem arises: how do you manage so many different shades?
- Programmers would go nuts.
- Then along came the idea of color spaces.
- A color space can be thought of as a way to manage millions of colors.
- It eliminates memorization and increases predictability.
- Common color spaces:
  - RGB
  - HSV
  - YCrCb (or YUV)
  - YIQ
- Each pixel stores 3 bytes of data.
- The 24 bits are divided into three 8-bit values.
- The three are red, green and blue, i.e. the primary colors.
- Mixing the primary colors in the right proportions gives any particular color.
- Each pixel has these 3 values.
- 1 byte = 8 bits, which can store a value between 0 and 255.
- We get pixel data in the form of RGB values, each varying from 0 to 255.
- That is how displays work.
- So there are 3 grayscale channels.
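Packing three 8-bit channels into one 24-bit pixel value can be sketched with bit operations; illustrative Python only (the function names are ours):

```python
def pack_rgb(r, g, b):
    """Pack three 0-255 channel values into one 24-bit integer."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(pixel):
    """Recover the (r, g, b) channels from a 24-bit pixel value."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

p = pack_rgb(255, 128, 0)   # an orange-ish pixel
print(hex(p))               # 0xff8000
print(unpack_rgb(p))        # (255, 128, 0)
```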
- The hue is the tint of color used: it represents the color of the pixel (e.g. red, green, yellow).
- The saturation is the "amount" of that tint: it distinguishes, for example, dark red from light red.
- The value is the brightness of that pixel.
- Advantages:
  - The color at a pixel depends on a single value
  - Illumination independent
- Disadvantages:
  - Something
- Intuitively, RGB might seem the simpler and better color space to deal with.
- However, HSV has its own advantages, especially in color thresholding.
- Because the color at each pixel depends on a single hue value, HSV is very useful for separating out blobs of specific colors even under large lighting variations.
- This makes it very useful for processing real camera images, where intensity varies a great deal.
- Hence it is ideal for robotics applications.
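The hue-thresholding idea can be sketched with the standard-library colorsys module; illustrative Python only (the hue band chosen for "red" is our example assumption, not from the slides):

```python
import colorsys

def is_reddish(r, g, b):
    """Classify a pixel as 'red' purely by its hue, so that dark and
    bright reds are treated alike (channels given as 0-255 values)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h < 0.05 or h > 0.95   # hue wraps around at 0/1

print(is_reddish(200, 20, 20))  # bright red -> True
print(is_reddish(80, 8, 8))     # dark red   -> True
print(is_reddish(20, 200, 20))  # green      -> False
```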
- YCrCb is widely used in digital video.
- It has three 8-bit channels:
  - Y component: gives luminance (intensity)
  - Cr component: the red component minus a reference value
  - Cb component: the blue component minus a reference value
- The Cr and Cb components, which carry the color, are called "color difference components".
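For reference, the luminance channel is a weighted sum of R, G and B; the ITU-R BT.601 weights below are the standard ones for digital video, but the snippet itself is only an illustrative Python sketch:

```python
def luma(r, g, b):
    """Y (luminance) from RGB using the ITU-R BT.601 weights;
    Cr and Cb are then scaled versions of (r - y) and (b - y)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(round(luma(255, 255, 255)))  # white -> 255
print(round(luma(0, 0, 0)))        # black -> 0
```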
- Advantages:
  - Widely used in video processing
  - Gives you a 2-D color plane, which helps in distinguishing colors more closely
- Disadvantages:
- The camera returns images in a certain color space.
- You might want to convert to a different color space to process them.
- Color space conversions can take place between RGB and any other color space, and vice versa.
- Since cameras usually supply images in RGB,
- we would like to convert these images into HSV or YCrCb.
- Conversions:
  - RGB -> HSV
  - HSV -> RGB
  - RGB -> YCrCb
  - YCrCb -> RGB
- >> Y = rgb2ycbcr(im);
- This converts the RGB image to YCbCr.
- The new color space components can be seen using:
  >> imview(Y)
  >> imview(Y(:,:,1))   % luminance
  >> imview(Y(:,:,2))   % differenced blue (Cb)
  >> imview(Y(:,:,3))   % differenced red (Cr)
- >> R = ycbcr2rgb(Y);
- This converts the YCbCr image back to RGB.
- The color space components can be seen using:
  >> imview(R)
  >> imview(R(:,:,1))   % red
  >> imview(R(:,:,2))   % green
  >> imview(R(:,:,3))   % blue
- The formulae for these conversions are fairly complex.
- But the best thing is, you don't need to remember them:
- MATLAB and OpenCV have built-in functions for these transformations :-)
- OpenCV is a collection of many functions that help in image processing.
- You can use OpenCV from C/C++, the .NET languages, Java, Python, etc.
- We will only discuss OpenCV in C/C++.
- It is blazingly fast.
- Quite simple to use and learn.
- Has functions for machine learning, image processing and GUI creation.
- Now we need to tell Microsoft Visual Studio that we've installed OpenCV.
- So we tell it where to find the OpenCV header files.
- Start Microsoft Visual Studio 2008.
- Right now, Visual Studio knows where to find the OpenCV include files and library files.
- Now we create a new project.
- Accept all the default settings for the project.
- You'll end up with an empty project containing a single file (like Mybot.cpp).
- Open this file; we'll write some code now.
- Add the following at the top of the code (the headers of the OpenCV C API used in these slides):
  #include <cv.h>
  #include <highgui.h>
- This piece of code includes the necessary OpenCV functionality.
- Now we get to the main() function:
  int main()
  {
- The main function is where program execution begins.
- Next, we load an image:
  IplImage* img = cvLoadImage("C:\\hello.jpg");
- IplImage is a data type, like int, char, etc.
- It comes built into OpenCV.
- Any image in OpenCV is stored as an IplImage.
- It is a "structure".
- Now we show this image in a window:
  cvNamedWindow("myfirstwindow");
  cvShowImage("myfirstwindow", img);
- This uses some HighGUI functions (HighGUI comes along with OpenCV).
- cvNamedWindow creates a window with the given caption.
- It is a HighGUI function.
- You can add controls to each window as well (track bars, buttons, etc.).
- cvShowImage shows img in the window with the given caption.
- If no such window exists, nothing happens.
- cvWaitKey(time) waits for time milliseconds, and returns whatever key is pressed.
- If time = 0, it waits for ever.
- Here, we've used it to keep the windows from vanishing immediately.
- cvReleaseImage erases img from RAM.
- Get rid of an image as soon as possible; RAM is precious :)
- Note that you pass the address of the image (&img) and not just the image (img).
- Right now, Visual Studio knows where OpenCV is.
- But it does not know whether to use OpenCV or not.
- We need to tell it this explicitly, by linking against the OpenCV libraries.
In cvCvtColor(src, dst, code):
- src is the original image
- dst is the destination
- code is one of the following:
  - CV_BGR2HSV
  - CV_RGB2HSV
  - CV_RGB2YCrCb
  - CV_HSV2RGB
  - …
- src should be a valid image, or an error will pop up.
- dst should also be a valid image, i.e. you need a blank image of the same size.
- code should be valid (check the OpenCV documentation for the full list).
- cvCreateImage allocates memory for an image of size size, with bits bits per pixel and chan channels.
- It is used for creating a blank image.
- Use cvSize(width, height) to specify the size.
- It's a three-terminal linear 5-volt regulator used to supply the board and other peripherals.
- The prescribed input voltage for this component is about 7-9 volts.
- Voltage fluctuations can be controlled by using low-pass filter capacitors across the output and input.
- A higher input voltage can be applied if a heatsink is provided.
- Used to control DC and stepper motors.
- Uses an H-bridge, an electronic switching circuit that can reverse the direction of current.
- This chip is a dual H-bridge.
- Basically used to convert a low-voltage input into a high-voltage output to drive a motor or other component.
- E.g.: Microcontroller (5 volts) -> Motor driver -> Motor (12 volts)
- There are many situations where signals and data need to be transferred from one subsystem to another within a piece of electronics.
- Relays are too bulky, being electromechanical in nature, and at the same time give lower efficiency.
- In these cases an electronic component called an optocoupler is used.
- Optocouplers are generally used when the two subsystems are at largely different voltages.
- They use a beam of light to transmit the signals or data across an electrical barrier, and achieve excellent isolation.
- In our circuit, an opto-isolator (MCT2E) is used to ensure electrical isolation between the motors and the PC parallel port during wired connection.
- The Viz-Board has four such chips to isolate the four data lines (pins 2, 3, 4 and 5) coming out of the parallel port.
- Along with the Viz-Board, two extensions have been provided:
  - The RF transmitter module
  - The RF receiver module
- Radio-frequency (RF) modules are used to transmit data wirelessly at a certain frequency.
- They send and receive radio waves of a particular frequency; encoder and decoder ICs are provided to encode and decode this information.
- Wireless transmission takes place at a particular frequency, e.g. 315 MHz.
- These modules may be single- or dual-frequency.
- An antenna is recommended on both of them: just connect any 23 cm long piece of wire to the antenna pin.
- The kit has a dual-frequency RF module with frequencies of 315/434 MHz.
- The encoder IC encodes the parallel port data and sends it to the RF transmitter module for wireless transmission.
- These ICs are capable of encoding information consisting of N address bits and (12 - N) data bits.
- The HT12E encoder IC has 8 address bits and 4 data bits.
- A DIP switch can be used to set or unset the address bits A0-A7.
ž The decoder IC decodes the data received by the RF receiver module and presents it as parallel data ž It decodes information consisting of N address bits and (12-N) data bits
ž The HT12D decoder IC has 8 address bits and 4 data bits ž A DIP switch can be used to set or unset the address bits A0-A7; the decoder accepts data only when its address matches the transmitted address
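The address/data split above can be sketched in plain C++. This is a minimal illustration of the 8-address-bit / 4-data-bit idea only; the real HT12E/HT12D chips add sync and timing framing on the wire, and the function names here are ours, not from any library:

```cpp
#include <cstdint>

// Pack 8 address bits and 4 data bits into one 12-bit word,
// mimicking the HT12 family's N address / (12-N) data split with N = 8.
// (Illustrative sketch only: the real chip also adds sync/timing framing.)
uint16_t ht12_pack(uint8_t address, uint8_t data4) {
    return (static_cast<uint16_t>(address) << 4) | (data4 & 0x0F);
}

// The receiving side (HT12D) only accepts the data if the address matches
// the address set on its own DIP switch.
bool ht12_unpack(uint16_t word, uint8_t myAddress, uint8_t* data4) {
    if (static_cast<uint8_t>(word >> 4) != myAddress) return false;
    *data4 = word & 0x0F;
    return true;
}
```

This is why two bots with different DIP-switch settings can share the same 315 MHz channel: a frame addressed to one is silently rejected by the other.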
ž In serial communication, data is transferred serially, i.e. packets are sent one after the other through a single line
ž In parallel communication, data is transferred through several data pins at the same time ž Communication is quite fast ž Found on old printer ports
ž A parallel port is faster than a serial port, since a mass of data can be transmitted at the same time ž Neither port is common on modern laptops, though desktops and older laptops still have them
[Figure: direct output from the parallel port vs. output from the motor driver]
ž All this sampling and quantization adds extra noise to the image! ž Noise can be reduced by › Using hardware › Using software: filters
ž Real images contain large amounts of external disturbance, due to factors like changing lighting and other real-time effects ž Pre-processing improves the quality of a captured image and makes it easier to process
ž Filtering is the first step in most IP applications ž It is used to remove noise from the input image, to remove motion blur, or to enhance the edges of an image to make it appear sharper
ž The averaging filter denoises images by taking the average over a number of snapshots of the same scene ž It eliminates noise because different snaps of the same scene carry different noise patterns, which cancel out in the average
ž This is extremely useful for satellites and telescopes taking intergalactic photographs ž Those images are extremely faint, and there is more noise than signal ž Millions of exposures are taken and averaged to get a clear picture
ž The mean filter softens an image by averaging surrounding pixel values ž Center pixel = (22+77+48+150+77+158+0+77+219)/9
ž The center pixel would change from 77 to 92, the mean of its 3x3 neighborhood ž This filter is often used to smooth images prior to processing ž It can reduce pixel flicker due to overhead fluorescent lights
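A minimal sketch of the 3x3 mean filter in plain C++ (no OpenCV; the function name and row-major layout are our assumptions):

```cpp
#include <vector>

// 3x3 mean filter on a grayscale image stored row-major in a vector.
// Each interior pixel becomes the average of its 3x3 neighborhood
// (including itself); border pixels are left unchanged for simplicity.
std::vector<int> meanFilter3x3(const std::vector<int>& img, int w, int h) {
    std::vector<int> out = img;
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++) {
            int sum = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    sum += img[(y + dy) * w + (x + dx)];
            out[y * w + x] = sum / 9;   // average of the 9 neighbours
        }
    return out;
}
```

Running this on the slide's 3x3 example turns the center 77 into 92, matching the hand calculation above.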
ž The median filter replaces each pixel value by the median of its neighbors, i.e. the value such that 50% of the neighborhood values lie above it and 50% below ž This can be costly to implement, because the neighborhood values must be sorted ž However, this method is generally very good at preserving edges
ž Its performance is particularly good at removing shot ("salt-and-pepper") noise ž The median is calculated by first sorting all the pixel values from the surrounding neighborhood into numerical order and then replacing the pixel being considered with the middle value ž If the neighborhood under consideration contains an even number of pixels, the average of the two middle pixel values is used
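The sort-and-take-the-middle procedure can be sketched directly (again plain C++, names ours):

```cpp
#include <vector>
#include <algorithm>

// 3x3 median filter: collect the 9 neighbourhood values, sort them,
// and take the middle one. Border pixels are copied unchanged.
std::vector<int> medianFilter3x3(const std::vector<int>& img, int w, int h) {
    std::vector<int> out = img;
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++) {
            int nb[9], k = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    nb[k++] = img[(y + dy) * w + (x + dx)];
            std::sort(nb, nb + 9);
            out[y * w + x] = nb[4];   // middle of 9 sorted values
        }
    return out;
}
```

Note how a lone bright "shot noise" pixel surrounded by dark pixels simply disappears: the median of the neighborhood ignores the single outlier, which is exactly why this filter preserves edges better than the mean.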
ž Gaussian smoothing is used to blur images and remove detail and noise ž It outputs a weighted average of each pixel's neighborhood, with the average weighted more towards the value of the central pixels
ž A Gaussian therefore provides gentler smoothing and preserves edges better than a similarly sized mean filter ž [Figure: before blurring / after blurring]
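A common discrete approximation of a small Gaussian is the 1-2-1 kernel; here is a sketch of that weighted average (the kernel choice and function name are our assumptions, not from the slides):

```cpp
#include <vector>

// 3x3 Gaussian smoothing with the common kernel
//   1 2 1
//   2 4 2   (sum = 16)
//   1 2 1
// which weights the centre pixel most heavily, unlike the uniform mean.
std::vector<int> gauss3x3(const std::vector<int>& img, int w, int h) {
    static const int k[3][3] = {{1,2,1},{2,4,2},{1,2,1}};
    std::vector<int> out = img;
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++) {
            int sum = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    sum += k[dy + 1][dx + 1] * img[(y + dy) * w + (x + dx)];
            out[y * w + x] = sum / 16;   // normalise by the kernel sum
        }
    return out;
}
```

Because the centre carries weight 4/16 rather than 1/9, a pixel keeps more of its own value, which is the "gentler smoothing" the slide describes.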
ž Histogram equalization is very useful for contrast enhancement, especially to eliminate noise due to changing lighting conditions ž It transforms the values in an intensity image so that the histogram of the output image approximately matches a specified histogram
ž In MATLAB, the imfilter function applies different kinds of filters ž B = imfilter(A,H,'option') filters the multidimensional array A with the multidimensional filter H ž The array A can be a nonsparse numeric array of any class and dimension ž The result B has the same size and class as A
ž Among the options in imfilter, convolution is the same as correlation except that the filter matrix H is rotated by 180 degrees before being applied
ž fspecial is used to create predefined filters ž h = fspecial(type); ž fspecial returns h as a correlation kernel, which is the appropriate form to use with imfilter
ž Things aren't as simple in OpenCV as they were in MATLAB ž C/C++ needs a bit of syntax and formalities
ž Blur: the plain, simple Photoshop-style blur ž Gaussian: the best result (preserved edges and smoothed out noise) ž Median: nothing special ž Bilateral: got rid of some noise, but preserved edges to a greater extent
ž There is no built-in function for image averaging ž So, we'll code it ourselves ž This will be a good exercise for getting better at OpenCV
#include "cv.h"
#include "highgui.h"

int main()
{
    // Hold the R, G and B channels separately for each of the 25 images
    IplImage* imgRed[25];
    IplImage* imgGreen[25];
    IplImage* imgBlue[25];
ž We created 75 grayscale images: 25 for red, 25 for green and 25 for blue ž Loaded the 25 color images in a loop ž Split each image and stored its channels in the appropriate grayscale images
for(int y=0; y<size.height; y++)
{
    for(int x=0; x<size.width; x++)
    {
        int theSumRed=0;
        int theSumGreen=0;
        int theSumBlue=0;

        // To figure out the average, we first accumulate the sum over all 25 images
        for(int i=0; i<25; i++)
        {
            theSumRed   += cvGetReal2D(imgRed[i],   y, x);
            theSumGreen += cvGetReal2D(imgGreen[i], y, x);
            theSumBlue  += cvGetReal2D(imgBlue[i],  y, x);
        }

        // Once we have the sum, we divide by 25 and set the corresponding pixels
        cvSetReal2D(imgResultRed,   y, x, theSumRed/25.0);
        cvSetReal2D(imgResultGreen, y, x, theSumGreen/25.0);
        cvSetReal2D(imgResultBlue,  y, x, theSumBlue/25.0);
    }
}
ž cvLoadImage always loads images as BGR ž cvSplit to get the individual channels ž cvMerge to combine individual channels into a color image
ž IplImage to store any image in OpenCV ž cvCreateImage to allocate memory ž cvReleaseImage to free an image from RAM
ž cvWaitKey to wait for a keypress for a certain number of milliseconds ž cvNamedWindow to create a window ž cvShowImage to show an image in a window
ž cvGetReal2D to get the value of a pixel in a grayscale image ž cvSetReal2D to set the value of a pixel ž CvSize to store an image's size
ž Morphological processing extracts image components that are useful for representing an image for some particular purpose ž The basic morphological operations are: › Dilation › Erosion
ž Dilation grows or thickens objects in a binary image ž The specific manner of thickening is controlled by a shape referred to as the "structuring element"
ž Erosion shrinks or thins objects in a binary image ž The manner of shrinkage is likewise controlled by the structuring element
ž In practical image processing, dilation and erosion are performed in various combinations ž An image can undergo a series of dilations and erosions using the same or different structuring elements ž Two common kinds: › Morphological opening › Morphological closing
ž Opening is basically one erosion followed by one dilation with the same structuring element ž It is used to smooth object contours, break thin connections and remove thin protrusions
ž Closing is basically one dilation followed by one erosion with the same structuring element ž Like opening, it smooths object contours ž But unlike opening, it generally joins narrow breaks, fills long thin gulfs and fills holes smaller than the structuring element
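The two primitives and their composition can be sketched in a few lines of plain C++ on a 0/1 image (a 3x3 square structuring element is assumed; borders are treated as background; function names are ours):

```cpp
#include <vector>

// Binary erosion and dilation with a 3x3 square structuring element.
// img holds 0/1 pixels row-major; out-of-bounds neighbours count as 0.
std::vector<int> morph3x3(const std::vector<int>& img, int w, int h, bool dilate) {
    std::vector<int> out(img.size(), 0);
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            bool any = false, all = true;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    int yy = y + dy, xx = x + dx;
                    int v = (yy >= 0 && yy < h && xx >= 0 && xx < w) ? img[yy * w + xx] : 0;
                    any = any || v;   // dilation: any neighbour set
                    all = all && v;   // erosion: all neighbours set
                }
            out[y * w + x] = dilate ? (any ? 1 : 0) : (all ? 1 : 0);
        }
    return out;
}

// Opening = erosion followed by dilation with the same structuring element.
std::vector<int> open3x3(const std::vector<int>& img, int w, int h) {
    return morph3x3(morph3x3(img, w, h, false), w, h, true);
}
```

Opening deletes any blob smaller than the structuring element (the erosion wipes it out, so the dilation has nothing to restore), while blobs at least as large as the element survive roughly intact.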
ž Dilation in MATLAB is done using the following command: ž >>bw2=imdilate(bw,st)
ž Erosion in MATLAB is done using: ž >>bw2=imerode(bw,st)
ž Opening in MATLAB is done using: ž >>bw2=imopen(bw,st)
ž Closing in MATLAB is done using: ž >>bw2=imclose(bw,st)
ž In each case bw is the original image and st is the structuring element
ž By default (passing NULL for the structuring element), OpenCV uses a 3x3 structuring element ž You can explicitly specify your own structuring element as well ž Check the OpenCV documentation for more information
ž Computers can manipulate images very efficiently ž But comprehending an image with millions of colors is tough ž Solution: figure out interesting regions, and process only those
ž Each pixel is checked for its value ž If it lies within a range, it is marked as "interesting" (made white) ž Otherwise, it is made black ž Figuring out the range depends on lighting, color, texture, etc.
ž MATLAB provides a facility to execute multiple command statements with a single command, by writing a .m file ž Go to File > New > M-file ž For example, the graythresh function can be written manually as an m-file
ž Observe that comments (shown in green) are written after the symbol '%'; a commented statement is not considered for execution ž M-files are a very handy utility for writing lengthy programs, and can be saved and edited as and when required ž We shall now see how to define your own functions in MATLAB
ž Functions help in writing organized code with minimum repetition of logic ž Instead of rewriting the instruction set every time, you can define a function ž Syntax: create an m-file whose topmost statement is the function header ž function [return values] = function-name(arguments)
ž The built-in graythresh function in MATLAB is used for thresholding grayscale images ž It uses Otsu's method of thresholding ž A sample thresholding operation is shown on the next slide
ž Thresholding of a grayscale image can be done in MATLAB using the following commands: ž >> level=graythresh(imGRAY); ž >> imBW = im2bw(imGRAY,level); ž >> imview(imBW);
ž The graythresh command gives an idea of what exactly the threshold value should be ž graythresh returns a value in the range 0-1 ž This is the threshold level, obtained by Otsu's method of thresholding
ž This level can be converted into a pixel value by multiplying by 255 ž Say level = 0.4 ž Then the threshold value for the grayscale image is 0.4 x 255 = 102
ž This indicates that for the given image, values up to 102 are converted to 0 and values from 103 to 255 to the value 1 ž The conversion from grayscale to binary is done using: ž >>imBW = im2bw(imGRAY,level);
ž Here level is the threshold level obtained from the graythresh function ž im2bw converts pixel intensities from 0 to level to zero intensity (black) and intensities above level to maximum (white)
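For the curious, Otsu's method itself is compact enough to sketch: it tries every threshold and keeps the one that best separates the histogram into two classes. This plain C++ version (name ours) mirrors the idea behind graythresh, which then returns the result scaled to the 0-1 range:

```cpp
#include <vector>

// Otsu's method: pick the threshold t that maximises the between-class
// variance  wB * wF * (meanB - meanF)^2  of the two resulting classes.
// Pixels <= t are "background", pixels > t are "foreground".
int otsuThreshold(const std::vector<int>& img) {
    int hist[256] = {0};
    for (int v : img) hist[v]++;
    int total = (int)img.size();
    long long sumAll = 0;
    for (int i = 0; i < 256; i++) sumAll += (long long)i * hist[i];

    long long sumB = 0; int wB = 0; double best = -1.0; int bestT = 0;
    for (int t = 0; t < 256; t++) {
        wB += hist[t];                 // background pixel count
        if (wB == 0) continue;
        int wF = total - wB;           // foreground pixel count
        if (wF == 0) break;
        sumB += (long long)t * hist[t];
        double mB = (double)sumB / wB;
        double mF = (double)(sumAll - sumB) / wF;
        double between = (double)wB * wF * (mB - mF) * (mB - mF);
        if (between > best) { best = between; bestT = t; }
    }
    return bestT;
}
```

On a bimodal image (say a dark floor and bright markers), the returned threshold lands between the two intensity clusters, which is exactly the level graythresh hands to im2bw.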
ž To threshold an RGB colour image using the graythresh function, the following have to be done: › Split the RGB image into its 3 grayscale components › Subtract each component from the other 2 to get the pure colour intensities › Find the level for each grayscale component using graythresh › Threshold the image using im2bw and the level
ž A manually designed thresh_tool function can be used to adjust the levels as required, and to get a feel for how the levels vary
s=size(im);
temp=im;
thresh=128;
for i=1:s(1,1)
    for j=1:s(1,2)
        if temp(i,j)<thresh
            temp(i,j)=0;
        else
            temp(i,j)=255;
        end
    end
end
imview(temp);
ž Split the HSV image into its components and threshold the Hue channel for different values ž Since the hue value of a single colour is constant, it is relatively simple to threshold and gives better accuracy
function [temp] = ht(im,level1,level2)
s=size(im);
temp=im;
for i=1:s(1,1)
    for j=1:s(1,2)
        if (temp(i,j)<level2 && temp(i,j)>level1)
            temp(i,j)=1;
        else
            temp(i,j)=0;
        end
    end
end
imview(temp);
ž This function takes the upper and lower bounds of the threshold levels as input arguments ž These levels can be obtained by looking at the range of hue values for the particular colour
ž After thresholding, we get a binary image ž We want usable information like centers, outlines, etc. ž These geometrical properties can be found using many methods; we'll talk about moments and contours only
ž Consider xorder=0 and yorder=0 for a binary image ž You're just summing up pixel values ž This means you're calculating the area of the white pixels
ž Now consider xorder=1 and yorder=0 for a binary image ž You sum the x coordinates of only those pixels which are white ž So you're calculating the numerator of an average
ž The number of white pixels is the area of the blob ž So dividing this moment (xorder=1, yorder=0) by the earlier one (xorder=0, yorder=0) gives the average x ž This is the x coordinate of the centroid of the blob
ž Similarly, xorder=0 and yorder=1 gives the y coordinate
ž The order of a moment = xorder + yorder ž So the area is a zero-order moment ž A centroid coordinate = a first-order moment / the zero-order moment
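The three moments described above fit in one pass over the image; here is a plain C++ sketch (struct and function names ours):

```cpp
#include <vector>

// Raw moments of a binary image: m00 (area), m10 (sum of white x's)
// and m01 (sum of white y's). The centroid is (m10/m00, m01/m00),
// exactly as described in the slides.
struct Moments { long long m00, m10, m01; };

Moments rawMoments(const std::vector<int>& img, int w, int h) {
    Moments m = {0, 0, 0};
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (img[y * w + x]) {   // count only white pixels
                m.m00++;
                m.m10 += x;
                m.m01 += y;
            }
    return m;
}
```

For a robot-tracking application this is usually all you need: threshold the marker colour, compute the moments, and (m10/m00, m01/m00) is where the bot is.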
ž There are entire books written on this topic ž You can find complex geometrical properties, like the eccentricity of an ellipse, the radius of curvature of objects, etc. ž Also check out Hu invariants if you're interested
ž Connected components are pixels of an image that are connected to each other, forming separate blobs ž They can be separated out and labelled
ž >>L = bwlabel(BW,n) ž Returns a matrix L, of the same size as BW, containing labels for the connected objects in BW ž n can have a value of either 4 or 8, where 4 specifies 4-connected objects and 8 specifies 8-connected objects; if the argument is omitted, it defaults to 8
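The idea behind bwlabel can be sketched as an iterative flood fill; this plain C++ version (names ours) implements the 4-connected case, i.e. bwlabel(BW,4):

```cpp
#include <vector>

// 4-connected component labelling by iterative flood fill.
// Returns the number of blobs found; labels (1, 2, ...) are written
// into the labels array, 0 meaning background -- like bwlabel's L matrix.
int labelComponents(const std::vector<int>& bw, int w, int h,
                    std::vector<int>& labels) {
    labels.assign(bw.size(), 0);
    int next = 0;
    for (int i = 0; i < w * h; i++) {
        if (!bw[i] || labels[i]) continue;   // background or already labelled
        labels[i] = ++next;                  // start a new blob
        std::vector<int> stack(1, i);
        while (!stack.empty()) {
            int p = stack.back(); stack.pop_back();
            int x = p % w, y = p / w;
            const int nx[4] = {x - 1, x + 1, x, x};
            const int ny[4] = {y, y, y - 1, y + 1};
            for (int k = 0; k < 4; k++) {
                if (nx[k] < 0 || nx[k] >= w || ny[k] < 0 || ny[k] >= h) continue;
                int q = ny[k] * w + nx[k];
                if (bw[q] && !labels[q]) { labels[q] = next; stack.push_back(q); }
            }
        }
    }
    return next;
}
```

Extending this to 8-connectivity (bwlabel's default) just means adding the four diagonal neighbours to the nx/ny lists.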
ž STATS = regionprops(L,properties) ž Measures a set of properties for each labeled region in the label matrix L ž The elements of L equal to 1 correspond to region 1, those equal to 2 to region 2, and so on
ž 'Area' - the actual number of pixels in the region ž 'Centroid' - the center of mass of the region; note that the first element of Centroid is the horizontal (x) coordinate and the second element is the vertical (y) coordinate ž 'Orientation' - scalar; the angle (in degrees) between the x-axis and the major axis of the ellipse that has the same second moments as the region (supported only for 2-D label matrices)
ž bwareaopen (binary area open) removes small objects ž BW2 = bwareaopen(BW,P) ž Removes from a binary image all connected components (objects) that have fewer than P pixels, producing another binary image BW2
ž For robotics purposes, moments are fine as long as there is one single object ž If we have multiple objects in the same binary image, we need something more: contours
ž You can think of contours as an approximation of a binary image
ž The output you get for the previous binary image is: › Four "chains" of points › Each chain can have any number of points › In our case, each chain has four points
ž A contour plot of an image im can be made in MATLAB using: ž im = imread('img.jpg'); ž imcontour(im,level) ž level = the number of equally spaced contour levels ž If level is not given, it is chosen automatically
ž OpenCV uses linked lists to store the "chains" ž We'll see some code to find the squares in the thresholded image you saw
cvFindContours(img, storage, &contours, sizeof(CvContour), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0)); • img is a grayscale thresholded image • storage is for temporary storage • All chains found are stored in the contours sequence • The remaining parameters are usually kept at these values; check the OpenCV documentation for detailed information about them
while(contours) { result = cvApproxPoly(contours, sizeof(CvContour), storage, CV_POLY_APPROX_DP, cvContourPerimeter(contours)*0.02, 0); • The while loop walks the chains, starting from the first one • cvApproxPoly approximates the current contour; after this command, result stores the approximate contour as a polygon (a list of points)
if(result->total==4) { CvPoint *pt[4]; for(int i=0;i<4;i++) pt[i] = (CvPoint*)cvGetSeqElem(result, i); } • We're looking for quadrilaterals, so we check whether the number of points in this particular polygon is 4 • Then we extract each point using cvGetSeqElem • Once you have the points, you can also check the shape of the object (by checking angles, lengths, etc.)
// Do whatever you want with the 4 points contours = contours->h_next; } • Do whatever you want with the four points • Then move on to processing the next contour
ž MATLAB has an Image Acquisition Toolbox that helps capture images ž Nowadays most cameras come with a USB interface ž Once you install the driver for the camera, the computer detects the device whenever you connect it
ž In MATLAB, you can check whether support is available for your camera ž MATLAB has built-in adaptors for accessing these devices ž An adaptor is the software MATLAB uses to communicate with an image acquisition device
ž To get more information about the device, type: ž >>dev_info = imaqhwinfo('winvideo',1) ž If imaqhwinfo shows an adaptor other than 'winvideo', type that adaptor name instead
ž You can preview the video captured by the camera by defining an object and associating it with the device: ž >>vid=videoinput('winvideo',1,'RGB24_320x240')
ž Now, to see the video, enter the following command: ž >> preview(vid)
ž You should see a window pop up that displays what your camera is capturing
ž The camera may support multiple video formats ž To see all the supported formats for yourself, type: ž >>dev_info = imaqhwinfo('winvideo',1); ž >>celldisp(dev_info.SupportedFormats); ž Check out the display of other formats by replacing 'RGB24_320x240' with another format in the definition of the object vid
ž To capture an image from the video, define the object vid as described before and use getdata to capture a frame: ž >>start(vid); % initiates capturing of frames and stores them in memory ž >>im=getdata(vid,1); ž >>imview(im);
ž You can store the captured image as a .jpg or .gif file using the imwrite function: ž >>imwrite(im,'testimage.gif'); ž The image will be stored in the 'MATLAB71\work' folder
ž Take a single picture of the arena and process it ž Find the critical regions and points ž Apply some geometry and mathematical calculations ž Then blindly follow the computed path
ž Advantages › Simplest to implement › Can be fairly accurate with stepper motors ž Disadvantages › The bot goes blind, because a single picture determines all of its motion › Accuracy is very low, especially with DC motors
ž Take images of the arena at discrete intervals (e.g. one image every 10 seconds) ž Process the images and find the critical regions and points ž Check the bot's orientation at these intervals and correct it ž A partial feedback mechanism is implemented
ž Advantages › Quite simple to implement › Can be very accurate with stepper motors ž Disadvantages › The bot goes blind for a period of time between images › Accuracy is compromised with DC motors › Dynamic environmental changes cannot be accounted for
ž Take images of the arena continuously at a particular frame rate ž Process the images and find the critical regions and points ž Check the bot's orientation at every frame and correct it ž A complete feedback mechanism is implemented
ž Advantages › Very accurate for both DC and stepper motors › Gives dynamic feedback and accounts for a changing environment › Can give the bot's orientation at each point in time ž Disadvantages › Requires more processing power › Requires more memory for so many images
ž Real-time image processing is the best approach for any real application ž Dynamic feedback systems give excellent accuracy and precision
ž Image processing is an important tool in many applications ž The catch is that one needs to acquire images and pre-process them before doing the actual IP ž Sometimes offline image processing is not possible, i.e. one needs real-time IP, or even processing of the video itself
ž Normally, every time you want to capture an instantaneous image, you have to stop the video, start it again and use the getdata function ž To avoid these repetitive actions, the Image Acquisition Toolbox provides an option to trigger the video object when required and capture an instantaneous frame
ž In the above code, the object im gets overwritten on each iteration of the for loop ž To be able to see all five images, replace im with im(:,:,:,i)
(c) 2009-2010 Electronics & Robotics Club, BITS-Pilani, Goa | Ajusal Sugathan & Utkarsh Sinha ž triggerconfig sets the object to manual triggering, since its default triggering is of type immediate ž In immediate triggering, the video is captured as soon as you start the object ‘vid’ ž The captured frames are stored in memory. Getdata function can be used to access these frames ž But in manual triggering, you get the image only when you ‘trigger’ the video
• ‘FramesPerTrigger’ decides the number of frames you want to capture each time ‘trigger’ is executed
• ‘TriggerRepeat’ has to be either equal to the number of frames you want to process in your program, or set to Inf
• If set to a positive integer, you will have to ‘start’ the video capture again after ‘trigger’ has been used that many times
• Once you are done acquiring frames and have stored the images, you can stop the video capture and clear the stored frames from the memory buffer, using the following commands:
>> stop(vid);
>> delete(vid);
>> clear vid;
• The getsnapshot function returns one image frame and is independent of the FramesPerTrigger property
• So if you want to process your images in real-time, this is all you need:
vid = videoinput('winvideo', 1);
triggerconfig(vid, 'manual');
set(vid, 'FramesPerTrigger', 1);
set(vid, 'TriggerRepeat', Inf);
start(vid);
while(1)
    trigger(vid);
    im = getdata(vid, 1);
    % write your image processing algorithm here
    % you may break this infinite while loop if a certain condition is met
end
• MATLAB provides support to access the serial port (also called the COM port) and the parallel port (also called the printer port or LPT port) of a PC
• MATLAB has an adaptor to access the parallel port (similar to the adaptor for image acquisition)
• To access the parallel port in MATLAB, define an object
>> parport = digitalio('parallel','LPT1');
• You may obtain the port address using
>> get(parport,'PortAddress')
>> daqhwinfo('parallel'); % To get data acquisition hardware information
• You have to define pins 2-9 as output pins, using the addline function
>> addline(parport, 0:7, 'out')
• Now put the data which you want to output to the parallel port into a matrix, e.g.
>> dataout = logical([1 0 1 0 1 0 1 1]);
• Now to output these values, use the putvalue function
>> putvalue(parport,dataout);
• Alternatively, you can write the decimal equivalent of the binary data and output it
>> data = 23;
>> putvalue(parport,data);
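A note on that decimal form: if I recall the toolbox convention correctly, putvalue treats the first line as the least significant bit, so the vector [1 0 1 0 1 0 1 1] above corresponds to decimal 213, while 23 corresponds to [1 1 1 0 1 0 0 0]. A sketch of the conversion in C++ (the helper name is mine, not part of any toolbox):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical helper: convert an LSB-first line vector (the form
// putvalue accepts) to its decimal equivalent. bits[0] drives the
// lowest line, i.e. pin 2 of the parallel port.
std::uint8_t bitsToDecimal(const std::vector<int>& bits) {
    std::uint8_t value = 0;
    for (std::size_t i = 0; i < bits.size() && i < 8; ++i)
        if (bits[i])
            value |= static_cast<std::uint8_t>(1u << i);
    return value;
}
```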
• You can connect the pins of the parallel port to the driver IC for the left and right motors of your robot, and control the left, right, forward and backward motion of the vehicle
• You will need an H-bridge for driving the motors in both clockwise and anti-clockwise directions
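As a sketch of that mapping, here is one hypothetical wiring (yours will differ): bits 0-1 of the port byte drive the left motor's two H-bridge inputs and bits 2-3 drive the right motor's, with 01 meaning forward and 10 meaning reverse.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical wiring: bits 0-1 = left motor H-bridge inputs,
// bits 2-3 = right motor H-bridge inputs; 01 = forward, 10 = reverse.
enum class Motion { Forward, Backward, Left, Right, Stop };

std::uint8_t motionByte(Motion m) {
    switch (m) {
        case Motion::Forward:  return 0x5; // 0101: both motors forward
        case Motion::Backward: return 0xA; // 1010: both motors reverse
        case Motion::Left:     return 0x6; // 0110: left reverse, right forward
        case Motion::Right:    return 0x9; // 1001: left forward, right reverse
        default:               return 0x0; // 0000: both motors off
    }
}
```

The byte returned here is what you would hand to putvalue (or to the write function in the C/C++ section that follows).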
• In C/C++, things are a little more involved
• There is a library called inpout32 that makes your task really simple
• Just follow the instructions that come along with it, and you’ll be sending data to your robot!
• Or you could use the following code
• The idea is:
› Create a virtual file that “represents” the port itself (parallel, serial, etc.)
› Keep this file open
› And keep writing to this file
› So data is automatically sent to the desired port
• Step 2: Create a function to create the virtual file, and store its “handle” in hPort (contd)

bool SerialOpen(LPCWSTR strPort)
{
    // Open the serial port.
    hPort = CreateFile(strPort,       // Pointer to the name of the port
        GENERIC_READ | GENERIC_WRITE, // Access (read-write) mode
        0,                            // Share mode
        NULL,                         // Pointer to the security attribute
        OPEN_EXISTING,                // How to open the serial port
        0,                            // Port attributes
        NULL);                        // Handle to a template file

    DCB PortDCB;
    DWORD dwError;

    // Initialize the DCBlength member.
    PortDCB.DCBlength = sizeof(DCB);

    // Get the default port setting information.
    GetCommState(hPort, &PortDCB);
• Step 2: And finally…

    // Configure the port according to the specifications of the DCB structure.
    if (!SetCommState(hPort, &PortDCB))
    {
        // Could not configure the serial port.
        dwError = GetLastError();
        printf("Serial port creation error: %lu", dwError);
        MessageBox(NULL, L"Unable to configure the serial port", L"Error", MB_OK);
        return false;
    }

    return true;
}
• This function works equally well for both serial ports and parallel ports
• Step 2: I’ve assumed you’re creating a function
• This function returns true when the port is created successfully
• If you’re not, replace the “return” statements with “printf”s
• Example usage of this function
› SerialOpen(L"COM8:"); // Serial port
› SerialOpen(L"LPT1:"); // Parallel port
• (SerialOpen returns a bool and stores the handle in the global hPort)
• This is actually how you can access ports in DOS as well… using COM8: and LPT1: instead of C:, D:, etc.
• The L before the quotes is just C/C++ syntax for a wide-character string literal
• Step 3: Next, we’ll write a function to write to the virtual file we’ve created

bool SerialWrite(byte theByte)
{
    // The port wasn't opened
    if (!hPort)
        return false;

    DWORD dwNumBytesWritten;
    WriteFile(hPort,            // Port handle
        &theByte,               // Pointer to the data to write
        1,                      // Number of bytes to write
        &dwNumBytesWritten,     // Pointer to the number of bytes written
        NULL);                  // Must be NULL (no overlapped I/O)

    return true;
}
• You pass the byte you want to write as a parameter, and it gets written to the port
• Example: SerialWrite(12)
• Again, I’ll emphasize that these functions will work equally well for both parallel ports and serial ports
• A critical part of image segmentation
• Used for detecting meaningful discontinuities in intensity values
• Done by finding first and second order derivatives of the image
• The first-order derivative is also known as the gradient of the image
• There are different kinds of edge detectors, based on the derivatives they use
• An edge detector can be sensitive to horizontal lines, vertical lines, or both
• In detection, we try to find the regions where the derivative or gradient is greater than a specified threshold
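In its simplest form, a first-order edge detector marks the pixels where the difference between neighbouring intensities exceeds the threshold. A one-row sketch (this is only the idea, not what MATLAB's edge function does internally):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Mark pixels of one image row where the first-order horizontal
// difference |f(x) - f(x-1)| exceeds the threshold t.
std::vector<int> edgeRow(const std::vector<int>& row, int t) {
    std::vector<int> e(row.size(), 0);
    for (std::size_t x = 1; x < row.size(); ++x)
        if (std::abs(row[x] - row[x - 1]) > t)
            e[x] = 1;
    return e;
}
```

Real detectors such as Sobel apply a 2-D version of this with smoothing built into the derivative kernel.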
• A threshold value can be given manually as an argument
• g = edge(im, 'method', t);
• t - threshold
• Sobel and Canny are the most frequently used edge detectors
• The idea of an orientation tag is to indicate the orientation of the robot precisely
• It should do so with the least number of operations
• This orientation tag is bad
• You can tell the position, but not the direction the robot is facing
• But if you have a team of bots, this won’t be the best choice
• You need multiple colors for each bot
• So you have more operations, slowing down the program
• A better choice for a team of bots
• The asymmetry helps distinguish between multiple bots, with just two colors
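One way to turn the two colour blobs into a heading (a sketch; the Point struct and function name are mine): take atan2 of the vector from the rear-colour centroid to the front-colour centroid.

```cpp
#include <cassert>
#include <cmath>

struct Point { double x, y; };

// Heading of the bot in degrees, in [0, 360), from the centroids of
// the front and rear colours of the tag (standard math convention:
// 0 = +x axis, angles increasing counter-clockwise).
double headingDegrees(Point front, Point rear) {
    const double pi = std::acos(-1.0);
    double deg = std::atan2(front.y - rear.y, front.x - rear.x) * 180.0 / pi;
    if (deg < 0.0)
        deg += 360.0;
    return deg;
}
```

Two colour searches and one atan2 call, so the operation count stays low even with several bots in the frame.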
• With C/C++ you get two methods to capture images:
› Using OpenCV’s built-in library
› Using some third-party capturing library
• Tries to get access to cam #index
• Useful when you have multiple cameras attached to the same machine (like in stereo vision)
• Then we check whether we were able to get exclusive control of the camera; if not, quit
• This window is for our convenience
• It displays what’s going on within the program, what decisions are taken, etc.
• We’ll create a function to take snapshots
• It will use the CvCapture structure to tap into the camera’s stream
• It will return a single frame as an IplImage structure
• Scroll down, and add the function
• Now, we’ll display a live stream in the window we had created
• Because the getSnapshot() function returns a single frame, we need to take snaps regularly
• So it goes into the do…while loop
• If you try compiling the program right now, you’ll get an error
• getSnapshot() comes “after” the main function
• So the compiler doesn’t know it exists
• Hence we need the so-called “prototype”
• cvCaptureFromCAM works only for some supported cameras
• DirectX is a much bigger library and supports almost all cameras in existence
• These are external libraries, so we need to tell Visual Studio where to find the required external files
• We follow steps similar to the OpenCV ones
• The videoInput package comes with lots of sample code, so you shouldn’t have much of a problem
• You can check some sample code at this website as well:
› http://opencv.willowgarage.com/wiki/DirectShow
• For real time, you need to process captured frames as quickly as possible
• So we use a loop of some kind (usually a do…while)
• Within the loop, you do the following tasks:
• Quite obvious
• You need an image
• Only then can you process something
• Once you have the image, you need to enhance it with pre-processing
• Increase contrast, reduce noise, smooth it out, etc.
• With the pre-processed image, you figure out the location of each object in the arena
• Use moments, contours, or anything else
• Thresholding, morphology, etc. are helpful here
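For instance, the centroid of a thresholded blob comes straight from the zeroth and first order moments (cx = M10/M00, cy = M01/M00), which is what OpenCV's cvMoments gives you. A minimal stand-alone version on a binary image:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Centroid { double x, y; };

// Centroid of a binary image via image moments:
// M00 = blob area, M10 = sum of x, M01 = sum of y over blob pixels.
Centroid centroid(const std::vector<std::vector<int>>& bin) {
    double m00 = 0.0, m10 = 0.0, m01 = 0.0;
    for (std::size_t y = 0; y < bin.size(); ++y)
        for (std::size_t x = 0; x < bin[y].size(); ++x)
            if (bin[y][x]) {
                m00 += 1.0;
                m10 += x;
                m01 += y;
            }
    return { m10 / m00, m01 / m00 };
}
```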
• With the location of each object, you can decide what to do next
• Bot: Should I go to the red ball because it’s the closest? Or should I go to the green ball because it’s worth the maximum number of points?
• Once you’ve decided where to go, you make the robot move
• You must include feedback in this step itself (maybe another do…while loop)
• You have two options:
› Check if further movement is necessary (have all the balls been potted?) and only if required go back to Task 1
› Blindly go back to Task 1 (the decision module will tell the bot when to stop)
• Both options are good enough
• Task 5: Move the bot (with feedback)
• We won’t go into code, just high-level logic on how to go about it
• For the simplest feedback mechanism, you use pixels
• Everything is measured in terms of pixels: distances, coordinates, etc.
• Angles are usually represented in radians (cos, sin, etc. work well with radians)
• First, you need to decide on your coordinate system
• And you need to stick with it throughout your code
• Here’s a coordinate system I’ve used several times
• This is a top-down view of the arena, just like the camera sees it
• So all your calculations MUST use this particular coordinate system
• In particular, you must make sure all your angle calculations are consistent
• And that they wrap around from 359 back to 0 degrees correctly
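A small helper keeps every angle in [0, 360) so the wraparound is handled in one place (a sketch; the name is mine):

```cpp
#include <cassert>
#include <cmath>

// Normalize any angle in degrees into [0, 360).
double normalizeAngle(double deg) {
    deg = std::fmod(deg, 360.0);
    if (deg < 0.0)
        deg += 360.0;
    return deg;
}
```

Running every computed angle through this function keeps 361 and -30 from sneaking into comparisons as themselves.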
• TurnBotToAngle(angle)
› Turns the bot to angle degrees in the coordinate system
› This would be the most basic feedback function
› Within this function, you have a loop
› This loop keeps running as long as the bot isn’t oriented at angle degrees
› You’ll take snapshots within this function as well
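The shape of that loop, with a simulation standing in for the camera and the motors (everything here is illustrative; a real version reads the heading from a snapshot and sends a turn command to the port):

```cpp
#include <cassert>
#include <cmath>

// Feedback loop sketch: keep issuing "turn a bit" commands until the
// heading is within tol degrees of the target. The 5-degree step
// stands in for how far the simulated bot turns between two
// snapshots; it must be small relative to tol for the loop to settle.
double turnToAngle(double heading, double target, double tol) {
    while (std::fabs(heading - target) > tol)
        heading += (heading < target) ? 5.0 : -5.0; // motor command, then next snapshot
    return heading;
}
```

A real implementation would also use the wraparound-aware angle comparison, so the bot turns 2 degrees from 359 to 1 instead of 358 degrees the other way.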
• MoveBotToPosition(x,y)
› Moves the bot until it reaches the coordinates (x,y) in the image
› If required, you can also put a call to TurnBotToAngle (to orient the bot before moving)
› Again, there’s a loop which keeps running until the bot reaches the desired position
› And you’ll need to take multiple snapshots to check where the bot actually is
• You obviously need to set a range
• If you say TurnBotToAngle(30), it’s very unlikely that the bot will orient to exactly 30 degrees
• A range, say 28-32 degrees, should be good enough
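One subtlety in that range test: a plain difference misbehaves near the 359/0 boundary, so the check should use the shortest angular distance (a sketch, with a name I made up):

```cpp
#include <cassert>
#include <cmath>

// True if current is within tol degrees of target, taking the
// 359/0 wraparound into account (shortest angular distance).
bool withinAngleRange(double current, double target, double tol) {
    double diff = std::fabs(std::fmod(current - target, 360.0));
    if (diff > 180.0)
        diff = 360.0 - diff;
    return diff <= tol;
}
```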
• Reminder: all the x, y and angle values we’ve talked about are in the image, in terms of pixels
• We have no idea how they relate to physical distances and angles
• But we’re sure that they are proportional to physical distances and angles
• Though you CAN calibrate your camera and actually figure out physical distances
• For example, 5 pixels = 3 cm
• This is very much possible, but of no use for our purposes!
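If you ever did want it, the calibration is just a scale factor measured from one object of known physical size (the 5 px = 3 cm figure below is the slide's own example; the helper names are mine):

```cpp
#include <cassert>
#include <cmath>

// Scale factor from one reference object of known physical size.
double cmPerPixel(double knownCm, double measuredPx) {
    return knownCm / measuredPx;
}

// Convert any pixel measurement with that factor.
double toCm(double px, double scale) {
    return px * scale;
}
```

This assumes a fixed overhead camera; the factor changes if the camera height or zoom changes.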
• Use classes and structures as much as possible; you don’t need to know OOP to use them
• They really simplify your work, and even make the code more readable
• What we’ve described requires that the bot stop, and then check whether its angle is correct, etc.
• Try working on something which checks the bot’s angle without stopping the bot
• Working in the YUV space (this is what cameras use… so thresholding in YUV itself will eliminate the processing time consumed by the YUV to RGB conversion)
• Kalman filters: even if a bot is static, its position might be computed as slightly different in different frames
• If you use Kalman filters, you can “smooth out” the position data and get precise position and angle data
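A full Kalman filter is beyond these slides, but a minimal 1-D version shows the idea; the q and r tuning values in the test are ones I picked for illustration:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal 1-D Kalman filter: smooths noisy per-frame position
// readings of a (nearly) static bot. q = process noise variance,
// r = measurement noise variance.
class Kalman1D {
    double x = 0.0;   // current estimate
    double p = 1.0;   // estimate variance
    double q, r;
    bool init = false;
public:
    Kalman1D(double q_, double r_) : q(q_), r(r_) {}
    double update(double z) {
        if (!init) { x = z; init = true; return x; }
        p += q;                 // predict: uncertainty grows a little
        double k = p / (p + r); // Kalman gain
        x += k * (z - x);       // correct toward the measurement
        p *= (1.0 - k);
        return x;
    }
};

// Run a sequence of per-frame measurements through the filter.
double smoothed(const std::vector<double>& zs, double q, double r) {
    Kalman1D kf(q, r);
    double est = 0.0;
    for (double z : zs) est = kf.update(z);
    return est;
}
```

You would run one filter per coordinate (x, y, and angle) of each bot.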
• You now know enough to participate in image processing based competitions
• All this knowledge can even serve as a starting point for further studies in image processing
• Enjoy!