
GRAPHICIZER

A PROJECT REPORT

Submitted by
AJAY KUMAR (0834213003)
BRAJ KUMAR (0834213014)
VIKAS CHANDRA (0834213038)
VINAY KUMAR DUBYE (0834213039)

in partial fulfillment for the award of the degree of

Bachelor of Technology In
INFORMATION TECHNOLOGY

UNITED COLLEGE OF ENGINEERING AND MANAGEMENT

GB TECHNICAL UNIVERSITY, LUCKNOW


May, 2012

CERTIFICATE
This is to certify that this project report entitled Kiosk Control System by AJAY Kumar (0834213003), BRAJ Kumar (0834213014), VIKAS Chandra (0834213038), and VINAY Kumar Dubye (0834213039), submitted in partial fulfillment of the requirements for the degree of Bachelor of Technology in INFORMATION TECHNOLOGY of the GB Technical University, Lucknow, during the academic year 2011-12, is a bonafide record of work carried out under my guidance and supervision.

Mr. SURENDRA TRIPATHI, Lecturer, Deptt. of Computer Science, United College of Engineering and Management, Allahabad

ACKNOWLEDGEMENT
We would like to express our sincere gratitude to our project guide, Mr. Surendra Tripathi, for giving us the opportunity to work on this topic. It would never have been possible for us to take this project to this level without his innovative ideas and his relentless support and encouragement.
Ajay Kumar (0834213003)
Braj Kumar (0834213014)
Vikas Chandra (0834213038)
Vinay Kumar Dubye (0834213039)

ABSTRACT

Kiosk software is the system and user interface software designed for a kiosk or Internet kiosk. Kiosk software locks down the application in order to protect the kiosk from users. Kiosk software may offer remote monitoring to manage multiple kiosks from another location. Email or text alerts may be automatically sent from the kiosk for daily activity reports or generated in response to problems detected by the software. Other features allow for remote updates of the kiosk's content and the ability to upload data such as kiosk usage statistics. Kiosk software is used to manage a touchscreen, allowing users to touch the monitor screen to make selections. A virtual keyboard eliminates the need for a computer keyboard.

Contents

List of Figures
List of Tables
Chapter 1. Introduction
 1.1 Railway Reservation
 1.2 Airline Reservations
 1.3 Road Transport Booking System
 1.4 Hotel Management System
Chapter 2. Overview and Scope
Chapter 3. Definitions, Acronyms and Abbreviations
 3.1 Context Diagram
 3.2 Road Transport Booking System
 3.3 Railway Reservation
 3.4 Airline Reservations
 3.5 Hotel Management System
Chapter 4. Technologies to Be Used
 4.1 .NET Framework
 4.2 Design Features
 4.3 Architecture
 4.4 Languages
 4.5 Windows Forms
 4.6 ASP.NET
 4.7 ASP.NET Web Forms
 4.8 Web Services
 4.9 .NET Hierarchy, Another View
Chapter 5. Operations Performed by the Graphicizer
Chapter 6. Creating the Graphicizer Window
Chapter 7. Opening an Image File
Chapter 8. Saving an Image File
Chapter 9. Painting the Image
Chapter 10. The Graphicizer Window
Chapter 11. Embossing the Image
Chapter 12. Sharpening the Image
Chapter 13. Brightening the Image
Chapter 14. Blurring the Image
Chapter 15. Reducing the Image
Chapter 16. Magnifying the Image
Chapter 17. Eroding the Image
Chapter 18. Edge Detection in Images
 18.1 Edges
 18.2 Edge Detectors
Chapter 19. Image Negative
Chapter 20. Flipping an Image Vertically
Chapter 21. Flipping an Image Horizontally
Chapter 22. Rotation by 180 Degrees
Chapter 23. Future Scope of Enhancement
Chapter 24. Conclusion
Bibliography

Chapter 1

INTRODUCTION
Kiosks are a huge opportunity for online reservation. A kiosk is an interactive computerized unit in a reservation centre that combines software and hardware to provide internet access, information, transaction facilities, and many customer-activated services. Computer kiosks can store data locally, or they can retrieve data from a remote server over a network. Kiosks can be used to provide information, which is great when it comes to e-governance, for facilitating online transactions, or for collecting cash in exchange for commodities. Kiosk Management System is the online service that can be used in reservation centres to allow users to reserve various facilities. Here we take KSRTC, Airline, Railway, and Hotel reservation as our application. The application is developed using ASP.NET and C#; to make it more powerful, we have used SQL Server as the back end. In the case of bus, train, and flight ticket reservation, the user will have many options, like searching for seat availability, searching stations, checking reservation status, reserving or cancelling tickets, viewing fares, etc. In

Hotel Booking, accommodations in various hotels are provided for users. The Administrator has options to control the KMS functionalities and the users. Users may be of two types: authorized users and visitors. The Administrator provides a Login ID and Password to each user. Visitors are unauthorized users who can just view the details that are provided by the Administrator.

Kiosk software is the system and user interface software designed for a kiosk or Internet kiosk. Kiosk software locks down the application in order to protect the kiosk from users. Kiosk software may offer remote monitoring to manage multiple kiosks from another location. Email or text alerts may be automatically sent from the kiosk for daily activity reports or generated in response to problems detected by the software. Other features allow for remote updates of the kiosk's content and the ability to upload data such as kiosk usage statistics. Kiosk software is used to manage a touch screen, allowing users to touch the monitor screen to make selections. A virtual keyboard eliminates the need for a computer keyboard.

1.1 Railway Reservation
This project aims at the development of an Online Railway Reservation Utility which facilitates Railway customers in managing their reservations online, and Railway administrators in modifying the backend databases in a user-friendly manner. Customers are required to register on the server to get access to the database and query result retrieval. Upon registration, each user has an account which is essentially the view level for the customer. The account contains comprehensive information about the user entered during registration and permits the customer to get access to his past reservations, enquire about travel fares and availability of seats, make fresh reservations, update his account details, etc. The Railway Administrator is the second party in the transactions. The administrator is required to log in using a master password; once authenticated as an administrator,

one has access and the right of modification to all the information stored in the database at the server. This includes the account information of the customers, attributes and statistics of stations, descriptions of train stoppages and physical descriptions of coaches, all the reservations that have been made, etc.

1.2 Airline Reservations System
Airline reservation systems contain airline schedules, fare tariffs, passenger reservations, and ticket records. An airline's direct distribution works within its own reservation system, as well as pushing out information to the GDS. A second type of direct distribution channel is consumers who use the internet or mobile applications to make their own reservations. Travel agencies and other indirect distribution channels access the same GDS as those accessed by the airlines' reservation systems, and all messaging is transmitted by a standardized messaging system that functions on two types of messaging that transmit on SITA's HLN (high-level network). These message types are called Type B (TTY) for remarks-like communications and Type A (EDIFACT) for secured information. Message construction standards are set by IATA and ICAO, are global, and apply to more than air transportation. Since airline reservation systems are business-critical applications, and their functionality is quite complex, the operation of an in-house airline reservation system is relatively expensive.

1.3 Road Transport Booking System
People can avail themselves of bus transportation services from the comfort of their homes and offices. We offer excellent services with well-maintained coaches and courteous staff.

1.4 Hotel Management System
A hotel reservation system, commonly known as a central reservation system (CRS), is a computerized system that stores and distributes information about a hotel, resort, or other lodging facility.


Chapter 2

OVERVIEW & SCOPE:


Kiosks are a huge opportunity for online reservation. A kiosk is an interactive computerized unit in a reservation centre that combines software and hardware to provide internet access, information, transaction facilities, and many customer-activated services. Computer kiosks can store data locally, or they can retrieve data from a remote server over a network. Kiosks can be used to provide information, which is great when it comes to e-governance, for facilitating online transactions, or for collecting cash in exchange for commodities. In the case of bus, train, and flight ticket reservation, the user will have many options, like searching for seat availability, searching stations, checking reservation status, reserving or cancelling tickets, viewing fares, etc. In Hotel Booking, accommodations in various hotels are provided for users. The Administrator has options to control the KMS functionalities and the users. Users may be of two types: authorized users and visitors. The Administrator provides a Login ID and Password to each user. Visitors are unauthorized users who can just view the details that are provided by the Administrator. Kiosk Management System is the online service that can be used in reservation centres to allow users to reserve various facilities. Here we take KSRTC, Airline, Railway

and Hotel reservation as our application. The application is developed using ASP.NET and C#; to make it more powerful, we have used SQL Server as the back end.

Chapter 3

DEFINITIONS AND ACRONYMS


3.1 CONTEXT DIAGRAM

Level 1 Administrator


Level 1 User


3.2 Road transport booking system

Level 2: Administrator

Level 2: User

3.3 Railway Reservation

Level 2: Administrator

Level 2: User


3.4 Airline reservations

Level 2: Administrator

Level 2: User


3.5 Hotel Management System

Level 2: Administrator

Level 2: User


Chapter 4

TECHNOLOGIES TO BE USED:
4.1 .NET Framework
The .NET Framework (pronounced dot net) is a software framework developed by Microsoft that runs primarily on Microsoft Windows. It includes a large library and provides language interoperability (each language can use code written in other languages) across several programming languages. Programs written for the .NET Framework execute in a software environment (as contrasted to hardware environment), known as the Common Language Runtime (CLR), an application virtual machine that provides important services such as security, memory management, and exception handling. The class library and the CLR together constitute the .NET Framework.



.NET Framework

Developer(s): Microsoft
Initial release: 13 February 2002
Stable release: 4.0 (4.0.30319.1) / 12 April 2010
Preview release: 4.5 / 29 February 2012
Operating system: Windows 98 or later, Windows NT 4.0 or later
Type: Software framework
License: MS-EULA; BCL under Microsoft Reference Source License[1]
Website: www.microsoft.com/net (general site); msdn.microsoft.com/netframework (developer site)

4.2 Design features


Interoperability
Because computer systems commonly require interaction between newer and older applications, the .NET Framework provides means to access functionality implemented in programs that execute outside the .NET environment. Access to COM components is provided in the System.Runtime.InteropServices and System.EnterpriseServices namespaces of the framework; access to other functionality is provided using the P/Invoke feature.

Common Language Runtime Engine
The Common Language Runtime (CLR) is the execution engine of the .NET Framework. All .NET programs execute under the supervision of the CLR, guaranteeing certain properties and behaviors in the areas of memory management, security, and exception handling.

Language Independence
The .NET Framework introduces a Common Type System, or CTS. The CTS specification defines all possible datatypes and programming constructs supported by the CLR and how they may or may not interact with each other, conforming to the Common Language Infrastructure (CLI) specification. Because of this feature, the .NET Framework supports the exchange of types and object instances between libraries and applications written using any conforming .NET language.

Base Class Library
The Base Class Library (BCL), part of the Framework Class Library (FCL), is a library of functionality available to all languages using the .NET Framework. The BCL provides classes that encapsulate a number of common functions,


including file reading and writing, graphic rendering, database interaction, XML document manipulation, and so on.

Simplified Deployment
The .NET Framework includes design features and tools which help manage the installation of computer software to ensure it does not interfere with previously installed software, and that it conforms to security requirements.

Security
The design is meant to address some of the vulnerabilities, such as buffer overflows, which have been exploited by malicious software. Additionally, .NET provides a common security model for all applications.

Portability
While Microsoft has never implemented the full framework on any system except Microsoft Windows, the framework is engineered to be platform agnostic,[6] and cross-platform implementations are available for other operating systems (see Silverlight and the Alternative implementations section below). Microsoft submitted the specifications for the Common Language Infrastructure (which includes the core class libraries, Common Type System, and the Common Intermediate Language), the C# language,[10] and the C++/CLI language[11] to both ECMA and the ISO, making them available as official standards. This makes it possible for third parties to create compatible implementations of the framework and its languages on other platforms.

4.3 Architecture Common Language Infrastructure (CLI)


The purpose of the Common Language Infrastructure (CLI) is to provide a language-neutral platform for application development and execution, including functions for exception handling, garbage collection, security, and interoperability. By implementing the core aspects of the .NET Framework within the scope of the CLI, this functionality is not tied to a single language but is available across the many languages supported by the framework. Microsoft's implementation of the CLI is called the Common Language Runtime, or CLR.

Security
.NET has its own security mechanism with two general features: Code Access Security (CAS), and validation and verification. Code Access Security is based on evidence that is associated with a specific assembly. Typically the evidence is the source of the assembly (whether it is installed on the local machine or has been downloaded from the intranet or Internet). Code Access Security uses evidence to determine the permissions granted to the code. Other code can demand that calling code be granted a specified permission. The demand causes the CLR to perform a call stack walk: every assembly of each method in the call stack is checked for the required permission; if any assembly is not granted the permission, a security exception is thrown.

Class library
The .NET Framework includes a set of standard class libraries. The class library is organized in a hierarchy of namespaces. Most of the built-in APIs are part of


either the System.* or Microsoft.* namespaces. These class libraries implement a large number of common functions, such as file reading and writing, graphic rendering, database interaction, and XML document manipulation, among others. The .NET class libraries are available to all CLI-compliant languages. The .NET Framework class library is divided into two parts: the Base Class Library and the Framework Class Library. The Base Class Library (BCL) includes a small subset of the entire class library and is the core set of classes that serve as the basic API of the Common Language Runtime.[12] The classes in mscorlib.dll and some of the classes in System.dll and System.Core.dll are considered to be a part of the BCL. The BCL classes are available in both the .NET Framework and its alternative implementations, including the .NET Compact Framework, Microsoft Silverlight, and Mono.

Memory management
The .NET Framework CLR frees the developer from the burden of managing memory (allocating and freeing it up when done); it handles memory management itself by detecting when memory can be safely freed. Memory is allocated to instantiations of .NET types (objects) from the managed heap, a pool of memory managed by the CLR. As long as there exists a reference to an object, which might be either a direct reference or a reference via a graph of objects, the object is considered to be in use. When there is no reference to an object, and it cannot be reached or used, it becomes garbage, eligible for collection. The .NET Framework includes a garbage collector which runs periodically, on a separate thread from the application's thread, enumerating all the unusable objects and reclaiming the memory allocated to them.


4.4 Languages

Languages provided by Microsoft: VB, C++, C#, J#, JScript

Third parties are building: APL, COBOL, Pascal, Eiffel, Haskell, ML, Oberon, Perl, Python, Scheme, Smalltalk


4.5 Windows Forms


Framework for building rich clients:
- RAD (Rapid Application Development)
- Rich set of controls
- Data aware
- ActiveX support
- Licensing
- Accessibility
- Printing support
- Unicode support
- UI inheritance

4.6 ASP.NET
ASP.NET is the platform service that allows programming Web applications and Web services in any .NET language. ASP.NET uses .NET languages to generate HTML pages; each HTML page is targeted to the capabilities of the requesting browser. An ASP.NET program is compiled into a .NET class and cached the first time it is called; all subsequent calls use the cached version.

4.7 ASP.NET Web Forms
- Allows clean-cut code (code-behind Web Forms)
- Easier for tools to generate
- Code within is compiled then executed
- Improved handling of state information
- Support for ASP.NET server controls
- Data validation

- Data-bound grids

4.8 Web Services
A Web Service is just an application that exposes its features and capabilities over the network using XML, to allow for the creation of powerful new applications that are more than the sum of their parts.

4.9 .NET Hierarchy, Another View


Chapter 5

OPERATIONS PERFORMED
1. Reads in JPG, PNG, and GIF files
2. Saves images in JPG or PNG format
3. Works with images pixel by pixel
4. Embosses images
5. Sharpens images
6. Brightens images
7. Blurs images
8. Reduces images
9. Magnifies images
10. Erodes images
11. Edge detection in images
12. Negatives of images
13. Flipping images vertically
14. Flipping images horizontally
15. Rotating images by 180 degrees
16. Undoes the most recent change on request
17. Resets to the originally loaded image
18. Exits the Graphicizer window


Chapter 6

CREATING THE GRAPHICIZER WINDOW


In this project on image processing, the window the application draws is created in the application's constructor, and the application is based on the Frame class. Here's what Graphicizer's fields and constructor look like; the constructor creates the window:

public class Graphicizer extends Frame implements ActionListener
{
    BufferedImage bufferedImage, bufferedImageBackup;
    Image image;
    Menu menu;
    MenuBar menubar;
    MenuItem menuitem1, menuitem2, menuitem3, menuitem4, menuitem5;
    Button button1, button2, button3, button4, button5, button6,
        button7, button8, button9, button10, button11, button12;
    FileDialog dialog;
    LoadedImage lim;
    Image img;

    public Graphicizer()
    {
        setSize(900, 500);
        setTitle("The Graphicizer");
        setVisible(true);

        this.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) {
                System.exit(0);
            }

        });

The constructor adds the buttons the application uses as drawing tools, which you can see in the Graphicizer window:

        setLayout(new FlowLayout(FlowLayout.CENTER, 5, 420));

        button1 = new Button("Emboss");
        button1.setBounds(30, getHeight() - 50, 60, 20);
        add(button1);
        button1.addActionListener(this);

        button2 = new Button("Sharpen");
        button2.setBounds(100, getHeight() - 50, 60, 20);
        add(button2);
        button2.addActionListener(this);
        .
        .
        .

It also adds a File menu with the items Open..., Save As..., Undo..., Reset, and Exit:

        menubar = new MenuBar();
        menu = new Menu("File");

        menuitem1 = new MenuItem("Open...");
        menu.add(menuitem1);
        menuitem1.addActionListener(this);

        menuitem2 = new MenuItem("Save As...");
        menu.add(menuitem2);
        menuitem2.addActionListener(this);

        menubar.add(menu);

        setMenuBar(menubar);
        .
        .
        .

Besides the menu and button system, the constructor also creates a FileDialog object, which will be used to display a file dialog box when the user wants to open or save files:

        dialog = new FileDialog(this, "File Dialog");
    }

And that's exactly how the user starts: by opening a file and displaying it in the Graphicizer.


Chapter 7

OPENING AN IMAGE FILE:


All the user needs to do to open an image file is use the File menu's Open... item. When the user selects that item, the actionPerformed method is called. Then, after making sure the Open... item was selected, the code sets the dialog object's mode to FileDialog.LOAD and makes the dialog box visible with the dialog.setVisible(true) call:

public void actionPerformed(ActionEvent event)
{
    if (event.getSource() == menuitem1) {
        dialog.setMode(FileDialog.LOAD);
        dialog.setVisible(true);
        .
        .
        .

Setting the file dialog box's mode to FileDialog.LOAD (the only other option is FileDialog.SAVE) makes the dialog box display the file-open dialog you see in the Graphicizer window.

Fig 7.1: Opening a file


Table 7.1

THE SIGNIFICANT METHODS OF THE JAVA.AWT.FILEDIALOG CLASS:

Method                                          Does This
String getDirectory()                           Returns the directory selected by the user
String getFile()                                Returns the file selected by the user
FilenameFilter getFilenameFilter()              Returns the filename filter used by this file dialog box
int getMode()                                   Returns the dialog box's mode, which sets whether this file dialog box is to be used for reading a file or saving a file
void setDirectory(String dir)                   Sets the directory used by this file dialog box when it starts to the given directory
void setFile(String file)                       Sets the selected file used by this file dialog box to the given file
void setFilenameFilter(FilenameFilter filter)   Sets the filename filter for the dialog box to the given filter
void setMode(int mode)                          Specifies the mode of the file dialog box, either reading or writing

If the user selects a file to open, the dialog box's getFile method will return the filename, and the getDirectory method will return the directory the file is in. Opening a file is a sensitive operation, so everything's enclosed in a try/catch block here. The code creates a File object corresponding to the image file the user wants to open as the first stage in actually opening that file:

public void actionPerformed(ActionEvent event)
{
    if (event.getSource() == menuitem1) {
        dialog.setMode(FileDialog.LOAD);
        dialog.setVisible(true);

        try {
            if (!dialog.getFile().equals("")) {
                File input = new File(dialog.getDirectory() + dialog.getFile());
                bufferedImage = ImageIO.read(input);

                setSize(getInsets().left + getInsets().right +
                        Math.max(900, bufferedImage.getWidth() + 60),
                    getInsets().top + getInsets().bottom +
                        Math.max(500, bufferedImage.getHeight() + 60));

                button1.setBounds(30, getHeight() - 30, 60, 20);
                button2.setBounds(100, getHeight() - 30, 60, 20);
                .
                .
                .
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
        repaint();
    }
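The Open... handling above ultimately boils down to a single ImageIO.read call once a File object is in hand. Here's a self-contained sketch of just that call, with no dialog box; the class name is ours, and a temporary file stands in for the one the user would pick:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Standalone sketch of the ImageIO.read call the chapter builds up to,
// using a temporary file instead of one chosen in the FileDialog.
public class ReadSketch {
    public static void main(String[] args) throws Exception {
        File tmp = File.createTempFile("graphicizer", ".png");
        // First create a small PNG on disk so there is something to open
        BufferedImage original = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        ImageIO.write(original, "png", tmp);

        // This is what the Open... handler does with the chosen File
        BufferedImage loaded = ImageIO.read(tmp);
        System.out.println(loaded.getWidth() + "x" + loaded.getHeight()); // 4x4
        tmp.delete();
    }
}
```

Note that ImageIO.read returns a BufferedImage directly, which is why Graphicizer can assign its result straight to the bufferedImage field.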


Chapter 8

SAVING AN IMAGE FILE:


After we have worked on a file, the Graphicizer would certainly let us down if we couldn't save our changes back to a file. Imagine doing a lot of work on an image and not being able to store the results. That's why this application has a Save As... menu item which, when selected, displays a File Save dialog box, created by setting the dialog object's mode to FileDialog.SAVE:

public void actionPerformed(ActionEvent event)
{
    if (event.getSource() == menuitem2) {
        dialog.setMode(FileDialog.SAVE);
        dialog.setVisible(true);
        .
        .
        .
    }

This displays the File Save dialog shown in Fig 8.1.

Fig 8.1: Saving an image file


This dialog box's job is to get the name of the file the user wants to store the image to. When you have that name, you can use the ImageIO class's write method to actually write the file. If the user doesn't specify a file, or if he clicks the Cancel button, you'll get an empty string back from the dialog box's getFile method. Otherwise, you should try to create a File object using the name and path the user has given you. Just be sure to enclose everything in a try/catch block. Once we create the File object corresponding to the output file, we can use the ImageIO class's write method to write the bufferedImage object to the file. We pass the BufferedImage object we want to write to the write method, followed by the type of image we want to write ("JPG" or "PNG") and the output File object. How do we determine the three-letter type of image file to write? We can pass a value such as "PNG" or "JPG" to the write method, and in the Graphicizer, the code will simply take the type of the file to write from the extension of the filename (for example, "image.png" would yield "png"). Here's how the output image file is written to disk, using the ImageIO class's write method:

try {
    if (!dialog.getFile().equals("")) {
        String outfile = dialog.getFile();
        File outputFile = new File(dialog.getDirectory() + outfile);

        ImageIO.write(bufferedImage,
            outfile.substring(outfile.length() - 3, outfile.length()),
            outputFile);
        .
        .
        .
    }
} catch (Exception e) {
    System.out.println(e.getMessage());
}

Okay, at this point, Graphicizer can now load and save images. This means it can function as an image converter, converting between various formats.
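The extension-to-format trick described above is easy to check in isolation. Here's a minimal sketch of the same substring logic; the class and method names are ours for illustration, not part of Graphicizer:

```java
// Minimal sketch of how Graphicizer derives the ImageIO format name
// from the last three characters of the chosen filename.
public class FormatFromName {
    static String formatOf(String outfile) {
        // e.g. "image.png" -> "png", "photo.jpg" -> "jpg"
        return outfile.substring(outfile.length() - 3, outfile.length());
    }

    public static void main(String[] args) {
        System.out.println(formatOf("image.png")); // prints png
    }
}
```

Note this simple rule assumes a three-letter extension; a name like "image.jpeg" would yield "peg", so a real application might search for the last '.' instead.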


Chapter 9

PAINTING THE IMAGE:


Now that the user has loaded a new image, we have to make sure that it appears when the window is redrawn. We can do that in the paint method. There's not going to be a lot of fancy animation in this application, so it doesn't use any double buffering. All it does is draw the current image when required in the paint method. The paint method first makes sure there actually is a BufferedImage object to draw:

public void paint(Graphics g)
{
    if (bufferedImage != null) {
        .
        .
        .
    }
}

If there is an image to draw, you can use the Graphics object passed to the paint method to paint that image, using the Graphics object's drawImage method. Even though that method is usually passed an Image object, you can also pass it a BufferedImage object, since BufferedImage is a subclass of Image. Here's how you can draw the loaded image centered in the window (the last parameter passed to the drawImage method here corresponds to an ImageObserver object if you want to use one to monitor the image; this application doesn't use an ImageObserver object, so it simply passes a pointer to the current object for this parameter):


public void paint(Graphics g)
{
    if (bufferedImage != null) {
        g.drawImage(bufferedImage,
            getSize().width / 2 - bufferedImage.getWidth() / 2,
            getInsets().top + 20, this);
    }
}

This draws the newly loaded image, and the results appear in the Graphicizer window.
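The centering arithmetic passed to drawImage above can be pulled out and checked on its own. A minimal sketch, where the helper name is ours and the insets are ignored for simplicity:

```java
// Sketch of the horizontal centering arithmetic used in paint():
// the image's left edge is placed at windowWidth/2 - imageWidth/2.
public class CenterMath {
    // Hypothetical helper mirroring the expression passed to drawImage
    static int centeredX(int windowWidth, int imageWidth) {
        return windowWidth / 2 - imageWidth / 2;
    }

    public static void main(String[] args) {
        System.out.println(centeredX(900, 300)); // prints 300
    }
}
```

So a 300-pixel-wide image in the default 900-pixel-wide window starts at x = 300, leaving equal margins on both sides.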


Chapter 10

THE GRAPHICIZER WINDOW:


When our project is run, the Graphicizer window displayed on the screen is as shown in the figure below.

Fig 10.1: The Graphicizer window


Chapter 11

EMBOSSING AN IMAGE:
The first button in the Graphicizer is Emboss, which converts an image into an embossed image, making it look as though it were embossed on paper in a three-dimensional way. Embossing is the process of creating a three-dimensional image or design in paper or another ductile material; it is typically accomplished with a combination of heat and pressure on the paper. In image processing, embossing refers to a technique in which the color at a given location of the filtered image corresponds to the rate of color change at that location in the original image. Applying an embossing filter to an image often results in the image resembling a paper or metal embossing of the original image, hence the name. If the user clicks the Emboss button, the Graphicizer embosses the image.

Fig 11.1: Embossing the image

To create an embossed image, the Graphicizer has to get access to the individual pixels in the image. We can load these pixels into an array using the Java PixelGrabber class, which is what the Graphicizer does. Table 11.1

The significant methods of the java.awt.image.PixelGrabber class:

Method                                          Does This
ColorModel getColorModel()                      Returns the color model used by the data stored in the array
int getHeight()                                 Returns the height of the pixel buffer, as measured in pixels
Object getPixels()                              Returns the pixel buffer used by the PixelGrabber object
int getWidth()                                  Returns the width of the pixel buffer, as measured in pixels
boolean grabPixels()                            Gets all the pixels in the rectangle of interest and transfers them individually to the pixel buffer
boolean grabPixels(long ms)                     Gets all the pixels in the rectangle of interest and transfers them individually to the pixel buffer, subject to the timeout time, ms (in milliseconds)
void setColorModel(ColorModel model)            Sets the color model used by this PixelGrabber object
void setDimensions(int width, int height)       Sets the dimensions of the image to be grabbed
void setPixels(int srcX, int srcY, int srcW,    Sets the actual pixels in the image
  int srcH, ColorModel model, byte[] pixels,
  int srcOff, int srcScan)
void startGrabbing()                            Makes the PixelGrabber object start getting the pixels and transferring them to the pixel buffer

How do we go about embossing the image, now that we have it stored in an array? We can emboss an image by finding the difference between each pixel and its neighbor and then adding that difference to a neutral gray. At the start of each drawing operation,

the Graphicizer stores the present image in a backup object, bufferedImageBackup, in case the user selects the Undo menu item:

public void actionPerformed(ActionEvent event)
{
    .
    .
    .
    if (event.getSource() == button1) {
        bufferedImageBackup = bufferedImage;
        .
        .
        .

If the user selects the File menu's Undo item, Graphicizer can use the backed-up version of the image, bufferedImageBackup, to restore the original image. After creating a backup buffered image, the code uses a PixelGrabber object, pg, to load the actual pixels from the image into an array. To create that pixel grabber, we pass the PixelGrabber constructor the image we want to grab and the offset at which to start in the image, in this case (0, 0). We also pass the width and height of the image, the array to store the image in (named pixels in this example), the offset into the array at which we want to start storing data (that's 0 here), and the "scan size," which is the distance from one row of pixels to the next in the array (that'll be width here). This is shown in the following code segment:

public void actionPerformed(ActionEvent event)
{
    .
    .
    .
    if (event.getSource() == button1) {
        bufferedImageBackup = bufferedImage;

        int width = bufferedImage.getWidth();
        int height = bufferedImage.getHeight();
        int pixels[] = new int[width * height];

        PixelGrabber pg = new PixelGrabber(bufferedImage, 0, 0,
            width, height, pixels, 0, width);

        try {

            pg.grabPixels();
        } catch (InterruptedException e) {
            System.out.println(e.getMessage());
        }
        .
        .
        .

Embossing is done by looping over every pixel in the image, first looping over rows (the X direction in the array), then with an inner loop over each column (the Y direction in the array). Inside these loops, we can compare the red, green, and blue components of each pixel to those of its neighbor, add that difference to a neutral gray, and store the results in the pixel array. That's the operation we perform to emboss the image. In the actual byte-by-byte manipulation, the color components of each pixel are extracted, compared to their neighbor's, and then repacked into the integer for that pixel in the pixels array. That stores the new image in the pixels array. How do we get it into the bufferedImage object for display? There's no easy way to do that, because no BufferedImage constructor takes an array of pixels. We have to get this done by first creating an Image object using the Component class's createImage method, then creating a new BufferedImage object, and finally using the BufferedImage object's createGraphics and drawImage methods to draw the Image object in the BufferedImage object (which is the way we convert from Image objects to BufferedImage objects; there is no BufferedImage constructor that will do the job for us). Here's how the code loads the pixels array into the bufferedImage object and then paints it on the screen:

public void actionPerformed(ActionEvent event)
{
    .
    .
    .

    if (event.getSource() == button1) {
        .
        .
        .
        for (int x = 2; x < width - 1; x++) {
            for (int y = 2; y < height - 1; y++) {
                int red = ((pixels[(x + 1) + y * width + 1] & 0xFF)
                    - (pixels[x + y * width] & 0xFF)) + 128;
                int green = (((pixels[(x + 1) + y * width + 1] & 0xFF00) / 0x100) % 0x100
                    - ((pixels[x + y * width] & 0xFF00) / 0x100) % 0x100) + 128;
                int blue = (((pixels[(x + 1) + y * width + 1] & 0xFF0000) / 0x10000) % 0x100
                    - ((pixels[x + y * width] & 0xFF0000) / 0x10000) % 0x100) + 128;

                int avg = (red + green + blue) / 3;
                pixels[x + y * width] = (0xFF000000 | avg << 16 | avg << 8 | avg);
            }
        }
        image = createImage(new MemoryImageSource(width, height, pixels, 0, width));

        bufferedImage = new BufferedImage(width, height,
            BufferedImage.TYPE_INT_BGR);
        bufferedImage.createGraphics().drawImage(image, 0, 0, this);
        repaint();
    }
    .
    .
    .
}

Now we've embossed the image by getting at the pixels in the image and working with them one by one.
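The bitwise extraction and repacking used in the emboss loop can be shown in isolation. The sketch below is illustrative (the pixel value is hypothetical, and the code is not from the report itself); it demonstrates how a single ARGB pixel's 8-bit components are pulled out with shifts and masks and then recombined:

```java
public class PixelPacking {
    public static void main(String[] args) {
        int pixel = 0xFF336699; // opaque pixel: red 0x33, green 0x66, blue 0x99

        // Unpack each 8-bit component by shifting and masking
        int red   = (pixel >> 16) & 0xFF;
        int green = (pixel >> 8)  & 0xFF;
        int blue  = pixel & 0xFF;

        // Repack into a single ARGB integer, as the emboss loop does
        int repacked = 0xFF000000 | (red << 16) | (green << 8) | blue;

        System.out.println(red);               // 51
        System.out.println(green);             // 102
        System.out.println(blue);              // 153
        System.out.println(pixel == repacked); // true
    }
}
```

Dividing by 0x100 or 0x10000 and taking the remainder modulo 0x100, as the emboss code does, has the same effect as the shifts and masks shown here.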


Chapter 12

SHARPENING AN IMAGE:
The next button is the Sharpen button, which sharpens an image by accenting the borders between colors. We can see image.gif after it has been sharpened (the sharpening may not be totally evident in a limited-resolution figure, but it's very clear when you run the Graphicizer and click the Sharpen button).

Fig12.1 Sharpening an image


Sharpening is one of the most impressive transformations you can apply to an image, since it seems to bring out image detail that was not there before. What it actually does, however, is emphasize edges in the image and make them easier for the eye to pick out; while the visual effect is to make the image seem sharper, no new details are actually created. Paradoxically, the first step in sharpening an image is to blur it slightly. Next, the original image and the blurred version are compared one pixel at a time. If a pixel is brighter than the blurred version, it is lightened further; if a pixel is darker than the blurred version, it is darkened. The result is to increase the contrast between each pixel and its neighbors. The nature of the sharpening is influenced by the blurring radius used and the extent to which the differences between each pixel and its neighbors are exaggerated.

As with embossing an image, we have to work pixel by pixel to sharpen an image. There's an easy way to work pixel by pixel and combine a pixel with its surrounding neighbours: we can use the Kernel class and the ConvolveOp class to do the work for us. The Kernel class lets us define a matrix that specifies how a pixel should be combined with the other pixels around it to produce a new result, and the ConvolveOp class lets us apply a kernel to a BufferedImage object, pixel by pixel. The significant methods of the Kernel class and the ConvolveOp class appear in the tables below.
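The blur-then-exaggerate idea described above can be sketched for a single row of gray values. This is an illustrative example with hypothetical values, not the report's code: it blurs a 1-D row with a three-pixel average, then pushes each original pixel away from the blurred version, clamping to the 0-255 range.

```java
public class UnsharpMaskSketch {
    public static void main(String[] args) {
        // A 1-D row of gray levels with an edge between 50 and 200
        int[] original = {50, 50, 50, 200, 200, 200};
        int[] blurred = new int[original.length];
        int[] sharpened = new int[original.length];

        // Blur: average each interior pixel with its two neighbours
        for (int i = 1; i < original.length - 1; i++) {
            blurred[i] = (original[i - 1] + original[i] + original[i + 1]) / 3;
        }
        blurred[0] = original[0];
        blurred[original.length - 1] = original[original.length - 1];

        // Sharpen: push each pixel away from the blurred version, then clamp
        for (int i = 0; i < original.length; i++) {
            int s = original[i] + (original[i] - blurred[i]);
            sharpened[i] = Math.max(0, Math.min(255, s));
        }
        System.out.println(java.util.Arrays.toString(sharpened));
        // prints [50, 50, 0, 250, 200, 200]
    }
}
```

Notice that only the two pixels at the edge change; the contrast across the boundary jumps from 150 to 250 while flat regions are untouched, which is exactly the effect the chapter describes.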

Table 12.1: The Significant Methods of the java.awt.image.Kernel Class

Method                                 Does This
int getHeight()                        Returns the height of the matrix specified by this Kernel object
float[] getKernelData(float[] data)    Returns the kernel data as an array
int getWidth()                         Returns the width of the matrix specified by this Kernel object
int getXOrigin()                       Returns the X origin of the data in this Kernel object
int getYOrigin()                       Returns the Y origin of the data in this Kernel object

Table 12.2: The Significant Methods of the java.awt.image.ConvolveOp Class

Method                                                          Does This
BufferedImage createCompatibleDestImage(BufferedImage src,
    ColorModel destCM)                                          Creates a compatible destination image
BufferedImage filter(BufferedImage src, BufferedImage dst)      Performs a convolution operation on BufferedImage objects
WritableRaster filter(Raster src, WritableRaster dst)           Performs a convolution operation on Raster objects
Rectangle2D getBounds2D(BufferedImage src)                      Returns the rectangle specifying the bounds of the destination image
Rectangle2D getBounds2D(Raster src)                             Returns the rectangle specifying the bounds of the destination raster
int getEdgeCondition()                                          Returns the edge condition, which specifies how to handle the edges of the image
Kernel getKernel()                                              Returns the Kernel object used by this ConvolveOp object
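The accessor methods listed in the tables can be exercised with a small standalone sketch. This is an illustration of the standard java.awt.image API, not the report's code; the kernel values match the sharpening kernel Graphicizer uses:

```java
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

public class KernelDemo {
    public static void main(String[] args) {
        // A 3x3 sharpening kernel like the one Graphicizer uses
        Kernel kernel = new Kernel(3, 3, new float[] {
            0.0f, -1.0f, 0.0f,
            -1.0f, 5.0f, -1.0f,
            0.0f, -1.0f, 0.0f
        });
        System.out.println(kernel.getWidth());   // 3
        System.out.println(kernel.getHeight());  // 3
        System.out.println(kernel.getXOrigin()); // 1 (the centre column)
        System.out.println(kernel.getYOrigin()); // 1 (the centre row)

        // Wrap the kernel in a ConvolveOp and inspect it
        ConvolveOp op = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP, null);
        System.out.println(op.getEdgeCondition() == ConvolveOp.EDGE_NO_OP); // true
        System.out.println(op.getKernel().getWidth());                      // 3
    }
}
```

The X and Y origins default to the centre of the matrix, which is why a 3x3 kernel combines each pixel with the eight pixels surrounding it.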

Here's how this all works for sharpening an image when the Sharpen button is clicked. After storing the present buffered image in the image backup, bufferedImageBackup (in case the user wants to undo the sharpening operation), the code creates a new kernel for the operation. A kernel is really a matrix that will multiply a pixel and its neighbors. Here is the kernel used to sharpen the image; you pass the Kernel constructor the dimensions of the matrix and then the matrix itself. Now you're ready to sharpen the image with the filter method. You pass this method a source image and a destination image, and after the filtering is done, the code copies the new image over into the image displayed by Graphicizer, bufferedImage, and repaints it this way:

public void actionPerformed(ActionEvent event) {
    .
    .
    .
    if (event.getSource() == button2) {
        bufferedImageBackup = bufferedImage;
        Kernel kernel = new Kernel(3, 3, new float[] {
            0.0f, -1.0f, 0.0f,
            -1.0f, 5.0f, -1.0f,
            0.0f, -1.0f, 0.0f
        });
        ConvolveOp convolveOp = new ConvolveOp(kernel,
            ConvolveOp.EDGE_NO_OP, null);
        BufferedImage temp = new BufferedImage(bufferedImage.getWidth(),
            bufferedImage.getHeight(), BufferedImage.TYPE_INT_ARGB);

        convolveOp.filter(bufferedImage, temp);
        bufferedImage = temp;
        repaint();
    }

Chapter 13

BRIGHTENING AN IMAGE:
If the user clicks the Brighten button, Graphicizer brightens the image.

Fig13.1: Brightening an image

To brighten an image, you simply multiply the intensity of each pixel. In this case, all you need is a 1x1 kernel. Here's what the code looks like, complete with Kernel and ConvolveOp objects:


if (event.getSource() == button3) {
    bufferedImageBackup = bufferedImage;
    Kernel kernel = new Kernel(1, 1, new float[] {3});
    ConvolveOp convolveOp = new ConvolveOp(kernel);
    BufferedImage temp = new BufferedImage(bufferedImage.getWidth(),
        bufferedImage.getHeight(), BufferedImage.TYPE_INT_ARGB);
    convolveOp.filter(bufferedImage, temp);
    bufferedImage = temp;
    repaint();
}

And that's all it takes; now we can brighten an image just by clicking the Brighten button.
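For comparison, the standard library also offers java.awt.image.RescaleOp, which multiplies each color component by a scale factor and adds an offset; a factor of 3 has the same brightening effect as the 1x1 kernel. This is a hedged sketch of that alternative (a one-pixel image is used purely for illustration, and results are clamped to 255 automatically):

```java
import java.awt.image.BufferedImage;
import java.awt.image.RescaleOp;

public class BrightenSketch {
    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        src.setRGB(0, 0, 0x00202020); // dark gray: each component is 0x20 (32)

        // Multiply every component by 3 and add 0
        RescaleOp brighten = new RescaleOp(3.0f, 0.0f, null);
        BufferedImage dst = brighten.filter(src, null);

        // 0x20 * 3 = 0x60 in each component
        System.out.printf("%06X%n", dst.getRGB(0, 0) & 0xFFFFFF); // 606060
    }
}
```

Either approach works; the kernel version has the advantage of reusing the same ConvolveOp machinery as the other buttons.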


Chapter 14

BLURRING AN IMAGE
We can also use Graphicizer to blur an image by clicking the Blur button. We can see the results in the figure below, where the image has an out-of-focus look.

Fig14.1: Blurring an image

To blur an image, the code simply combines the pixels surrounding a particular pixel. Here's what this looks like using a Kernel object and a ConvolveOp object:

if (event.getSource() == button4) {
    bufferedImageBackup = bufferedImage;
    Kernel kernel = new Kernel(3, 3, new float[] {
        .25f, 0, .25f,
        0, 0, 0,
        .25f, 0, .25f
    });
    ConvolveOp convolveOp = new ConvolveOp(kernel);
    BufferedImage temp = new BufferedImage(bufferedImage.getWidth(),
        bufferedImage.getHeight(), BufferedImage.TYPE_INT_ARGB);
    convolveOp.filter(bufferedImage, temp);
    bufferedImage = temp;
    repaint();
}

And that's all it takes; now we can blur images just by clicking the Blur button.


Chapter 15

REDUCING AN IMAGE:
If the user clicks the Reduce button, the image is reduced by a factor of 2 in each dimension, as you can see in the figure below.


Fig15.1: Reducing an image

This one works by using the BufferedImage class's getScaledInstance method, which it inherits from the Image class. This method changes the size of an image, but it returns an Image object, so it takes a little work to get back to a BufferedImage object:

if (event.getSource() == button5) {
    bufferedImageBackup = bufferedImage;
    image = bufferedImage.getScaledInstance(bufferedImage.getWidth() / 2,
        bufferedImage.getHeight() / 2, 0);
    bufferedImage = new BufferedImage(bufferedImage.getWidth() / 2,
        bufferedImage.getHeight() / 2, BufferedImage.TYPE_INT_BGR);
    bufferedImage.createGraphics().drawImage(image, 0, 0, this);
}

After we convert from an Image object back to a BufferedImage object, we need to resize the window, subject to a certain minimum size, to correspond to the new image:

if (event.getSource() == button5) {
    .
    .
    .
    bufferedImage = new BufferedImage(bufferedImage.getWidth() / 2,
        bufferedImage.getHeight() / 2, BufferedImage.TYPE_INT_BGR);
    bufferedImage.createGraphics().drawImage(image, 0, 0, this);
    setSize(getInsets().left + getInsets().right
            + Math.max(400, bufferedImage.getWidth() + 60),
        getInsets().top + getInsets().bottom
            + Math.max(340, bufferedImage.getHeight() + 60));
    button1.setBounds(30, getHeight() - 30, 60, 20);
    button2.setBounds(100, getHeight() - 30, 60, 20);
    repaint();
}

That completes the Reduce button's operation.

Chapter 16

MAGNIFYING AN IMAGE:

If the user clicks the Magnify button, the image is magnified by a factor of 2 in each dimension, as you can see in the figure below.

Fig16.1:Magnifying an image

This one works by using the BufferedImage class's getScaledInstance method, which it inherits from the Image class. This method changes the size of the image, but it returns an Image object, so it takes a little work to get back to a BufferedImage object (the extraction of this listing duplicated the Reduce code; here it is restored to double the dimensions, using the Magnify button, button6):

if (event.getSource() == button6) {
    bufferedImageBackup = bufferedImage;
    image = bufferedImage.getScaledInstance(bufferedImage.getWidth() * 2,
        bufferedImage.getHeight() * 2, 0);
    bufferedImage = new BufferedImage(bufferedImage.getWidth() * 2,
        bufferedImage.getHeight() * 2, BufferedImage.TYPE_INT_BGR);

    bufferedImage.createGraphics().drawImage(image, 0, 0, this);
}

After we convert from an Image object back to a BufferedImage object, we need to resize the window, subject to a certain minimum size, to correspond to the new image:

if (event.getSource() == button6) {
    .
    .
    .
    bufferedImage = new BufferedImage(bufferedImage.getWidth() * 2,
        bufferedImage.getHeight() * 2, BufferedImage.TYPE_INT_BGR);
    bufferedImage.createGraphics().drawImage(image, 0, 0, this);
    setSize(getInsets().left + getInsets().right
            + Math.max(400, bufferedImage.getWidth() + 60),
        getInsets().top + getInsets().bottom
            + Math.max(340, bufferedImage.getHeight() + 60));
    button1.setBounds(30, getHeight() - 30, 60, 20);
    button2.setBounds(100, getHeight() - 30, 60, 20);
    repaint();
}

That completes the Magnify button's operation.

Chapter 17

ERODING AN IMAGE:
Erosion is one of the two basic operators in the area of mathematical morphology, the other being dilation. It is typically applied to binary images, but there are versions that work on grayscale images. The basic effect of the operator on a binary image is to erode away the boundaries of regions of foreground pixels (i.e., white pixels, typically). Thus areas of foreground pixels shrink in size, and holes within those areas become larger. The erosion operator takes two pieces of data as inputs. The first is the image which is to be eroded. The second is a (usually small) set of coordinate points known as a structuring element (also known as a kernel). It is this structuring element that determines the precise effect of the erosion on the input image. The result of eroding the image is shown below.

Fig17.1: Eroding an image

The mathematical definition of erosion for images is as follows: suppose that X is the set of Euclidean coordinates corresponding to the input binary image, and that K is the set of coordinates for the structuring element.

Let Kx denote the translation of K so that its origin is at x. Then the erosion of X by K is simply the set of all points x such that Kx is a subset of X. As an example of erosion, suppose that the structuring element is a 7x7 square, with the origin at its center, as used in our code.
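The set definition above can be made concrete for a binary image stored as a boolean array. This hedged sketch (not the report's code) erodes with a 3x3 square structuring element rather than the 7x7 square Graphicizer uses, but the principle is identical: a pixel survives only if the whole structuring element fits inside the foreground at that point.

```java
public class BinaryErosion {
    // A pixel survives only if every pixel under the 3x3 element is foreground
    static boolean[][] erode(boolean[][] img) {
        int h = img.length, w = img[0].length;
        boolean[][] out = new boolean[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                boolean all = true;
                for (int dy = -1; dy <= 1 && all; dy++)
                    for (int dx = -1; dx <= 1 && all; dx++)
                        all = img[y + dy][x + dx];
                out[y][x] = all;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // A 5x5 image with a 3x3 foreground square in the middle
        boolean[][] img = new boolean[5][5];
        for (int y = 1; y <= 3; y++)
            for (int x = 1; x <= 3; x++)
                img[y][x] = true;

        boolean[][] eroded = erode(img);
        System.out.println(eroded[2][2]); // true: only the centre survives
        System.out.println(eroded[1][1]); // false: the boundary is eroded away
    }
}
```

Eroding the 3x3 foreground square leaves only its single centre pixel, showing how erosion shrinks regions from their boundaries inward.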

if (event.getSource() == button7) {
    bufferedImageBackup = bufferedImage;
    // The ERODE filter is defined.
    Kernel kernel = new Kernel(7, 7, new float[] {
        0, 0, 0, 0, 0, 0, 0,
        0, 1, 1, 1, 1, 1, 0,
        0, 1, 1, 1, 1, 1, 0,
        0, 1, 1, 1, 1, 1, 0,
        0, 1, 1, 1, 1, 1, 0,
        0, 1, 1, 1, 1, 1, 0,
        0, 0, 0, 0, 0, 0, 0,
    });

To compute the erosion of an input image by this structuring element, we consider each of the foreground pixels in the input image in turn. For each foreground pixel (which we will call the input pixel) we superimpose the structuring element on top of the input image so that the origin of the structuring element coincides with the input pixel coordinates. For every pixel in the structuring element, the corresponding pixel in the image underneath is masked by the erode filter or kernel stated above.

Erosion is the dual of dilation, i.e., eroding foreground pixels is equivalent to dilating the background pixels. The 3x3 square is probably the most common structuring element used in the erosion operation, but others can be used. A larger structuring element produces a more extreme erosion effect, although usually very similar effects can be achieved by repeated erosions using a smaller, similarly shaped structuring element. With larger structuring elements, it is quite common to use an approximately disk-shaped structuring element, as opposed to a square one.

Erosion can also be used to remove small spurious bright spots (salt noise) in images: eroding an image containing salt noise with a 3x3 square structuring element removes the noise, although the rest of the image is degraded significantly. We can also use erosion for edge detection by taking the erosion of an image and then subtracting it from the original image, thus highlighting just those pixels at the edges of objects that were removed by the erosion. Finally, erosion is also used as the basis for many other mathematical morphology operators. One of the simplest uses of erosion is for eliminating irrelevant detail from a binary image. Suppose we want to eliminate all the squares except the largest ones. We can do this by eroding the image with a structuring element of size somewhat smaller than the objects we wish to keep.


Chapter 18

EDGE DETECTION IN IMAGES:


In computer vision and image processing, the concept of edge detection is a part of feature detection in an image. Feature detection refers to methods that aim at computing abstractions of image information and making a local decision at every image point whether there is an image feature of a given type at that point or not. The resulting features will be subsets of the image domain, often in the form of isolated points, continuous curves, or connected regions. The edge detection operation on an image displayed in the Graphicizer window is shown in the figure below.

Fig18.1: Edge detection

18.1 Edges

Edges are points where there is a boundary (or an edge) between two image regions. In general, an edge can be of almost arbitrary shape, and may include junctions. In

practice, edges are usually defined as sets of points in the image which have a strong gradient magnitude. Furthermore, some common algorithms will then chain high-gradient points together to form a more complete description of an edge. These algorithms may place some constraints on the shape of an edge. Locally, edges have a one-dimensional structure.

18.2 Edge Detectors

Edges are places in the image with strong intensity contrast. Since edges often occur at image locations representing object boundaries, edge detection is extensively used in image segmentation when we want to divide the image into areas corresponding to different objects. Representing an image by its edges has the further advantage that the amount of data is reduced significantly while retaining most of the image information. The task of edge detection requires neighborhood operators that are sensitive to changes and suppress areas of constant gray values. In this way, a feature image is formed in which those parts of the image appear bright where changes occur while all other parts remain dark.

Mathematically speaking, an ideal edge is a discontinuity of the spatial gray value function g(x) of the image plane. It is obvious that this is only an abstraction, which often does not match reality. Thus, the first task of edge detection is to find out the properties of the edges contained in the image to be analyzed. Only if we can formulate a model of the edges can we determine how accurately and under what conditions it will be possible to detect an edge and to optimize edge detection.

Since edges consist mainly of high frequencies, we can, in theory, detect edges by applying a highpass frequency filter in the Fourier domain or by convolving the image with an appropriate kernel in the spatial domain. In practice, edge detection is performed in the spatial domain, because it is computationally less expensive and often yields better results.
Since edges correspond to strong illumination gradients, we can highlight them by calculating the derivatives of the image. This is illustrated for the one-dimensional case in Figure 18.1. Edge detection is always based on differentiation in one form or another. In discrete images, differentiation is replaced by discrete differences, which only approximate differentiation. The errors associated with these approximations require careful

consideration. They cause effects that are not expected in the first place. The two most serious errors are anisotropic edge detection, i.e., edges are not detected equally well in all directions, and erroneous estimation of the direction of the edges. We can see that the position of the edge can be estimated with the maximum of the 1st derivative or with the zero crossing of the 2nd derivative. Therefore we want to find a technique to calculate the derivative of a two-dimensional image. For a discrete one-dimensional function f(i), the first derivative can be approximated by

df(i)/di = f(i + 1) - f(i)

Calculating this formula is equivalent to convolving the function with the kernel [-1 1]. Similarly, the 2nd derivative can be estimated by convolving f(i) with [1 -2 1]. Different edge detection kernels based on the above formula enable us to calculate either the 1st or the 2nd derivative of a two-dimensional image. There are two common approaches to estimating the 1st derivative in a two-dimensional image: Prewitt compass edge detection and gradient edge detection.

Prewitt compass edge detection involves convolving the image with a set of (usually 8) kernels, each of which is sensitive to a different edge orientation. The kernel producing the maximum response at a pixel location determines the edge magnitude and orientation. Different sets of kernels might be used: examples include the Prewitt, Sobel, Kirsch, and Robinson kernels.

Gradient edge detection is the second and more widely used technique. Here, the image is convolved with only two kernels, one estimating the gradient in the x-direction, Gx, the other the gradient in the y-direction, Gy. The absolute gradient magnitude is then given by

|G| = sqrt(Gx^2 + Gy^2)

and is often approximated with

|G| = |Gx| + |Gy|

In many implementations, the gradient magnitude is the only output of a gradient edge detector; however, the edge orientation might be calculated with

theta = arctan(Gy/Gx) - 3*pi/4
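The gradient formulas above can be worked through for a single pixel using the Sobel kernels. This is an illustrative sketch with a hypothetical 3x3 neighbourhood (not the report's code), convolving the neighbourhood with the two Sobel kernels and combining the results into the gradient magnitude:

```java
public class SobelSketch {
    public static void main(String[] args) {
        // A 3x3 neighbourhood with a vertical edge: dark left, bright right
        int[][] g = {
            {10, 10, 200},
            {10, 10, 200},
            {10, 10, 200}
        };
        // Sobel kernels estimating the gradient in the x and y directions
        int[][] sx = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
        int[][] sy = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

        int gx = 0, gy = 0;
        for (int y = 0; y < 3; y++)
            for (int x = 0; x < 3; x++) {
                gx += sx[y][x] * g[y][x];
                gy += sy[y][x] * g[y][x];
            }

        // |G| = sqrt(Gx^2 + Gy^2)
        double mag = Math.sqrt((double) gx * gx + (double) gy * gy);
        System.out.println(gx);        // 760: a strong response across the edge
        System.out.println(gy);        // 0: no change along the edge
        System.out.println((int) mag); // 760
    }
}
```

The vertical edge produces a large Gx and a zero Gy, so the magnitude comes entirely from the x-kernel; a horizontal edge would give the mirror-image result.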

The most common kernels used for the gradient edge detector are the Sobel, Roberts Cross, and Prewitt operators. After having calculated the magnitude of the 1st derivative, we now have to identify those pixels corresponding to an edge. The easiest way is to threshold the gradient image, assuming that all pixels having a local gradient above the threshold must represent an edge. An alternative technique is to look for local maxima in the gradient image, thus producing one-pixel-wide edges. A more sophisticated technique is used by the Canny edge detector. It first applies a gradient edge detector to the image and then finds the edge pixels using non-maximal suppression and hysteresis tracking.

An operator based on the 2nd derivative of an image is the Marr edge detector, also known as the zero crossing detector. Here, the 2nd derivative is calculated using a Laplacian of Gaussian (LoG) filter. The Laplacian has the advantage that it is an isotropic measure of the 2nd derivative of an image, i.e., the edge magnitude is obtained independently of the edge orientation by convolving the image with only one kernel. The edge positions are then given by the zero crossings in the LoG image. The scale of the edges to be detected can be controlled by changing the variance of the Gaussian.

A general problem for edge detection is its sensitivity to noise, the reason being that calculating the derivative in the spatial domain corresponds to accentuating high frequencies and hence magnifying noise. This problem is addressed in the Canny and Marr operators by convolving the image with a smoothing operator (Gaussian) before calculating the derivative. In our edge detection code we have used a kernel with which we have masked the input image to detect the edges present in the image, as shown below:

public void actionPerformed(ActionEvent event) {
    .
    .
    .
    if (event.getSource() == button8) {
        bufferedImageBackup = bufferedImage;
        // The EDGE DETECT filter is defined.
        Kernel kernel = new Kernel(3, 3, new float[] {

            0.0f, -1.0f, 0.0f,
            -1.0f, 6.0f, -1.0f,
            0.0f, -1.0f, 0.0f
        });
        ConvolveOp convolveOp = new ConvolveOp(kernel,
            ConvolveOp.EDGE_NO_OP, null);
        BufferedImage temp = new BufferedImage(bufferedImage.getWidth(),
            bufferedImage.getHeight(), BufferedImage.TYPE_INT_ARGB);
        convolveOp.filter(bufferedImage, temp);
        bufferedImage = temp;
        repaint();
    }


Chapter 19

IMAGE NEGATIVE:
The negative operation on the image displayed in the Graphicizer window is shown in the figure below.

Fig19.1: Image negative

The image negative is one of the most important image enhancement techniques. The value of a pixel before and after processing will be denoted by r and s respectively. The expression used here is of the form s = T(r), where T is a transformation that maps a pixel value r into a pixel value s. Since we are dealing with digital quantities, the values of the transformation function are typically stored in a 1-D array, and the mapping from r to s is implemented via table lookups. For an 8-bit environment, a lookup table containing the values of T will have 256 entries.

The negative of an image with gray levels in the range [0, L-1] is obtained by using the negative transformation, which is given by the expression

s = L - 1 - r

This is used in our image negative code, where we use the ByteLookupTable and LookupOp classes to derive the negative of the input image:

public void actionPerformed(ActionEvent event) {
    .
    .
    .
    if (event.getSource() == button9) {
        byte negative[] = new byte[256];
        for (int i = 0; i < 256; i++)
            negative[i] = (byte) (255 - i);
        ByteLookupTable table = new ByteLookupTable(0, negative);
        LookupOp op = new LookupOp(table, null);
        .
        .
        .
    }
}

Reversing the intensity of an image in this manner produces the equivalent of a photographic negative. This type of processing is particularly suited for enhancing white and gray detail embedded in dark regions of an image, especially when the dark regions are dominant in size.
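The transformation s = L - 1 - r can be verified end to end with a small standalone sketch using the same ByteLookupTable/LookupOp approach. This is a hedged illustration on a hypothetical one-pixel image, not the report's code:

```java
import java.awt.image.BufferedImage;
import java.awt.image.ByteLookupTable;
import java.awt.image.LookupOp;

public class NegativeSketch {
    public static void main(String[] args) {
        // Build the lookup table s = 255 - r for an 8-bit environment
        byte[] negative = new byte[256];
        for (int i = 0; i < 256; i++) {
            negative[i] = (byte) (255 - i);
        }
        LookupOp op = new LookupOp(new ByteLookupTable(0, negative), null);

        // A one-pixel image with components 0x10, 0x20, 0x30
        BufferedImage src = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        src.setRGB(0, 0, 0x00102030);
        BufferedImage dst = op.filter(src, null);

        // Each component is inverted: 0x10 -> 0xEF, 0x20 -> 0xDF, 0x30 -> 0xCF
        System.out.printf("%06X%n", dst.getRGB(0, 0) & 0xFFFFFF); // EFDFCF
    }
}
```

Because the table has a single band, LookupOp applies the same inversion to the red, green, and blue components alike.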


Chapter 20

FLIPPING VERTICALLY AN IMAGE:


Image rotation is an important function in image processing. When we use the option of flipping vertically, the original image is flipped upside down. This is the same as taking the mirror image of the image along the +x-axis, with the origin at the bottom left of the image. The flip vertical operation on the image displayed in the Graphicizer window is shown in the figure below.

Fig20.1: Flipping vertically the image

68

Image editors are capable of rotating an image in any direction and to any degree. Mirror images can be created, and an image can be vertically flipped. A small rotation of several degrees is often enough to level the horizon, correct verticals (of a building, for example), or both. Rotated images usually require cropping afterwards, in order to remove the resulting gaps at the image edges. In our code for flipping an image vertically, we have used the AffineTransform and AffineTransformOp classes, with which we have flipped the input image vertically as shown below:

public void actionPerformed(ActionEvent event) {
    .
    .
    .
    if (event.getSource() == button10) {
        BufferedImage temp = new BufferedImage(bufferedImage.getWidth(),
            bufferedImage.getHeight(), BufferedImage.TYPE_INT_ARGB);
        AffineTransform tx = AffineTransform.getScaleInstance(1, -1);
        tx.translate(0, -temp.getHeight(null));
        AffineTransformOp op = new AffineTransformOp(tx,
            AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
        filter(op);
        repaint();
    }
}


Chapter 21

FLIPPING HORIZONTALLY AN IMAGE:


Rotating an image is an important function in image processing. When we use the option of flipping horizontally, the original image is flipped in the horizontal direction. This is the same as taking the mirror image of the image along the +y-axis, with the origin at the bottom left of the image. The flip horizontal operation on the image displayed in the Graphicizer window is shown in the figure below.

Fig21.1:Flipping horizontal an image

70

In our code for flipping an image horizontally, we have used the AffineTransform and AffineTransformOp classes, with which we have flipped the input image horizontally as shown below (the translation must use the image's width, not its height, for a horizontal flip):

public void actionPerformed(ActionEvent event) {
    .
    .
    .
    if (event.getSource() == button11) {
        BufferedImage temp = new BufferedImage(bufferedImage.getWidth(),
            bufferedImage.getHeight(), BufferedImage.TYPE_INT_ARGB);
        AffineTransform tx = AffineTransform.getScaleInstance(-1, 1);
        tx.translate(-temp.getWidth(null), 0);
        AffineTransformOp op = new AffineTransformOp(tx,
            AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
        filter(op);
        repaint();
    }
}


Chapter 22

ROTATION OF 180 DEGREES:


Rotation of a coordinate system has two important features: it does not change the length or norm of a vector, and it keeps the coordinate system orthogonal. A transformation with these features is known in linear algebra as an orthogonal transform. The coefficients in a transformation matrix have an intuitive meaning, which can be seen when we apply the transformation.

Fig22.1: Rotating an image 180 degrees
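The report's listing for this operation is not reproduced here, but a 180-degree rotation can be sketched with the same AffineTransformOp approach used for the flips: scale by (-1, -1) and translate the result back into view. The sketch below is a hedged illustration on a hypothetical two-pixel image, not the report's exact code:

```java
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

public class Rotate180Sketch {
    public static void main(String[] args) {
        // A 2x1 image: red pixel on the left, blue pixel on the right
        BufferedImage src = new BufferedImage(2, 1, BufferedImage.TYPE_INT_RGB);
        src.setRGB(0, 0, 0xFF0000);
        src.setRGB(1, 0, 0x0000FF);

        // Rotating 180 degrees is a scale by (-1, -1) plus a translation
        AffineTransform tx = AffineTransform.getScaleInstance(-1, -1);
        tx.translate(-src.getWidth(), -src.getHeight());
        AffineTransformOp op = new AffineTransformOp(tx,
                AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
        BufferedImage dst = op.filter(src, null);

        // The pixels have swapped places
        System.out.printf("%06X%n", dst.getRGB(0, 0) & 0xFFFFFF); // 0000FF
        System.out.printf("%06X%n", dst.getRGB(1, 0) & 0xFFFFFF); // FF0000
    }
}
```

This is equivalent to applying the vertical and horizontal flips in sequence, which is consistent with the orthogonal-transform description above: lengths are preserved and only orientation changes.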

APPLICATIONS:
1. Computer Vision: Computer vision is the science and technology of machines that see. As a scientific discipline, computer vision is concerned with the theory for building artificial systems that obtain information from images. The data can take many forms, such as a video sequence, views from multiple cameras, or multidimensional data from a medical scanner.

2. Face Detection: Face detection is a computer technology that determines the locations and sizes of human faces in arbitrary (digital) images. It detects facial features and ignores anything else, such as buildings, trees, and bodies.

3. Feature Detection: In computer vision and image processing, the concept of feature detection refers to methods that aim at computing abstractions of image information and making local decisions at every image point whether there is an image feature of a given type at that point or not. The resulting features will be subsets of the image domain, often in the form of isolated points, continuous curves, or connected regions.

4. Medical Imaging: Medical imaging refers to the techniques and processes used to create images of the body (or parts thereof) for clinical purposes (medical procedures seeking to reveal, diagnose, or examine disease) or medical science (including the study of normal anatomy and function). As a discipline, and in its widest sense, it is part of biological imaging and incorporates radiology (in the wider sense), radiological sciences, endoscopy, (medical) thermography, medical photography, and microscopy (e.g., human pathological investigations).

In the clinical context, medical imaging is generally equated to radiology or "clinical imaging".

5. Microscope Image Processing: Microscope image processing is a broad term that covers the use of digital image processing technology to process, analyze, and present images obtained from a microscope. Such processing is now commonplace in a number of diverse fields such as medicine, biological research, cancer research, drug testing, metallurgy, etc.

6. Morphological Image Processing: Morphological image processing is a collection of techniques for digital image processing based on mathematical morphology. Since these techniques rely only on the relative ordering of pixel values, not on their numerical values, they are especially suited to the processing of binary images and of grayscale images whose light transfer function is not known.

7. Remote Sensing: In the broadest sense, remote sensing is the small- or large-scale acquisition of information about an object or phenomenon, by the use of either recording or real-time sensing device(s) that is not in physical or intimate contact with the object (such as by way of aircraft, spacecraft, satellite, buoy, or ship). Thus, Earth observation or weather satellite collection platforms, ocean and atmospheric observing weather buoy platforms, monitoring of a pregnancy via ultrasound, Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), and space probes are all examples of remote sensing.


Chapter 23

FUTURE SCOPE OF ENHANCEMENT:


Our project on image processing can be enhanced further in the following ways:
1. Enhancement in the frequency domain can be done.
2. Mathematical operations on images, such as addition, subtraction, multiplication, and division of images, can also be added.
3. Other operations, such as segmentation and conversion to grayscale, can also be added.
4. The routines implemented in the project could also be enhanced so as to allow the user to pass a value on the basis of which the operations on the images will be performed.


Chapter 24

CONCLUSION:
Our project on IMAGE PROCESSING is an efficient image editing and conversion tool. It utilizes three files built in Java: Graphicizer.java, LoadedImage.java, and ImageProcessing.java. Among these, Graphicizer.java extends Frame and sets visible a frame named Graphicizer. Using our project, we can read in image files, work on them, and save them to disk. It supports a number of menu items to load in image files, to write them back to disk, to undo the most recent change, to reset the original loaded image, and to quit the program. In addition, this application displays a set of buttons that function as image-handling tools to emboss, sharpen, brighten, blur, reduce, magnify, erode, edge detect, take the negative of, flip vertically, flip horizontally, and rotate 180 degrees the image.

There are several new technologies here, starting with the ImageIO class, which you use to read in images and write them back to disk. This class proves to be very handy for the Graphicizer, except that it only deals with BufferedImage objects instead of standard Image objects. By converting between BufferedImage and standard Image objects,


however, the code is able to do what it is supposed to do. Graphicizer also uses a file dialog box to get the name of the file the user wants to open or save.

To work with the pixels in an image, Graphicizer uses two techniques: working with a PixelGrabber to actually grab all the pixels in the image, and working with a ConvolveOp object to apply a Kernel object to all the pixels without having to work pixel by pixel. Using a PixelGrabber object, Graphicizer is able to extract every pixel from the image being worked on and store them in an array. To work with each pixel you only have to address it in the array individually, which is how the application embosses images.

Working with each pixel by addressing it individually and extracting its red, green, and blue color values is one way of handling an image, but there's another way: using Kernel objects and ConvolveOp objects. These two objects will do the major work for us. The Kernel object lets you specify a matrix whose values will multiply a pixel and its neighboring pixels, and a ConvolveOp object's filter method lets you apply your kernel to each pixel automatically. The Graphicizer uses Kernel and ConvolveOp objects to sharpen, brighten, erode, edge detect, and blur images, all without a lot of programming.

All in all, our project on IMAGE PROCESSING is a useful and fun application, providing plenty of image handling and image editing power, implemented in Java.


BIBLIOGRAPHY:
Digital Image Processing by Rafael C. Gonzalez and Richard E. Woods.
Color Image Processing by Rastislav Lukac and Konstantinos N. Plataniotis.
Digital Image Processing by Bernd Jahne.
The GATF Practical Guide to Color Management by Adams, Richard M. & Weisberg, Joshua B.
Computer and Robot Vision by R. Haralick and L. Shapiro.
Fundamentals of Digital Image Processing by A. K. Jain.
The Complete Reference Java 2 by Herbert Schildt.
IEEE Magazine.
