ADVANCED IMAGE RETRIEVAL SYSTEM USING GENETIC ALGORITHM
In recent years, rapid advances in science and technology have produced a large amount of image data in diverse areas such as entertainment, art galleries, and fashion design. We often need to store and retrieve image data efficiently in order to perform assigned tasks and make decisions. Developing proper tools for retrieving images from large image collections is therefore challenging. Two different types of approaches have been developed: text-based and content-based retrieval. In a text-based system, the images are manually annotated with text descriptors, which are then used by a database management system to perform image retrieval. However, there are two limitations to using keywords for image retrieval: the vast amount of labor required for manual annotation, and the fact that describing image content is highly subjective. That is, the perspective of the textual descriptions given by an annotator may differ from the perspective of a user; in other words, there are inconsistencies between user textual queries and image annotations or descriptions. To alleviate this inconsistency problem, image retrieval can be carried out according to the image contents themselves. This strategy is the so-called content-based image retrieval (CBIR).
The primary goal of a CBIR system is to construct meaningful descriptions of physical attributes from images to facilitate efficient and effective retrieval. CBIR has become an active and fast-advancing research area in image retrieval over the last decade. By and large, research activities in CBIR have progressed in four major directions: global image properties, region-level features, relevance feedback, and semantics. Early algorithms exploit low-level features of the image, such as the color, texture, and shape of an object, to retrieve images. They are easy to implement and perform well for images that are either simple or contain little semantic content.
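As a concrete illustration of retrieval based on a global low-level feature, the following is a minimal Python sketch that computes a normalized RGB color histogram with NumPy. The bin count of 8 per channel and the choice of a plain color histogram (rather than texture or shape descriptors) are assumptions made for the example, not details given in the text.

import numpy as np

def color_histogram(image, bins=8):
    """Return a normalized color histogram for an RGB image array of shape (H, W, 3)."""
    hist, _ = np.histogramdd(
        image.reshape(-1, 3),                     # treat every pixel as an RGB triple
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)),
    )
    hist = hist.flatten()
    return hist / hist.sum()                      # normalize so histograms are comparable

# Example usage with a random array standing in for a real image.
query_image = np.random.randint(0, 256, size=(64, 64, 3))
feature = color_histogram(query_image)            # feature vector of length 512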
However, the semantics of an image are difficult to reveal through visual features alone, and these algorithms have many limitations when dealing with broad-content image databases. Therefore, in order to improve retrieval accuracy, methods based on image segmentation were introduced. These methods attempt to overcome the drawbacks of global features by representing images at the object level, which is intended to be closer to the perception of the human visual system.
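A region-level representation of the kind described above could be sketched as follows. Here a simple k-means color clustering stands in for a real segmentation algorithm, and summarizing each region by its mean color and relative size is an illustrative assumption rather than a method taken from the text.

import numpy as np

def region_features(image, k=4, iters=10, seed=0):
    """Cluster pixels by color and summarize each region as (mean R, mean G, mean B, size)."""
    pixels = image.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):                        # plain Lloyd iterations
        dists = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)             # assign each pixel to its nearest center
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    sizes = np.bincount(labels, minlength=k) / len(pixels)
    return np.hstack([centers, sizes[:, None]])   # one 4-dimensional descriptor per region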
However, the performance of these methods relies mainly on the quality of the segmentation. The difference between the user’s information need and the image representation is called the semantic gap in CBIR systems, and the limited retrieval accuracy of image-centric retrieval systems is essentially due to this inherent gap. In order to reduce the gap, interactive relevance feedback is introduced into CBIR: the user evaluates the relevance of the retrieved results, and the similarity measures are automatically refined on the basis of these evaluations. Although relevance feedback can significantly improve retrieval performance, its applicability still suffers from a few drawbacks. Semantic-based image retrieval methods instead try to discover the real semantic meaning of an image and use it to retrieve relevant images.
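One common way to realize the relevance feedback loop mentioned above is a Rocchio-style query refinement. The minimal sketch below assumes that images are represented by fixed-length feature vectors; the weights alpha, beta, and gamma are illustrative defaults rather than values prescribed here.

import numpy as np

def refine_query(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.25):
    """Move the query vector toward images marked relevant and away from those marked non-relevant."""
    refined = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        refined = refined + beta * np.mean(relevant, axis=0)
    if len(non_relevant):
        refined = refined - gamma * np.mean(non_relevant, axis=0)
    return refined                                # used as the query in the next retrieval round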
However, understanding and discovering the semantics of an image are high-level cognitive tasks and thus hard to automate. A CBIR system also needs a similarity computation phase to efficiently find a specific image, or a group of images, similar to the given query. In order to achieve a better approximation of the user’s information need in subsequent searches of the image database, involving the user’s interaction is necessary for a CBIR system.
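The similarity computation phase can be sketched as a simple nearest-neighbor ranking over feature vectors. The Euclidean distance and the random data standing in for a real index are assumptions made for the example; other measures, such as histogram intersection, could be substituted.

import numpy as np

def retrieve(query_feature, database_features, top_k=5):
    """Return indices of the top_k database images whose features are closest to the query."""
    distances = np.linalg.norm(database_features - query_feature, axis=1)
    return np.argsort(distances)[:top_k]

# Example usage with random feature vectors standing in for an indexed collection.
database = np.random.rand(100, 512)
query = np.random.rand(512)
top_matches = retrieve(query, database)           # indices of the five most similar images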