
A computational model of visual attention.


Authors

Jayachandra Chilukamari



Contributors

Sampath Kannangara (Supervisor)
Grant M. Maxwell (Supervisor)
Yafan Zhao (Supervisor)
Abstract

Visual attention is the process by which the Human Visual System (HVS) selects the most important information from a scene. Visual attention models are computational or mathematical models developed to predict this information. The performance of state-of-the-art visual attention models is limited in terms of both prediction accuracy and computational complexity, and despite a significant amount of active research in this area, modelling visual attention remains an open research challenge. This thesis proposes a novel computational model of visual attention that achieves higher prediction accuracy with low computational complexity. A new bottom-up visual attention model based on in-focus regions is proposed. To develop the model, an image dataset is created by capturing images containing both in-focus and out-of-focus regions. The Discrete Cosine Transform (DCT) spectrum of these images is investigated qualitatively and quantitatively to identify the key frequency coefficients that correspond to the in-focus regions. The model detects these key coefficients by formulating a novel relation between the in-focus and out-of-focus regions in the frequency domain, and uses them to detect the salient in-focus regions. Simulation results show that this attention model achieves good prediction accuracy with low complexity. The prediction accuracy of the proposed in-focus visual attention model is further improved by incorporating the sensitivity of the HVS to the image centre and to human faces, and the computational complexity is further reduced by using the Integer Cosine Transform (ICT). The model's parameters are tuned using a hill-climbing approach to optimise accuracy. The performance is analysed qualitatively and quantitatively using two large image datasets with eye-tracking fixation ground truth. The results show that the model achieves higher prediction accuracy with lower computational complexity than state-of-the-art visual attention models. The proposed model is useful for predicting human fixations in computationally constrained environments, in particular in applications such as perceptual video coding, image quality assessment, object recognition and image segmentation.
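As an illustration of the general idea only (not the thesis's actual formulation), the Python sketch below computes a coarse saliency map from block-wise DCT coefficients: in-focus regions retain more AC (non-DC) energy than out-of-focus regions, and the resulting map is modulated by a Gaussian centre bias. The 8x8 block size, the use of total AC magnitude as the focus measure, and the centre-bias width are assumptions made for illustration; the thesis derives a specific relation between in-focus and out-of-focus coefficients and additionally uses face saliency, the ICT and tuned parameters, none of which are reproduced here.

    # Minimal illustrative sketch, NOT the thesis's formulation.
    # Block-wise DCT "in-focus" saliency: in-focus blocks keep more
    # AC energy, so the summed magnitude of AC coefficients per 8x8
    # block serves as a crude sharpness/saliency score, then the map
    # is weighted by a Gaussian centre bias (HVS centre preference).
    import numpy as np
    from scipy.fft import dctn

    def in_focus_saliency(gray, block=8, centre_sigma=0.3):
        """gray: 2-D float array in [0, 1]. Returns a coarse saliency map."""
        h, w = gray.shape
        bh, bw = h // block, w // block
        sal = np.zeros((bh, bw))
        for i in range(bh):
            for j in range(bw):
                patch = gray[i*block:(i+1)*block, j*block:(j+1)*block]
                coeffs = dctn(patch, norm='ortho')   # 2-D DCT of the block
                coeffs[0, 0] = 0.0                   # discard the DC term
                sal[i, j] = np.abs(coeffs).sum()     # AC energy ~ sharpness
        # Gaussian centre bias (assumed sigma, in normalised coordinates)
        ys, xs = np.mgrid[0:bh, 0:bw]
        cy, cx = (bh - 1) / 2.0, (bw - 1) / 2.0
        d2 = ((ys - cy) / bh) ** 2 + ((xs - cx) / bw) ** 2
        sal = sal * np.exp(-d2 / (2 * centre_sigma ** 2))
        rng = sal.max() - sal.min()
        return (sal - sal.min()) / (rng + 1e-12)     # normalise to [0, 1]

The block-wise formulation keeps the cost low (one small DCT per block), which is in the spirit of the low-complexity goal stated in the abstract, but the coefficient selection here is a generic stand-in for the thesis's derived relation.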

Citation

CHILUKAMARI, J. 2017. A computational model of visual attention. Robert Gordon University, PhD thesis.

Thesis Type: Thesis
Deposit Date: Aug 15, 2017
Publicly Available Date: Aug 15, 2017
Keywords: Visual saliency; Saliency detection; In focus; DCT; Frequency saliency; Fixation prediction; Attention; Visual attention models; Saliency model; Face saliency
Public URL: http://hdl.handle.net/10059/2443
Related Public URLs: http://hdl.handle.net/10059/2477 ; http://hdl.handle.net/10059/2478 ; http://hdl.handle.net/10059/2479 ; http://hdl.handle.net/10059/2481
Award Date: Feb 28, 2017
