dc.identifier.uri: http://hdl.handle.net/1951/59852
dc.identifier.uri: http://hdl.handle.net/11401/71402
dc.description.sponsorship: This work is sponsored by the Stony Brook University Graduate School in compliance with the requirements for completion of degree. [en_US]
dc.format: Monograph
dc.format.medium: Electronic Resource [en_US]
dc.language.iso: en_US
dc.publisher: The Graduate School, Stony Brook University: Stony Brook, NY.
dc.type: Dissertation
dcterms.abstract: Shift-variant image restoration, or image deblurring, is useful in many applications, including machine vision, image processing, 3D microscopy, and medical image analysis. Several shift-variant restoration approaches currently exist; however, they are either computationally expensive or inaccurate, leading to poor image quality. This thesis proposes and investigates computationally efficient techniques that produce high-quality restoration, even in the presence of noise. The methods presented here are general in that they are not limited to certain types of kernels to be computationally efficient. Detailed analysis and computational algorithms for implementing the methods are provided. This thesis addresses blurring in linear shift-variant imaging systems in both two and three dimensions. Image restoration in such systems corresponds to solving the Fredholm Integral Equation of the First Kind. In the two-dimensional case, computational efficiency is achieved through localization; in the three-dimensional case, a new domain transformation is applied to achieve computational efficiency. These results are presented in two parts. In the first part, three image restoration algorithms are discussed. The first algorithm is a localized approach to restoring highly defocused images. It is based on an existing method called the single-interval RT (SRT) method. The SRT method is found to restore only small to medium levels of blur; it is extended here to restore images blurred with large shift-variant point spread functions (PSFs). The new method is called the multi-interval RT (MRT) method. In the MRT technique, the region around a pixel, with size comparable to the support domain of the blurring kernel, is divided into several smaller regions (intervals). The blurred image in each interval is modeled separately by truncated Taylor-series polynomials. A linear system is derived by differentiating the polynomial with respect to the spatial variables, and a vector of blurred-image derivatives is then expressed as a sum of such linear systems. An iterative update formula is obtained and evaluated to improve the focused-image estimate. Experimental results for the MRT technique in 1D on analytic functions and in 2D on simulated data and real images are presented. The results show that the MRT technique is effective for restoring highly defocused images, at a modest increase in computational cost compared to SRT. The next two restoration algorithms are iterative versions of SRT. One of them is the RT Iterative (RTI) method, in which the forward RT equation of SRT, which expresses the blurred image as a weighted sum of the focused image and its derivatives, is rearranged to form an update equation. The RTI update equation is found to converge rapidly to a solution. The other method is a modification of the gradient-based Landweber iteration and is called the RT-based Landweber (RTLW) algorithm. The RTLW algorithm has a step-size parameter and hence provides more control over convergence to the solution. Both the RTI and RTLW methods are analyzed for computational complexity; for deblurring defocus aberration, both are O(N log N) per iteration. Both methods are compared with the Landweber algorithm and Tikhonov regularization (using the SVD) in terms of computation time, accuracy, robustness to noise, and quality of the restored images.
An interesting new insight into the ill-conditioned nature of the image restoration problem becomes apparent from analyzing the localized methods. The second part of this thesis focuses on a new theorem called the Generalized Convolution Theorem (GCT). GCT provides the conditions under which a linear shift-variant system can be transformed into a linear shift-invariant system. The motivation for such a transformation is the computational advantage of implementing shift-invariant systems and shift-invariant deblurring using the Fast Fourier Transform (FFT): in the transformed domain, the shift-invariant equivalent of a shift-variant system is deblurred in O(N log N), and implementing the transformations is not computationally expensive, so shift-variant restoration becomes computationally efficient. GCT is stated and proved in one dimension (1D), and the 1D GCT is applied to a hypothetical imaging system for verification. A proof of the multi-dimensional version of GCT is also provided. Next, applications of GCT in 3D imaging with digital cameras and microscopes are considered. The blurred 3D image sequence is modeled as the result of shift-variant filtering with a 3D PSF. It is found that the 3D shift-variant kernel under geometric optics satisfies the conditions required by GCT for domain transformation, so GCT is applied to 3D deconvolution microscopy. Specifically, GCT is useful in reducing the computational requirements of shift-variant, or depth-dependent, deconvolution techniques. Simulation experiments in 3D compare GCT with the shift-invariant (SI) approximation and the piecewise-constant shift-invariant (PCSI) approximation. GCT is demonstrated to provide better results, both qualitatively and quantitatively, than the SI and PCSI approximations, and it is also found to mitigate some of the artifacts common in deconvolution microscopy. Shape recovery using GCT is also briefly investigated.
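As background for the RTLW method mentioned in the abstract, the following Python sketch shows the classical gradient-based Landweber iteration that RTLW modifies, f_{k+1} = f_k + beta * H^T(g - H f_k). It is a minimal illustration, not the dissertation's algorithm: it assumes a shift-invariant blur so that H reduces to convolution with a known PSF, and the function name, default step size, and iteration count are illustrative choices.

    import numpy as np
    from scipy.signal import fftconvolve

    def landweber_deblur(blurred, psf, beta=1.0, n_iter=50):
        # Classical Landweber update: f_{k+1} = f_k + beta * H^T (g - H f_k).
        # H is modeled here as shift-invariant convolution with `psf`
        # (assumed normalized to sum to 1), so its adjoint H^T is
        # convolution with the flipped kernel.
        psf_adjoint = psf[::-1, ::-1]
        estimate = blurred.copy()  # initialize with the blurred image g
        for _ in range(n_iter):
            residual = blurred - fftconvolve(estimate, psf, mode="same")
            estimate = estimate + beta * fftconvolve(residual, psf_adjoint, mode="same")
        return estimate

The step-size parameter beta is the control the abstract attributes to RTLW: smaller values slow convergence but improve robustness to noise, which is why the methods are compared on both computation time and noise sensitivity.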
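The computational payoff that motivates GCT, deblurring a shift-invariant system in O(N log N) with the FFT, can be illustrated with a generic regularized inverse filter. This Fourier-domain sketch is not the transformation or restoration procedure developed in the dissertation; the Tikhonov-style constant reg and the assumption that the PSF is centred at the array origin are illustrative.

    import numpy as np

    def fft_deblur(blurred, psf, reg=1e-3):
        # Regularized inverse filter in the Fourier domain:
        # F_hat = conj(H) * G / (|H|^2 + reg), computed with 2-D FFTs,
        # so the cost is O(N log N) in the number of pixels N.
        H = np.fft.fft2(psf, s=blurred.shape)   # transfer function of the blur
        G = np.fft.fft2(blurred)
        F_hat = np.conj(H) * G / (np.abs(H) ** 2 + reg)
        return np.real(np.fft.ifft2(F_hat))

Under GCT, a shift-variant system that satisfies the theorem's conditions is first mapped to an equivalent shift-invariant system; a filter of this kind can then be applied in the transformed domain, which is what makes the overall shift-variant restoration computationally efficient.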
dcterms.available: 2013-05-22T17:35:33Z
dcterms.available: 2015-04-24T14:47:25Z
dcterms.contributor: Djuric, Petar M.; Bugallo, Monica F.; Mueller, Klaus. [en_US]
dcterms.contributor: Subbarao, Muralidhara [en_US]
dcterms.creator: Sastry, Shekhar Bangalore
dcterms.dateAccepted: 2013-05-22T17:35:33Z
dcterms.dateAccepted: 2015-04-24T14:47:25Z
dcterms.dateSubmitted: 2013-05-22T17:35:33Z
dcterms.dateSubmitted: 2015-04-24T14:47:25Z
dcterms.description: Department of Electrical Engineering [en_US]
dcterms.extent: 148 pg. [en_US]
dcterms.format: Monograph
dcterms.format: Application/PDF [en_US]
dcterms.identifier: http://hdl.handle.net/1951/59852
dcterms.identifier: Sastry_grad.sunysb_0771E_10738 [en_US]
dcterms.identifier: http://hdl.handle.net/11401/71402
dcterms.issued: 2011-12-01
dcterms.language: en_US
dcterms.provenance: Made available in DSpace on 2013-05-22T17:35:33Z (GMT). No. of bitstreams: 1. Sastry_grad.sunysb_0771E_10738.pdf: 5862026 bytes, checksum: 5e1da2143abbfbbf74a72530e6969b06 (MD5). Previous issue date: 1 [en]
dcterms.provenance: Made available in DSpace on 2015-04-24T14:47:25Z (GMT). No. of bitstreams: 3. Sastry_grad.sunysb_0771E_10738.pdf.jpg: 1894 bytes, checksum: a6009c46e6ec8251b348085684cba80d (MD5). Sastry_grad.sunysb_0771E_10738.pdf.txt: 192312 bytes, checksum: 9f4c7bd6cfdc794e4aff5cc8d3df38f2 (MD5). Sastry_grad.sunysb_0771E_10738.pdf: 5862026 bytes, checksum: 5e1da2143abbfbbf74a72530e6969b06 (MD5). Previous issue date: 1 [en]
dcterms.publisher: The Graduate School, Stony Brook University: Stony Brook, NY.
dcterms.subject: Electrical engineering--Computer engineering
dcterms.subject: Convolution, Generalized convolution theorem, Image processing, Image restoration, Rao transform
dcterms.title: Computationally Efficient Methods for Shift-variant Image Restoration in Two and Three Dimensions
dcterms.type: Dissertation

