
dc.identifier.uri: http://hdl.handle.net/11401/77289
dc.description.sponsorship: This work is sponsored by the Stony Brook University Graduate School in compliance with the requirements for completion of the degree. [en_US]
dc.format: Monograph
dc.format.medium: Electronic Resource [en_US]
dc.language.iso: en_US
dc.publisher: The Graduate School, Stony Brook University: Stony Brook, NY.
dc.type: Dissertation
dcterms.abstract: Statistical Relational Learning (SRL), an emerging area of Machine Learning, aims to model problems that exhibit complex relational structure as well as uncertainty. It uses a subset of first-order logic to represent relational properties, and graphical models to represent uncertainty. Probabilistic Logic Programming (PLP) is an interesting subfield of SRL. A key characteristic of PLP frameworks is that they are conservative extensions of non-probabilistic logic programs, which have been widely used for knowledge representation. PLP frameworks extend traditional logic programming semantics to a distribution semantics, where the meaning of a probabilistic logic program is given in terms of a distribution over possible models of the program. However, the inference techniques used in these frameworks rely on enumerating sets of explanations for a query answer. Consequently, these languages permit only very limited use of random variables with continuous distributions. In this thesis, we extend PRISM, a well-known PLP language, with Gaussian random variables and linear equality constraints over reals. We provide a well-defined distribution semantics for the extended language. We present symbolic inference and parameter-learning algorithms for the extended language that represent sets of explanations without enumerating them. This permits us to reason over complex probabilistic models, such as Kalman filters and a large subclass of Hybrid Bayesian networks, that were hitherto not expressible in PLP frameworks. The inference algorithm can be extended to handle programs with Gamma-distributed random variables as well. An interesting aspect of our inference and learning algorithms is that they specialize to those of PRISM in the absence of continuous variables. By using PRISM as the basis, our inference and learning algorithms match the complexity of known specialized algorithms when applied to Hidden Markov Models, Finite Mixture Models, and Kalman Filters. (A minimal illustrative sketch of the closed-form Gaussian reasoning behind symbolic inference appears after this record.)
dcterms.available: 2017-09-20T16:52:21Z
dcterms.contributor: Ramakrishnan, C.R. [en_US]
dcterms.contributor: Ramakrishnan, I.V. [en_US]
dcterms.contributor: Warren, David [en_US]
dcterms.contributor: Costa, Vitor [en_US]
dcterms.creator: Islam, Muhammad Asiful
dcterms.dateAccepted: 2017-09-20T16:52:21Z
dcterms.dateSubmitted: 2017-09-20T16:52:21Z
dcterms.description: Department of Computer Science. [en_US]
dcterms.extent: 119 pg. [en_US]
dcterms.format: Monograph
dcterms.format: Application/PDF [en_US]
dcterms.identifier: http://hdl.handle.net/11401/77289
dcterms.issued: 2012-12-01
dcterms.language: en_US
dcterms.provenance: Made available in DSpace on 2017-09-20T16:52:21Z (GMT). No. of bitstreams: 1 Islam_grad.sunysb_0771E_11004.pdf: 1182805 bytes, checksum: c5e030d4f48c9df990a846ec08c33dfc (MD5) Previous issue date: 1 [en]
dcterms.publisher: The Graduate School, Stony Brook University: Stony Brook, NY.
dcterms.subject: Computer science
dcterms.title: Inference and Learning in Probabilistic Logic Programs with Continuous Random Variables
dcterms.type: Dissertation
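
The key technical claim in the abstract is that Gaussian random variables related by linear equality constraints admit closed-form inference, so sets of explanations can be represented symbolically rather than enumerated. The following is a minimal sketch in Python, not the thesis's actual algorithm, assuming a hypothetical one-dimensional linear-Gaussian model: because a linear map of a Gaussian is Gaussian and conditioning a Gaussian on a linear observation is again Gaussian, a Kalman-filter query can be answered by propagating a (mean, variance) pair in closed form. All names and constants below are hypothetical.

    # Minimal sketch (not the thesis's algorithm): Gaussians are closed under
    # linear transformation and conditioning, so the filtering distribution of
    # a linear-Gaussian model stays Gaussian and can be carried as a
    # (mean, variance) pair instead of an enumerated set of explanations.

    def kalman_step(mu, var, a, q, c, r, y):
        """One predict/update step of a 1-D Kalman filter.

        State model:       x_t = a * x_{t-1} + w,  w ~ N(0, q)
        Observation model: y_t = c * x_t     + v,  v ~ N(0, r)
        """
        # Predict: a linear map of a Gaussian is Gaussian.
        mu_pred = a * mu
        var_pred = a * a * var + q
        # Update: conditioning on the linear observation y_t keeps it Gaussian.
        gain = var_pred * c / (c * c * var_pred + r)
        mu_post = mu_pred + gain * (y - c * mu_pred)
        var_post = (1.0 - gain * c) * var_pred
        return mu_post, var_post

    # Hypothetical usage: filter three observations, starting from prior N(0, 1).
    mu, var = 0.0, 1.0
    for y in [0.9, 1.7, 2.2]:
        mu, var = kalman_step(mu, var, a=1.0, q=0.1, c=1.0, r=0.5, y=y)
    print(mu, var)

In the extended PRISM described by the abstract, the analogous closed-form reasoning is carried out symbolically over sets of explanations, which is also why the inference and learning algorithms specialize to PRISM's own when no continuous variables are present.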

