Show simple item record

dc.identifier.uri: http://hdl.handle.net/11401/77432
dc.description.sponsorship: This work is sponsored by the Stony Brook University Graduate School in compliance with the requirements for completion of degree.
dc.format: Monograph
dc.format.medium: Electronic Resource
dc.language.iso: en_US
dc.publisher: The Graduate School, Stony Brook University: Stony Brook, NY.
dc.type: Dissertation
dcterms.abstract: Events that occur randomly over time or space give rise to count data. Poisson models are widely used to analyze such data, but simple log-linear forms are often insufficient to capture complex relationships between variables. We therefore study tree-structured log-linear models and latent variable models for count data. First, we consider extending Poisson regression for independent observations. Decision trees have the advantage of interpretability; constant fits within strata, however, are too simple to be informative. We therefore propose embedding log-linear models in decision trees, using the negative binomial distribution to accommodate overdispersion. Second, we consider latent variable models for point process observations in neuroscience. Neurons signal by firing sequences of electrical spikes, which can naturally be treated as point processes once analog waveform differences are disregarded. Large-scale neural recordings have shown evidence of low-dimensional nonlinear dynamics that describe the neural computations implemented by large neuronal networks. Sufficient redundancy in population activity gives us access to the underlying neural process of interest while observing only a small subset of neurons, a key step toward understanding how neural systems work. We therefore propose a probabilistic method that recovers the latent trajectories nonparametrically under a log-linear generative model with minimal assumptions. Third, we aim to model the continuous dynamics to further understand neural computation. Theories of neural computation are characterized by dynamical features such as fixed points and continuous attractors, yet reconstructing the corresponding low-dimensional dynamical system from neural time series is usually difficult. Typical linear dynamical system and autoregressive models are either too simple to capture complex features or prone to extrapolating wildly. We therefore propose a flexible nonlinear time series model that directly learns the velocity field of the dynamics in state space and produces reliable future predictions across a variety of dynamical models and on real neural data.
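To illustrate the overdispersion issue motivating the first part of the abstract (this is not the dissertation's own code, and all variable names are hypothetical), the sketch below fits a plain Poisson log-linear model by iteratively reweighted least squares to counts simulated from a Poisson-gamma (negative binomial) mechanism; the Pearson dispersion statistic then exceeds 1, which is exactly the symptom the negative binomial extension addresses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate overdispersed counts via a Poisson-gamma (negative binomial) mechanism.
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])          # design matrix: intercept + covariate
beta_true = np.array([0.5, 1.0])
mu = np.exp(X @ beta_true)                    # log-linear mean
# A gamma rate multiplier with mean 1 inflates the variance above the mean.
y = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson log-linear model by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        m = np.exp(eta)
        z = eta + (y - m) / m                 # working response for the log link
        W = m                                 # working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

beta_hat = poisson_irls(X, y)
mu_hat = np.exp(X @ beta_hat)
# Pearson dispersion: approximately 1 for Poisson data, > 1 under overdispersion.
dispersion = np.sum((y - mu_hat) ** 2 / mu_hat) / (n - X.shape[1])
print(beta_hat, dispersion)
```

The mean model is still correctly specified, so the coefficient estimates remain close to the truth; what breaks is the Poisson variance assumption, visible in the inflated dispersion statistic.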
dcterms.available: 2017-09-20T16:52:41Z
dcterms.contributor: Ahn, Hongshik
dcterms.contributor: Park, Il Memming
dcterms.contributor: Hong, Sangjin.
dcterms.contributor: Finch, Stephen
dcterms.creator: Zhao, Yuan
dcterms.dateAccepted: 2017-09-20T16:52:41Z
dcterms.dateSubmitted: 2017-09-20T16:52:41Z
dcterms.description: Department of Applied Mathematics and Statistics
dcterms.extent: 113 pg.
dcterms.format: Monograph
dcterms.format: Application/PDF
dcterms.identifier: http://hdl.handle.net/11401/77432
dcterms.issued: 2016-12-01
dcterms.language: en_US
dcterms.provenance: Made available in DSpace on 2017-09-20T16:52:41Z (GMT). No. of bitstreams: 1 Zhao_grad.sunysb_0771E_13014.pdf: 5503934 bytes, checksum: 8b167de7c6cc8d7b59d2edf4d6397168 (MD5) Previous issue date: 1
dcterms.publisher: The Graduate School, Stony Brook University: Stony Brook, NY.
dcterms.subject: Statistics -- Neurosciences
dcterms.subject: Count, Decision tree, Dynamics, Log-linear model, Variational Bayes
dcterms.title: Log-linear Model Based Tree and Latent Variable Model for Count Data
dcterms.type: Dissertation

