
dc.identifier.uri: http://hdl.handle.net/11401/78252
dc.description.sponsorship: This work is sponsored by the Stony Brook University Graduate School in compliance with the requirements for completion of degree.
dc.format: Monograph
dc.format.medium: Electronic Resource
dc.language.iso: en_US
dc.type: Dissertation
dcterms.abstract: Visual attention enables primates to prioritize visual information relevant to an ongoing task for selection and further processing. This ability reflects integration and competition among bottom-up signals at multiple stages of processing along the ventral and dorsal visual pathways in the brain. Top-down modulations bias the signal in these pathways to allow for goal-directed behavior. This dissertation introduces a framework for building deep neural network (DNN) models inspired by the anatomical and functional structure of the brain's attention network. Two models are built in this framework and tested on eye-movement behavior during categorical search tasks. The first study presents a model of the ventral pathway (which processes what object is perceived). This network is built using a pre-trained 8-layer object-classification DNN, with feedforward and feedback processing in the ventral pathway mapped onto processing between the layers of this DNN. Building on previous work on predicting fixations, the model also includes the subcortical Superior Colliculus (SC), an area instrumental in programming eye movements. The ventral network model is tested against categorical search eye-movement behavior in object-array displays to evaluate the learning of feature and object biases in the network. The model predicted both attentional guidance and recognition accuracy for this task. The second study presents ATTNet, a model of interacting DNNs for the ventral and dorsal visual pathways (with the latter processing where and how an object is perceived), with layers in these networks corresponding to key cortical areas involved in prioritizing visual information and planning eye movements. ATTNet differs from the ventral network model in one major respect: most of the model training takes place during the search task itself (as opposed to being entirely pre-trained, as in Study 1).
Using policy gradient reinforcement learning, ATTNet is trained to detect categorically defined targets in a scene. ATTNet showed evidence for attention being preferentially directed to target goals, behaviorally measured as eye-movement guidance to the targets. More fundamentally, ATTNet learned to spatially route its visual inputs so as to maximize target detection success and reward, and in so doing learned to shift its attention. By learning the human-like strategy of shifting attention to target-like patterns in an image, ATTNet becomes the first behaviorally validated DNN model of attention prioritization and goal-directed attention control.
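The policy-gradient idea described above can be illustrated with a minimal sketch. This is not the actual ATTNet code; it is a toy REINFORCE example under assumed simplifications: a policy with one logit per spatial location, a reward of 1 when the sampled location contains the (hypothetical) target, and a plain gradient-ascent update. The real model operates on DNN feature maps rather than a bare logit vector.

```python
import numpy as np

# Toy REINFORCE sketch (illustrative only; not the dissertation's model).
# The policy assigns attention probabilities to spatial locations; sampling
# a location yields reward 1 if it contains the target, 0 otherwise.

rng = np.random.default_rng(0)
n_locations = 4
target = 2                      # hypothetical target location
logits = np.zeros(n_locations)  # policy parameters, one logit per location
lr = 0.5                        # learning rate

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(500):
    probs = softmax(logits)
    loc = rng.choice(n_locations, p=probs)   # sample an attention shift
    reward = 1.0 if loc == target else 0.0
    # REINFORCE: grad of log pi(loc) w.r.t. logits = one_hot(loc) - probs
    grad = -probs
    grad[loc] += 1.0
    logits += lr * reward * grad             # reward-weighted update

# After training, the policy concentrates attention on the target location.
probs = softmax(logits)
```

Because only target fixations are rewarded, the update pushes probability mass toward the target location over trials, which is the sense in which the network "learns to shift its attention" toward target-like input.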
dcterms.available: 2018-06-21T13:38:43Z
dcterms.contributor: Brennan, Susan
dcterms.contributor: Zelinsky, Gregory
dcterms.contributor: Anderson, Brenda
dcterms.contributor: McPeek, Robert
dcterms.contributor: Hoai Nguyen, Minh
dcterms.creator: Adeli Jelodar, Hossein
dcterms.dateAccepted: 2018-06-21T13:38:43Z
dcterms.dateSubmitted: 2018-06-21T13:38:43Z
dcterms.description: Department of Experimental Psychology
dcterms.extent: 84 pg.
dcterms.format: Application/PDF
dcterms.format: Monograph
dcterms.identifier: http://hdl.handle.net/11401/78252
dcterms.issued: 2017-12-01
dcterms.language: en_US
dcterms.provenance: Made available in DSpace on 2018-06-21T13:38:43Z (GMT). No. of bitstreams: 1 AdeliJelodar_grad.sunysb_0771E_13614.pdf: 3075648 bytes, checksum: 61d8d4578cb0381fd1ce598c786c0c7c (MD5) Previous issue date: 12
dcterms.subject: Cognitive psychology
dcterms.subject: Attention
dcterms.subject: Computer science
dcterms.subject: Biased Competition
dcterms.subject: Neurosciences
dcterms.subject: Computational Modeling
dcterms.subject: Deep Learning
dcterms.subject: Deep Neural Networks
dcterms.title: Deep Learning in Attention Networks
dcterms.type: Dissertation

