
dc.identifier.uri: http://hdl.handle.net/11401/77304
dc.description.sponsorship: This work is sponsored by the Stony Brook University Graduate School in compliance with the requirements for completion of degree.
dc.format: Monograph
dc.format.medium: Electronic Resource
dc.language.iso: en_US
dc.publisher: The Graduate School, Stony Brook University: Stony Brook, NY.
dc.type: Dissertation
dcterms.abstract: Large, high-resolution displays (LHiRDs) are a powerful tool for visualization and data exploration. These facilities, with resolutions in the hundreds of millions of pixels, have proliferated in industry and research laboratories, enabling scientists, engineers, and physicians to better understand the problems they face. Recently, the Reality Deck pushed LHiRDs past the gigapixel resolution barrier, offering 1.5 gigapixels and a 360-degree horizontal field of view within a large 33′ × 19′ workspace. Room-sized facilities such as the Reality Deck simultaneously promote and demand physical navigation by the user. Consequently, static user interfaces (e.g., keyboard and mouse) do not translate well to such systems. Additionally, the sheer size and resolution of the Reality Deck can trigger new and interesting patterns in how users navigate the visualization space. These patterns are worthy of investigation and can also be exploited to improve system performance. The goal of this dissertation is to evaluate, leverage, and further enable the physical-navigation aspects of room-sized gigapixel-resolution displays such as the Reality Deck. This is accomplished via four pillars of research. The first pillar is the introduction of interfaces for unencumbered, device-less, hand-driven interaction with such systems. The second pillar exploits the perceptual characteristics of LHiRDs and the human visual system to improve performance when displaying gigapixel-resolution data. The third pillar focuses on the evaluation of user performance in LHiRDs while core visualization tasks are performed through physical navigation. The fourth pillar is the introduction of VEEVVIE, the Visual Explorer for Empirical Visualization, VR and Interaction Experiments. VEEVVIE is a visual analytics tool that enables the visual exploration of data from visualization, virtual reality, and interaction experiments, such as those conducted in LHiRDs, allowing researchers to validate and generate insights and hypotheses interactively.
dcterms.available: 2017-09-20T16:52:24Z
dcterms.contributor: Samaras, Dimitris
dcterms.contributor: Kaufman, Arie E
dcterms.contributor: Mueller, Klaus
dcterms.contributor: Varshney, Amitabh.
dcterms.creator: Papadopoulos, Charilaos
dcterms.dateAccepted: 2017-09-20T16:52:24Z
dcterms.dateSubmitted: 2017-09-20T16:52:24Z
dcterms.description: Department of Computer Science.
dcterms.extent: 213 pg.
dcterms.format: Application/PDF
dcterms.format: Monograph
dcterms.identifier: http://hdl.handle.net/11401/77304
dcterms.issued: 2015-12-01
dcterms.language: en_US
dcterms.provenance: Made available in DSpace on 2017-09-20T16:52:24Z (GMT). No. of bitstreams: 1 Papadopoulos_grad.sunysb_0771E_12312.pdf: 131905640 bytes, checksum: fae5690dba78c5741cb80ecbfa953df0 (MD5) Previous issue date: 1
dcterms.publisher: The Graduate School, Stony Brook University: Stony Brook, NY.
dcterms.subject: Computer science
dcterms.subject: Gigapixel, Immersion, Interaction, Physical Navigation, Tiled Displays, Visualization
dcterms.title: Interacting with Gigapixel Displays
dcterms.type: Dissertation

