Multimodal Analysis of Gaze in VR Human-Avatar Interaction
Date: 2018-11-09
Author: Acartürk, Cengiz
URI: https://hdl.handle.net/11511/75896
Collections: Unverified, Conference / Seminar
Suggestions
Multimodal query-level fusion for efficient multimedia information retrieval
Sattari, Saeid; Yazıcı, Adnan (2018-10-01)
Managing a large volume of multimedia data containing various modalities such as visual, audio, and text reveals the necessity for efficient methods for modeling, processing, storing, and retrieving complex data. In this paper, we propose a fusion-based approach at the query level to improve query retrieval performance of multimedia data. We discuss various flexible query types including the combination of content as well as concept-based queries that provide users with the ability to efficiently perform mu...
Multimodal concept detection in broadcast media: KavTan
SOYSAL, Medeni; Alatan, Abdullah Aydın; TEKİN, Mashar; ESEN, Ersin; SARACOĞLU, Ahmet; Acar, Banu Oskay; Ozan, Ezgi Can; Ates, Tugrul K.; SEVİMLİ, Hakan; SEVİNÇ, Muge; ATIL, Ilkay; Ozkan, Savas; Arabaci, Mehmet Ali; TANKIZ, Seda; KARADENİZ, Talha; ÖNÜR, Duygu; SELÇUK, Sezin; Alatan, A. Aydin; Çiloğlu, Tolga (Springer Science and Business Media LLC, 2014-10-01)
Concept detection stands as an important problem for efficient indexing and retrieval in large video archives. In this work, the KavTan System, which performs high-level semantic classification in one of the largest TV archives of Turkey, is presented. In this system, concept detection is performed using generalized visual and audio concept detection modules that are supported by video text detection, audio keyword spotting and specialized audio-visual semantic detection components. The performance of the p...
Multimodal comprehension of language and graphics: Graphs with and without annotations
Acartürk, Cengiz; Çağıltay, Kürşat (2008-11-01)
An experimental investigation into interaction between language and information graphics in multimodal documents served as the basis for this study. More specifically, our purpose was to investigate the role of linguistic annotations in graph-text documents. Participants were presented with three newspaper articles in the following conditions: one text-only, one text plus non-annotated graph, and one text plus annotated graph. Results of the experiment showed that, on one hand, annotations play a bridging r...
Multidimensional approach to link between trust and health before and after COVID-19: Nationwide investigations based on social capital theory
TOSYALI, AHMET FURKAN; Öner Özkan, Bengi; Harma, Mehmet; Department of Psychology (2022-5)
The current research was part of a larger project, funded by the Scientific and Technological Research Council of Turkey (TUBITAK), examining the link between psychosocial factors and health-related outcomes. A pilot study showed that the items could adequately be used, and bivariate relationships were consistent with the hypotheses. In Study 1, the hypotheses were examined in a representative sample of the Turkish population. Bonding and linking aspects were positively related to SRH; however, bridging asp...
Multimodal Stereo Vision Using Mutual Information with Adaptive Windowing
Yaman, Mustafa; Kalkan, Sinan (2013-05-23)
This paper proposes a method for computing disparity maps from a multimodal stereovision system composed of an infrared and a visible camera pair. The method uses mutual information (MI) as the basic similarity measure where a segmentation-based adaptive windowing mechanism is proposed for greatly enhancing the results. On several datasets, we show that (i) our proposal improves the quality of existing MI formulation, and (ii) our method can provide depth comparable to the quality of Kinect depth data.
Citation Formats
IEEE
C. Acartürk, “Multimodal Analysis of Gaze in VR Human-Avatar Interaction,” 2018, Accessed: 00, 2021. [Online]. Available: https://hdl.handle.net/11511/75896.