Please use this identifier to cite or link to this item: https://hdl.handle.net/1783.1/59677

VAIT: A Visual Analytics System for Metropolitan Transportation

Bibliographic Details
Author Liu, Siyuan
Pu, Jiansu
Luo, Qiong
Qu, Huamin
Ni, Lionel Ming-shuan
Krishnan, Ramayya
Issue Date 2013
Source IEEE Transactions on Intelligent Transportation Systems, v. 14, no. 4, December 2013, article number 6522454, p. 1586-1596
Abstract With the increasing availability of metropolitan transportation data, such as those from vehicle Global Positioning System (GPS) receivers and roadside sensors, it has become viable for authorities, operators, and individuals to analyze the data to better understand the transportation system and, possibly, improve its utilization and planning. We report our experience in building the Visual Analytics for Intelligent Transportation (VAIT) system, the first such system built on real-life, large-scale data sets for intelligent transportation. Our key observation is that metropolitan transportation data are inherently visual, as they are spatio-temporal data around road networks. Therefore, we visualize and manage traffic data together with digital maps and support analytical queries through an interactive visual interface. As a case study, we demonstrate VAIT on real-world taxi GPS and meter data sets from 15 000 taxis running for two months in a Chinese city of over 10 million people. We discuss the technical challenges in data calibration, storage, visualization, and query processing, and offer first-hand lessons learned from developing the system. Our extensive empirical results show that VAIT outperforms state-of-the-art methods and systems in scalability, efficiency, and effectiveness, and it offers an easy-to-use, efficient, and scalable platform for further intelligent transportation research.
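Note: the following is a minimal, hypothetical Python sketch (not VAIT's actual implementation) of the kind of spatio-temporal analytical query the abstract describes: selecting taxi GPS points that fall inside a spatial region during a time window. The field names, coordinates, and sample records are illustrative assumptions.

# Hypothetical sketch of a spatio-temporal query over taxi GPS records.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class GpsPoint:
    taxi_id: str
    lat: float
    lon: float
    timestamp: datetime

def points_in_region(points, lat_range, lon_range, time_range):
    """Return GPS points inside the given bounding box and time window."""
    (lat_min, lat_max), (lon_min, lon_max) = lat_range, lon_range
    t_start, t_end = time_range
    return [
        p for p in points
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and t_start <= p.timestamp <= t_end
    ]

# Example usage with made-up coordinates and times.
sample = [
    GpsPoint("taxi_001", 31.2304, 121.4737, datetime(2013, 3, 1, 8, 15)),
    GpsPoint("taxi_002", 31.2400, 121.4800, datetime(2013, 3, 1, 23, 50)),
]
morning_rush = points_in_region(
    sample,
    lat_range=(31.20, 31.25),
    lon_range=(121.45, 121.50),
    time_range=(datetime(2013, 3, 1, 7, 0), datetime(2013, 3, 1, 10, 0)),
)
print(f"{len(morning_rush)} point(s) in the region during the morning rush")

In a real system, such filters would run against indexed storage rather than an in-memory list; the sketch only illustrates the query semantics.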
DOI 10.1109/TITS.2013.2263225
ISSN 1524-9050 (print); 1558-0016 (electronic)
Language English
Type Article