DeepRep

Learning features or representations from data is a longstanding goal of data mining and machine learning. In scientific visualization, feature definitions are usually application-specific and, in many cases, vague or even unknown. Representation learning is often the first and crucial step toward effective scientific data analysis and visualization (SDAV). This step has become increasingly important as the size and complexity of scientific simulation data continue to grow. For more than three decades, manual feature engineering has been the standard practice in scientific visualization. With the rapid advances in AI and machine learning, leveraging deep neural networks for automatic feature discovery has emerged as a promising alternative. The overarching goal of this project is to develop DeepRep, a systematic deep representation learning framework for SDAV. The outcomes will enable a paradigm shift toward representing scientific data in an abstract feature space, helping scientists better understand physical, chemical, and medical phenomena such as those arising in climate, combustion, and cardiovascular applications. This project thus serves the national interest, as stated in NSF's mission: to promote the progress of science and to advance the national health, prosperity, and welfare.

SDAV mainly deals with unlabeled data. Therefore, the project team will investigate unsupervised learning techniques and explore their use in learning abstract, deep, and expressive features. The proposed framework considers a broad range of inputs, including three-dimensional scalar and vector data and their visual representations (i.e., lines, surfaces, and subvolumes). Specifically, the team will study different unsupervised deep representation learning techniques, including distributed learning, disentangled learning, and self-supervised learning. The DeepRep project aims to demonstrate their utility in various downstream SDAV tasks, such as dimensionality reduction, data clustering, representative selection, anomaly detection, data classification, and data generation. The proposed research includes four primary tasks: (1) autoencoders for distributed learning of volumetric data and their visual representations, (2) graph convolutional networks for representation learning of surface data to support node-level and graph-level operations, (3) ensemble data generation from independent features via disentangled learning, and (4) self-supervised solutions for robust data representation via contrastive learning. Furthermore, the team will perform comprehensive objective and subjective evaluations with multilevel metrics to assess the framework's effectiveness.
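Task (4) builds on contrastive learning, which trains an encoder so that representations of similar (positive) pairs are pulled together while dissimilar (negative) pairs are pushed apart. As a minimal illustration only (not the project's actual implementation), the widely used InfoNCE objective can be sketched in plain Python; the hand-picked vectors below stand in for encoder outputs:

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors given as lists of floats.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, tau=0.1):
    # InfoNCE loss: negative log-softmax of the positive pair's
    # similarity against all candidates, scaled by temperature tau.
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / tau for s in sims]
    m = max(logits)  # subtract the max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

# The loss is near zero when the anchor is far more similar to its
# positive than to any negative, and large otherwise.
low = info_nce([1.0, 0.0], [1.0, 0.01], [[0.0, 1.0], [-1.0, 0.0]])
high = info_nce([1.0, 0.0], [0.0, 1.0], [[1.0, 0.01], [-1.0, 0.0]])
```

In practice, the anchor and positive would be two augmented views of the same data item (e.g., two crops of a subvolume) passed through a shared encoder, with other items in the batch serving as negatives.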

Project Team

Chaoli Wang, Principal Investigator
Jian-Xun Wang, Collaborator
Pengfei Gu, Graduate Student (now at University of Texas Rio Grande Valley)
Jun Han, Graduate Student (now at Hong Kong University of Science and Technology)
Yunfei Lu, Graduate Student
Kaiyuan Tang, Graduate Student
Siyuan Yao, Graduate Student

Publications

Kaiyuan Tang and Chaoli Wang. ECNR: Efficient Compressive Neural Representation of Time-Varying Volumetric Datasets. In Proceedings of IEEE Pacific Visualization Conference, Tokyo, Japan, pages 72-81, Apr 2024.
[PDF] (30.7MB) | [MP4] (53.5MB)

Kaiyuan Tang and Chaoli Wang. STSR-INR: Spatiotemporal Super-Resolution for Multivariate Time-Varying Volumetric Data via Implicit Neural Representation. Computers & Graphics, 119:103874, Apr 2024.
[PDF] (23.2MB) | [MP4] (63.8MB) | [HTM] [Code Available for Download]

Jun Han and Chaoli Wang. CoordNet: Data Generation and Visualization Generation for Time-Varying Volumes via a Coordinate-Based Neural Network. IEEE Transactions on Visualization and Computer Graphics, 29(12):4951-4963, Dec 2023.
[Presented at IEEE VIS 2023]
[PDF] (40.4MB) | [WMV] (111.8MB) | [HTM] [Code Available for Download]

Pengfei Gu, Danny Z. Chen, and Chaoli Wang. NeRVI: Compressive Neural Representation of Visualization Images for Communicating Volume Visualization Results. Computers & Graphics, 116:216-227, Nov 2023.
[PDF] (17.7MB) | [MP4] (85.5MB) | [HTM] [Code Available for Download]

Zhiyuan Cheng, Zeyuan Li, Zhepeng Luo, Mayleen Liu, Jonathan D’Alonzo, and Chaoli Wang. ArcheryVis: A Tool for Analyzing and Visualizing Archery Performance Data. In Proceedings of International Symposium on Visual Computing, Lake Tahoe, NV, pages 97-108, Oct 2023.
[PDF] (6.0MB)

Chaoli Wang and Jun Han. DL4SciVis: A State-of-the-Art Survey on Deep Learning for Scientific Visualization. IEEE Transactions on Visualization and Computer Graphics, 29(8):3714-3733, Aug 2023.
[Presented at IEEE VIS 2022]
[PDF] (508KB)

Zhichun Guo, Jun Tao, Siming Chen, Nitesh V. Chawla, and Chaoli Wang. SD2: Slicing and Dicing Scholarly Data for Interactive Evaluation of Academic Performance. IEEE Transactions on Visualization and Computer Graphics, 29(8):3569-3585, Aug 2023.
[Presented at IEEE VIS 2022]
[PDF] (7.5MB) | [MP4] (51.2MB) | [HTM] [Code Available for Download]

Siyuan Yao, Jun Han, and Chaoli Wang. GMT: A Deep Learning Approach to Generalized Multivariate Translation for Scientific Data Analysis and Visualization. Computers & Graphics, 112:92-104, May 2023.
[PDF] (29.4MB) | [MP4] (125.1MB) | [HTM] [Code Available for Download]

Chase J. Brown, Siyuan Yao, Xiaoyun Zhang, Chad J. Brown, John B. Caven, Krupali U. Krusche, and Chaoli Wang. Visualizing Digital Architectural Data for Heritage Education. In Proceedings of IS&T Conference on Visualization and Data Analysis, San Francisco, CA, pages 393-1-393-7, Jan 2023.
[PDF] (4.5MB) | [MP4] (145.8MB)

Jun Han and Chaoli Wang. SurfNet: Learning Surface Representations via Graph Convolutional Network. Computer Graphics Forum (EuroVis 2022), 41(3):109-120, Jun 2022.
[Presented at EuroVis 2022]
[PDF] (69.4MB) | [WMV] (53.8MB) | [HTM] [Code Available for Download]

Jun Han and Chaoli Wang. VCNet: A Generative Model for Volume Completion. Visual Informatics (IEEE PacificVis 2022 Workshop), 6(2):62-73, Jun 2022.
[Presented at IEEE PacificVis 2022]
[PDF] (11.6MB) | [WMV] (46.7MB)

Pengfei Gu, Jun Han, Danny Z. Chen, and Chaoli Wang. Scalar2Vec: Translating Scalar Fields to Vector Fields via Deep Learning. In Proceedings of IEEE Pacific Visualization Symposium, Virtual, pages 31-40, Apr 2022.
[PDF] (14.4MB) | [HTM] [Code Available for Download]

Jun Han and Chaoli Wang. TSR-VFD: Generating Temporal Super-Resolution for Unsteady Vector Field Data. Computers & Graphics, 103:168-179, Apr 2022.
[PDF] (54.9MB) | [HTM] [Code Available for Download]

Brendan J. O’Handley, Morgan K. Ludwig, Samantha R. Allison, Michael T. Niemier, Shreya Kumar, Ramzi Bualuan, and Chaoli Wang. CoursePathVis: Course Path Visualization Using Flexible Grouping and Funnel-Augmented Sankey Diagram. In Proceedings of IS&T Conference on Visualization and Data Analysis, Virtual, pages 431-1-431-9, Jan 2022.
[PDF] (3.9MB) | [MP4] (9.9MB) | [DEMO]

Jun Han, Hao Zheng, Danny Z. Chen, and Chaoli Wang. STNet: An End-to-End Generative Framework for Synthesizing Spatiotemporal Super-Resolution Volumes. IEEE Transactions on Visualization and Computer Graphics (IEEE VIS 2021), 28(1):270-280, Jan 2022.
[Presented at IEEE VIS 2021]
[PDF] (22.3MB) | [WMV] (40.2MB) | [HTM] [Code Available for Download]

Pengfei Gu, Jun Han, Danny Z. Chen, and Chaoli Wang. Reconstructing Unsteady Flow Data from Representative Streamlines via Diffusion and Deep Learning Based Denoising. IEEE Computer Graphics and Applications (Special Issue on Powering Visualization with Deep Learning), 41(6):111-121, Nov/Dec 2021.
[IEEE CG&A 2021 Best Paper Award]
[PDF] (12.5MB) | [HTM] [Code Available for Download]

William P. Porter, Conor P. Murphy, Dane R. Williams, Brendan J. O’Handley, and Chaoli Wang. Hierarchical Sankey Diagram: Design and Evaluation. In Proceedings of International Symposium on Visual Computing, Virtual, part II, pages 386-397, Oct 2021.
[PDF] (2.0MB) | [DEMO]

Jun Zhang, Jun Tao, Jian-Xun Wang, and Chaoli Wang. SurfRiver: Flattening Stream Surfaces for Comparative Visualization. IEEE Transactions on Visualization and Computer Graphics (IEEE PacificVis 2021), 27(6):2783-2795, Jun 2021.
[Presented at IEEE PacificVis 2021]
[PDF] (15.8MB) | [MP4] (52.3MB)

This material is based upon work supported by the National Science Foundation (NSF) under Grant No. 2101696. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.