DeepRep

Learning features or representations from data is a longstanding goal of data mining and machine learning. In scientific visualization, feature definitions are usually application-specific, and in many cases, they are vague or even unknown. Representation learning is often the first and crucial step toward effective scientific data analysis and visualization (SDAV). This step has become increasingly important and necessary as the size and complexity of scientific simulation data continue to grow. For more than three decades, manual feature engineering has been the standard practice in scientific visualization. With the rise of AI and machine learning, leveraging deep neural networks for automatic feature discovery has emerged as a promising alternative. The overarching goal of this project is to develop DeepRep, a systematic deep representation learning framework for SDAV. The outcomes will provide a paradigm shift in how scientific data are represented, moving from handcrafted features to learned representations in an abstract feature space, helping scientists better understand physical, chemical, and medical phenomena such as those arising in climate, combustion, and cardiovascular applications. This project thus serves the national interest, as stated in NSF’s mission: to promote the progress of science; to advance the national health, prosperity, and welfare.

SDAV mainly deals with unlabeled data. Therefore, the project team will investigate unsupervised learning techniques and explore their use in learning abstract, deep, and expressive features. The proposed framework considers a broad range of inputs, including three-dimensional scalar and vector data and their visual representations (i.e., lines, surfaces, and subvolumes). Specifically, the team will study different unsupervised deep representation learning techniques, including distributed learning, disentangled learning, and self-supervised learning. The DeepRep project aims to demonstrate their utility in various subsequent SDAV tasks, such as dimensionality reduction, data clustering, representative selection, anomaly detection, data classification, and data generation. The proposed research includes four primary tasks: (1) autoencoders for distributed learning of volumetric data and their visual representations, (2) graph convolutional networks for representation learning of surface data to support node-level and graph-level operations, (3) ensemble data generation from independent features via disentangled learning, and (4) self-supervised solutions for robust data representation via contrastive learning. Furthermore, the team will perform comprehensive objective and subjective evaluations using multilevel metrics to assess the framework’s effectiveness.
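
To give a concrete sense of the first task, the following is a minimal sketch of an autoencoder for volumetric data. It is written in PyTorch purely for illustration; the framework, volume resolution (64^3), channel widths, and latent dimension are assumptions, not the project’s actual architecture. The encoder compresses a scalar volume into a compact latent vector that downstream SDAV tasks (e.g., clustering or representative selection) could operate on, and the decoder reconstructs the volume so the representation can be learned without labels.

# Minimal sketch (illustrative only): a 3D convolutional autoencoder for
# learning latent representations of scalar volumes without labels.
import torch
import torch.nn as nn

class VolumeAutoencoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        # Encoder: 1 x 64^3 scalar volume -> latent vector.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(inplace=True),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8 * 8, latent_dim),
        )
        # Decoder: latent vector -> reconstructed 64^3 volume.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8 * 8),
            nn.Unflatten(1, (64, 8, 8, 8)),
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(16, 1, kernel_size=4, stride=2, padding=1),   # 32 -> 64
        )

    def forward(self, x):
        z = self.encoder(x)           # latent feature vector for downstream tasks
        return self.decoder(z), z

# Usage: one reconstruction-loss training step on a placeholder batch
# of normalized 64^3 volumes.
model = VolumeAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
volumes = torch.rand(4, 1, 64, 64, 64)           # stand-in for real simulation data
recon, latent = model(volumes)
loss = nn.functional.mse_loss(recon, volumes)    # unsupervised reconstruction objective
loss.backward()
optimizer.step()

After training, only the encoder is needed: its latent vectors serve as the learned representation fed into tasks such as dimensionality reduction, clustering, or anomaly detection. The other tasks in the project (graph convolutional networks for surfaces, disentangled learning, and contrastive learning) follow the same pattern of replacing handcrafted features with learned ones.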

Project Team

Chaoli Wang, Principal Investigator
Jian-Xun Wang, Collaborator
Pengfei Gu, Graduate Student
Jun Han, Graduate Student (now at Chinese University of Hong Kong, Shenzhen)
Kaiyuan Tang, Graduate Student
Siyuan Yao, Graduate Student

Publications

Jun Han and Chaoli Wang. CoordNet: Data Generation and Visualization Generation for Time-Varying Volumes via a Coordinate-Based Neural Network. IEEE Transactions on Visualization and Computer Graphics, 29(12):4951-4963, Dec 2023.
[Presented at IEEE VIS 2023]
[PDF] (40.4MB) | [WMV] (111.8MB) | [HTM] [Code Available for Download]

Chaoli Wang and Jun Han. DL4SciVis: A State-of-the-Art Survey on Deep Learning for Scientific Visualization. IEEE Transactions on Visualization and Computer Graphics, 29(8):3714-3733, Aug 2023.
[Presented at IEEE VIS 2022]
[PDF] (508KB)

Zhichun Guo, Jun Tao, Siming Chen, Nitesh V. Chawla, and Chaoli Wang. SD2: Slicing and Dicing Scholarly Data for Interactive Evaluation of Academic Performance. IEEE Transactions on Visualization and Computer Graphics, 29(8):3569-3585, Aug 2023.
[Presented at IEEE VIS 2022]
[PDF] (7.5MB) | [MP4] (51.2MB) | [HTM] [Code Available for Download]

Siyuan Yao, Jun Han, and Chaoli Wang. GMT: A Deep Learning Approach to Generalized Multivariate Translation for Scientific Data Analysis and Visualization. Computers & Graphics, 112:92-104, May 2023.
[PDF] (29.4MB) | [MP4] (125.1MB) | [HTM] [Code Available for Download]

Chase J. Brown, Siyuan Yao, Xiaoyun Zhang, Chad J. Brown, John B. Caven, Krupali U. Krusche, and Chaoli Wang. Visualizing Digital Architectural Data for Heritage Education. In Proceedings of IS&T Conference on Visualization and Data Analysis, San Francisco, CA, pages 393-1-393-7, Jan 2023.
[PDF] (4.5MB) | [MP4] (145.8MB)

Jun Han and Chaoli Wang. SurfNet: Learning Surface Representations via Graph Convolutional Network. Computer Graphics Forum (EuroVis 2022), 41(3):109-120, Jun 2022.
[Presented at EuroVis 2022]
[PDF] (69.4MB) | [WMV] (53.8MB) | [HTM] [Code Available for Download]

Jun Han and Chaoli Wang. VCNet: A Generative Model for Volume Completion. Visual Informatics (IEEE PacificVis 2022 Workshop), 6(2):62-73, Jun 2022.
[Presented at IEEE PacificVis 2022]
[PDF] (11.6MB) | [WMV] (46.7MB)

Pengfei Gu, Jun Han, Danny Z. Chen, and Chaoli Wang. Scalar2Vec: Translating Scalar Fields to Vector Fields via Deep Learning. In Proceedings of IEEE Pacific Visualization Symposium, Virtual, pages 31-40, Apr 2022.
[PDF] (14.4MB) | [HTM] [Code Available for Download]

Jun Han and Chaoli Wang. TSR-VFD: Generating Temporal Super-Resolution for Unsteady Vector Field Data. Computers & Graphics, 103:168-179, Apr 2022.
[PDF] (54.9MB) | [HTM] [Code Available for Download]

Brendan J. O’Handley, Morgan K. Ludwig, Samantha R. Allison, Michael T. Niemier, Shreya Kumar, Ramzi Bualuan, and Chaoli Wang. CoursePathVis: Course Path Visualization Using Flexible Grouping and Funnel-Augmented Sankey Diagram. In Proceedings of IS&T Conference on Visualization and Data Analysis, Virtual, pages 431-1-431-9, Jan 2022.
[PDF] (3.9MB) | [MP4] (9.9MB) | [DEMO]

Jun Han, Hao Zheng, Danny Z. Chen, and Chaoli Wang. STNet: An End-to-End Generative Framework for Synthesizing Spatiotemporal Super-Resolution Volumes. IEEE Transactions on Visualization and Computer Graphics (IEEE VIS 2021), 28(1):270-280, Jan 2022.
[Presented at IEEE VIS 2021]
[PDF] (22.3MB) | [WMV] (40.2MB) | [HTM] [Code Available for Download]

Pengfei Gu, Jun Han, Danny Z. Chen, and Chaoli Wang. Reconstructing Unsteady Flow Data from Representative Streamlines via Diffusion and Deep Learning Based Denoising. IEEE Computer Graphics and Applications (Special Issue on Powering Visualization with Deep Learning), 41(6):111-121, Nov/Dec 2021.
[IEEE CG&A 2021 Best Paper Award]
[PDF] (12.5MB) | [HTM] [Code Available for Download]

William P. Porter, Conor P. Murphy, Dane R. Williams, Brendan J. O’Handley, and Chaoli Wang. Hierarchical Sankey Diagram: Design and Evaluation. In Proceedings of International Symposium on Visual Computing, Virtual, part II, pages 386-397, Oct 2021.
[PDF] (2.0MB) | [DEMO]

Jun Zhang, Jun Tao, Jian-Xun Wang, and Chaoli Wang. SurfRiver: Flattening Stream Surfaces for Comparative Visualization. IEEE Transactions on Visualization and Computer Graphics (IEEE PacificVis 2021), 27(6):2783-2795, Jun 2021.
[Presented at IEEE PacificVis 2021]
[PDF] (15.8MB) | [MP4] (52.3MB)

This material is based upon work supported by the National Science Foundation (NSF) under Grant No. 2101696. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.