Commit 6a307ef

now for real..
1 parent 972d460 commit 6a307ef

23 files changed: +429 -0 lines changed
Lines changed: 85 additions & 0 deletions
@@ -0,0 +1,85 @@
---
layout: publication
# The quotes make the : possible, otherwise you can do it without quotes
title: "A Comprehensive Evaluation of Life Sciences Data Resources Reveals Significant Accessibility Barriers"
key: 2025_scientific-reports_accessibility-barriers
# paper | preprint | poster
type: paper
order: 2025-4

#paper_content_url:

# The shortname is used for auto-generated titles
shortname: Accessibility Barriers

# add a 2:1 aspect ratio (e.g., width: 400px, height: 200px) to the folder /assets/images/papers/
image: 2025_scientific-reports_accessibility-barriers.png
# add a 2:1 aspect ratio teaser figure (e.g., width: 1200px, height: 600px) to the folder /assets/images/papers/
image_large: 2025_scientific-reports_accessibility-barriers_teaser.png

external-project: http://inscidar.org/

# Authors in the "database" can be used with just the key (lastname). Others can be written properly.
authors:
  - Sehi L’Yi
  - Harrison G. Zhang
  - Andrew P. Mar
  - Thomas C. Smits
  - Lawrence Weru
  - Sofía Rojas
  - lex
  - gehlenborg

year: 2025
journal-short: Scientific Reports

bibentry: article
bib:
  journal: Scientific Reports
  booktitle:
  editor:
  publisher:
  address:
  doi: 10.1038/s41598-025-08731-7
  url:
  volume: 15
  number: 1
  pages: 23676
  month:

preprint_server: https://osf.io/preprints/osf/5v98j_v1

# Add things like "Best Paper Award at InfoVis 2099, selected out of 4000 submissions"
award:

pdf: 2025_scientific-reports_accessibility-barriers.pdf

#supplement: 2025_eurovis_text-descriptions_supplement.zip

# Extra supplements, such as talk slides, data sets, etc.
# supplements:
#   - name: Supplementary Material OSF
#     # Use link instead of abslink if you want to link to the master directory
#     abslink: https://osf.io/kbvs9/
#     # Defaults to a download icon, use this if you want a link-out icon
#     linksym: true

abstract: "
Individuals with disabilities participate notably less in the scientific workforce. While the reasons for this discrepancy are multifaceted, accessibility of knowledge is likely a factor. In the life sciences, digital resources play an important role in gaining new knowledge and conducting data-driven research. However, there is little data on how accessible essential life sciences resources are for people with disabilities. Our work is the first to comprehensively evaluate the accessibility of life sciences resources. To understand the current state of accessibility of digital data resources in the life sciences, we pose three research questions: (1) What are the most common accessibility issues?; (2) What factors may have contributed to the current state of accessibility?; and (3) What is the potential impact of accessibility issues in real-world use cases? To answer these questions, we collected large-scale accessibility data about two essential resources: data portals (n = 3,112) and journal websites (n = 5,099). Our analysis shows that many life sciences resources contain severe accessibility issues (74.8% of data portals and 69.1% of journal websites) and are significantly less accessible than US government websites, which we used as a baseline. Focusing on visual impairment, we further conducted a preliminary study to evaluate three data portals in-depth with a blind user, unveiling the practical impact of the identified accessibility issues on common tasks (53.3% success rate), such as data discovery tasks. Based on our results, we find that simply implementing accessibility standards does not guarantee real-world accessibility of life sciences data resources. We believe that our data and analysis results bring insights into how the scientific community can address critical accessibility barriers and increase awareness of accessibility, leading to more inclusive life sciences research and education. Our analysis results are publicly available at http://inscidar.org/.
"

# After the --- you can put information that you want to appear on the website using markdown formatting or HTML. Good examples are acknowledgements, extra references, an erratum, etc.

---

# Acknowledgements

This study was in part funded by NIH grants R01HG011773 and K99HG013348.

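
The `authors` list in the entry above mixes short database keys (`lex`, `gehlenborg`) with fully written names, following the convention described in the file's own comment. The minimal Python sketch below illustrates how a build step might resolve such entries against an author lookup; the lookup table, the full names in it, and the helper function are illustrative assumptions, not code from this repository.

```python
# Sketch of the author-key convention noted in the front matter: entries that
# match a key in a (hypothetical) author database are expanded to full names,
# everything else is treated as an already fully written name.
AUTHOR_DB = {
    "lex": "Alexander Lex",          # assumed full names, for illustration only
    "gehlenborg": "Nils Gehlenborg",
}

def resolve_authors(entries):
    return [AUTHOR_DB.get(entry, entry) for entry in entries]

authors = [
    "Sehi L’Yi", "Harrison G. Zhang", "Andrew P. Mar", "Thomas C. Smits",
    "Lawrence Weru", "Sofía Rojas", "lex", "gehlenborg",
]
print(resolve_authors(authors))
# Database keys are replaced; written-out names pass through unchanged.
```
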
Lines changed: 88 additions & 0 deletions
@@ -0,0 +1,88 @@
---
layout: publication
title: "Here’s what you need to know about my data: Exploring Expert Knowledge’s Role in Data Analysis"
key: 2025_vis_data_hunch_interview
type: paper
order: 2025-7
redirect_from: /publications/2023_preprint_data_hunch_interview

shortname: data_hunch_interview
image: 2025_vis_data_hunch_interview.png
image_large: 2025_vis_data_hunch_interview_teaser.png

authors:
  - lin
  - lisnic
  - akbaba
  - meyer
  - lex

journal-short: IEEE VIS
year: 2026

bibentry: article
bib:
  journal: IEEE Transactions on Visualization and Computer Graphics (VIS)
  booktitle:
  editor:
  publisher:
  address:
  doi:
  url:
  volume: 32
  number:
  pages:
  month: jan

award: IEEE VIS 2025 Honorable Mention Award

# Links to a project hosted on VDL, or else externally on your own site

# Video entries: a preview, talk, and intro video. Vimeo IDs or YouTube IDs are supported;
# you need to pick either a Vimeo or a YouTube ID. We definitely want a downloadable video too.

# videos:
#   - name: 'Loon Introduction'
#     youtube-id: Y7u3Kg3At9A
#     file: 2021_vis_loon.mp4
#   - name: 'Loon VIS Preview'
#     youtube-id: iRsL3WiZbhI
#     file: 2021_vis_loon_preview.mp4
#   - name: 'Loon VIS Talk'
#     youtube-id: Xz5VrBXk5J0
#     file: 2021_vis_loon_talk.mp4

# Provide a preprint and supplement pdf

pdf: 2025_vis_data-hunch_interview.pdf
# supplement: 2021_vis_loon_supplement.pdf

# Link to an official preprint server
preprint_server: https://doi.org/10.31219/osf.io/dn32z_v2

# Extra supplements, such as talk slides, data sets, etc.
# supplements:
#   - name: VIS Talk Slides
#     link: 2021_vis_loon_talk_slides.pdf

# Supplemental, cc-by images. Make caption brief (at most 60 chars)

abstract: "
<p>Data-driven decision making has become a popular practice in science, industry, and public policy. Yet data alone, as an imperfect and partial representation of reality, is often insufficient to make good analysis decisions. Knowledge about the context of a dataset, its strengths and weaknesses, and its applicability for certain tasks is essential. Analysts are often not only familiar with the data itself, but also have data hunches about their analysis subject. In this work, we present an interview study with analysts from a wide range of domains and with varied expertise and experience, inquiring about the role of contextual knowledge. We provide insights into how data is insufficient in analysts’ workflows and how they incorporate other sources of knowledge into their analysis. We analyzed how knowledge of data shaped their analysis outcome. Based on the results, we suggest design opportunities to better and more robustly consider both knowledge and data in analysis processes.</p>
"
---

# Acknowledgements

We would like to thank our interviewees for their time and participation in the study, and the Visualization Design Lab for the fruitful discussions and feedback. We gratefully acknowledge funding from the National Science Foundation (OAC 1835904) and from the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Lines changed: 76 additions & 0 deletions
@@ -0,0 +1,76 @@
---
layout: publication
# The quotes make the : possible, otherwise you can do it without quotes
title: "Reframing Pattern: A Comprehensive Approach to a Composite Visual Variable"
key: 2025_vis_reframing-pattern
# paper | preprint | poster
type: paper
order: 2025-5

#paper_content_url:

# The shortname is used for auto-generated titles
shortname: Reframing Pattern

# add a 2:1 aspect ratio (e.g., width: 400px, height: 200px) to the folder /assets/images/papers/
image: 2025_vis_reframing-pattern.png
# add a 2:1 aspect ratio teaser figure (e.g., width: 1200px, height: 600px) to the folder /assets/images/papers/
image_large: 2025_vis_reframing-pattern_teaser.png

# Authors in the "database" can be used with just the key (lastname). Others can be written properly.
authors:
  - he
  - Jason Dykes
  - Petra Isenberg
  - Tobias Isenberg

year: 2026
journal-short: IEEE VIS

bibentry: article
bib:
  journal: IEEE Transactions on Visualization and Computer Graphics (VIS)
  booktitle:
  editor:
  publisher:
  address:
  doi:
  url:
  volume: 32
  number:
  pages:
  month: jan

preprint_server: https://arxiv.org/abs/2508.02639

# Add things like "Best Paper Award at InfoVis 2099, selected out of 4000 submissions"
award:

pdf: 2025_vis_reframing-pattern.pdf

# Extra supplements, such as talk slides, data sets, etc.
supplements:
  - name: Supplementary Material OSF
    # Use link instead of abslink if you want to link to the master directory
    abslink: https://osf.io/z7ae2/
    # Defaults to a download icon, use this if you want a link-out icon
    linksym: true

abstract: |
  We present a new comprehensive theory for explaining, exploring, and using pattern as a visual variable in visualization. Although patterns have long been used for data encoding and continue to be valuable today, their conceptual foundations are precarious: the concepts and terminology used across the research literature and in practice are inconsistent, making it challenging to use patterns effectively and to conduct research to inform their use. To address this problem, we conduct a comprehensive cross-disciplinary literature review that clarifies ambiguities around the use of "pattern" and "texture". As a result, we offer a new consistent treatment of pattern as a composite visual variable composed of structured groups of graphic primitives that can serve as marks for encoding data individually and collectively. This new and widely applicable formulation opens a sizable design space for the visual variable pattern, which we formalize as a new system comprising three sets of variables: the spatial arrangement of primitives, the appearance relationships among primitives, and the retinal visual variables that characterize individual primitives. We show how our pattern system relates to existing visualization theory and highlight opportunities for visualization design. We further explore patterns based on complex spatial arrangements, demonstrating explanatory power and connecting our conceptualization to broader theory on maps and cartography. An author version and additional materials are available on OSF: osf.io/z7ae2.

# After the --- you can put information that you want to appear on the website using markdown formatting or HTML. Good examples are acknowledgements, extra references, an erratum, etc.

---

# Acknowledgements

We thank all members of the Aviz team and the University of Utah’s SCI Institute for their insightful input throughout this theory-building process, especially A.-F. Cabouat, F. Cabric, C. Han, and Y. Lu.

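
The `supplements` entry above uses `abslink` together with `linksym: true`; per the comments, `link` would instead point at a locally hosted file, and `linksym` swaps the default download icon for a link-out icon. The short Python sketch below illustrates that convention; the assets path and the helper function are assumptions for illustration, not the site's actual template logic.

```python
# Sketch of how a build step might turn a supplement entry into a hyperlink,
# following the link/abslink distinction described in the front-matter comments.
SUPPLEMENT_DIR = "/assets/supplements/"  # assumed location for relative links

def supplement_href(entry: dict) -> str:
    # 'abslink' points at an external location and is used verbatim;
    # 'link' is treated as a file name under the (assumed) supplements directory.
    if "abslink" in entry:
        return entry["abslink"]
    return SUPPLEMENT_DIR + entry["link"]

entry = {
    "name": "Supplementary Material OSF",
    "abslink": "https://osf.io/z7ae2/",
    "linksym": True,  # per the comment: show a link-out icon instead of a download icon
}
print(supplement_href(entry))  # -> https://osf.io/z7ae2/
```
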

_publications/2025_vis_revisit.md

Lines changed: 91 additions & 0 deletions
@@ -0,0 +1,91 @@
---
layout: publication
# The quotes make the : possible, otherwise you can do it without quotes
title: "ReVISit 2: A Full Experiment Life Cycle User Study Framework"
key: 2025_vis_revisit
# paper | preprint | poster
type: paper
order: 2025-8

#paper_content_url:

# The shortname is used for auto-generated titles
shortname: ReVISit

# add a 2:1 aspect ratio (e.g., width: 400px, height: 200px) to the folder /assets/images/papers/
image: 2025_vis_revisit.png
# add a 2:1 aspect ratio teaser figure (e.g., width: 1200px, height: 600px) to the folder /assets/images/papers/
image_large: 2025_vis_revisit_teaser.png

images:
  - path: 2025_vis_revisit_texture_study.png
    caption: ReVISit replay view showing the provenance history and current state of a participant
  - path: 2025_vis_revisit_timeline_view.png
    caption: ReVISit timeline view of participants who completed a study

# Authors in the "database" can be used with just the key (lastname). Others can be written properly.
authors:
  - zcutler
  - wilburn
  - Hilson Shrestha
  - Yiren Ding
  - bollen
  - abrar
  - he
  - Andrew McNutt
  - Lane Harrison
  - lex

year: 2026
journal-short: IEEE VIS

bibentry: article
bib:
  journal: IEEE Transactions on Visualization and Computer Graphics (VIS)
  booktitle:
  editor:
  publisher:
  address:
  doi:
  url:
  volume: 32
  number:
  pages:
  month: jan

# Link to an official preprint server
preprint_server: https://arxiv.org/abs/2508.03876

# Add things like "Best Paper Award at InfoVis 2099, selected out of 4000 submissions"
award: IEEE VIS 2025 Best Paper Award

# Use this if you have an external project website
external-project: https://revisit.dev

pdf: 2025_vis_revisit.pdf

# Extra supplements, such as talk slides, data sets, etc.
supplements:
  - name: Supplemental Material
    # Use link instead of abslink if you want to link to the master directory
    abslink: https://osf.io/e8anx
    # Defaults to a download icon, use this if you want a link-out icon
    linksym: true

# Link to the repository where the code is hosted
code: https://github.com/revisit-studies/study

videos:
  - name: 'Paper Video'
    youtube-id: 1t3nWNnv6BE
    file: 2025_vis_revisit.mp4

abstract: "Online user studies of visualizations, visual encodings, and interaction techniques are ubiquitous in visualization research. Yet, designing, conducting, and analyzing studies effectively is still a major burden. Although various packages support such user studies, most solutions address only facets of the experiment life cycle, make reproducibility difficult, or do not cater to nuanced study designs or interactions. We introduce reVISit 2, a software framework that supports visualization researchers at all stages of designing and conducting browser-based user studies. ReVISit supports researchers in the design, debug & pilot, data collection, analysis, and dissemination experiment phases by providing both technical affordances (such as replay of participant interactions) and sociotechnical aids (such as a mindfully maintained community of support). It is a proven system that can be (and has been) used in publication-quality studies---which we demonstrate through a series of experimental replications. We reflect on the design of the system via interviews and an analysis of its technical dimensions. Through this work, we seek to elevate the ease with which studies are conducted, improve the reproducibility of studies within our community, and support the construction of advanced interactive studies.
"

# After the --- you can put information that you want to appear on the website using markdown formatting or HTML. Good examples are acknowledgements, extra references, an erratum, etc.

---
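
The `videos` entry in this last file carries a `youtube-id`; per the commented template in the data-hunch file, an entry can alternatively carry a Vimeo ID, plus a downloadable `file`. The sketch below shows how either ID could be mapped to a standard embed URL; the helper itself is illustrative and not part of the site's code.

```python
# Sketch of the youtube-id / vimeo-id convention for video entries.
# The embed URL patterns are the standard YouTube/Vimeo embed endpoints.
def embed_url(video: dict) -> str:
    if "youtube-id" in video:
        return f"https://www.youtube.com/embed/{video['youtube-id']}"
    if "vimeo-id" in video:
        return f"https://player.vimeo.com/video/{video['vimeo-id']}"
    raise ValueError("video entry needs either a youtube-id or a vimeo-id")

video = {
    "name": "Paper Video",
    "youtube-id": "1t3nWNnv6BE",
    "file": "2025_vis_revisit.mp4",  # downloadable copy, kept alongside the embed
}
print(embed_url(video))  # -> https://www.youtube.com/embed/1t3nWNnv6BE
```
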
