
Commit a82c029
Author: Chunk Tso
Commit message: update pages for Z. Wang, J. Zheng, and J. Zeng.
Parent: 31cd861

File tree: 17 files changed (+574 additions, -0 deletions)
Lines changed: 79 additions & 0 deletions
@@ -0,0 +1,79 @@
---
# Display name
title: Jianqiang Zeng

# Full Name (for SEO)
first_name: Jianqiang
last_name: Zeng

# Is this the primary user of the site?
superuser: true

# Role/position
role: Ph.D. Student

# homepage: https://zouyonghao.github.io/ # If you do not want to use the internal homepage, set this to your external homepage

# Organizations/Affiliations
organizations:
  - name: Peking University
    url: 'https://www.pku.edu.cn/'

# # Short bio (displayed in user profile at end of posts)
# bio: My research interests include distributed robotics, mobile computing and programmable matter.

# interests:
#   - Artificial Intelligence
#   - Computational Linguistics
#   - Information Retrieval

# education:
#   courses:
#     - course: PhD in Artificial Intelligence
#       institution: Stanford University
#       year: 2012
#     - course: MEng in Artificial Intelligence
#       institution: Massachusetts Institute of Technology
#       year: 2009
#     - course: BSc in Artificial Intelligence
#       institution: Massachusetts Institute of Technology
#       year: 2008

# Social/Academic Networking
# For available icons, see: https://docs.hugoblox.com/getting-started/page-builder/#icons
# For an email link, use "fas" icon pack, "envelope" icon, and a link in the
# form "mailto:[email protected]" or "#contact" for contact widget.
# social:
#   - icon: envelope
#     icon_pack: fas
#     link: 'mailto:[email protected]'
#   - icon: twitter
#     icon_pack: fab
#     link: https://twitter.com/GeorgeCushen
#   - icon: google-scholar
#     icon_pack: ai
#     link: https://scholar.google.co.uk/citations?user=sIwtMXoAAAAJ
#   - icon: github
#     icon_pack: fab
#     link: https://github.com/gcushen
# Link to a PDF of your resume/CV from the About widget.
# To enable, copy your resume/CV to `static/files/cv.pdf` and uncomment the lines below.
#   - icon: cv
#     icon_pack: ai
#     link: files/cv.pdf

# Enter email to display Gravatar (if Gravatar enabled in Config)


# Highlight the author in author lists? (true/false)
highlight_name: false

# Organizational groups that you belong to (for People widget)
# Set this to `[]` or comment out if you are not using People widget.
user_groups:
  - Graduate Students
---
<!--
Nelson Bighetti is a professor of artificial intelligence at the Stanford AI Lab. His research interests include distributed robotics, mobile computing and programmable matter. He leads the Robotic Neurobiology group, which develops self-reconfiguring robots, systems of self-organizing robots, and mobile sensor networks.

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed neque elit, tristique placerat feugiat ac, facilisis vitae arcu. Proin eget egestas augue. Praesent ut sem nec arcu pellentesque aliquet. Duis dapibus diam vel metus tempus vulputate. -->
Binary file added (5.93 MB), not shown.

content/authors/Ju Zheng/_index.md

Lines changed: 76 additions & 0 deletions
@@ -0,0 +1,76 @@
---
# Display name
title: Ju Zheng

# Full Name (for SEO)
first_name: Ju
last_name: Zheng

# Is this the primary user of the site?
superuser: false

# Role/position
role: Research Assistant

# homepage: https://pages.mtu.edu/~zlwang/

# Organizations/Affiliations
organizations:
  - name: Peking University
    url: 'https://www.pku.edu.cn'

# # Short bio (displayed in user profile at end of posts)
# bio: My research interests include distributed robotics, mobile computing and programmable matter.

# interests:
#   - Artificial Intelligence
#   - Computational Linguistics
#   - Information Retrieval

# education:
#   courses:
#     - course: PhD in Artificial Intelligence
#       institution: Stanford University
#       year: 2012
#     - course: MEng in Artificial Intelligence
#       institution: Massachusetts Institute of Technology
#       year: 2009
#     - course: BSc in Artificial Intelligence
#       institution: Massachusetts Institute of Technology
#       year: 2008

# Social/Academic Networking
# For available icons, see: https://docs.hugoblox.com/getting-started/page-builder/#icons
# For an email link, use "fas" icon pack, "envelope" icon, and a link in the
# form "mailto:[email protected]" or "#contact" for contact widget.
social:
  - icon: envelope
    icon_pack: fas
    link: 'mailto:[email protected]'
#   - icon: twitter
#     icon_pack: fab
#     link: https://twitter.com/GeorgeCushen
#   - icon: google-scholar
#     icon_pack: ai
#     link: https://scholar.google.co.uk/citations?user=sIwtMXoAAAAJ
#   - icon: github
#     icon_pack: fab
#     link: https://github.com/gcushen
# Link to a PDF of your resume/CV from the About widget.
# To enable, copy your resume/CV to `static/files/cv.pdf` and uncomment the lines below.
#   - icon: cv
#     icon_pack: ai
#     link: files/cv.pdf

# Enter email to display Gravatar (if Gravatar enabled in Config)


# Highlight the author in author lists? (true/false)
highlight_name: false

# Organizational groups that you belong to (for People widget)
# Set this to `[]` or comment out if you are not using People widget.
user_groups:
  - Administration
---
(temporary blank)
Lines changed: 76 additions & 0 deletions
@@ -0,0 +1,76 @@
---
# Display name
title: Zhenlin Wang

# Full Name (for SEO)
first_name: Zhenlin
last_name: Wang

# Is this the primary user of the site?
superuser: false

# Role/position
role: Professor

homepage: https://pages.mtu.edu/~zlwang/

# Organizations/Affiliations
organizations:
  - name: Michigan Technological University
    url: 'https://www.mtu.edu'

# # Short bio (displayed in user profile at end of posts)
# bio: My research interests include distributed robotics, mobile computing and programmable matter.

# interests:
#   - Artificial Intelligence
#   - Computational Linguistics
#   - Information Retrieval

# education:
#   courses:
#     - course: PhD in Artificial Intelligence
#       institution: Stanford University
#       year: 2012
#     - course: MEng in Artificial Intelligence
#       institution: Massachusetts Institute of Technology
#       year: 2009
#     - course: BSc in Artificial Intelligence
#       institution: Massachusetts Institute of Technology
#       year: 2008

# Social/Academic Networking
# For available icons, see: https://docs.hugoblox.com/getting-started/page-builder/#icons
# For an email link, use "fas" icon pack, "envelope" icon, and a link in the
# form "mailto:[email protected]" or "#contact" for contact widget.
social:
  - icon: envelope
    icon_pack: fas
    link: 'mailto:[email protected]'
#   - icon: twitter
#     icon_pack: fab
#     link: https://twitter.com/GeorgeCushen
#   - icon: google-scholar
#     icon_pack: ai
#     link: https://scholar.google.co.uk/citations?user=sIwtMXoAAAAJ
#   - icon: github
#     icon_pack: fab
#     link: https://github.com/gcushen
# Link to a PDF of your resume/CV from the About widget.
# To enable, copy your resume/CV to `static/files/cv.pdf` and uncomment the lines below.
#   - icon: cv
#     icon_pack: ai
#     link: files/cv.pdf

# Enter email to display Gravatar (if Gravatar enabled in Config)


# Highlight the author in author lists? (true/false)
highlight_name: false

# Organizational groups that you belong to (for People widget)
# Set this to `[]` or comment out if you are not using People widget.
user_groups:
  - Professors
---
Zhenlin Wang received his BS degree in 1992 and his MS degree in 1995, both in Computer Science, from Peking University, China. He received his PhD in Computer Science in 2004 from the University of Massachusetts, Amherst. He joined the Department of Computer Science at Michigan Technological University as an assistant professor in 2003, became an associate professor in 2009, and was promoted to full professor in 2015. His research interests are broadly in the areas of compilers, operating systems, and computer architecture, with a focus on memory system optimization and system virtualization. He is a recipient of an NSF CAREER Award.
Binary file added (25.7 KB), not shown.
Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
@inproceedings{10.1145/3422575.3422799,
  author = {Wang, Yuchen and Yang, Junyao and Wang, Zhenlin},
  title = {Dynamically Configuring LRU Replacement Policy in Redis},
  year = {2021},
  isbn = {9781450388993},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3422575.3422799},
  doi = {10.1145/3422575.3422799},
  abstract = {To reduce the latency of accessing backend servers, today’s web services usually adopt in-memory key-value stores in the front end which cache the frequently accessed objects. Memcached and Redis are two most popular key-value cache systems. Due to the limited size of memory, an in-memory key-value store needs to be configured with a fixed amount of memory, i.e., cache size, and cache replacement is unavoidable when the footprint of accessed objects is larger than the cache size. Memcached implements the least recently used (LRU) policy. Redis adopts an approximated LRU policy to avoid maintaining LRU list structures. On a replacement, Redis samples pre-configured K keys, adds them to the eviction pool, and then chooses the LRU key from the eviction pool for eviction. We name this policy approx-K-LRU. We find that approx-K-LRU behaves close to LRU when K is large. However, different Ks can yield different miss ratios. On the other hand, the sampling and replacement decision itself results in an overhead that is related to K. This paper proposes DLRU (Dynamic LRU), which explores this configurable parameter and dynamically sets K. DLRU utilizes a low-overhead miniature cache simulator to predict miss ratios of different Ks and adopts a cost model to estimate the performance trade-offs. Our experimental results show that DLRU is able to improve Redis throughput over the recommended, default approx-5-LRU by up to 32.5\% for a set of storage traces.},
  booktitle = {Proceedings of the International Symposium on Memory Systems},
  pages = {272–280},
  numpages = {9},
  keywords = {Cache Replacement, In-Memory Key-Value Stores, LRU, Memory Allocation, Memory Caches, Redis},
  location = {Washington, DC, USA},
  series = {MEMSYS '20}
}
Lines changed: 42 additions & 0 deletions
@@ -0,0 +1,42 @@
---
title: 'Dynamically Configuring LRU Replacement Policy in Redis'

# Authors
authors:
  - Yuchen Wang
  - Junyao Yang
  - Zhenlin Wang

date: '2020-09-01T00:00:00Z'
doi: ''

# Schedule page publish date (NOT publication's date).
publishDate: '2020-09-01T00:00:00Z'

# Publication type.
publication_types: ['paper-conference']

# Publication name and optional abbreviated publication name.
publication: In *The International Symposium on Memory Systems (MemSys)*
publication_short: In *MemSys 20*

abstract: 'To reduce the latency of accessing backend servers, today’s web services usually adopt in-memory key-value stores in the front end which cache the frequently accessed objects. Memcached and Redis are two most popular key-value cache systems. Due to the limited size of memory, an in-memory key-value store needs to be configured with a fixed amount of memory, i.e., cache size, and cache replacement is unavoidable when the footprint of accessed objects is larger than the cache size. Memcached implements the least recently used (LRU) policy. Redis adopts an approximated LRU policy to avoid maintaining LRU list structures. On a replacement, Redis samples pre-configured K keys, adds them to the eviction pool, and then chooses the LRU key from the eviction pool for eviction. We name this policy approx-K-LRU. We find that approx-K-LRU behaves close to LRU when K is large. However, different Ks can yield different miss ratios. On the other hand, the sampling and replacement decision itself results in an overhead that is related to K. This paper proposes DLRU (Dynamic LRU), which explores this configurable parameter and dynamically sets K. DLRU utilizes a low-overhead miniature cache simulator to predict miss ratios of different Ks and adopts a cost model to estimate the performance trade-offs. Our experimental results show that DLRU is able to improve Redis throughput over the recommended, default approx-5-LRU by up to 32.5% for a set of storage traces.'

# Summary. An optional shortened abstract.
summary: ''

tags: []

# Display this page in the Featured widget?
featured: false

url_pdf: 'https://dl.acm.org/doi/pdf/10.1145/3422575.3422799'
url_code: ''
url_dataset: ''
url_poster: ''
url_project: ''
url_slides: ''
url_source: ''
url_video: ''

---
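As a quick illustration of the approx-K-LRU eviction step described in the abstract (sample K resident keys, evict the least recently used among them), here is a minimal Python sketch. The function name and the timestamp-dictionary cache model are illustrative assumptions, not Redis internals (Redis keeps a per-object LRU clock and an eviction pool):

```python
import random

def evict_approx_k_lru(cache, k):
    """Evict one key approx-K-LRU style: sample up to k resident keys
    and remove the one with the oldest last-access time.

    `cache` maps key -> last-access timestamp (an illustrative model,
    not the actual Redis object layout).
    """
    sample = random.sample(list(cache), min(k, len(cache)))
    victim = min(sample, key=lambda key: cache[key])
    del cache[victim]
    return victim

# When k covers the whole cache, the sampled eviction degenerates to exact LRU.
cache = {f"key{i}": i for i in range(10)}  # key0 has the oldest timestamp
print(evict_approx_k_lru(cache, k=10))     # → key0
```

Smaller k makes each eviction cheaper but only approximately LRU, which is exactly the trade-off DLRU tunes at runtime.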
Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
@inproceedings{10.1145/3472456.3472514,
  author = {Yang, Junyao and Wang, Yuchen and Wang, Zhenlin},
  title = {Efficient Modeling of Random Sampling-Based LRU},
  year = {2021},
  isbn = {9781450390682},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3472456.3472514},
  doi = {10.1145/3472456.3472514},
  abstract = {The Miss Ratio Curve (MRC) is an important metric and effective tool for caching system performance prediction and optimization. Since the Least Recently Used (LRU) replacement policy is the de facto policy for many existing caching systems, most previous studies on efficient MRC construction are predominantly focused on the LRU replacement policy. Recently, the random sampling-based replacement mechanism, as opposed to replacement relying on the rigid LRU data structure, gains more popularity due to its lightweight and flexibility. To approximate LRU, at replacement times, the system randomly selects K objects and replaces the least recently used object among the sample. Redis implements this approximated LRU policy. We observe that there can exist a significant miss ratio gap between exact LRU and random sampling-based LRU under different sampling size K; therefore existing LRU MRC construction techniques cannot be directly applied to random sampling based LRU cache without loss of accuracy. In this work, we present a new probabilistic stack algorithm named KRR which can be used to accurately model random sampling based-LRU under arbitrary sampling size K. We propose two efficient stack update algorithms which reduce the expected running time of KRR from O(N*M) to O(N*log2M) and O(N*logM), respectively, where N is the workload length and M is the number of distinct objects. Furthermore, we adopt spatial sampling which further reduces the running time of KRR by several orders of magnitude, and thus enables practical, low overhead online application of KRR.},
  booktitle = {Proceedings of the 50th International Conference on Parallel Processing},
  articleno = {32},
  numpages = {11},
  keywords = {stack algorithm, miss ratio curve, Random sampling-based LRU, LRU},
  location = {Lemont, IL, USA},
  series = {ICPP '21}
}
Lines changed: 43 additions & 0 deletions
@@ -0,0 +1,43 @@
---
title: 'Efficient Modeling of Random Sampling-Based LRU'

# Authors
authors:
  - Junyao Yang
  - Yuchen Wang
  - Zhenlin Wang

date: '2021-08-01T00:00:00Z'
doi: ''

# Schedule page publish date (NOT publication's date).
publishDate: '2021-08-01T00:00:00Z'

# Publication type.
publication_types: ['paper-conference']

# Publication name and optional abbreviated publication name.
publication: In *50th International Conference on Parallel Processing (ICPP)*
publication_short: In *ICPP 21*

abstract: 'The Miss Ratio Curve (MRC) is an important metric and effective tool for caching system performance prediction and optimization. Since the Least Recently Used (LRU) replacement policy is the de facto policy for many existing caching systems, most previous studies on efficient MRC construction are predominantly focused on the LRU replacement policy. Recently, the random sampling-based replacement mechanism, as opposed to replacement relying on the rigid LRU data structure, gains more popularity due to its lightweight and flexibility. To approximate LRU, at replacement times, the system randomly selects K objects and replaces the least recently used object among the sample. Redis implements this approximated LRU policy. We observe that there can exist a significant miss ratio gap between exact LRU and random sampling-based LRU under different sampling size K; therefore existing LRU MRC construction techniques cannot be directly applied to random sampling based LRU cache without loss of accuracy.
In this work, we present a new probabilistic stack algorithm named KRR which can be used to accurately model random sampling based-LRU under arbitrary sampling size K. We propose two efficient stack update algorithms which reduce the expected running time of KRR from O(N*M) to O(N*log2M) and O(N*logM), respectively, where N is the workload length and M is the number of distinct objects. Furthermore, we adopt spatial sampling which further reduces the running time of KRR by several orders of magnitude, and thus enables practical, low overhead online application of KRR.'

# Summary. An optional shortened abstract.
summary: ''

tags: []

# Display this page in the Featured widget?
featured: false

url_pdf: 'https://dl.acm.org/doi/pdf/10.1145/3472456.3472514'
url_code: ''
url_dataset: ''
url_poster: ''
url_project: ''
url_slides: ''
url_source: ''
url_video: ''

---
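The miss-ratio gap between exact LRU and random sampling-based LRU that motivates KRR can be reproduced with a toy simulator. This is a sketch under assumed names; it brute-force simulates the two policies on a synthetic trace rather than implementing KRR's probabilistic stack model:

```python
import random
from collections import OrderedDict

def lru_miss_ratio(trace, capacity):
    """Miss ratio of exact LRU, using an OrderedDict as the recency list."""
    cache, misses = OrderedDict(), 0
    for key in trace:
        if key in cache:
            cache.move_to_end(key)          # refresh recency on a hit
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)   # evict the true LRU key
            cache[key] = None
    return misses / len(trace)

def sampling_lru_miss_ratio(trace, capacity, k, seed=0):
    """Miss ratio of approx-K-LRU: on eviction, sample k resident keys
    and evict the least recently used key within the sample."""
    rng = random.Random(seed)
    cache, misses = {}, 0                   # key -> last-access time
    for now, key in enumerate(trace):
        if key not in cache:
            misses += 1
            if len(cache) >= capacity:
                sample = rng.sample(list(cache), min(k, len(cache)))
                del cache[min(sample, key=cache.get)]
        cache[key] = now
    return misses / len(trace)

# A cyclic scan of 6 keys through a 4-entry cache is pathological for exact
# LRU (every access misses), while a small sample size breaks the pattern.
trace = [i % 6 for i in range(600)]
print(lru_miss_ratio(trace, 4))             # 1.0
print(sampling_lru_miss_ratio(trace, 4, k=1))
```

With k at least the cache capacity, the sampled policy coincides with exact LRU, which matches the abstract's observation that approx-K-LRU approaches LRU as K grows.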
Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
@inproceedings{10.5555/3571885.3571998,
  author = {Li, Pengcheng and Guo, Yixin and Luo, Yingwei and Wang, Xiaolin and Wang, Zhenlin and Liu, Xu},
  title = {Graph neural networks based memory inefficiency detection using selective sampling},
  year = {2022},
  isbn = {9784665454445},
  publisher = {IEEE Press},
  abstract = {Production software of data centers oftentimes suffers from unnecessary memory inefficiencies caused by inappropriate use of data structures, conservative compiler optimizations, and so forth. Nevertheless, whole-program monitoring tools often incur incredibly high overhead due to fine-grained memory access instrumentation. Consequently, the fine-grained monitoring tools are not viable for long-running, large-scale data center applications due to strict latency criteria (e.g., service-level agreement or SLA). To this end, this work presents a novel learning-aided system, namely Puffin, to identify three kinds of unnecessary memory operations including dead stores, silent loads and silent stores, by applying gated graph neural networks onto fused static and dynamic program semantics with respect to relative positional embedding. To deploy the system in large-scale data centers, this work explores a sampling-based detection infrastructure with high efficacy and negligible overhead. We evaluate Puffin upon the well-known SPEC CPU 2017 benchmark suite for four compilation options. Experimental results show that the proposed method is able to capture the three kinds of memory inefficiencies with as high accuracy as 96\% and a reduced checking overhead by 5.66\texttimes{} over the state-of-the-art tool.},
  booktitle = {Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis},
  articleno = {85},
  numpages = {14},
  keywords = {sampling, program embedding, memory inefficiency detection, graph neural network},
  location = {Dallas, Texas},
  series = {SC '22}
}
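The inefficiency classes Puffin targets can be illustrated with a toy trace-level checker. This is a hypothetical sketch of the definitions only (silent loads are omitted for brevity); Puffin itself detects these with gated graph neural networks over program semantics plus selective sampling, not by exhaustive tracing:

```python
def find_inefficiencies(trace):
    """Scan a list of (op, addr, value) memory operations and report:
    - silent stores: a store writing the value the address already holds
    - dead stores: a store overwritten by a later store with no load between
    Returns (silent_store_indices, dead_store_indices).
    """
    memory = {}          # addr -> current value
    unread_store = {}    # addr -> index of the last store not yet loaded
    silent, dead = [], []
    for i, (op, addr, value) in enumerate(trace):
        if op == "store":
            if addr in memory and memory[addr] == value:
                silent.append(i)             # value unchanged: silent store
            if addr in unread_store:
                dead.append(unread_store[addr])  # overwritten unread: dead
            memory[addr] = value
            unread_store[addr] = i
        else:  # "load": the pending store at this address has been read
            unread_store.pop(addr, None)
    return silent, dead

trace = [
    ("store", 0x10, 1),  # index 0: dead (overwritten before any load)
    ("store", 0x10, 1),  # index 1: silent (0x10 already holds 1)
    ("load",  0x10, 1),
    ("store", 0x20, 5),  # index 3: dead
    ("store", 0x20, 7),
    ("load",  0x20, 7),
]
print(find_inefficiencies(trace))  # ([1], [0, 3])
```

The per-access bookkeeping here is exactly the fine-grained instrumentation cost the abstract argues is too expensive for production, which is why Puffin combines learning with sampling instead.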
