
Commit 6ae2c1b

adding new CESMIX project (#60)
* adding new CESMIX project
* removing valentin's projects (as requested)
1 parent 94934a8 commit 6ae2c1b

1 file changed: +76 -82 lines changed


projects.md

Lines changed: 76 additions & 82 deletions
@@ -1,82 +1,76 @@
-# Projects (MEng/UROP)
-
-If you are interested in any of these projects and are a current MIT student looking for a UROP or MEng, please reach out to the mentor listed next to the project.
-
-## Methods in Scientific Machine Learning
-
-A large list of projects in scientific machine learning can be found [here](https://sciml.ai/dev/#projects_lists). Take that list as a set of ideas from which larger projects can be chosen.
-
-## Julia Compiler/Runtime for HPC
-Mentor: Valentin Churavy
-
-We have many projects on compilers and runtimes in the context of scientific computing; the topics below can serve as inspiration.
-
-
-### Accelerated computing
-- Caching for GPU kernel compilation
-
-#### KernelAbstractions.jl
-[KernelAbstractions.jl](https://github.com/JuliaGPU/KernelAbstractions.jl) provides a common interface for writing GPU kernels in Julia and executing them on multiple platforms.
-
-
-#### AMDGPU.jl
-Mentor: Julian Samaroo
-
-- Implement support for various ROCm libraries: rocSOLVER, rocSPARSE, MIOpen, etc.
-- Build ROCm libraries as JLLs
-- Explore integration with ROCm debugging and profiling tooling
-
-### Compiler-based automatic differentiation -- Enzyme.jl
-
-[Enzyme.jl](https://github.com/EnzymeAD/Enzyme.jl) is the Julia frontend to the Enzyme automatic-differentiation engine.
-
-- Improved JIT compilation for Enzyme
-- Compile on Demand / Parallel JIT
-- Caching of Enzyme AD results
-- Caching of inference results for reducing initial latency
-
-### General Julia compiler infrastructure
-
-- Improvements to Julia integration with native debuggers and profilers
-- Better native debug information (DWARF)
-- Pretty-printers for GDB
-- Debug information on demand
-- Exploring profile-guided optimization
-
-## CESMIX
-
-### Accelerate learning by automatically reducing the size of the training dataset.
-
-Feasibility study on reducing the size of an a-HfO2 dataset using a parallel method based on HDBSCAN and ACE. A parallel Julia implementation of a state-of-the-art method will be required, as well as a proposal for an improved version aligned with [CESMIX](https://computing.mit.edu/cesmix/) objectives.
-Description [here](https://docs.google.com/document/d/1SWAanEWQkpsbr2lqetMO3uvdX_QK-Z7dwrgPaM1Dl0o/edit?usp=sharing).
-Contact: Emmanuel Lujan (eljn AT mit DOT edu)
-
-### Accelerate interatomic force calculations by composing novel machine learning potentials in Julia.
-
-One of the main challenges of atomistic simulations is the acceleration of force calculations. Machine learning potentials promise the accuracy of first-principles methods at a lower computational cost. Simplifying the creation of these potentials (composed of data, descriptors, and learning methods) makes it possible to systematize the search for combinations that exceed the accuracy and performance of the state of the art. This requires the development of new software abstractions and parallel tools.
-A more detailed description of the project can be found [here](https://docs.google.com/document/d/1mcZlfOULcqglCNqnCJ-ya1E39CLUircjMhfBtQhXP0k/edit?usp=sharing).
-Contact: Emmanuel Lujan (eljn AT mit DOT edu).
-
-## Matrix decompositions for GPUs
-Mentor: Evelyne Ringoot
-
-We have a number of UROP projects on implementing matrix decompositions for GPUs in the Julia language. No experience is needed, only an interest in numerical linear algebra, a desire to learn Julia, and a willingness to contribute to the development of next-generation GPU/HPC algorithms.
-Please contact Evelyne Ringoot before September 10 if interested.
-
-# Projects (additional for 18.337)
-
-### Gaussian Elimination Growth
-
-In 1990 Trefethen and Schreiber produced an influential paper on the average-case stability of Gaussian elimination with partial and
-complete pivoting: [paper link](https://people.maths.ox.ac.uk/trefethen/publication/PDF/1990_44.pdf). In Eq. (6.2) and Figure 6.2 they
-suggest (with a clear caveat) that the growth is n^(2/3) and n^(1/2). Some years later I histogrammed some values of n, maybe
-1000, 2000, and 4000 (I'd have to dig it up -- buried in my files), and perhaps I histogrammed g/n^(1/2) or g/n^(2/3) and found
-that one lined up nicely and the other did not. See what you can find.
-
-### Generic LAPACK
-
-Over the years people have said that an LAPACK rewritten in Julia could have more interesting properties, and also have a smaller codebase
-if done carefully. Find something in [GenericLinearAlgebra.jl](https://github.com/JuliaLinearAlgebra/GenericLinearAlgebra.jl) that
-is not there currently and add it; check that it runs at least as fast as the original LAPACK, but perhaps also works on quaternions, funny
-number fields, or matrices of matrices, etc., and that you can run autodiff on these constructs.
-
+# Projects (MEng/UROP)
+
+If you are interested in any of these projects and are a current MIT student looking for a UROP or MEng, please reach out to the mentor listed next to the project.
+
+## Methods in Scientific Machine Learning
+
+A large list of projects in scientific machine learning can be found [here](https://sciml.ai/dev/#projects_lists). Take that list as a set of ideas from which larger projects can be chosen.
+
+#### AMDGPU.jl
+Mentor: Julian Samaroo
+
+- Implement support for various ROCm libraries: rocSOLVER, rocSPARSE, MIOpen, etc.
+- Build ROCm libraries as JLLs
+- Explore integration with ROCm debugging and profiling tooling
+
+### Compiler-based automatic differentiation -- Enzyme.jl
+
+[Enzyme.jl](https://github.com/EnzymeAD/Enzyme.jl) is the Julia frontend to the Enzyme automatic-differentiation engine.
+
+- Improved JIT compilation for Enzyme
+- Compile on Demand / Parallel JIT
+- Caching of Enzyme AD results
+- Caching of inference results for reducing initial latency
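
For readers new to the tool, here is a minimal reverse-mode example with Enzyme.jl. This is an illustrative sketch added for orientation only (it assumes a recent Enzyme.jl release) and is not part of the project list above:

```julia
using Enzyme

# Differentiate an ordinary Julia function in reverse mode.
f(x) = x^2 + sin(x)

# autodiff returns a tuple of derivative tuples; for a scalar active
# argument this yields f'(1.0) = 2*1.0 + cos(1.0) ≈ 2.5403.
df, = autodiff(Reverse, f, Active, Active(1.0))[1]
@show df
```
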
+
+### General Julia compiler infrastructure
+
+- Improvements to Julia integration with native debuggers and profilers
+- Better native debug information (DWARF)
+- Pretty-printers for GDB
+- Debug information on demand
+- Exploring profile-guided optimization
+
+## CESMIX
+
+### Fast code, machine learning, or physics. Your choice.
+
+Graphics Processing Units (GPUs) are efficient computational devices for a variety of tasks, including gaming, AI, and large-scale scientific computation. We are looking for an undergraduate student to help develop a performant molecular dynamics engine in Julia that runs on the GPU. The student will have their choice of focus: software abstractions for the entire JuliaGPU ecosystem, machine learning and its interface with physics, or methods to improve communication between atoms. This project will provide the student with key software development experience in Julia, GPU computing, supercomputing, etc.
+
+Contact: James Schloss (jars AT mit DOT edu)
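
To make the GPU angle concrete, below is a deliberately naive Lennard-Jones force kernel written with KernelAbstractions.jl. This is an illustrative sketch only (it assumes a recent KernelAbstractions.jl release and, for GPU execution, the corresponding backend package); a real MD engine would use neighbor lists rather than the O(N^2) loop shown here:

```julia
using KernelAbstractions

# One work item per particle; F and X are 3×N matrices of forces and positions.
@kernel function lj_forces!(F, @Const(X), ε, σ)
    i = @index(Global)
    T = eltype(F)
    fx = zero(T); fy = zero(T); fz = zero(T)
    @inbounds for j in 1:size(X, 2)
        if j != i
            dx = X[1, i] - X[1, j]
            dy = X[2, i] - X[2, j]
            dz = X[3, i] - X[3, j]
            r2 = dx * dx + dy * dy + dz * dz
            s6 = (σ^2 / r2)^3
            c  = 24 * ε * (2 * s6^2 - s6) / r2   # Lennard-Jones force magnitude / r
            fx += c * dx; fy += c * dy; fz += c * dz
        end
    end
    @inbounds F[1, i] = fx
    @inbounds F[2, i] = fy
    @inbounds F[3, i] = fz
end

N = 1024
X = rand(Float32, 3, N)
F = similar(X)
backend = CPU()   # on a GPU, use e.g. CUDABackend() from CUDA.jl or ROCBackend() from AMDGPU.jl
lj_forces!(backend)(F, X, 1.0f0, 0.3f0; ndrange = N)
KernelAbstractions.synchronize(backend)
```
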
+
+
+### Accelerate learning by automatically reducing the size of the training dataset.
+
+Feasibility study on reducing the size of an a-HfO2 dataset using a parallel method based on HDBSCAN and ACE. A parallel Julia implementation of a state-of-the-art method will be required, as well as a proposal for an improved version aligned with [CESMIX](https://computing.mit.edu/cesmix/) objectives.
+Description [here](https://docs.google.com/document/d/1SWAanEWQkpsbr2lqetMO3uvdX_QK-Z7dwrgPaM1Dl0o/edit?usp=sharing).
+Contact: Emmanuel Lujan (eljn AT mit DOT edu)
+
+### Accelerate interatomic force calculations by composing novel machine learning potentials in Julia.
+
+One of the main challenges of atomistic simulations is the acceleration of force calculations. Machine learning potentials promise the accuracy of first-principles methods at a lower computational cost. Simplifying the creation of these potentials (composed of data, descriptors, and learning methods) makes it possible to systematize the search for combinations that exceed the accuracy and performance of the state of the art. This requires the development of new software abstractions and parallel tools.
+A more detailed description of the project can be found [here](https://docs.google.com/document/d/1mcZlfOULcqglCNqnCJ-ya1E39CLUircjMhfBtQhXP0k/edit?usp=sharing).
+Contact: Emmanuel Lujan (eljn AT mit DOT edu).
+
+## Matrix decompositions for GPUs
+Mentor: Evelyne Ringoot
+
+We have a number of UROP projects on implementing matrix decompositions for GPUs in the Julia language. No experience is needed, only an interest in numerical linear algebra, a desire to learn Julia, and a willingness to contribute to the development of next-generation GPU/HPC algorithms.
+Please contact Evelyne Ringoot before September 10 if interested.
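
For orientation: dense factorizations on GPU arrays in Julia currently dispatch mostly to vendor libraries, and these projects aim at native Julia implementations. A tiny sketch of the status quo, assuming a CUDA-capable GPU and the CUDA.jl package:

```julia
using CUDA, LinearAlgebra

# Singular values of a matrix stored on the GPU; this call currently
# dispatches to cuSOLVER rather than to a Julia-native GPU implementation.
A = CUDA.randn(Float32, 2048, 2048)
s = svdvals(A)          # computed on the GPU
@show maximum(s)
```
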
+
+# Projects (additional for 18.337)
+
+### Gaussian Elimination Growth
+
+In 1990 Trefethen and Schreiber produced an influential paper on the average-case stability of Gaussian elimination with partial and
+complete pivoting: [paper link](https://people.maths.ox.ac.uk/trefethen/publication/PDF/1990_44.pdf). In Eq. (6.2) and Figure 6.2 they
+suggest (with a clear caveat) that the growth is n^(2/3) and n^(1/2). Some years later I histogrammed some values of n, maybe
+1000, 2000, and 4000 (I'd have to dig it up -- buried in my files), and perhaps I histogrammed g/n^(1/2) or g/n^(2/3) and found
+that one lined up nicely and the other did not. See what you can find.
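
One possible starting point for the experiment above, as a sketch only: the growth factor g is approximated here by max|U|/max|A| from LU with partial pivoting, and both candidate scalings are summarized instead of histogrammed.

```julia
using LinearAlgebra, Statistics

# Growth factor of Gaussian elimination with partial pivoting,
# approximated as max |U_ij| / max |A_ij| from the LU factorization.
growth(A) = maximum(abs, lu(A).U) / maximum(abs, A)

n = 1000
g = [growth(randn(n, n)) for _ in 1:200]

# Histogram (or just summarize) both candidate scalings and see which
# one concentrates around a constant.
@show mean(g ./ sqrt(n)) std(g ./ sqrt(n))
@show mean(g ./ n^(2 / 3)) std(g ./ n^(2 / 3))
```
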
+
+### Generic LAPACK
+
+Over the years people have said that an LAPACK rewritten in Julia could have more interesting properties, and also have a smaller codebase
+if done carefully. Find something in [GenericLinearAlgebra.jl](https://github.com/JuliaLinearAlgebra/GenericLinearAlgebra.jl) that
+is not there currently and add it; check that it runs at least as fast as the original LAPACK, but perhaps also works on quaternions, funny
+number fields, or matrices of matrices, etc., and that you can run autodiff on these constructs.
+
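
For context, a small sketch of what GenericLinearAlgebra.jl already enables: linear algebra on element types that LAPACK itself cannot handle, such as BigFloat (it assumes the GenericLinearAlgebra.jl package is installed).

```julia
using LinearAlgebra, GenericLinearAlgebra

# LAPACK only covers Float32/Float64 (and their complex versions); with
# GenericLinearAlgebra.jl loaded, generic Julia methods take over for
# other element types such as BigFloat.
A = big.(randn(20, 20))
σ = svdvals(A)          # singular values computed in BigFloat arithmetic
@show σ[1], σ[end]
```
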
