# Projects (MEng/UROP)

If you are interested in any of these projects and are a current MIT student looking for a UROP or MEng, please reach out to the mentor listed next to each project.

## Methods in Scientific Machine Learning

A large list of projects in scientific machine learning can be found [here](https://sciml.ai/dev/#projects_lists). Take that list as a set of ideas from which larger projects can be chosen.

## Julia Compiler/Runtime for HPC

Mentor: Valentin Churavy

We have many projects on compilers and runtimes in the context of scientific computing; the topics below can serve as inspiration.

### Accelerated computing

- Caching for GPU kernel compilation

#### KernelAbstractions.jl

[KernelAbstractions.jl](https://github.com/JuliaGPU/KernelAbstractions.jl) provides a common interface for writing GPU kernels in Julia and executing them on multiple platforms.
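As a rough sketch of what this looks like (assuming a recent KernelAbstractions release; the kernel and array names are illustrative, not from this project), a kernel is written once and can then be launched on any supported backend:

```julia
using KernelAbstractions

# A backend-agnostic kernel: @index(Global) gives this work-item's
# position in the launch's global index space.
@kernel function vadd!(c, @Const(a), @Const(b))
    i = @index(Global)
    @inbounds c[i] = a[i] + b[i]
end

a = rand(Float32, 1024)
b = rand(Float32, 1024)
c = similar(a)

backend = CPU()  # on a GPU machine this could be CUDABackend(), ROCBackend(), etc.
vadd!(backend)(c, a, b; ndrange = length(c))
KernelAbstractions.synchronize(backend)
```

The same `@kernel` definition runs unchanged on the CPU and on GPU backends; only the `backend` object passed at launch time changes.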

#### AMDGPU.jl

Mentor: Julian Samaroo

Feasibility study on reducing the size of an a-HfO2 dataset using a parallel method based on HDBSCAN and ACE. A parallel Julia implementation of a state-of-the-art method will be required, as well as a proposal for an improved version aligned with [CESMIX](https://computing.mit.edu/cesmix/) objectives.
Description [here](https://docs.google.com/document/d/1SWAanEWQkpsbr2lqetMO3uvdX_QK-Z7dwrgPaM1Dl0o/edit?usp=sharing).
Contact: Emmanuel Lujan (eljn AT mit DOT edu)

### Accelerate interatomic force calculations by composing novel machine learning potentials in Julia

One of the main challenges of atomistic simulations is accelerating force calculations. Machine learning potentials promise the accuracy of first-principles methods at a lower computational cost. Simplifying the creation of these potentials (composed of data, descriptors, and learning methods) makes it possible to systematize the search for combinations that exceed the accuracy and performance of the state of the art. This requires the development of new software abstractions and parallel tools.
A more detailed description of the project can be found [here](https://docs.google.com/document/d/1mcZlfOULcqglCNqnCJ-ya1E39CLUircjMhfBtQhXP0k/edit?usp=sharing).
Contact: Emmanuel Lujan (eljn AT mit DOT edu).
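The compositional idea can be sketched in plain Julia. All names below (`Descriptor`, `ACEDescriptor`, `fit`, `energy`) are hypothetical illustrations of the abstraction, not the API of an existing package:

```julia
using LinearAlgebra

# Hypothetical sketch: a potential is composed from interchangeable
# parts (a descriptor and a learning method), so searching over
# combinations can be automated.

abstract type Descriptor end
struct ACEDescriptor <: Descriptor   # stand-in for an ACE-style descriptor
    nfeatures::Int
end

# Stand-in feature map: a real descriptor would map an atomic
# configuration to a physically meaningful feature vector.
compute(d::ACEDescriptor, config) = rand(d.nfeatures)

abstract type LearningMethod end
struct LinearLeastSquares <: LearningMethod end

struct Potential{D<:Descriptor}
    descriptor::D
    coefficients::Vector{Float64}
end

# Fit energies linearly in the descriptor features: build the design
# matrix row by row, then solve the least-squares problem.
function fit(configs, energies, d::Descriptor, ::LinearLeastSquares)
    A = reduce(vcat, (compute(d, c)' for c in configs))
    Potential(d, A \ energies)
end

energy(p::Potential, config) = dot(compute(p.descriptor, config), p.coefficients)

configs  = [nothing for _ in 1:20]   # placeholder configurations
energies = rand(20)                  # placeholder reference energies
p = fit(configs, energies, ACEDescriptor(8), LinearLeastSquares())
```

Because `fit` only depends on the abstract `Descriptor` and `LearningMethod` types, swapping in a different descriptor or learner is a one-line change, which is the kind of composability the project aims to systematize.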