Replies: 1 comment
There are some ways to get free cloud credits for academic research, so one way to improve our compute situation is to offer academic partnerships around lafleur. It could be a nice playground for research projects (I'm not sure at which level: graduate? PhD?), and I require neither authorship nor compensation. So, besides all the ways of helping listed above, if you have any leads or contacts with researchers looking for a nice project to adopt, please get in touch.
Introduction
Fuzzing the CPython JIT with lafleur has been shown to be a sound approach, able to find JIT crashes. But it has found very few issues so far, and I attempt to explain why below. Increasing the compute available to run lafleur should improve results, so I have opened this discussion to explore ways of getting more compute credits or similar resources for fuzzing campaigns.
The Request
I need help increasing the computing resources available to run lafleur on.
If you, someone you know, your company, or any cloud (or other VM) provider you know about can provide us with more compute, that would help greatly.
Any contacts or leads can be very helpful too.
If you would rather run lafleur on your own machines than make them available, that also helps greatly, and I can help you set things up.
Fuzzing Coverage
There is some evidence that lafleur covers very little of the possible JIT state in a given amount of time/compute, as well as signs that we might not be detecting interesting cases reliably enough.
The first indication is the low number of issues found: only two different crashes so far. This could be explained by the robustness of the JIT code, which is new, small (compared to the whole CPython codebase), and developed with care.
The second indication goes against that explanation: these two crashes came from only three fuzzing hits. That is, only one of the crashes was ever found again, and only once in the whole fuzzing effort so far. This means we don't have a high chance of hitting bugs we know are there, in stark contrast to fuzzing CPython with fusil, where most issues would turn up in many similar hits in a short time.
The third indication is that when we mutate files related to those known to trigger JIT crashes, we end up with test cases that do not crash. The same fragility shows in how finicky it is to reduce the test cases that do crash. And the crashes found so far are probabilistic: they don't trigger on every run.
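Since the crashes are probabilistic, it helps to quantify how reliably a given test case actually triggers before trying to reduce it. Here is a minimal sketch of that idea (this is not part of lafleur; the function name, defaults, and crash-detection heuristic are my own assumptions): re-run a candidate file many times under a JIT-enabled interpreter and report the fraction of runs that die on a signal.

```python
import subprocess

def estimate_trigger_rate(interpreter, testcase, runs=50, timeout=30):
    """Run `testcase` under `interpreter` `runs` times and return the
    fraction of runs that ended in a hard crash."""
    crashes = 0
    for _ in range(runs):
        proc = subprocess.run(
            [interpreter, testcase],
            capture_output=True,
            timeout=timeout,
        )
        # On POSIX, a negative return code means the child was killed by
        # a signal (e.g. -11 for SIGSEGV), which is how a hard JIT crash
        # shows up; a clean run exits with 0.
        if proc.returncode < 0:
            crashes += 1
    return crashes / runs
```

A test case that crashes only 1 run in 20 needs many more repetitions per reduction step than one that crashes every run, which is one concrete way these probabilistic crashes make reduction finicky.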
Possible Approaches
These indications tell me that, in order to find more JIT crashes, we can make lafleur better at covering JIT states and detecting interesting cases, and we can run it on more compute.
Current Efforts
Making lafleur more efficient and effective is an ongoing effort. We're constantly studying ways to improve lafleur's chances of finding hits and crashes, and have many ideas yet to be implemented.
At the same time, I have been trying to increase the compute available to run lafleur. I currently run it on two personal computers (usually only 3 instances, up to 7 on rare occasions), two free Oracle Cloud VMs (6 more instances), and a Google Cloud VM (4 more instances, running on their free-credits offer).
Currently there is no prospect of increasing the available compute other than burning more free cloud credits, a short-term solution that will soon run out.