  • For some reason, none of the m_aipw, m_np, or m_dr methods work.

  • I think the ideal approach is actually to split up some of the replications across jobs.

I'd like to build a big array where, in theory, each entry specifies:

  • job i
  • sample size
  • dgp

and each entry runs for some specified number of replications.

So let's say in total I have

  • 4 dgps
  • 6 sample sizes I want to look at
  • and I want to run each for 1000 replications

I want to be able to make an array of 240 jobs, each running one dgp at one sample size for 100 replications; or 2400 jobs, each running only 10 replications.
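The mapping from an array task ID to a (dgp, sample size, chunk) triple can be sketched by enumerating the full grid. This is a minimal sketch: the dgp names, sample sizes, and the `job_params` helper are hypothetical placeholders, not names from the actual project.

```python
import itertools

# Hypothetical settings matching the counts above: 4 dgps, 6 sample sizes,
# 1000 total replications split into chunks of 100 -> 240 jobs.
DGPS = ["dgp1", "dgp2", "dgp3", "dgp4"]
SAMPLE_SIZES = [100, 200, 500, 1000, 2000, 5000]
TOTAL_REPS = 1000
REPS_PER_JOB = 100  # set to 10 for the 2400-job version

CHUNKS = TOTAL_REPS // REPS_PER_JOB

# Every (dgp, sample size, chunk) combination; job i indexes into this grid.
GRID = list(itertools.product(DGPS, SAMPLE_SIZES, range(CHUNKS)))

def job_params(task_id):
    """Map a 0-based SLURM_ARRAY_TASK_ID to its (dgp, sample_size, chunk)."""
    return GRID[task_id]
```

Switching `REPS_PER_JOB` to 10 grows the grid to 2400 entries without changing any other code.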

Then I want to save all the results, combine them, and use the visualizer and the render_docs(experiment) function.
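Combining per-job outputs could look like the sketch below, assuming each array task writes its own CSV file; the `results_*.csv` naming scheme and the `combine_results` helper are assumptions for illustration, not the project's actual I/O format.

```python
import csv
import glob

def combine_results(pattern, out_path):
    """Concatenate per-job CSV result files into one file,
    keeping the header row from the first file only."""
    paths = sorted(glob.glob(pattern))
    header_written = False
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        for path in paths:
            with open(path, newline="") as f:
                rows = list(csv.reader(f))
            if not rows:
                continue  # skip empty files from failed jobs
            if not header_written:
                writer.writerow(rows[0])
                header_written = True
            writer.writerows(rows[1:])
```

The combined file can then be loaded once for the visualizer / render_docs step.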

Multi Core

It sounds like I can use --cpus-per-task to give each job in the SLURM job array multiple cores.
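A submission script tying this together might look like the fragment below. The resource numbers, log path, and the `run_sim.py` script name are assumptions; the SLURM directives (`--array`, `--cpus-per-task`, and the `%A`/`%a` output placeholders for the array job and task IDs) are standard sbatch options.

```shell
#!/bin/bash
#SBATCH --array=0-239               # one task per (dgp, sample size, chunk)
#SBATCH --cpus-per-task=4           # cores available to each array task
#SBATCH --mem-per-cpu=2G            # hypothetical memory request
#SBATCH --output=logs/sim_%A_%a.out # %A = array job ID, %a = task ID

# Each task receives its index in SLURM_ARRAY_TASK_ID; the run script
# maps it to a (dgp, sample size, chunk) triple and runs its replications.
python run_sim.py "$SLURM_ARRAY_TASK_ID"
```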