diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 0000000..40a515a
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,30 @@
+# Contribution Guidelines
+
+Jumping into a new project can be hard and intimidating. This document is designed to help you understand the community and how to interact.
+
+### What is this repository about?
+**Open data is data that is made freely and easily available to anyone to use, re-use and distribute.**
+*(Definition adapted from [Open Knowledge](http://opendatahandbook.org/guide/en/what-is-open-data/))*
+
+Mozilla Science Lab is dedicated to encouraging the use of open practices and web technologies to do better science. We believe open data is essential to maximizing the potential of research. This repository is designed to be a central portal for access to resources and curricular materials around open data.
+
+### How do I contribute?
+1. Please introduce yourself and join the conversation in our [Gitter chat room](https://gitter.im/mozillascience/open-data-training?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge).
+2. Submit an [issue](https://github.com/mozillascience/open-data-training/issues) with your correction for an error in the documents, a question about the materials, or a suggestion for a resource to be included in the repository.
+ * Go to [Issues](https://github.com/mozillascience/open-data-training/issues) and click the green "New Issue" button.
+ * If you are **new to the repository**, start with issues marked with the  label. These are issues we've identified as good things to work on as your first collaboration with the repository.
+ * If you are submitting a **suggestion**, label the issue with the  label.
+ * If you are submitting a **question**, label the issue with the  label.
+ * If you are submitting a **correction**, label the issue with the  label.
+ * Use the **Modules** and **Primers** labels for issues related to those two types of resources.
+3. [Fork](https://help.github.com/articles/fork-a-repo/) this repository to suggest changes to our repository or as a starting point to make your own.
+
+### What is the Code of Conduct?
+Mozilla Study Groups are for everyone - we abide by a [set of rules](https://www.mozillascience.org/code-of-conduct) that require everyone be treated with respect. Help us make a space where everyone feels welcome, and we'll all have a better time!
+
+### Where can I ask for help?
+* Go to the [Gitter room](https://gitter.im/mozillascience/open-data-training). We are a friendly lot, and it is a safe place to ask questions and get advice.
+* Tweet to us [@MozillaScience](https://twitter.com/MozillaScience)
+
diff --git a/CONTRIBUTION.md b/CONTRIBUTION.md
deleted file mode 100644
index 57d83e3..0000000
--- a/CONTRIBUTION.md
+++ /dev/null
@@ -1,19 +0,0 @@
-# Contribution Guidelines
-
-Jumping into a new project can be hard and intimidating. This document is designed to help you understand the community and how to interact.
-
-### What is this repository about?
-This repository is designed to be a central portal for access to information about and resources and curricular materials for the Mozilla Science Lab open data training.
-
-### How do I join the community?
-Submit an issue with your suggestion for a resource to be included in the repository.
-
-### What is the Code of Conduct?
-Mozilla Study Groups are for everyone - we abide by a [set of rules] (https://www.mozillascience.org/code-of-conduct) that require everyone be treated with respect. Help us make a space where everyone feels welcome, and we'll all have a better time!
-
-### How do I report a bug?
-Submit an issue and label with the red bug label
-
-### Where can I ask for help?
-- Gitter Room: https://gitter.im/mozillascience/open-data-training [](https://gitter.im/mozillascience/open-data-training?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
-
diff --git a/Materials/Handouts/ODChallengesQI.md b/Materials/Handouts/ODChallengesQI.md
index 8c00d08..efdac71 100644
--- a/Materials/Handouts/ODChallengesQI.md
+++ b/Materials/Handouts/ODChallengesQI.md
@@ -1,33 +1,52 @@
-##CHALLENGES TO OPEN DATA AND HOW TO RESPOND
+## CHALLENGES TO OPEN DATA AND HOW TO RESPOND
**“Someone may scoop me and find something interesting in it before I have a chance to publish it!”**
+
+There are anecdotal stories of this, but very little evidence of it happening in any significant way. Regardless of how often it happens, by making your data open, accessible, and citable, you are publicly staking your claim of authorship for that data. See the first reference link below for more thoughts on this (1).
**“Why should I let others have my data when I’ve done all the work? It doesn’t do anything for me.”**
+
+There has actually been research on this (2, 3): making your data openly available and linking it to your publication increases citations to your publication. Citation rates also increase when someone else reuses and cites your data in their own publication. Of course, if your data is part of a federally funded project, many funders require that the public have access to it.
**“I’m in a niche field. Nobody else could possibly be interested in my data.”**
+
There are many examples of reuse of data for other than original intent that have improved the quality of life for others (4-6). If you make your data citable, you can find out who those other people are and maybe find new collaborators. If nothing else, there is always a demand for open data to use as examples by those teaching others how to do research.
**“Documenting data so someone else can understand it is complicated. Who has time?”**
+
+Following data management best practices (7, 8), planning for open data at the beginning, and managing it throughout a project takes less time than trying to do data forensics at the end. It also saves you time five or ten years down the road when you’re trying to remember how you got this data and what it all means. Making your data open also accelerates the advancement of science by letting others add their brainpower and by preventing wasted time re-collecting data that already exists.
**“If I put it out there, someone won’t understand it and will use it to come up with wrong conclusions.”**
+
If you provide a detailed abstract including a “constraints of use” statement, as well as a data reuse plan providing a list of all the files in the dataset and the names and types of data in each field, you can prevent misunderstandings concerning your data.
**“My institution doesn’t have a repository. I don’t have anywhere to share it.”**
+
Check the re3data (http://www.re3data.org/) repository catalog to find repositories available for storage of your data based on content type, discipline, or geographic location, including freely available repositories such as **figshare** (https://figshare.com/) and **Zenodo** (http://zenodo.org/).
**“My data is human subjects data with personally identifiable information. I can’t share it.”**
+
+You should always check with your IRB before sharing human subjects data. There are actions you can take to still make the data shareable: obtaining consent through consent forms, applying anonymization techniques to remove personally identifiable data (9, 10), and limiting your data sharing to the metadata about the dataset.
**“The data I’m using is owned/copyrighted by someone else. The license forbids me from sharing it.”**
+
If you are not the creator of the data for your research, you may not have the authority to share your data. Be sure to check the terms of the licensing agreement from the data owner before sharing it with anyone else. When you are negotiating for the use of someone else’s data, take the opportunity to promote the idea of making it openly available.
-**“It’s so confusing! I don’t know where to start.”**
+**“It’s so confusing! I don’t know where to start.”**
+
+Start small with one or two steps; you don’t have to do it all at once. Take a look at the DataONE Best Practices database (7) and see what you can incorporate into your practices now.
-####REFERENCES:
+**"But how does this relate to open science? And how can I foster both open data/science practice?"**
+
+In a recent [Nature piece](http://www.nature.com/news/five-ways-consortia-can-catalyse-open-science-1.21706), five tips were given to catalyze openness:
+
+1. Build out from the middle
+2. Forge a shared vision
+3. Accommodate diverse, changing interests
+4. Multiply impacts
+5. Co-evolve
+
+#### REFERENCES:
1. Stack Exchange thread on “scooping”: http://academia.stackexchange.com/questions/52016/an-example-of-a-researcher-being-scooped-as-a-result-of-working-openly
2. Piwowar HA, Day RS, Fridsma DB (2007) Sharing Detailed Research Data Is Associated with Increased Citation Rate. PLoS ONE 2(3): e308. https://doi.org/10.1371/journal.pone.0000308
3. Piwowar HA, Vision TJ. (2013) Data reuse and the open data citation advantage. PeerJ 1:e175 https://doi.org/10.7717/peerj.175
@@ -39,10 +58,12 @@ Start small with one or two steps, you don’t have to do it all at once. Take
9. UK Data Archive “Anonymization - Overview”: http://www.data-archive.ac.uk/create-manage/consent-ethics/anonymisation
10. ICPSR Guide to Social Science Data Preparation and Archiving Phase 5: Preparing Data for Sharing: https://www.icpsr.umich.edu/icpsrweb/content/deposit/guide/chapter5.html
-#####OPEN RESOURCES FOR MORE INFO:
-* Tenopir, C. et al. (2011) Data Sharing by Scientists: Practices and Perceptions. PLoS ONE 6(6): e21101. [doi:10.1371/journal.pone.0021101] (http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0021101)
-* Wallis JC, Rolando E, Borgman CL (2013) If We Share Data, Will Anyone Use Them? Data Sharing and Reuse in the Long Tail of Science and Technology. PLoS ONE 8(7): e67332. [doi:10.1371/journal.pone.0067332] (https://doi.org/10.1371/journal.pone.0067332)
-* Molloy JC (2011) The Open Knowledge Foundation: Open Data Means Better Science. PLoS Biol 9(12): e1001195. [doi:10.1371/journal.pbio.1001195] (http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1001195)
-* [Why Manage Your Data?] (http://d7.library.gatech.edu/research-data/home) *(Georgia Tech Library)*
-* [Who’s Afraid of Open Data?] (http://deevybee.blogspot.com/2015/11/whos-afraid-of-open-data.html) *(Bishop Blog)*
-* [Closed Data... Excuses, Excuses] (http://iassistdata.org/blog/share-your-story-case-studies-data-reuse) *(Carly Strasser's Blog)*
+##### OPEN RESOURCES FOR MORE INFO:
+* Tenopir, C. et al. (2011) Data Sharing by Scientists: Practices and Perceptions. PLoS ONE 6(6): e21101. [doi:10.1371/journal.pone.0021101](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0021101)
+* Wallis JC, Rolando E, Borgman CL (2013) If We Share Data, Will Anyone Use Them? Data Sharing and Reuse in the Long Tail of Science and Technology. PLoS ONE 8(7): e67332. [doi:10.1371/journal.pone.0067332](https://doi.org/10.1371/journal.pone.0067332)
+* Molloy JC (2011) The Open Knowledge Foundation: Open Data Means Better Science. PLoS Biol 9(12): e1001195. [doi:10.1371/journal.pbio.1001195](http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1001195)
+* [Why Manage Your Data?](http://d7.library.gatech.edu/research-data/home) *(Georgia Tech Library)*
+* [Who’s Afraid of Open Data?](http://deevybee.blogspot.com/2015/11/whos-afraid-of-open-data.html) *(Bishop Blog)*
+* [Closed Data... Excuses, Excuses](http://iassistdata.org/blog/share-your-story-case-studies-data-reuse) *(Carly Strasser's Blog)*
+* [Why Open Research](http://whyopenresearch.org/)
+* [5 Ways to Catalyze Open Science](http://www.nature.com/news/five-ways-consortia-can-catalyse-open-science-1.21706)
diff --git a/Materials/Modules/mod_template.md b/Materials/Modules/mod_template.md
new file mode 100644
index 0000000..55f985c
--- /dev/null
+++ b/Materials/Modules/mod_template.md
@@ -0,0 +1 @@
+Placeholder
diff --git a/Materials/Primers/P1_WhyOpenData.md b/Materials/Primers/P1_WhyOpenData.md
new file mode 100644
index 0000000..a5e4a26
--- /dev/null
+++ b/Materials/Primers/P1_WhyOpenData.md
@@ -0,0 +1,119 @@
+# Why Open Data: A Primer
+## Hello.
+
+<*insert interactive here: What three words would you use to describe open data?*>
+
+You’re probably here because you’ve heard a lot about open data recently and you want to know more. This primer is a very quick introduction to the topic. We’ll be talking about the kind of data collected or observed by researchers, governments, and other groups to study problems or questions in fields as diverse as astrophysics, urban planning, and linguistics. This primer was produced by Mozilla Science Lab, a program dedicated to encouraging the use of open source practices and web technologies to do better science.
+
+## Let's talk open data.
+By data we mean numbers, but also geospatial coordinates, text, images, multimedia items, and other types of information that can be used to answer questions or solve problems. We may think of data as being collected by researchers and scientists-- for example, information on the spread of a population of ladybugs in a particular region, or the wavelengths of light emitted by a particular star. But data is also collected by governments, who may be interested in the number and location of potholes on a city street, or the geospatial pattern of new cases in an outbreak of the flu. Corporations and businesses collect data, too.
+
+**All of this data is potentially useful and powerful. “Opening” data means maximizing that potential.**
+
+So what is open data? Here’s a working definition: **Open data is data that is made freely and easily available to anyone to use, re-use and distribute**.
+
+The Open Knowledge Foundation, an organization dedicated to bringing “openness” to the mainstream, defines the following key factors that make data “open”:
+* **Access & availability** - data is available to all in a convenient and modifiable form
+* **Re-use & redistribution** - terms of use allow for reusing, remixing and redistributing the data
+* **Universal participation** - there are no restrictions on who may do any of the above with the data
+
+But why bother taking your carefully collected, hard-earned data, and setting it free on the internet, for strangers to reuse, remix, and redistribute? There are lots of reasons-- we explore just a few of those next.
+
+## Why open data?
+Open data helps to:
+
+**1. Maintain Accountability**
+*When data is open, the results or findings of that data can be more easily verified, increasing public trust in research institutions and governments.*
+
+**2. Speed Discovery**
+*When data is open, researchers working in parallel on similar problems don’t need to duplicate work and can use each other’s data to advance knowledge across their research area.*
+
+**3. Encourage collaboration across disciplines**
+*When data is open, researchers can more easily find those outside their subject area doing related or relevant work; these connections and collaborations may lead to new ways of approaching and solving problems.*
+
+**4. Solve a broader range of problems**
+*When data is open, new users-- from academic researchers to citizen scientists to artists-- may use it to answer questions, solve problems, or create public understanding in ways that the original data collector may never have imagined.*
+
+**5. Encourage public engagement with research**
+*When data is open and the findings are shared in a clear and accessible way, it increases public understanding, creates opportunities for public participation, and bolsters public support of research initiatives.*
+
+**6. Ensure that data is preserved**
+*When data is open and widely shared, the responsibility for the long-term archiving and preservation of that data is distributed across a broader group of interested users.*
+
+What are your reasons for opening your data? Let us know! <*insert link*>
+
+## Open data IRL
+Open data sounds great on paper. But what does deciding to share-- or deciding not to share-- your data look like in real life? Here’s a recent example of open data in action.
+
+When an outbreak of the previously rare Zika Virus Disease emerged in South America in early 2015, it was clear that the threat to public health was--and remains--urgent. Researchers in David O’Connor’s lab at the University of Wisconsin, Madison decided that the epidemic, which may lead to devastating birth defects in babies born to infected women, called for a new level of collaboration. O’Connor decided to release real-time, day-by-day results of his studies on the effects of Zika on macaque monkeys, rather than waiting months (or years) for the results of his work to be published in a traditional journal. By releasing both his data and findings online, O’Connor invited international collaboration. Researchers from around the world downloaded and reviewed the data, made comments, provided advice, and even offered to lend expertise and equipment to run tests that O’Connor’s lab wasn’t equipped to perform. Studies are still underway, but it’s likely that opening this data-- and creating connections among researchers globally-- will aid efforts to understand and combat Zika. See what data sharing looks like in real life by visiting O’Connor’s [Zika Open-Research Portal](https://zika.labkey.com/project/home/begin.view). To read more, see the story covered in [Nature](http://www.nature.com/news/zika-researchers-release-real-time-data-on-viral-infection-study-in-monkeys-1.19438), [The Economist](http://www.economist.com/news/science-and-technology/21694990-old-fashioned-ways-reporting-new-discoveries-are-holding-back-medical-research), and on [National Public Radio](http://www.npr.org/sections/health-shots/2016/03/08/469653715/scientists-report-in-real-time-on-challenging-zika-research) in the USA.
+
+The Zika story shows how open data might help speed scientific discovery and improve research practice. See below for more stories that support each of our six reasons to open your data.
+
+**Maintain Accountability.** Expand to learn how open data helped researchers evaluate critical findings on global economic policy.
+
+<*expanded content*>
+In 2010, two economics professors from Harvard published a paper on economic policy that was widely publicized and very influential during a time of global economic instability. In 2013, University of Massachusetts researchers were unable to replicate the results of this study; they requested the data set from the original authors and discovered coding errors, data omissions, and unconventional methods of analysis that called into question the validity of the original conclusion. [Read more about this case here](http://blog.okfn.org/2013/04/22/reinhart-rogoff-revisited-why-we-need-open-data-in-economics/).
+
+**Speed Discovery.** Expand to learn how a revolutionary open research project generated a wealth of genomics knowledge.
+
+<*expanded content*>
+In the early 2000s, two competing research projects, the Human Genome Project (HGP) and efforts by biotech company Celera, worked to sequence the human genome. HGP put all of its data in the public domain, while Celera tried to patent its findings. A 2013 study by an MIT economist shows HGP’s open data generated more knowledge and innovation (as measured by publications and the development of diagnostic tests) than Celera’s patented sequences. [Learn more about the study here](https://www.techdirt.com/articles/20130403/09501122561/public-domain-human-genome-project-generated-more-research-more-commercial-activity-than-proprietary-competitor.shtml).
+
+**Encourage collaboration across disciplines.** Expand to learn how researchers, from soil scientists to meteorologists, are creating a data-sharing network to better understand and prepare for climate change.
+
+<*expanded content*>
+The Midwestern U.S. is known as the “country’s breadbasket”-- a major food producing region for that nation. Climate change is expected to affect production in coming decades, possibly dramatically. Agricultural scientists, meteorologists, and climate modelers all gather or generate data separately that, when pieced together, may provide a comprehensive understanding of these effects, but data and findings are rarely shared across disciplines. In a [2015 article](http://bioscience.oxfordjournals.org/content/early/2015/12/10/biosci.biv164.full.pdf+html), a group of scientists from disparate disciplines laid out a plan to share and use data across a network of research sites, in order to create better climate models and apply data from models to the design of mitigation and adaptation strategies. [Learn more about the network here](http://www.scientificamerican.com/article/u-s-bread-basket-shifts-thanks-to-climate-change/).
+
+**Solve a broader range of problems.** Click to learn how an Australian initiative to link divergent data sets is creating opportunities to make discoveries and solve problems on a continental scale.
+
+<*expanded content*>
+Australia’s Oznome Project aims to amass all available data on that nation’s economy, infrastructure, agriculture, public health, energy and water systems, and more into one centralized, accessible database. The name “Oznome” is a hat-tip to the Human Genome Project, the effort to gather data on every single gene in the human genome. By 2025, Oznome aspires to be just as comprehensive, creating a “historical, current and future digital representation of everything” in Australia. Linking diverse data sets will allow researchers to explore the relationships between systems to anticipate and solve problems that may not have been apparent in a single data set. And tackling problems of compatibility across diverse data sets, while daunting, may lead to better, richer predictive models as well as lower costs associated with data discovery, access and preparation. [Learn more about the Oznome project](https://www.newscientist.com/article/2076539-australias-plan-to-make-a-digital-representation-of-everything/).
+
+**Encourage public engagement with research.** Click to learn how a schoolteacher helped discover a new kind of celestial object.
+
+<*expanded content*>
+In 2007, a Dutch schoolteacher was browsing through images from the Hubble space telescope, made freely available online via the platform Galaxy Zoo. The teacher spotted a glowing green cloud floating next to a distant galaxy, and alerted researchers to her discovery. Galaxy Zoo is a project of Zooniverse, a platform for participatory research that invites the public to study objects or artifacts collected in research, and answer simple questions about them. The platform enables analysis of large data sets by crowdsourcing this work to human volunteers, who provide results that are superior to pattern recognition algorithms. On the Zooniverse platform, volunteers and researchers engage, discuss results, and even make significant discoveries together. That Dutch schoolteacher’s glowing cloud turned out to be a new kind of celestial object, produced by dust from ancient galactic collisions interacting with black holes. Working together, researchers and citizen scientists have since found 19 more of these objects, known as Hanny’s Voorwerpjes (“Hanny’s Objects,” in Dutch). Read more about [Hanny’s Voorwerp](http://www.wired.com/2015/04/citizen-scientists-find-green-blobs-hubble-galaxy-shots/), and browse a list of [publications from all Zooniverse projects](https://www.zooniverse.org/about/publications).
+
+**Ensure that data is preserved.** Learn how historical images of the first moon landing were lost, and then reconstructed from copies archived elsewhere.
+
+<*expanded content*>
+In 1969, when the United States’ Apollo 11 mission brought the first humans to the moon, the astronauts recorded their historic spacewalk using a special lunar camera and unique film format. Images were beamed back to earth, converted to a broadcast-friendly format, and shown live on TV screens around the globe. The now-familiar footage of those first steps is actually a significantly degraded version of the original, due to conversion from the lunar camera’s special format. Almost 40 years later, engineers decided to apply 21st century imaging technology to the original footage, to see if a crisper, higher resolution dub was possible. NASA tried to locate the master magnetic tapes, but they’d gone missing. Following an epic search through the organization’s massive archives, officials concluded that the original moon landing footage was recorded over with satellite footage captured sometime in the 1980s, during a period when the agency was short on magnetic tape. Fortunately, media agencies worldwide archived the 1969 broadcast, so the historical record, albeit low-resolution, endures. Engineers were able to make an [HD restoration of the footage](https://www.nasa.gov/multimedia/hd/apollo11_hdpage.html) by compiling all available archived versions. This cautionary tale highlights how sharing critical data widely ensures its preservation-- [read more here](http://www.npr.org/templates/story/story.php?storyId=106637066).
+
+Do you have a great open data success story? Or a cautionary tale about what can go wrong or what opportunities are missed when data remains closed? We’d love to hear from you. Click here to share your story. <*insert link to submit story*>
+
+## Open data *sounds* great, but ...
+You’ve probably heard some counter-arguments to open data and you may have a few lingering questions yourself. Here are three of the most frequently voiced challenges to open data, and our answers to them.
+
+**“I’ll be scooped! Someone could discover something amazing in my data before I have a chance to publish.”**
+Of course, you want to get as much as you can out of the data you collect and maintain. However, when you make data publicly available, you’re letting everyone know that you did it first. Open data is the ultimate security: no one can steal what has been freely shared.
+
+**“Documenting my data so someone else can understand it is too complicated. Who has time?”**
+Actually, documenting your data isn’t just good for science and collective human knowledge. Any time spent adding context and meta-information to your data now will save you hours and hours of trying to decipher your research a year or two or ten years down the road. Spend half an hour writing up a [Data Reuse Plan](http://mozillascience.github.io/working-open-workshop/data_reuse/). Other researchers-- and your future self-- will thank you for it!
+
+**“It’s so confusing! I don’t know where to start.”**
+We’re here to help! And you don’t have to do it all at once. There are lots of simple ways to get started, and we’ve outlined them in the second primer in this series, How to Open Your Data. Take a look and see if there are one or two best practices you can incorporate into your workflow now.
+
+For six more challenges to open data, with snappy and convincing responses to each of them, [click here](https://github.com/mozillascience/open-data-training/blob/master/Materials/Handouts/ODChallengesQI.md). These are great talking points to use when telling your colleagues, friends, and loved ones about open data. There are valid concerns about privacy and ethics when it comes to open data; we will be diving into those topics in future primers.
+
+## Why does Mozilla <3 open data?
+
+The idea of open data is nothing new, but has gained popularity with the rise of the World Wide Web. The easily accessible, information-rich web is home to countless online communities where people from around the world freely and asynchronously share and discuss content and ideas. This technology has already transformed nearly every sphere of our lives… and it has the potential to do much more.
+
+This Open Data Primer was produced by Mozilla, the organization that makes the Firefox web browser. Why Mozilla? Well, Firefox is open source, meaning the project’s code is freely available online for anyone to use and reuse, and the project invites participation and collaboration from all. At Mozilla, we recognize that the web is an awesome tool for creation and collaboration, a space where people can find each other, communicate, and work together. **We’re convinced we can use the web, emerging technologies, and open practices to do better science.** To learn more, check out Mozilla’s Science Lab program <*link to main MSL site*>. We offer trainings <*insert link to our training pages*> and support communities of people working with and for open data <*insert link to Community page*>.
+
+
+#### ADDITIONAL RESOURCES
+
+**Communities:**
+
+Open Knowledge Foundation: https://okfn.org/
+*Worldwide non-profit network focused on making openness a mainstream concept through advocacy, technology, and training.*
+
+Open Data Institute: http://opendata.institute/
+*Non-profit organization which trains, supports, and collaborates with people around the world to promote innovation through open data.*
+
+**Information:**
+
+* Open Knowledge Foundation Open Data Handbook: http://opendatahandbook.org/guide/en/what-is-open-data/
+
+* Sunlight Foundation: Empowering the Open Data Dialogue: http://sunlightfoundation.com/blog/2013/10/22/empowering-the-open-data-dialogue/
diff --git a/Materials/Primers/P2_HowToOpenYourData.md b/Materials/Primers/P2_HowToOpenYourData.md
new file mode 100644
index 0000000..e846031
--- /dev/null
+++ b/Materials/Primers/P2_HowToOpenYourData.md
@@ -0,0 +1,148 @@
+# How To Open Your Data: A Primer
+## If you love your data, set it free.
+
+**The aim of open data is to maximize the potential of your data, to help it find new users and new uses.**
+
+Your data may hold the answers to unexpected questions and enable other researchers to further knowledge in surprising ways (for more about this, see our “Why Open Data” Primer). To encourage reuse, you’ll need to save your data files and put them where lots of people can access them freely-- ideally on the world wide web. But before you rush off to do just that, you’ll need to take steps to prepare your data for its new life in the open. **In this second primer we’ll explore how you can make your data findable, accessible, comprehensible, and easily usable by others, before you share it widely.**
+
+As you read through this primer, keep two things in mind: 1) your own dataset, which you probably have spent time carefully and lovingly collecting, curating, and analyzing, and 2) an imaginary stranger, somewhere out there on the internet, who knows absolutely nothing about your dataset, but might have an exciting, novel use for it. **The process of opening your data is the process of bringing #1 and #2 together-- introducing your dataset to a complete stranger. Let’s get started!**
+
+## A Data Blind Date
+To introduce our dataset to a complete stranger who might want to reuse it, we’ll need some really great metadata. Metadata is information about information. What does that mean? Let’s look at an example-- let’s meet some strange data on the internet, and try to get to know it better:
+
+*insert image1*
+
+This is a number, a value, and it certainly looks like a data point. It could be the most important value in a data set, the value that proves a groundbreaking theory... or the one that sinks it. But it’s obviously meaningless if we don’t know what it measures. Here’s this data point, with a bit of context:
+
+*insert image2*
+
+From this expanded dataset we can begin to make out a story. The first column seems to be date and time, so this is probably part of a time series. In the last field, each entry is tagged as an “earthquake,” and there are locations listed, too-- we might guess that we’re looking at seismic data. It seems that our highlighted value is linked to an earthquake West of Anchor Point, Alaska, but we still don’t know what that “3.2” value actually represents. **A researcher intimately acquainted with this dataset and collection methods might have no trouble reading and understanding it. But if we imagine that we’re newcomers to data and that this is all the information we have, the data set is unusable because we just don’t know enough about it.** What happens if we have access to a bit more information?
+
+*insert image3*
+
+Finally, labels for every column! These labels are metadata, or information that describes data. This is much better-- at last we see the highlighted value is labeled “mag”; since we’re talking earthquakes, perhaps that’s an abbreviation for magnitude. Some of the other labels are familiar: latitude and longitude, depth and place. We can infer that this data shows seismic events logged in early May of 2016. And yet, mysteries remain. For example, what is the unit of measure for depth? And what do “nst,” “dmin,” and “rms” stand for? What scale is used for “mag”? What is “mag type”? **Even if we had more familiarity with the terminology, we still don’t know the origins of the data, who collected it, and why and how. Even with labels, we don’t have the whole picture.**
+
+## Metadata, A Love Note to the Future
+**Our future users-- and even our future selves, who may eventually forget some of the finer details of our own datasets months or years on-- need well-described, thoroughly documented data sets to ensure proper, responsible use and reuse.** In the example above, we saw how baffling a dataset can be when we don’t have access to good metadata about it. The example data set we looked at was (...big reveal…) sourced from the United States Geological Survey (USGS), which makes earthquake data available on its website as downloadable CSV (comma-separated values) files, or even as live streams. The downloaded file we saw above was the result of a query to the USGS database for earthquakes detected within a certain time frame-- 24 hours in early May of 2016.
+
+Luckily for us, the USGS has carefully documented their data sets on their web site, and provides a terrific reference [guide to the CSV spreadsheet format](http://earthquake.usgs.gov/earthquakes/feed/v1.0/csv.php), including a glossary of terms to explain the metadata! Here are example definitions from that glossary:
+
+*insert images 4-6*
+
+The magnitude-- the data product behind that initial highlighted value-- links to yet more information.
+
+*insert image 7*
+
+Anyone using this data will benefit from detailed documentation. **Even the researcher who collected this data years back will need this information, if collection methods, instrumentation, or scales have changed in the intervening time period.** Note that some disciplines have rules for metadata called metadata schemas. Check with someone in your department or at your library to determine if there is a standard metadata schema you should be using for your data.
+
+Now we’ve got our metadata and good information about each data product, yet there’s one more thing, something that could easily be missed if we downloaded this file and then shared it with others without any other descriptors. If you look closely, you’ll see that none of the earthquakes in this data set (pictured here in its full glory) register less than a magnitude 2.5. USGS allows researchers to query a large database and to request certain subsets of data that are pertinent to a particular question or study. When this CSV file was created, someone helpfully named it “USGSquery>2.5,” to indicate that these are earthquakes that registered 2.5 magnitude or greater. But this information is not documented anywhere else in the actual file. **A newcomer who’s not clued in to this notation could very easily, and mistakenly, assume that this is the full set of data for all earthquakes, when it’s only a curated subset.**
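The subsetting step itself is trivial-- which is exactly why it’s easy to forget to document. Here is a minimal sketch (hypothetical code with invented rows, not how USGS actually implements its queries) of the kind of filter that produces such a subset; note that nothing in the output records that low-magnitude rows were dropped:

```python
# A hypothetical filter in the spirit of the "USGSquery>2.5" subset.
# Without a README, the output alone never reveals that rows below
# magnitude 2.5 were removed.
def filter_by_magnitude(rows, minimum=2.5):
    """Keep only rows whose 'mag' value meets the minimum magnitude."""
    return [row for row in rows if float(row["mag"]) >= minimum]

# Invented example rows, echoing the earthquake data above:
quakes = [{"mag": "3.2"}, {"mag": "1.8"}, {"mag": "2.5"}]
print(filter_by_magnitude(quakes))  # → [{'mag': '3.2'}, {'mag': '2.5'}]
```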
+
+*insert image 8*
+
+**Our example data set highlights the need for external documentation that goes with your dataset wherever it goes.** We need to create supplementary data files that tell the full story of the data set-- what the data is about, who collected it, where it was collected, when, and how. This is your “love note to the future”-- your way of preparing your data with those future users (or your future self) in mind, so they can reuse it effectively and responsibly.
+
+## Creating a DATA_README
+In the world of computer programming and the world wide web, a file that comes with a piece of software or code and contains critical information about its origins and how to use it is called a README file. The README’s all-caps title emphasizes how urgent it is that you read it carefully before you use the code or software. We can borrow this documentation convention when sharing our data, and bundle a README file with our data to ensure that new users have all the information they need to responsibly and effectively use the data. README files (usually text files) are stored with the rest of your data files, frequently in a top-level folder so they can be easily found. (For more information on different types of data documentation files, see the Additional Resources section at the end of this primer.)
+
+We can think of our DATA_README file as a data reuse plan. Let’s look at creating a DATA_README for our USGS earthquake data, answering five key questions that new users of your data will need answered: what, who, where, when, and how. Our document will have five sections: data summary (what), source and contact details (who), location and access information (where), dates of collection and coverage (when), and collection methods (how).
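Pulled together, those five sections might look like the following skeleton-- a sketch to adapt to your own data, not a formal standard:

```text
DATA_README

1. Data Summary                 (what):  title, description, related publications
2. Source & Contact Details     (who):   collectors, collaborators, funders, contact info
3. Location & Access            (where): collection site(s), DOI / persistent URL
4. Dates of Collection/Coverage (when):  collected, published, period covered
5. Collection Methods           (how):   instruments, processing, file index & formats, glossary
```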
+
+### 1. Data Summary ###
+This first section of the DATA_README file tells users **what** the data is about. A good description is especially important when we’re sharing our data on the web. Some data repositories are indexed by search engines, such as Google, making it easier for new users to quickly find your data through a simple online search. Include the following:
+
+* A descriptive title, to help users know immediately if this data set is likely to be useful to them.
+* A brief yet detailed summary including origins, scope, limitations, and types of data included in the set.
+* A list of previous publications, if any, resulting from this dataset, to provide context on research use.
+
+##### *Example:* #####
+**Title:** USGS M2.5+ Earthquakes 2016-05-02
+**Description:** This is earthquake data collected by the United States Geological Survey in a 24-hour
+period on May 2, 2016. This is a subset of that time period’s data, containing only earthquakes of 2.5
+magnitude or greater. Includes the date/time of the earthquake along with the following products:
+latitude, longitude, depth, magnitude, magnitude type, nst, gap, dmin, rms, net, id, updated, place, type,
+horizontalError, depthError, magError, magNst, status, locationSource, magSource. See “Data
+Collection Methods & Processing” (below) for more information on these data products.
+
+### 2. Source & Contact Details ###
+This section of your DATA_README file tells users **who** was involved in creating or collecting the data set and **who** can be contacted if there are any questions about the data set. This information will enable new users to credit the appropriate people and institutions when using the data. Include the following:
+
+* The name and affiliation of the person(s) or organization(s) who collected the data
+* Contact information for a person or group who can answer questions about the data long into the future, so questions can be directed to the appropriate person.
+* The names and affiliations of any collaborators on the research.
+* The name of the organization or institution that sponsored or funded the research.
+
+##### *Example:* #####
+**Source:** This data was compiled by the United States Geological Survey, a United States Federal
+Agency. The data was collected at research stations registered in the International Registry of
+Seismograph Stations (IR), jointly maintained by the ISC and the World Data Center for Seismology
+(NEIC/USGS). Questions about this data set can be directed to: GS_Data_Management@usgs.gov.
+
+### 3. Location & Access Information ###
+This section of your DATA_README covers locations **where** the data was collected and **where** it is stored and can be accessed again. For geographically-related data, the location info helps a new user determine whether the data’s geographic coverage falls in their area of interest. By including info on where the data can be accessed, you enable anyone who downloads the file to find it again later, and ensure the data is cited correctly in any future publications. Be sure to include:
+
+* Information on where the data was collected, and specify single or multiple sites of collection. Be as specific as possible (geographic coordinates provide the most specificity).
+* DOI or other persistent URL that links to a landing page for your data. Check with your data repository to see if they provide this service.
+
+##### *Example:* #####
+**Location:** Data collected internationally by registered seismograph stations: http://www.isc.ac.uk/registries/.
+**Place of Publication:** Published on USGS Earthquake Hazards Program Real-Time Feeds website: http://earthquake.usgs.gov/earthquakes/feed/v1.0/csv.php
+
+### 4. Date of Collection & Coverage ###
+This section of your DATA_README deals with all the time and/or date information about your data set. In this section, use the international standard date format (YYYY-MM-DD hh:mm:ss) and try to be as specific as possible. Be sure to include:
+
+* **When** the data was originally collected.
+* **When** the data was published or made available in the repository or database.
+* The time periods covered by the dataset.
+
+##### *Example:* #####
+**Date of Collection:** Data collected on 2016-05-02.
+**Date Published:** Published by USGS Earthquake Hazards Program in a real-time feed; each datapoint made available as it was collected on 2016-05-02.
+**Dates of Coverage:** This data covers the period from 2016-05-02 00:00:00 to 2016-05-02 23:59:59.
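A quick sketch of producing that standard date format in code (Python is shown as one possible choice; the values match the example above):

```python
from datetime import datetime

start = datetime(2016, 5, 2, 0, 0, 0)
end = datetime(2016, 5, 2, 23, 59, 59)

# strftime with this pattern yields the YYYY-MM-DD hh:mm:ss form
fmt = "%Y-%m-%d %H:%M:%S"
print(start.strftime(fmt))  # → 2016-05-02 00:00:00
print(end.strftime(fmt))    # → 2016-05-02 23:59:59
```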
+
+### 5. Collection Methods ###
+This section of your DATA_README should answer questions about **how** the data was collected and processed. This information is critical because these methods may constrain or alter how the data can be used and interpreted. Be sure to include:
+* Collection methods, addressing issues such as what instruments were used to collect the data, how frequently data were collected, how data collection sites were selected, and whether there was a sample population and, if so, how it was selected. If you are following a standard procedure or set of methods, provide or link to documentation of those methods.
+* Data processing, addressing issues such as whether the data was cleaned, how missing or null values were handled, and whether code was used to process the data and, if so, where it can be found.
+* File index & formats, specifying what files are included in the data set (as a list), how they are organized, any naming conventions that are used, what formats are used, and what software is required.
+* Glossary, data dictionary, or other file of metadata definitions, which may be a separate file or linked file (see example definitions above from USGS).
+
+##### *Example:* #####
+**Collection Methods:** For information about collection methods for this data set, refer to the ANSS Comprehensive Earthquake Catalog (ComCat): http://earthquake.usgs.gov/data/comcat/, which contains earthquake source parameters and other products produced by contributing seismic networks.
+**Data Processing:** see documentation on ComCat.
+**File Index and Format:** This is a single CSV, or comma separated value file. It can be opened with any spreadsheet software such as Open Office, Microsoft Excel, etc.
+**Glossary:** For a comprehensive glossary related to data products, see ComCat.
+
+If you have a great DATA_README, your dataset will truly be free for proper, responsible re-use on the internet. There’s just one last thing: selecting the best format for releasing your data.
+
+## File Formats FTW!
+Though it’s not the most thrilling of topics, the choice of file format for data sharing is very, very important. You want to select a format that is accessible to as many people as possible. Avoid proprietary formats if you can! Make sure your file is readable by computers so no human is forced to re-enter your data into another document to use it. For example, the ever-popular PDF, while it can be opened on many systems, doesn’t allow other software programs to “read” the contents and extract data for analysis and re-use. Here are a few other data “don’ts”-- formatting issues related specifically to spreadsheet data-- that will make your data difficult for any machine to read:
+* Don’t include data visualizations or summaries in the same worksheet as raw data. Raw data should be kept separately for easiest processing and readability.
+* Don’t include multiple tables in the same worksheet.
+* Don’t put special characters or values in field names; these will make it difficult for your data to be opened/loaded cleanly by a new user.
+* Don’t use formatting to add meaning to your data-- for example, using fonts or text colors to identify certain values. Machines won’t read the formatting when running analyses, and that information will be lost.
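As a quick illustration of the field-name rule above, here is a small sketch (the `clean_field_name` helper and the example headers are hypothetical, not part of the USGS data) of how special characters in headers can be normalized before sharing:

```python
import re

def clean_field_name(name):
    """Lowercase a column header and replace spaces and special
    characters with underscores, so any tool can load the file cleanly."""
    name = name.strip().lower()
    name = re.sub(r"[^a-z0-9]+", "_", name)  # collapse runs of other characters
    return name.strip("_")

# Hypothetical headers a spreadsheet might contain:
headers = ["Depth (km)", "Mag.", "Horizontal Error"]
print([clean_field_name(h) for h in headers])
# → ['depth_km', 'mag', 'horizontal_error']
```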
+
+
+**Most numeric and textual data can be reformatted to be communicated in text-based forms, like the comma separated value (CSV) format, which can be opened with any text editor. This is the best possible format to enable wide, easy reuse.** For other sustainable format types, refer to the File Formats guide distributed by the Australian National Data Service in the “Information” section of the Additional Resources, below.
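To see why CSV is so reusable, here is a minimal sketch (the sample rows are invented, echoing the earthquake example, not actual USGS values) showing that a language’s standard library can parse it with no special software:

```python
import csv
import io

# A tiny invented CSV in the style of the earthquake example:
raw = """time,latitude,longitude,depth,mag,place
2016-05-02 00:14:07,59.74,-152.29,84.2,3.2,West of Anchor Point
2016-05-02 01:22:51,36.42,-98.73,5.0,2.6,Oklahoma
"""

# csv.DictReader uses the header row as field names automatically.
rows = list(csv.DictReader(io.StringIO(raw)))
for row in rows:
    print(row["place"], row["mag"])
```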
+
+Now that you have a DATA_README and your data is in great shape to meet any potential new user, take a look at the third in our series of Open Data Primers, which will help you decide *where* to share your data. In that primer, we’ll take a whirlwind tour of the world of online data repositories.
+
+#### ADDITIONAL RESOURCES ####
+
+#### Glossary: ####
+
+**Data dictionary** - a text file defining field names and values (sometimes used interchangeably with the term “codebook”). Includes: a list of all field names, a description of fields & values (e.g. units of measurement, formulas used for calculation, abbreviations, value ranges) as well as the relationship of fields to one another. Example of a data dictionary: http://www.utexas.edu/cola/redcap/_files/data_dictionary_example.jpg
+
+**Metadata schema** - a set of rules for how to describe a certain type of information. There are many different metadata schemas, organized primarily by information format and/or discipline.
+
+**Permanent identifier** - A permanent identifier (or PID) is a set of numbers and/or characters, frequently in the form of a URL, that points to the location of a resource. PIDs are set up in such a way that even though the storage location of the resource may change over time (e.g. moving data from one university server to another), the PID will always point to the correct location. DOI (Digital Object Identifier) is a commonly known type of PID.
+
+#### Trainings: ####
+* Mozilla Science Lab Open Data Module 2: How to Open Data - *insert link*
+* [Mozilla Science Lab Working Open Workshop Presentation on “Open Data and Data Reuse Plans”](https://docs.google.com/presentation/d/1kZd-ZD5lru5a7jIbyi9q8cBYCCAKRnIBSRvixYFtoF0/edit?pref=2&pli=1#slide=id.g1088c5b110_0_183)
+* [Mozilla Science Lab Working Open Workshop Exercise on “Open Data and Data Reuse Plans”](http://mozillascience.github.io/working-open-workshop/data_reuse/)
+
+#### Information: ####
+* Digital Curation Centre’s List of Disciplinary Metadata: http://www.dcc.ac.uk/resources/metadata-standards
+* Research Data Alliance Community-Maintained List of Metadata Schema: http://rd-alliance.github.io/metadata-directory/subjects/
+* UK Data Archive Documenting Your Data Overview: http://www.data-archive.ac.uk/create-manage/document
+* Australian National Data Service Working-Level Metadata Guide: http://ands.org.au/guides/metadata-working
+* Northwest Environmental Data Network Best Practices for Data Dictionary Definitions and Usage: http://www.pnamp.org/sites/default/files/best_practices_for_data_dictionary_definitions_and_usage_version_1.1_2006-11-14.pdf
+* Australian National Data Service File Formats Guide: http://www.ands.org.au/guides/file-formats
diff --git a/README.md b/README.md
index 4149cf1..fcf0b3b 100644
--- a/README.md
+++ b/README.md
@@ -1,9 +1,39 @@
-# Open Data Training Program
+
-The Mozilla Science Lab is developing an Open Data Training Program. This repository will be used for issues and sharing of curriculum.
+# WELCOME to the Open Data Training Program Repository!
+
+The Mozilla Science Lab is developing an Open Data Training Program. This repository will be where we build and share our curriculum and resources for open data.
+
+Here's what's been developed so far.
+
+* [Primers](https://mozillascience.github.io/open-data-primers/index.html)
+* [Instructor Guides](https://mozillascience.github.io/open-data-guides/)
+
+Our first priority is to complete the first five primers as laid out in the Roadmap linked below. Following that, we will be writing up Instructor Guides based on those primers. You can see an example at the link to Instructor Guides above.
+
+If you'd like to see our current plan for development of this program, check out our [Roadmap](/planning/ROADMAP.md).
If you are looking for more information on Mozilla Science Lab, please see our [website](https://www.mozillascience.org/).
+## [Current Authors](#current-authors)
+* [Stephanie Wright](https://github.com/stephwright), Program Lead, Mozilla Science Lab
+* [Zannah Marsh](https://github.com/zee-moz), Learning Strategist, Mozilla Science Lab
+
+Huge thanks to contributors from the 2016 Global Sprint who aren't noted in GitHub because we were working in Google Docs!
+* [Amel Ghouila](https://github.com/amelgh)
+* [Dhafer Laouini](https://github.com/Dhaferl)
+* [Fatma Guerfali](https://github.com/FatmaZG)
+* [John Kratz](https://github.com/JEK-III)
+* [Alexander Morley](https://github.com/alexmorley)
+* [Matthew Marcello](https://github.com/mmarcello)
+* Natalie Foo
+* Katie Fortney
+* Stephanie Simms
+* [Siwar-BLK](https://github.com/Siwar-BLK)
+* [zbouslama](https://github.com/zbouslama)
+
+See also the list of [contributors](https://github.com/mozillascience/open-data-training/graphs/contributors) who participated in this project.
+
## License

diff --git a/UseCases/MRCOxfordTrainingDayOUTLINE.md b/UseCases/MRCOxfordTrainingDayOUTLINE.md
new file mode 100644
index 0000000..010588f
--- /dev/null
+++ b/UseCases/MRCOxfordTrainingDayOUTLINE.md
@@ -0,0 +1,15 @@
+# Open Data Training For Neuroscience
+### Why is this here?
+The aims of this part of the repo are two-fold:
+- Firstly to help me plan / get feedback on an Open-Science workshop for our department's training day which is happening in January '17
+- Secondly to provide a specific use-case of how the materials in this repo can be applied
+
+### Planning
+- [ ] Describe the audience
+- [ ] What do we want them to come away with?
+- [ ] Is there a good way of getting both immediate and medium-term (6 months?) feedback on the impact of the training
+
+### ToDo
+- [ ] Get lecture slides from Data Sharing meeting at FENS
+- [ ] Get in touch with Stephen Eglen
+- [ ] Get more familiar with tools such as binder and figshare, and decide which is the best solution for researchers at our unit
diff --git a/UseCases/Resources.md b/UseCases/Resources.md
new file mode 100644
index 0000000..feeccdd
--- /dev/null
+++ b/UseCases/Resources.md
@@ -0,0 +1,16 @@
+## Other people's slides whose ideas I will probably steal
+
+#### From @TAT_ITB on twitter (Hydrology Open Science)
+https://github.com/dasaptaerwin/opensciencetalk
+
+#### From Stephen Eglen's talk @ FENS 2016 (Code Sharing)
+http://sje30.github.io/talks/2016/fens_eglen.html#
+
+#### David Menon's talk @FENS 2016 (Data Sharing in TBI)
+https://www.dropbox.com/s/qz65wld9ci00vmh/Datasharing%20in%20neuroscience%20-%20Clinical%20research%20in%20TBI.pdf?dl=0
+
+#### Best Practices
+[PLOS Biology: Best Practices for Scientific Computing](http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1001745)
+
+#### Overview
+[Slides](http://scholar.harvard.edu/mercecrosas/presentations/research-and-academic-software-projects-developed-iqss) that reference the paper above (good figure on second slide)
diff --git a/UseCases/junkIdeas.md b/UseCases/junkIdeas.md
new file mode 100644
index 0000000..c9ce926
--- /dev/null
+++ b/UseCases/junkIdeas.md
@@ -0,0 +1,6 @@
+# Other ideas for OpenData workshop
+- Chat about NEJM paper [Toward Fairness in Data Sharing](http://www.nejm.org/doi/pdf/10.1056/NEJMp1605654)
+- Can we take others at their word [nullius in verba](http://blogs.discovermagazine.com/neuroskeptic/2016/08/16/science-without-open-data-isnt-science/#.V7SAP3pjJhE)
+- What are small steps that people can take to start the process
+ - i.e. even if someone is super busy right now, how can they prepare themselves to make a change
+- [McKiernan's awesome paper in elife](https://elifesciences.org/content/5/e16800)
diff --git a/assets/docs/index.md b/assets/docs/index.md
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/assets/docs/index.md
@@ -0,0 +1 @@
+
diff --git a/assets/images/1stIssues.png b/assets/images/1stIssues.png
new file mode 100644
index 0000000..483d344
Binary files /dev/null and b/assets/images/1stIssues.png differ
diff --git a/assets/images/Fork.gif b/assets/images/Fork.gif
new file mode 100644
index 0000000..ba056db
Binary files /dev/null and b/assets/images/Fork.gif differ
diff --git a/assets/images/Issue.gif b/assets/images/Issue.gif
new file mode 100644
index 0000000..d5f7373
Binary files /dev/null and b/assets/images/Issue.gif differ
diff --git a/assets/images/bug.png b/assets/images/bug.png
new file mode 100644
index 0000000..c46f455
Binary files /dev/null and b/assets/images/bug.png differ
diff --git a/assets/images/question.png b/assets/images/question.png
new file mode 100644
index 0000000..1652e00
Binary files /dev/null and b/assets/images/question.png differ
diff --git a/assets/images/suggestion.png b/assets/images/suggestion.png
new file mode 100644
index 0000000..ac821fb
Binary files /dev/null and b/assets/images/suggestion.png differ
diff --git a/code-of-conduct.md b/code-of-conduct.md
new file mode 100644
index 0000000..08b0709
--- /dev/null
+++ b/code-of-conduct.md
@@ -0,0 +1,84 @@
+# Code of Conduct
+#### 1. Purpose
+
+The primary goal of this Code of Conduct is to be inclusive to the largest number of contributors, with the most varied and diverse backgrounds possible. As such, we are committed to providing a friendly, safe and welcoming environment for all, regardless of gender, sexual orientation, ability, ethnicity, socioeconomic status, and religion (or lack thereof).
+
+This code of conduct outlines our expectations for all those who participate in our community, as well as the consequences for unacceptable behavior.
+
+We invite all those who participate in this repository to help us create safe and positive experiences for everyone.
+#### 2. Open Citizenship
+
+A supplemental goal of this Code of Conduct is to increase open citizenship by encouraging participants to recognize and strengthen the relationships between our actions and their effects on our community.
+
+Communities mirror the societies in which they exist and positive action is essential to counteract the many forms of inequality and abuses of power that exist in society.
+
+If you see someone who is making an extra effort to ensure our community is welcoming, friendly, and encourages all participants to contribute to the fullest extent, we want to know.
+#### 3. Expected Behavior
+
+The following behaviors are expected and requested of all community members:
+
+* Participate in an authentic and active way. In doing so, you contribute to the health and longevity of this community.
+* Exercise consideration and respect in your speech and actions. Attempt collaboration before conflict.
+* Refrain from demeaning, discriminatory, or harassing behavior and speech.
+* Be mindful of your surroundings and of your fellow participants. Alert community leaders if you notice a dangerous situation, someone in distress, or violations of this Code of Conduct, even if they seem inconsequential.
+* Remember that community event venues may be shared with members of the public; please be respectful to all patrons of these locations.
+
+#### 4. Unacceptable Behavior
+
+The following behaviors are considered harassment and are unacceptable within our community:
+
+* Violence, threats of violence or violent language directed against another person.
+* Sexist, racist, homophobic, transphobic, ableist or otherwise discriminatory jokes and language.
+* Posting or displaying sexually explicit or violent material.
+* Posting or threatening to post other people’s personally identifying information ("doxing").
+* Personal insults, particularly those related to gender, sexual orientation, race, religion, or disability.
+* Inappropriate photography or recording.
+* Inappropriate physical contact. You should have someone’s consent before touching them.
+* Unwelcome sexual attention. This includes sexualized comments or jokes; inappropriate touching or groping; and unwelcome sexual advances.
+* Deliberate intimidation, stalking or following (online or in person).
+* Advocating for, or encouraging, any of the above behavior.
+* Sustained disruption of community events, including talks and presentations.
+
+#### 5. Consequences of Unacceptable Behavior
+
+Unacceptable behavior from any community member, including sponsors and those with decision-making authority, will not be tolerated.
+
+Anyone asked to stop unacceptable behavior is expected to comply immediately.
+
+If a community member engages in unacceptable behavior, the community organizers may take any action they deem appropriate, up to and including a temporary ban or permanent expulsion from the community without warning (and without a refund of any charge that may have been levied).
+#### 6. Reporting Guidelines
+
+If you are subject to or witness unacceptable behavior, or have any other concerns, please notify a community organizer as soon as possible by emailing sciencelab at mozillafoundation dot org.
+
+Additionally, community organizers are available to help community members engage with local law enforcement or to otherwise help those experiencing unacceptable behavior feel safe. In the context of in-person events, organizers will also provide escorts as desired by the person experiencing distress.
+#### 7. Addressing Grievances
+
+If you feel you have been falsely or unfairly accused of violating this Code of Conduct, you should notify the Mozilla Science Lab staff with a concise description of your grievance.
+#### 8. Scope
+
+We expect all community participants (contributors, paid or otherwise; sponsors; and other guests) to abide by this Code of Conduct in all community venues–online and in-person–as well as in all one-on-one communications pertaining to community business.
+
+#### 9. Contact info
+
+To report or discuss a suspected violation of this code of conduct by a community member, you may contact any of the team directly and in confidence:
+
+* Abby Cabunoc Mayes:
+* Arliss Collins:
+* Aurelia Moser:
+* Kaitlin Thaney:
+* Stephanie Wright:
+* Zannah Marsh:
+
+To report or discuss a suspected violation of this code of conduct by a member of the core team, you may contact this person in confidence:
+
+Stephanie Wright, Program Lead @ Mozilla Science Lab
+
+twitter: [@shefw](https://twitter.com/shefw)
+
+#### 10. License and attribution
+
+This Code of Conduct is distributed under a Creative Commons Attribution-ShareAlike license.
+
+It is derived from the [Citizen Code of Conduct](http://citizencodeofconduct.org/) with portions of text derived from the [Django Code of Conduct](https://www.djangoproject.com/conduct/reporting/), the [Geek Feminism Anti-Harassment Policy](http://geekfeminism.org/about/code-of-conduct/), the [Mozilla Science Lab Code of Conduct](https://science.mozilla.org/code-of-conduct) and the [Slidewinder Code of Conduct](http://www.slidewinder.io/docs/01_code_of_conduct.html).
+
+Updated March 21, 2017
diff --git a/planning/ROADMAP.md b/planning/ROADMAP.md
new file mode 100644
index 0000000..a5483b8
--- /dev/null
+++ b/planning/ROADMAP.md
@@ -0,0 +1,106 @@
+## ROADMAP FOR OPEN DATA
+
+### MARCH
+###### WEEK 4: 21-25
+* Open Data Fundamentals (Level 1): Write up outline of level 1 primers & modules
+* Begin writing up primers for each module
+
+###### WEEK 5: 28-1
+* Finish up writing primer 1
+* Begin writing primer 2
+
+### APRIL
+
+###### WEEK 1: 4-8
+*Work Week in Toronto*
+
+###### WEEK 2: 11-15
+* Finish up primer 2
+* Begin developing module 1 from primer 1
+
+###### WEEK 3: 18-22
+*FORCE in Portland*
+* Begin writing up primer 3
+* Finish up developing module 1
+
+###### WEEK 4: 25-29
+* Finish up primer 3
+* Begin developing module 2 from primer 2
+
+### MAY
+###### WEEK 1: 2-6
+*DataONE in Santa Barbara*
+* Develop module 3 from primer 3
+
+###### WEEK 2: 9-13
+* Put primers & modules 1-3 into delivery format
+
+###### WEEK 3: 16-20
+* Begin developing primer 4
+
+###### WEEK 4: 23-27
+*Fellows Work Week in Michigan*
+* Review modules 1-3 w/Christie in Michigan
+* Finish up primer 4
+
+### JUNE
+###### WEEK 1: 30-3
+*Global Sprint (2-3)*
+* Develop module 4 from primer 4
+* Level 1 Curriculum sprint on modules / primers 1-3
+
+###### WEEK 2: 6-10
+*Steph PTO in Dublin*
+
+###### WEEK 3: 13-17
+*MOZ WorkWeek in London*
+* Put primer & module 4 into delivery format
+
+###### WEEK 4: 20-24
+*Steph PTO in London 20-22*
+* Begin primer 5
+
+###### WEEK 5: 28-1
+* Finish primer 5
+* Begin module 5
+
+### JULY
+###### WEEK 1: 4-8
+* Open Data Fundamentals: Full Level 1 workshop at CDL in SF (WEEK 1 or 2?)
+ (Steph, Zannah, Christie?)
+* Revise ODF based on feedback from CDL workshop
+
+###### WEEK 2: 11-15
+* Finish module 5 and put into delivery format
+
+###### WEEK 3: 18-22
+* Prep for Nairobi (delivery of materials?)
+
+###### WEEK 4: 25-29
+*Nairobi for Fellows Work Week*
+
+### AUGUST
+* Write up primer 6 & 7
+* Develop modules 6 & 7
+* Finalize all level 1 primers & modules into delivery format
+
+### SEPTEMBER
+* Test all level 1 & Assess
+
+### OCTOBER
+* Test all level 1 & Assess
+
+###### WEEK 4: 24-28
+*MozFest in London*
+
+### NOVEMBER
+* Level 2 Curriculum Sprint
+* Finish write up of level 2 primers
+
+### DECEMBER
+* Finish write up of level 2 modules
+* Put level 2 modules & primers into delivery format
+
+###### WEEK 1: 5-9
+*Work Week in Hawaii*
+
diff --git a/planning/topics.md b/planning/topics.md
new file mode 100644
index 0000000..1c7d58e
--- /dev/null
+++ b/planning/topics.md
@@ -0,0 +1,62 @@
+# TOPICS & DELIVERY METHODS
+
+### PRIMERS & MODULES
+#### LEVEL 1 - Focus on real life examples to highlight advantages of open data / disadvantages of closed data.
+1. Why Open Data
+2. How to Open Your Data
+3. Sharing Your Data
+4. Become a Data Hunter (Finding data for reuse)
+5. Making Friends with Other People's Data (Things to know about using data from others)
+~~6. Open Facilitation, Teaching & Community Building (around open data)~~
+~~7. Tools and Workflows for Working Open~~
+
+#### LEVEL 2
+Here we're thinking of doing a series of discipline-specific training
+Topics may be:
+* Health Sciences
+ * IRB
+ * Qualitative & quantitative
+ * where to find data
+ * physical specimens
+ * where to publish
+ * issues / considerations w/ large and/or dynamic data sets
+* Life Sciences
+ * mostly quantitative
+ * IRB
+ * where to find data
+ * physical specimens
+ * where to publish
+ * issues / considerations w/ large and/or dynamic data sets
+* Humanities
+ * more multimedia forms of data - storage issues
+ * publishing considerations
+ * copyright issues
+ * qualitative
+ * where to find
+ * where to publish
+ * physical specimens
+* Social Sciences
+ * qualitative & quantitative
+ * IRB
+ * privacy / confidentiality
+ * where to find
+ * where to publish
+* Physical Sciences
+ * quantitative
+ * where to find
+ * where to publish
+ * issues / considerations w/ large and/or dynamic data sets
+
+
+* ...
+
+#### LEVEL 3
+More in-depth topics
+* Data visualization
+* Advocacy
+* Preservation & archiving considerations
+* Deep dive into privacy and ethics in open data
+* Data analysis methods
+* Sharing code (or introduction to this in Level 1?)
+
+