diff --git a/config/_default/menus.yaml b/config/_default/menus.yaml
index d668fee..bc23a9b 100644
--- a/config/_default/menus.yaml
+++ b/config/_default/menus.yaml
@@ -91,6 +91,10 @@ main:
weight: 5
url: "training-materials/"
parent: "Resources"
+ - name: "FAQ"
+ weight: 6
+ url: "faq/"
+ parent: "Resources"
# Contact
- name: "Contact"
@@ -149,3 +153,7 @@ resources:
- name: "Training Materials"
weight: 5
url: "/training-materials"
+ - name: "FAQ"
+ weight: 6
+ url: "/faq"
+
\ No newline at end of file
diff --git a/content/faq/_index.md b/content/faq/_index.md
new file mode 100644
index 0000000..851665c
--- /dev/null
+++ b/content/faq/_index.md
@@ -0,0 +1,91 @@
+---
+title: "Frequently Asked Questions"
+weight: 5
+---
+
+## Using NWB
+
+
+### Is NWB 2 stable? {#is-nwb-2-stable}
+
+Yes! NWB 2.0 was officially released in January 2019, and the schema is stable. A key goal of the NWB endeavor is to ensure that NWB 2 remains accessible. As NWB evolves, we strive to ensure that any changes we make do not break backwards compatibility.
+
+### I would like to use NWB. How do I get started? {#how-to-get-started}
+
+See the [Converting neurophysiology data to NWB](/converting-data-to-nwb/) page for more information.
+
+### How do I cite NWB 2 in my research? {#how-to-cite}
+
+Oliver RĂ¼bel, Andrew Tritt, Ryan Ly, Benjamin K. Dichter, Satrajit Ghosh, Lawrence Niu, Pamela Baker, Ivan Soltesz, Lydia Ng, Karel Svoboda, Loren Frank, Kristofer E. Bouchard, The Neurodata Without Borders ecosystem for neurophysiological data science, Oct. 2022, eLife 11:e78362. [https://doi.org/10.7554/eLife.78362](https://doi.org/10.7554/eLife.78362)
+
+### How do I install PyNWB? {#install-pynwb}
+
+See the [Installing PyNWB](https://pynwb.readthedocs.io/en/stable/install_users.html) guide for details.
+
+### How do I install MatNWB? {#install-matnwb}
+
+See the [MatNWB documentation](https://matnwb.readthedocs.io/en/latest/pages/getting_started/installation_users.html) for details.
+
+### What is the difference between PyNWB and nwb-schema? {#pynwb-vs-schema}
+
+[PyNWB](https://pynwb.readthedocs.io/en/stable/) is the Python reference read/write API for the current NWB 2.x format. The [nwb-schema](https://github.com/NeurodataWithoutBorders/nwb-schema/) repository is used to manage development of the data standard schema. End-users who want to use NWB typically do not need to worry about the nwb-schema repo, as the current schema is always installed with the corresponding API (whether it is [PyNWB](https://pynwb.readthedocs.io/en/stable/) for Python or [MatNWB](https://matnwb.readthedocs.io/en/latest/) for MATLAB).
+
+### How do I read NWB files in different programming languages? {#read-nwb-files}
+
+For Python and MATLAB, we recommend using the [PyNWB](https://pynwb.readthedocs.io/en/stable/) and [MatNWB](https://matnwb.readthedocs.io/en/latest/) reference APIs. For C++, you can use [AqNWB](https://neurodatawithoutborders.github.io/aqnwb/read_page.html). To get started, see also the [Reading NWB Files](https://nwb-overview.readthedocs.io/en/latest/file_read/file_read.html) page.
+
+If you are using other programming languages (such as R, Julia, Java, or JavaScript), you can use the standard HDF5 readers that are available for these languages. In contrast to the native NWB APIs (PyNWB, MatNWB, AqNWB), these HDF5 readers are not aware of NWB schema details. This can make writing valid NWB files in other languages (without PyNWB, MatNWB, or AqNWB) tricky, but for reading they nonetheless provide access to the full data. For writing, applications (e.g., MIES, written in Igor) often choose to implement only the parts of the NWB standard that are relevant to the particular application.
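+
+For illustration, here is a minimal read-side sketch (the file path is hypothetical) contrasting an NWB-aware read via PyNWB with a plain HDF5 read via h5py:
+
+```python
+from pynwb import NWBHDF5IO
+import h5py
+
+# NWB-aware read: PyNWB maps the file contents to typed NWB objects
+with NWBHDF5IO("session.nwb", mode="r") as io:
+    nwbfile = io.read()
+    print(nwbfile.session_description)
+    print(list(nwbfile.acquisition.keys()))
+
+# Plain HDF5 read: the full data is accessible, but the reader knows nothing about the NWB schema
+with h5py.File("session.nwb", "r") as f:
+    print(f["session_description"][()])
+```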
+
+### Where should I publish my NWB files? {#publish-nwb-files}
+
+You can publish NWB files in many different archives. Funding or publishing requirements may require you to publish your data in a particular archive. Many such archives already support NWB. If not, please let us know and we will be happy to assist you and the archive developers with supporting the NWB standard.
+
+If you are free to publish your data wherever you choose, we recommend [DANDI](https://dandiarchive.org/). DANDI has built-in support for NWB that validates NWB files, automatically extracts key metadata to enable search, and provides tools for interactively exploring and analyzing NWB files. Furthermore, it provides an efficient interface for publishing neuroscience datasets on the TB scale, and can do so for free.
+
+### Who can I contact for questions?
+
+- **General questions:** For general questions, use the [NWB Helpdesk](https://github.com/dandi/helpdesk/discussions/).
+- **Bugs and issues:** To contribute, or to report a bug, create an issue on the appropriate GitHub repository. To find relevant repositories see the [Glossary of Core NWB Tools](/tools/core/) and [Accessing NWB Sources](https://nwb-overview.readthedocs.io/en/latest/nwb_source_codes.html) pages.
+- **Stay tuned:** To receive updates about NWB at large, sign up for the [NWB mailing list](https://mailchi.mp/fe2a9bc55a1a/nwb-signup/).
+
+For details, please also review our Contributing Guidelines.
+
+## Alternative data standards and formats
+
+### How does NWB 2.0 compare to other standards?
+
+See page: [How does NWB 2.0 compare to other neurodata standards?](/faq/comparison_to_other_standards/)
+
+### Why use HDF5 as the primary backend for NWB?
+
+See page: [Why use HDF5 as the primary backend for NWB?](/faq/why_hdf5/)
+
+### Are you aware of the Rossant blog posts about moving away from HDF5?
+
+Yes. See above for our motivations for using HDF5. Many of the technical issues raised in those blog posts have since been addressed, and in our experience HDF5 is reliable and performs well for NWB users.
+
+### Why not just use HDF5 on its own?
+
+The goal of NWB is to package neurophysiology data with metadata sufficient for reuse and reanalysis by other researchers. HDF5 enables users to provide very rich metadata, sufficient for describing neuroscience data for this purpose. The problem with HDF5 on its own is that it is just too flexible. Without a schema, files could be missing key metadata, like the sampling rate of a time series. Furthermore, different labs that use HDF5 would use completely different methods for organizing and annotating experimental data. It would be quite difficult to aggregate data across labs or build common tools without imposing structure on the HDF5 file. This is the purpose of the NWB schema. The NWB schema formalizes requirements that ensure reusability of the data and provides a common structure that enables interoperability across the global neurophysiology community. Users can create extensions that build on the core schema to describe new types of neurophysiology data.
+
+### Why is it discouraged to write videos from lossy formats (mpg, mp4) to internal NWB datasets?
+
+The NWB team strongly encourages users NOT to package videos of natural behavior, or other videos stored in lossy compressed formats such as MP4, inside the NWB file. Instead, these data can be included in the NWB file as an `ImageSeries` that holds an external file reference to the relative path of the MP4 file. An MP4 file is significantly smaller than both the uncompressed frame-by-frame video data (often by about 10X) and the same data compressed using algorithms available in HDF5 (e.g., gzip, blosc). Users _could_ store the binary data read from an MP4 file in the `data` array of an `ImageSeries`, but this data cannot be read as a video directly from the HDF5 file. The binary data can only be read as a video by first writing it into a new MP4 file and then using a software tool like FFmpeg to read that file. This creates a burden on the data user to have enough space on their filesystem to write the MP4 file and to have an appropriate decompression tool installed to decode and read it. As a result, putting compressed video data inside an HDF5 file reduces the accessibility of that data and limits its reuse.
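+
+A minimal PyNWB sketch of this pattern (file names are illustrative, and `nwbfile` is assumed to already exist):
+
+```python
+from pynwb.image import ImageSeries
+
+# Reference the MP4 on disk instead of embedding its bytes in the NWB file
+behavior_video = ImageSeries(
+    name="behavior_video",
+    description="Video of natural behavior",
+    unit="n.a.",
+    external_file=["behavior_video.mp4"],  # path relative to the NWB file
+    format="external",
+    starting_frame=[0],
+    rate=30.0,
+)
+nwbfile.add_acquisition(behavior_video)
+```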
+
+## NWB 1 vs 2
+
+### What has changed between NWB 1 and 2?
+
+See the [release notes of the NWB format schema](http://nwb-schema.readthedocs.io/en/latest/format_release_notes.html) for details about changes to the format schema. For details about changes to the specification language, see the specification language release notes. With regard to software, NWB 2 marks a full reboot: it introduced several new packages and repositories, including [PyNWB](https://pynwb.readthedocs.io/en/stable/), [MatNWB](https://matnwb.readthedocs.io/en/latest/), [HDMF docutils](https://github.com/hdmf-dev/hdmf-docutils/), and [nwb-schema](https://nwb-schema.readthedocs.io/en/latest/), while tools created for NWB:N 1.x, e.g., [api-python](https://neurodatawithoutborders.github.io/api-python/build/html/api_usage.html), have been deprecated.
+
+### Does PyNWB support NWB:N 1.0.x files?
+
+[PyNWB](https://pynwb.readthedocs.io/en/stable/) includes the pynwb/legacy module, which supports reading NWB:N 1.0.x files from popular data repositories, such as the [Allen Cell Types Atlas](http://celltypes.brain-map.org/). For NWB:N 1.0.x files from other sources, your mileage may vary, in particular when files are not fully format-compliant, e.g., when they include arbitrary custom data or are missing required data fields.
+
+### What is the difference between NWB and NWB:N?
+
+Neurodata Without Borders (NWB) started as a project by the Kavli Foundation with the goal of enhancing the accessibility of neuroscience data across the community. The intent was to have a broad range of projects under the NWB umbrella. The Neurodata Without Borders: Neurophysiology (NWB:N) data standard was intended to be the first among many such projects. As NWB:N is currently the only project under the NWB umbrella, the terms "NWB" and "NWB:N" are often used interchangeably.
+
+### What is the difference between PyNWB and api-python?
+
+[PyNWB](https://pynwb.readthedocs.io/en/stable/) is the Python reference read/write API for the current NWB 2.x format. [api-python](https://neurodatawithoutborders.github.io/api-python/build/html/api_usage.html) is a deprecated write-only API designed for NWB:N 1.0.x files. [PyNWB](https://pynwb.readthedocs.io/en/stable/) also provides support for reading some NWB:N 1.0.x files from popular data repositories, such as the [Allen Cell Types Atlas](http://celltypes.brain-map.org/) via the pynwb/legacy module.
diff --git a/content/faq/comparison_to_other_standards.md b/content/faq/comparison_to_other_standards.md
new file mode 100644
index 0000000..09e4e31
--- /dev/null
+++ b/content/faq/comparison_to_other_standards.md
@@ -0,0 +1,32 @@
+---
+title: "How does NWB 2.0 compare to other neurodata standards?"
+weight: 1
+---
+
+## Table of Contents
+
+- [What is the difference between NWB 2.0 and NWB 1.0?](#nwb2-vs-nwb1)
+- [What is the difference between NWB and NIX?](#nwb-vs-nix)
+- [What is the difference between NWB and BIDS?](#nwb-vs-bids)
+ - [Approach](#approach)
+ - [Scope](#scope)
+
+## What is the difference between NWB 2.0 and NWB 1.0? {#nwb2-vs-nwb1}
+
+NWB 1.0 was the result of a one-year pilot project. While NWB 1.0 successfully created a comprehensive standard for neurophysiology, it lacked rigor in its specification as well as a reliable and rigorous software strategy and APIs. This made NWB 1.0 hard to use and unreliable in practice. NWB 2.0 has been an effort to formalize and modularize the software components of NWB and to build a sustainable software ecosystem that supports and accelerates collaboration between labs. NWB 2.0 is supported by the NIH BRAIN Initiative and has been a sustained effort since 2017.
+
+## What is the difference between NWB and NIX? {#nwb-vs-nix}
+
+[NIX](https://g-node.github.io/nix/) is another effort to standardize neurophysiology data. NIX defines a sophisticated, generic data model for storage of annotated scientific datasets, with APIs in C++ and Python and bindings for Java and MATLAB. NIX uses HDF5, the same technology as NWB's primary backend, and leverages its main advantages in a similar way. As such, NIX provides important functionality towards building a FAIR data strategy, but the NIX data model by itself lacks specificity with regard to neurophysiology, leaving it up to the user to define appropriate schemas to facilitate FAIR compliance. Due to this lack of specificity, NIX files can also be more varied in structure and naming conventions, which would make it difficult to aggregate across NIX datasets from different labs.
+
+## What is the difference between NWB and BIDS? {#nwb-vs-bids}
+
+The [Brain Imaging Data Structure (BIDS)](https://bids.neuroimaging.io/) is a mature standard for representing neuroimaging data. Both NWB and BIDS emphasize building an ecosystem of tools for analyzing and visualizing data. BIDS has quite a few impressive "BIDS apps," which makes it a very powerful format that not only facilitates information exchange but can also accelerate research. It differs from NWB in two main ways: approach and scope.
+
+### Approach
+
+NWB uses (primarily) HDF5 to represent hierarchical data within a single file, and concerns itself with organizing data for a single session. HDF5 is optimized for efficient handling of large data, but reading that data usually requires dedicated HDF5 APIs, so there is some trade-off in accessibility for less technical users. In contrast, BIDS makes use of directory structure and naming conventions as a central part of the standard. It is used to organize data across an entire study, including multiple subjects and recording sessions, and it separates data sources and metadata into different files. Metadata is stored in JSON and TSV files, which are easily opened and edited using standard text editors.
+
+### Scope
+
+[BIDS](https://bids.neuroimaging.io/) handles neuroimaging (structural MRI, fMRI, PET, CT, DTI, etc.) and NWB handles neurophysiology (extracellular and intracellular electrophysiology, optical physiology, animal behavior, optogenetics, etc.). Both BIDS and NWB have mechanisms to extend the standard, so each has a well-defined core scope and a less well-defined scope of what types of extensions would be appropriate for the format. There are some cases where data could go in either, for instance ECoG. In this case, we have worked with the development team of the corresponding BIDS extension (iEEG-BIDS) to make the two mutually compatible by including NWB files in the BIDS directory and naming structure. This results in some duplication of metadata, but has the advantage of allowing a user to leverage both BIDS and NWB tools.
diff --git a/content/faq/why_hdf5.md b/content/faq/why_hdf5.md
new file mode 100644
index 0000000..5b8b8ff
--- /dev/null
+++ b/content/faq/why_hdf5.md
@@ -0,0 +1,48 @@
+---
+title: "Why use HDF5 as the primary backend for NWB?"
+weight: 2
+---
+
+[HDF5](https://www.hdfgroup.org/solutions/hdf5/) has three main features that make it ideal as our primary backend:
+
+**1. HDF5 has sophisticated structures for organizing complex hierarchical data and metadata, which is critical for handling the complexity and diversity of neurophysiology metadata.**
+
+HDF5 is one of the few standards that supports the four data primitives of the HDMF schema language: Group, Dataset, Attribute, and Link. Each of these structures is essential to the full representation of critical metadata. Groups allow NWB to organize information hierarchically. Datasets allow NWB to store the large data blocks of measurement time series. Attributes allow those datasets to be annotated with metadata necessary for reanalysis, such as the units of measurement and the conversion factor. Links allow us to store data efficiently and avoid duplication, and they allow us to create formal links between data elements.
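+
+As a rough illustration, the four primitives can be expressed with the HDMF specification API in Python (the type and field names below are illustrative, not part of the NWB core schema):
+
+```python
+from hdmf.spec import GroupSpec, DatasetSpec, AttributeSpec, LinkSpec
+
+# A toy neurodata type using all four primitives: Group, Dataset, Attribute, Link
+example_spec = GroupSpec(
+    doc="An example acquisition group",
+    data_type_def="ExampleSeries",
+    datasets=[
+        DatasetSpec(
+            doc="Recorded samples",
+            name="data",
+            dtype="float64",
+            attributes=[AttributeSpec(name="unit", doc="Unit of measurement", dtype="text")],
+        )
+    ],
+    links=[LinkSpec(doc="The device used for the recording", target_type="Device", name="device")],
+)
+```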
+
+**2. HDF5 is a mature standard with support in a plethora of programming languages and multiple storage backends.**
+
+We have chosen HDF5 to maximize the accessibility of data stored in NWB. In order for NWB to support the diverse needs of the community (i.e., to truly be "without borders") we need to support a variety of access patterns. HDF5 has well-established APIs in many scientific programming languages, including C, Python, MATLAB, R, Julia, Java, and JavaScript. This list includes not only the major programming languages currently used by most neuroscientists (Python, MATLAB, and in some cases C and R) but also newer programming languages like Julia. By leveraging the robust community and support infrastructure behind HDF5, we can continue to achieve readability in diverse languages, far more than would be practical if we were to develop custom data access APIs in each language ourselves. Furthermore, HDF5 prioritizes long-term support, which includes technical support for any bugs, and backwards compatibility of the HDF5 API.
+
+Another important feature of HDF5 is the ability to store data using different backends and drivers. A relatively new driver, "ros3", allows HDF5 files to be opened, read, and streamed directly from an S3 bucket, a common form of cloud storage.
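+
+For example, here is a minimal PyNWB sketch of streaming a remote file via the ros3 driver (the URL is hypothetical, and this requires an h5py build that includes ros3 support):
+
+```python
+from pynwb import NWBHDF5IO
+
+# Stream an NWB file directly from S3 instead of downloading it first
+s3_url = "https://dandiarchive.s3.amazonaws.com/example/session.nwb"
+with NWBHDF5IO(s3_url, mode="r", driver="ros3") as io:
+    nwbfile = io.read()
+    print(nwbfile.identifier)
+```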
+
+**3. HDF5 supports random access of chunked and compressed datasets, which is critical for handling the volume of data.**
+
+As recordings enter the TB scale, it is essential that we use a backend storage solution that supports both compression and random access. When large datasets are saved to disk, it is best to use lossless compression, which leverages patterns in the data to reduce the file size without changing the data values. HDF5 natively supports compressing datasets on write and decompressing them on read using gzip (like "unarchiving" a file downloaded from the internet). Another important feature for large datasets is random access, which means that you can read any value within a dataset without reading all the values. If you were to apply gzip to the entire dataset at once, reading any value would require decompressing the entire dataset, which removes the capability for random access. HDF5 solves this problem by first splitting large datasets into "chunks" and compressing each of these chunks individually. This way, when values from a particular region of the dataset are requested, only the chunks that contain the requested data need to be decompressed. HDF5 has a sophisticated infrastructure for managing dataset chunks and applying compression/decompression, removing these lower-level concerns from a data user who is reading the data.
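+
+A minimal sketch of how chunking and compression can be requested when writing data with PyNWB (array shapes and names are illustrative):
+
+```python
+import numpy as np
+from hdmf.backends.hdf5.h5_utils import H5DataIO
+from pynwb import TimeSeries
+
+# Wrap the raw array so HDF5 stores it as gzip-compressed chunks
+raw = np.random.randn(1_000_000, 64).astype("float32")
+wrapped = H5DataIO(raw, chunks=(10_000, 64), compression="gzip", compression_opts=4)
+raw_series = TimeSeries(name="raw_ephys", data=wrapped, unit="volts", rate=30000.0)
+```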
+
+These features have proven to be very important for archiving large datasets. For instance, for raw data from Neuropixels recordings, compression has been found to reduce the file size by up to 60%. As datasets grow in volume and in number, it will become increasingly important to utilize good data engineering principles to manage them at scale.
+
+## Alternative backends
+
+Below, we briefly explain the pros and cons of alternative storage formats. Depending on the particular application and storage needs, different backends are often preferable. In particular, as part of [HDMF](https://hdmf.readthedocs.io/en/stable/), teams are exploring the use of alternative storage solutions with NWB. For the broader NWB community, we have found that HDF5 provides a good standard solution for most common use cases.
+
+### Zarr
+
+[Zarr](https://zarr.readthedocs.io/en/stable/) supports compression and chunking like HDF5. Zarr is the standard we have found that comes closest to HDF5's level of support for complex hierarchical data structures. The [HDMF Zarr](https://hdmf-zarr.readthedocs.io) library implements a Zarr backend for HDMF and provides convenience classes for integrating Zarr with the [PyNWB](https://pynwb.readthedocs.io/en/stable/) Python API for NWB to support writing NWB files to Zarr. Using Zarr, the NWB file is not stored as a single file, but as a collection of many files organized into folders. This storage scheme has some key advantages when using object-based storage solutions, e.g., cloud-based storage in AWS. The main limitations of Zarr for NWB are: 1) Zarr only supports Python, while the neuroscience community requires APIs in MATLAB and other languages; 2) HDF5 is a much more mature standard with a track record for long-term accessibility, although the Zarr community is growing; 3) transferring Zarr files requires moving many small files; and 4) Zarr does not support Links and References, so [HDMF Zarr](https://hdmf-zarr.readthedocs.io/en/stable/) must implement custom solutions to support these important features for NWB. Whether HDF5 or Zarr is the right solution for you depends heavily on your use case.
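+
+A minimal sketch of writing and reading an NWB file with the Zarr backend via HDMF Zarr (the path is illustrative, and `nwbfile` is assumed to already exist):
+
+```python
+from hdmf_zarr.nwb import NWBZarrIO
+
+# Write an in-memory NWBFile to a Zarr store instead of an HDF5 file
+with NWBZarrIO("session.nwb.zarr", mode="w") as io:
+    io.write(nwbfile)
+
+# Read it back from the Zarr store
+with NWBZarrIO("session.nwb.zarr", mode="r") as io:
+    nwbfile_in = io.read()
+```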
+
+### LINDI
+
+The [Linked Data Interface (LINDI)](https://github.com/NeurodataWithoutBorders/lindi/) provides a JSON representation of NWB data in which the large data chunks are stored separately from the main metadata. This enables efficient storage, composition, and sharing of NWB files on cloud systems such as DANDI without duplicating the large data blobs. LINDI can also be used to index existing NWB HDF5 files to help speed up remote access to HDF5 files stored in the cloud. LINDI provides a drop-in `LindiH5pyFile` class so that LINDI files can be read via PyNWB using the standard NWBHDF5IO backend. LINDI is currently under development and subject to rapid change, and the DANDI Archive does not yet provide full support for NWB data in the LINDI format.
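+
+A minimal sketch of this read path, following the pattern in the LINDI documentation (the URL is hypothetical):
+
+```python
+import lindi
+import pynwb
+
+# LindiH5pyFile behaves like an h5py.File, so PyNWB can read through it directly
+f = lindi.LindiH5pyFile.from_lindi_file("https://example.org/session.nwb.lindi.json")
+with pynwb.NWBHDF5IO(file=f, mode="r") as io:
+    nwbfile = io.read()
+```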
+
+### Other alternative storage formats
+
+The following alternative formats are not currently supported by NWB.
+
+#### Binary files (.dat)
+
+Binary files do not allow for complex hierarchical data, including Groups, Attributes, and Links. They also do not allow for chunking and compression, which makes them poorly suited for efficient handling of large data files. Furthermore, the metadata needed to interpret a binary file, including its shape, data type, and endianness, can be missing. Zarr is an approach that uses binary files and deals with these limitations, using folders and JSON files to create a hierarchical structure that can manage data chunks and specify the essential parameters of the binary files. See the Zarr section above.
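+
+A small sketch of the problem: to read a flat `.dat` file at all, the reader must already know parameters that the file itself does not carry (the values below are illustrative):
+
+```python
+import numpy as np
+
+# None of these parameters are stored in the .dat file itself
+n_channels = 64
+dtype = "int16"  # data type and endianness must be known externally
+data = np.memmap("recording.dat", dtype=dtype, mode="r").reshape(-1, n_channels)
+```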
+
+#### Relational database (e.g. SQL)
+
+The [HDMF specification language](https://hdmf-schema-language.readthedocs.io/en/latest/) is inherently hierarchical, not tabular, and we need a storage layer that can express the hierarchical nature of the data as well. There are some approaches for mapping between relational tables and hierarchical structures, such as object-relational mappers, but this is not as good a solution as using a storage layer that is hierarchical by nature.
+
+While we think relational databases are not ideal as an NWB backend, we do recognize that they can be a powerful choice for storing scientific data because they enforce formal relationships between data and enable flexible, complex queries. If you are interested in using relational databases for neuroscience research, we recommend exploring [DataJoint](https://www.datajoint.com/), an open-source framework for programming scientific databases and computational workflows, with APIs in MATLAB and Python. [DataJoint Elements](https://datajoint.com/docs/elements/) is a collection of curated modules for assembling workflows for the major modalities of neurophysiology experiments. The NWB team is collaborating with DataJoint to build import/export functionality between DataJoint Elements and NWB files. For labs interested in leveraging the benefits of both relational databases and NWB, using DataJoint internally and using NWB to archive and share data could provide the best of both worlds.
diff --git a/content/online-resources/_index.md b/content/online-resources/_index.md
index 07e1509..455086f 100644
--- a/content/online-resources/_index.md
+++ b/content/online-resources/_index.md
@@ -62,6 +62,10 @@ training_section:
content: "Comprehensive overview of NWB"
image: "/images/nwb-guide.png"
url: "https://nwb-overview.readthedocs.io/"
+ - title: "Frequently Asked Questions"
+ content: "Common questions about NWB and their answers"
+ image: "/images/code.png"
+ url: "/faq/"
docs_section:
enable: true
diff --git a/layouts/faq/list.html b/layouts/faq/list.html
new file mode 100644
index 0000000..d3c2337
--- /dev/null
+++ b/layouts/faq/list.html
@@ -0,0 +1,32 @@
+{{ define "main" }}
+{{/* Hero Section */}}
+
+