
Commit 9071861

Merge pull request #273 from keenon/main
Update prod branch with latest changes
2 parents 3a4a606 + 14117ae commit 9071861

File tree: 1,466 files changed (+1,064,991 / -10,007 lines)


.gitignore

Lines changed: 4 additions & 1 deletion
@@ -7,4 +7,7 @@ biomechanics_net.egg-info/
.mypy_cache/
__pycache__
test_dataset.db
-TODO
+TODO
+.DS_Store
+/test_data
+.idea/

LICENSE.txt

Lines changed: 49 additions & 0 deletions
###########################
License:
GPL v3.0 - https://opensource.org/license/gpl-3-0/
with the following modification:
###########################

Any software licensed under the terms of the GNU General Public License (GPL)
also requires that any data generated or processed using the software must be
publicly shared in accordance with the Data-Sharing Agreement between
Stanford University and you. By downloading, installing, or using the software,
you agree to be bound by the terms of the GPL license and the Data-Sharing Agreement.

###########################
Data-Sharing Agreement
###########################

This Data-Sharing Agreement ("Agreement") is entered into by and between Stanford University ("Provider") and you ("Recipient") as a condition of receiving a license to use this software.

WHEREAS, Provider has developed and licensed certain software ("Software") that can be used to process motion capture data; and
WHEREAS, Recipient desires to use the Software to process motion capture data, and Provider has agreed to grant Recipient a license to use the Software on the condition that any data generated or processed using the Software is shared publicly;

NOW, THEREFORE, the parties agree as follows:

Definitions.
(a) "Data" means any motion capture data generated or processed using the Software.
(b) "Publicly Share" means to make Data available to the public, free of charge and without restriction, through a publicly accessible platform or repository.

License to Use Software.
(a) Provider hereby grants Recipient a non-exclusive, non-transferable, revocable license to use the Software for the sole purpose of generating and processing Data.
(b) Recipient agrees to use the Software only in accordance with the terms of the GNU General Public License (GPL).

Data-Sharing Obligations.
(a) Recipient agrees to Publicly Share all Data generated or processed using the Software.
(b) Recipient agrees to provide attribution to Provider and the Software in any publication, presentation, or other public use of the Data.
(c) Recipient agrees to provide Provider with a copy of all Data generated or processed using the Software.

Termination.
(a) Either party may terminate this Agreement upon written notice if the other party breaches any of its obligations under this Agreement.
(b) Termination of this Agreement shall not affect Recipient's obligations to Publicly Share any Data generated or processed using the Software prior to termination.

Limitation of Liability.
(a) Provider makes no representations or warranties with respect to the Software or Data, and disclaims all implied warranties, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, and non-infringement.
(b) Provider shall not be liable for any direct, indirect, special, incidental, or consequential damages arising out of or in connection with the use of the Software or Data.

Miscellaneous.
(a) This Agreement constitutes the entire agreement between the parties and supersedes all prior or contemporaneous agreements or understandings, whether written or oral.
(b) This Agreement shall be governed by and construed in accordance with the laws of [YOUR STATE/COUNTRY], without giving effect to any conflict of law principles.
(c) This Agreement may not be amended or modified except in writing signed by both parties.
(d) This Agreement shall be binding upon and inure to the benefit of the parties and their respective successors and assigns.

README.md

Lines changed: 33 additions & 16 deletions
@@ -1,33 +1,50 @@
-### BiomechanicsNet
+### AddBiomechanics

-This is an open effort to assemble a large dataset of human motion, recorded from several modalities (kinematics, GRF, EMG, and IMU). We compose this dataset out of dozens of heterogeneous motion capture datasets published in both biomechanics and computer graphics. While there are dozens of these datasets, few contain everything we’d like. For example, [Mahmood et. al](https://amass.is.tue.mpg.de/) is very large and covers many motion types, but contains only skeleton kinematics (no GRF, EMG, or accelerometers). [Carmago et. al](http://www.epic.gatech.edu/opensource-biomechanics-camargo-et-al/) contains EMG, motion capture, and GRF data for motions on uneven surfaces, but only for the lower-half of the body. [Lencioni et. al](https://springernature.figshare.com/collections/Human_kinematic_kinetic_and_EMG_data_during_level_walking_toe_heel-walking_stairs_ascending_descending/4494755) contains EMG, motion capture, and GRF data for the whole body, but only for walking up and down stairs. There are dozens more datasets, each with its own idiosyncrasies.
+[![DOI](https://zenodo.org/badge/398424759.svg)](https://zenodo.org/badge/latestdoi/398424759)

-Our goal in this project is to provide a standard format for modality-sparse human motion data, and loaders for as many datasets as we can. We standardize on the [Rajagopal Human Body Model](https://simtk.org/projects/full_body), as implemented in the [Nimble Physics Engine](https://nimblephysics.org).
+This is an open effort to assemble a large dataset of human motion. We're hoping to facilitate this by providing easy-to-use tools that can automatically process motion capture data and prepare it for biomechanical analysis. We're also working to provide large aggregate datasets in standard formats, along with tools to easily handle the data, in the near future.

-Licenses-permitting, we plan to make pre-translated aggregate datasets available for public download.
+### Getting Set Up (for Stanford Developers)

-## Getting Set Up For Development (frontend)
+*A note for non-Stanford devs: these instructions probably won't help you!* We share the AddBiomechanics source code so that researchers can fully understand the methods. We highly encourage you to use the web application rather than building the code from source. Note that we are a small team and are not able to support individuals wishing to build from source. You're welcome to try, but it's probably going to be harder than you hope, and we're sorry about that. Part of the complexity here is that the cloud application is built to interface directly with a web of different AWS resources, each of which has its own (currently undocumented) IAM setup, all provisioned and continually maintained by our team for the public instance of AddBiomechanics. If you are trying to run your own independent instance to avoid sharing data, then even if we gave you the permissions files referenced in these instructions, your code would by default talk to our AWS resources, and effectively just join our cluster. If you want it to talk to your own resources, we cannot offer support debugging your setup to get everything to work.

-1. Install the Amplify CLI `npm install -g @aws-amplify/cli` (may require `sudo`, depending on your setup)
-2. From inside the `frontend` folder, run `amplify configure`, and follow the instructions to create a new IAM user for your computer (in the 'us-west-2' region)
-3. From inside the `frontend` folder, run `amplify init`
+## Getting Set Up for Development (frontend)
+
+1. Download the [aws-exports-dev.js](https://drive.google.com/file/d/1IBr3Fm-8rYeGudyWLvIEGPkdzdpR0I90/view?usp=sharing) file, rename it `aws-exports.js`, and put it into the `frontend/src` folder.
+2. Run `yarn start` to launch the app!
+
+## Notes (frontend)
+
+Note: the above instructions will cause your local frontend to target the dev servers. If you would rather interact with the production servers, download the [aws-exports-prod.js](https://drive.google.com/file/d/1VZVgHHwSP-xmJW-qZeQ6U92FYWoU36aP/view?usp=sharing) file, rename it `aws-exports.js`, and put it into the `frontend/src` folder.
+
+Because the app is designed to be served as a static single-page application (see the wiki for details), running it locally with the appropriate `aws-exports.js` will behave exactly the same as viewing it from [dev.addbiomechanics.org](https://dev.addbiomechanics.org) (dev servers) or [app.addbiomechanics.org](https://app.addbiomechanics.org) (prod servers).
+
+## Getting Set Up For Deployment (frontend)
+
+1. Log in with the AddBiomechanics AWS root account on your local `aws` CLI.
+2. Install the Amplify CLI: `npm install -g @aws-amplify/cli` (may require `sudo`, depending on your setup)
+3. From inside the `frontend` folder, run `amplify configure`, and follow the instructions to create a new IAM user for your computer (in the 'us-west-2' region)
+4. From inside the `frontend` folder, run `amplify init`
   a. When asked "Do you want to use an existing environment?" say YES
   b. Choose the environment "dev"
   c. Choose anything you like for your default editor
   d. Select the authentication method "AWS profile", and select the profile you created in step 3
-4. Run `yarn start` to launch the app!
-## Getting Set Up For Development (server)
+5. Run `yarn start` to launch the app!
+## Getting Set Up For Development (server processing algorithm)
+
+The core algorithm for processing data lives in `server/engine/engine.py`. To test changes to `engine.py`:

-1. Download (credentials)[https://drive.google.com/file/d/1okCCdvqaZh20gc4TG152o7yJV9_vnBtf/view?usp=sharing] into `.devcontainer/.aws/credentials` and `server/.aws/credentials`.
-2. Download (server_credentials.csv)[https://drive.google.com/file/d/1e1GrwpOm0viZhNGkw_lDNPa_cfYhJ3r3/view?usp=sharing] into `.devcontainer/server_credentials.csv` and `server/server_credentials.csv`.
-3. Open this project in VSCode, and then use Ctrl+Shift+P and get to the command "Remote-Containers: Open Folder in Container...". Re-open this folder in a Docker container.
-4. Using a VSCode Terminal, navigate to `frontend` and execute `yarn start` to begin serving a live frontend
+1. Run `pip3 install -r /engine/requirements.txt`
+2. Download the [`test_engine.sh` script](https://drive.google.com/file/d/1n-9KSv-wZevuVNwShb1Ur36MRAZlnNhv/view?usp=share_link), and place it in this directory
+3. Download the [test_data/ folder](https://drive.google.com/drive/folders/1jGfgM1m13ksqLZByKUEoUwsy22OVtEza?usp=share_link) (ask Keenon for access to this), and place it in this directory
+4. Run `./test_engine.sh` to test your changes to `engine.py` on existing data. Change the line `TEST_NAME="opencap_test"` to a different name to run against other folders you find in `test_data/` (careful: don't include the `_original` part, or you'll overwrite your input data by accident)
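The `_original` caution in step 4 can be made mechanical. A minimal sketch of such a guard (this wrapper and its argument handling are illustrative, not part of the repo):

```shell
#!/bin/bash
# Illustrative guard around test_engine.sh: refuse TEST_NAME values ending in
# "_original", since running against those folders would overwrite input data.
TEST_NAME="${1:-opencap_test}"

case "$TEST_NAME" in
  *_original)
    echo "refusing '$TEST_NAME': use the working copy, not the _original folder" >&2
    exit 1
    ;;
esac

echo "TEST_NAME=$TEST_NAME looks safe"
```

A wrapper like this could run before editing the `TEST_NAME=` line in `test_engine.sh`, so a stray tab-completion of an `_original` folder fails loudly instead of clobbering data.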

## Hosting a Processing Server

1. Go into the `server` folder
-2. Run `docker build -f Dockerfile.dev .` (to run a dev server) or `docker build -f Dockerfile.prod .` (to run a prod server) to build the Docker container to run the server. It's important that you rebuild the Docker container each time you boot a new server, since that sets it up with its own PubSub connection.
-3. Run the docker container you just built! That's your server. Leave it running as a process.
+2. Download the `server_credentials.csv` file, which Keenon can give you a link to
+3. Run `docker build -f Dockerfile.dev .` (to run a dev server) or `docker build -f Dockerfile.prod .` (to run a prod server) to build the Docker container to run the server. It's important that you rebuild the Docker container each time you boot a new server, since that sets it up with its own PubSub connection.
+4. Run the docker container you just built! That's your server. Leave it running as a process.

## Switching between Dev and Prod
By default, the main branch is pointed at the dev servers. We keep the current prod version on the `prod` branch.

analytics/.gitignore

Lines changed: 2 additions & 0 deletions
emails.txt
users.json

analytics/get_dev_bucket_size.sh

Lines changed: 3 additions & 0 deletions
#!/bin/bash

aws s3 ls s3://biomechanics-uploads161949-dev --recursive --human-readable --summarize

analytics/get_num_users.sh

Lines changed: 3 additions & 0 deletions
#!/bin/bash
aws cognito-idp list-users --user-pool-id us-west-2_vRDVX9u35 > users.json
node parse_users.js

analytics/get_prod_bucket_size.sh

Lines changed: 3 additions & 0 deletions
#!/bin/bash

aws s3 ls s3://biomechanics-uploads83039-prod --recursive --human-readable --summarize
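Both bucket-size scripts rely on the trailing `Total Objects` / `Total Size` summary that `aws s3 ls --summarize` prints, which is easy to pull out when only the totals matter. A sketch against a canned sample of that output (the sample numbers are made up):

```shell
#!/bin/bash
# Made-up sample of the trailing summary that `aws s3 ls --summarize` prints
# with --human-readable enabled.
summary="Total Objects: 1234
Total Size: 56.7 GiB"

# Split each line on ": " and keep the value from the Total Size line.
total_size="$(printf '%s\n' "$summary" | awk -F': ' '/^Total Size/ {print $2}')"
echo "$total_size"
```

In practice the real pipeline would be `aws s3 ls ... --summarize | awk -F': ' '/^Total Size/ {print $2}'`, with the per-object listing lines passing through `awk` unmatched.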

analytics/parse_users.js

Lines changed: 31 additions & 0 deletions
const fs = require('fs');

try {
  const rawText = fs.readFileSync('./users.json', 'utf8');
  const data = JSON.parse(rawText);
  let emails = [];
  let text = "";
  for (const user of data.Users) {
    for (const attr of user.Attributes) {
      if (attr.Name === 'email') {
        emails.push(attr.Value);
        text += attr.Value + '\n';
      }
    }
  }
  console.log(emails.length);
  console.log(emails);
  let domains = [];
  for (const email of emails) {
    const domain = email.split('@')[1];
    if (!domains.includes(domain)) {
      domains.push(domain);
      console.log(domain);
    }
  }
  console.log('Domains: ' + domains.length);
  fs.writeFileSync('./emails.txt', text);
} catch (err) {
  console.error(err);
}

blender/.gitignore

Lines changed: 5 additions & 0 deletions
data
*.csv
*.blend1
dropped_trials.txt
Geometry/

blender/README.md

Lines changed: 12 additions & 0 deletions
# Blender Support

These are scripts to run from inside Blender to load animations and produce high-quality renders.

## Dev Installation

You can set up PyCharm to use your bundled Blender Python interpreter. On Mac, that's located at:

`/Applications/Blender.app/Contents/Resources/3.3/python/bin/python3.10`

Then, you can install the appropriate version of [fake-bpy-module](https://github.com/nutti/fake-bpy-module) to match your Blender version.
