Merged
34 changes: 13 additions & 21 deletions BUILD_FROM_SOURCE.md
@@ -12,15 +12,14 @@ pip install . --no-build-isolation
- Since pip builds out of tree by default, this process will copy quite a lot of data to your TMPDIR. You can change this location by modifying the TMPDIR env variable.
For active development, use an editable install: `pip install -e . --no-build-isolation`.

- Build options are controlled via environment variables. For example:
- By default, we build with **GUI viewer support** and **Bullet physics** enabled.
Override any option via environment variables:
```bash
HABITAT_BUILD_GUI_VIEWERS=OFF pip install . --no-build-isolation # headless build
HABITAT_WITH_BULLET=ON pip install . --no-build-isolation # enable Bullet physics
HABITAT_BUILD_GUI_VIEWERS=OFF pip install . --no-build-isolation # headless build (no GUI)
HABITAT_WITH_BULLET=OFF pip install . --no-build-isolation # disable Bullet physics
HABITAT_WITH_CUDA=ON pip install . --no-build-isolation # enable CUDA
```

- By default, we build a headless version with bullet enabled.


## Build from Source

@@ -62,14 +61,14 @@ We highly recommend installing a [miniconda](https://docs.conda.io/en/latest/min

1. Build Habitat-Sim

Default build (for machines with a display attached)
Default build (includes GUI viewers and Bullet physics)

```bash
# Assuming we're still within habitat conda environment
pip install . --no-build-isolation
```

For headless systems (i.e. without an attached display, e.g. in a cluster) and multiple GPU systems
For headless systems (i.e. without an attached display, e.g. in a cluster)

```bash
HABITAT_BUILD_GUI_VIEWERS=OFF pip install . --no-build-isolation
@@ -81,23 +80,16 @@ We highly recommend installing a [miniconda](https://docs.conda.io/en/latest/min
HABITAT_WITH_CUDA=ON pip install . --no-build-isolation
```

With physics simulation via [Bullet Physics SDK](https://github.com/bulletphysics/bullet3/):
To use Bullet, enable bullet physics build via:

```bash
HABITAT_WITH_BULLET=ON pip install . --no-build-isolation
```

With audio sensor via [rlr-audio-propagation](https://github.com/facebookresearch/rlr-audio-propagation/):
To use Audio sensors (Linux only), enable the audio flag via:

```bash
HABITAT_WITH_AUDIO=ON pip install . --no-build-isolation
```

Note1: Build options stack via environment variables, *e.g.* to build in headless mode, with CUDA, and bullet:
Note1: Build options stack via environment variables, *e.g.* to build in headless mode with CUDA:
```bash
HABITAT_BUILD_GUI_VIEWERS=OFF HABITAT_WITH_CUDA=ON HABITAT_WITH_BULLET=ON pip install . --no-build-isolation
HABITAT_BUILD_GUI_VIEWERS=OFF HABITAT_WITH_CUDA=ON pip install . --no-build-isolation
```

Note2: some Linux distributions might require an additional `--user` flag to deal with permission issues.
@@ -111,22 +103,22 @@ We highly recommend installing a [miniconda](https://docs.conda.io/en/latest/min
| Environment Variable | Default | Description |
|---|---|---|
| `HABITAT_BUILD_GUI_VIEWERS` | `ON` | Build GUI viewer applications (set `OFF` for headless) |
| `HABITAT_WITH_BULLET` | `OFF` | Enable Bullet physics simulation |
| `HABITAT_WITH_BULLET` | `ON` | Enable Bullet physics simulation |
| `HABITAT_WITH_CUDA` | `OFF` | Enable CUDA support |
| `HABITAT_WITH_AUDIO` | `OFF` | Enable audio sensor (Linux only) |
| `HABITAT_LTO` | `OFF` | Enable link-time optimization |
| `HABITAT_BUILD_TESTS` | `OFF` | Build C++ tests |
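The precedence implied by this table is simple: an explicitly set environment variable wins, otherwise the default applies. A minimal sketch of that resolution logic (plain Python for illustration, not part of the build system; the defaults below assume the table above is current):

```python
import os

# Defaults as documented in the table above (an assumption of this sketch).
DEFAULTS = {
    "HABITAT_BUILD_GUI_VIEWERS": "ON",
    "HABITAT_WITH_BULLET": "ON",
    "HABITAT_WITH_CUDA": "OFF",
    "HABITAT_WITH_AUDIO": "OFF",
    "HABITAT_LTO": "OFF",
    "HABITAT_BUILD_TESTS": "OFF",
}


def effective_options(env=None):
    """Return the option values a build would see: env overrides defaults."""
    if env is None:
        env = os.environ
    return {name: env.get(name, default) for name, default in DEFAULTS.items()}


# E.g. a shell that exports only HABITAT_WITH_CUDA=ON still gets
# GUI viewers and Bullet from the defaults.
print(effective_options({"HABITAT_WITH_CUDA": "ON"}))
```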

You can also pass CMake arguments directly via pip's `--config-settings`:
```bash
pip install . --no-build-isolation --config-settings=cmake.build-type=Release
pip install . --no-build-isolation --config-settings=cmake.define.OPTION=VALUE
```

### C++ Viewer and Replayer Applications

When `HABITAT_BUILD_GUI_VIEWERS` is `ON` (the default for non-headless builds),
the C++ `viewer` and `replayer` applications are compiled alongside the Python
bindings.
Since `HABITAT_BUILD_GUI_VIEWERS` is `ON` by default, the C++ `viewer` and
`replayer` applications are compiled alongside the Python bindings in a
standard build.

- **After `pip install .`**: the executables are installed on `PATH` as `viewer`
and `replayer`.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -60,4 +60,4 @@ We also use pre-commit hooks to ensure linting and style enforcement. Install th
1. Install `ninja` (`sudo apt install ninja-build` on Linux, or `brew install ninja` on macOS) for significantly faster incremental builds
1. Install `ccache` (`sudo apt install ccache` on Linux, or `brew install ccache` on macOS) for significantly faster clean re-builds and builds with slightly different settings
1. Use editable installs (`pip install -e . --no-build-isolation`) for active development — scikit-build-core will automatically rebuild when you re-import after C++ changes
1. Build options are set via environment variables (e.g. `HABITAT_WITH_BULLET=ON`, `HABITAT_BUILD_GUI_VIEWERS=OFF`). See [BUILD_FROM_SOURCE.md](BUILD_FROM_SOURCE.md) for the full reference table.
1. Build options are set via environment variables (e.g. `HABITAT_BUILD_GUI_VIEWERS=OFF` for headless, `HABITAT_WITH_CUDA=ON` for CUDA). Both GUI viewers and Bullet physics are enabled by default. See [BUILD_FROM_SOURCE.md](BUILD_FROM_SOURCE.md) for the full reference table.
13 changes: 8 additions & 5 deletions build.sh
@@ -7,7 +7,7 @@
# Build habitat-sim using scikit-build-core.
#
# Usage:
# ./build.sh # Default headless build
# ./build.sh # Default build (GUI + Bullet)
# ./build.sh --with-bullet # Build with Bullet physics
# ./build.sh --with-cuda # Build with CUDA support
# ./build.sh --with-audio # Build with audio support
@@ -22,8 +22,8 @@
set -e

# Default build options (can be overridden by env vars)
: "${HABITAT_BUILD_GUI_VIEWERS:=OFF}"
: "${HABITAT_WITH_BULLET:=OFF}"
: "${HABITAT_BUILD_GUI_VIEWERS:=ON}"
: "${HABITAT_WITH_BULLET:=ON}"
: "${HABITAT_WITH_CUDA:=OFF}"
: "${HABITAT_WITH_AUDIO:=OFF}"
: "${HABITAT_BUILD_TEST:=OFF}"
@@ -35,12 +35,15 @@ PIP_ARGS=()
# Parse arguments
for arg in "$@"; do
case $arg in
--headless)
--headless)
export HABITAT_BUILD_GUI_VIEWERS=OFF
;;
--gui|--with-gui)
export HABITAT_BUILD_GUI_VIEWERS=ON
;;
--no-bullet)
export HABITAT_WITH_BULLET=OFF
;;
--with-bullet|--bullet)
export HABITAT_WITH_BULLET=ON
;;
@@ -65,7 +68,7 @@ for arg in "$@"; do
;;
*)
echo "Unknown argument: $arg"
echo "Usage: ./build.sh [--headless] [--gui] [--with-bullet] [--with-cuda] [--with-audio] [--run-tests] [--debug] [--lto] [-v]"
echo "Usage: ./build.sh [--headless] [--gui] [--no-bullet] [--with-bullet] [--with-cuda] [--with-audio] [--run-tests] [--debug] [--lto] [-v]"
exit 1
;;
esac
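The case statement above is just a table from CLI flags to environment-variable overrides. A sketch of that mapping (illustrative Python, not build.sh itself; flags not shown here, such as `--with-cuda` and `--with-audio`, follow the same pattern):

```python
# Hypothetical mirror of build.sh's flag parsing: each flag sets one
# environment variable that the build later reads.
FLAG_TO_ENV = {
    "--headless": ("HABITAT_BUILD_GUI_VIEWERS", "OFF"),
    "--gui": ("HABITAT_BUILD_GUI_VIEWERS", "ON"),
    "--with-gui": ("HABITAT_BUILD_GUI_VIEWERS", "ON"),
    "--no-bullet": ("HABITAT_WITH_BULLET", "OFF"),
    "--with-bullet": ("HABITAT_WITH_BULLET", "ON"),
    "--bullet": ("HABITAT_WITH_BULLET", "ON"),
}


def parse_flags(args):
    """Translate build.sh-style flags into environment overrides."""
    env = {}
    for arg in args:
        if arg not in FLAG_TO_ENV:
            # Mirrors the `*)` fallthrough in the case statement.
            raise SystemExit(f"Unknown argument: {arg}")
        name, value = FLAG_TO_ENV[arg]
        env[name] = value
    return env


print(parse_flags(["--headless", "--no-bullet"]))
```

Because later flags simply overwrite earlier ones, `./build.sh --with-bullet --no-bullet` ends with Bullet off, matching how repeated `export`s behave in the shell script.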
13 changes: 6 additions & 7 deletions examples/marker_viewer.py
@@ -4,10 +4,9 @@
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.

# NOTE: This example requires building habitat-sim with GUI viewer support:
# HABITAT_BUILD_GUI_VIEWERS=ON pip install . --no-build-isolation
# or:
# ./build.sh --gui
# NOTE: This example requires the GUI viewer, which is built by default.
# If you built in headless mode (HABITAT_BUILD_GUI_VIEWERS=OFF), rebuild with:
# pip install . --no-build-isolation

import ctypes
import math
@@ -502,7 +501,7 @@ def draw_event(
if self.enable_batch_renderer:
self.render_batch()
else:
self.sim._Simulator__sensors[keys[0]][keys[1]].draw_observation()
self.sim.sensors[keys[1]].draw_observation()
agent = self.sim.get_agent(keys[0])
self.render_camera = agent.scene_node.node_sensor_suite.get(keys[1])
self.debug_draw()
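The `self.sim._Simulator__sensors[...]` access being replaced above only worked via Python's name mangling: an attribute spelled `__sensors` inside class `Simulator` is stored as `_Simulator__sensors`, so reaching it from outside couples the caller to a private detail. The new `sim.sensors` accessor avoids that. A toy illustration of the mechanism (not habitat-sim's actual `Simulator` class):

```python
class Simulator:
    """Toy stand-in for habitat_sim.Simulator, showing name mangling only."""

    def __init__(self):
        # Double leading underscore: Python rewrites this attribute's name
        # to _Simulator__sensors everywhere outside the class body.
        self.__sensors = {"color_sensor": "rgb"}

    @property
    def sensors(self):
        # Public accessor, analogous to what the updated viewers call.
        return self.__sensors


sim = Simulator()
print(sim._Simulator__sensors)  # the old, mangled access still resolves...
print(sim.sensors)              # ...but the public property is the stable API
```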
@@ -660,9 +659,9 @@ def render_batch(self):
keyframe = self.tiled_sims[i].gfx_replay_manager.extract_keyframe()
self.replay_renderer.set_environment_keyframe(i, keyframe)
# Copy sensor transforms
sensor_suite = self.tiled_sims[i]._sensors
sensor_suite = self.tiled_sims[i].sensors
for sensor_uuid, sensor in sensor_suite.items():
transform = sensor._sensor_object.node.absolute_transformation()
transform = sensor.node.absolute_transformation()
self.replay_renderer.set_sensor_transform(i, sensor_uuid, transform)
# Render
self.replay_renderer.render(mn.gl.default_framebuffer)
13 changes: 6 additions & 7 deletions examples/mod_viewer.py
@@ -4,10 +4,9 @@
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.

# NOTE: This example requires building habitat-sim with GUI viewer support:
# HABITAT_BUILD_GUI_VIEWERS=ON pip install . --no-build-isolation
# or:
# ./build.sh --gui
# NOTE: This example requires the GUI viewer, which is built by default.
# If you built in headless mode (HABITAT_BUILD_GUI_VIEWERS=OFF), rebuild with:
# pip install . --no-build-isolation

import ctypes
import json
@@ -892,7 +891,7 @@ def draw_event(
if self.enable_batch_renderer:
self.render_batch()
else:
self.sim._Simulator__sensors[keys[0]][keys[1]].draw_observation()
self.sim.sensors[keys[1]].draw_observation()
agent = self.sim.get_agent(keys[0])
self.render_camera = agent.scene_node.node_sensor_suite.get(keys[1])
self.debug_draw()
@@ -1056,9 +1055,9 @@ def render_batch(self):
keyframe = self.tiled_sims[i].gfx_replay_manager.extract_keyframe()
self.replay_renderer.set_environment_keyframe(i, keyframe)
# Copy sensor transforms
sensor_suite = self.tiled_sims[i]._sensors
sensor_suite = self.tiled_sims[i].sensors
for sensor_uuid, sensor in sensor_suite.items():
transform = sensor._sensor_object.node.absolute_transformation()
transform = sensor.node.absolute_transformation()
self.replay_renderer.set_sensor_transform(i, sensor_uuid, transform)
# Render
self.replay_renderer.render(mn.gl.default_framebuffer)
13 changes: 6 additions & 7 deletions examples/spot_viewer.py
@@ -4,10 +4,9 @@
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.

# NOTE: This example requires building habitat-sim with GUI viewer support:
# HABITAT_BUILD_GUI_VIEWERS=ON pip install . --no-build-isolation
# or:
# ./build.sh --gui
# NOTE: This example requires the GUI viewer, which is built by default.
# If you built in headless mode (HABITAT_BUILD_GUI_VIEWERS=OFF), rebuild with:
# pip install . --no-build-isolation
# It also depends on habitat-lab for the Spot robot integration.

import ctypes
@@ -749,7 +748,7 @@ def draw_event(
if self.enable_batch_renderer:
self.render_batch()
else:
self.sim._Simulator__sensors[keys[0]][keys[1]].draw_observation()
self.sim.sensors[keys[1]].draw_observation()
agent = self.sim.get_agent(keys[0])
self.render_camera = agent.scene_node.node_sensor_suite.get(keys[1])
self.debug_draw()
@@ -902,9 +901,9 @@ def render_batch(self):
keyframe = self.tiled_sims[i].gfx_replay_manager.extract_keyframe()
self.replay_renderer.set_environment_keyframe(i, keyframe)
# Copy sensor transforms
sensor_suite = self.tiled_sims[i]._sensors
sensor_suite = self.tiled_sims[i].sensors
for sensor_uuid, sensor in sensor_suite.items():
transform = sensor._sensor_object.node.absolute_transformation()
transform = sensor.node.absolute_transformation()
self.replay_renderer.set_sensor_transform(i, sensor_uuid, transform)
# Render
self.replay_renderer.render(mn.gl.default_framebuffer)
2 changes: 1 addition & 1 deletion examples/tutorials/audio_agent.py
@@ -58,7 +58,7 @@ def main():
sim.add_sensor(audio_sensor_spec)

# Get the audio sensor object
audio_sensor = sim.get_agent(0)._sensors["audio_sensor"]
audio_sensor = sim.get_agent(0).sensors["audio_sensor"]

# set audio source location, no need to set the agent location, will be set implicitly
audio_sensor.setAudioSourceTransform(np.array([3.1035, 1.57245, -4.15972]))
38 changes: 19 additions & 19 deletions examples/tutorials/nb_python/ECCV_2020_Advanced_Features.py
@@ -572,15 +572,15 @@ def simulate(sim, dt=1.0, get_frames=True):
# Used at beginning of cell that directly modifies camera (i.e. tracking an object)
def init_camera_track_config(sim, sensor_name="color_sensor_1st_person", agent_ID=0):
init_state = {}
visual_sensor = sim._sensors[sensor_name]
visual_sensor = sim.sensors[sensor_name]
# save ref to sensor being used
init_state["visual_sensor"] = visual_sensor
init_state["position"] = np.array(visual_sensor._spec.position)
init_state["orientation"] = np.array(visual_sensor._spec.orientation)
init_state["position"] = np.array(visual_sensor.spec.position)
init_state["orientation"] = np.array(visual_sensor.spec.orientation)
# set the color sensor transform to be the agent transform
visual_sensor._spec.position = mn.Vector3(0.0, 0.0, 0.0)
visual_sensor._spec.orientation = mn.Vector3(0.0, 0.0, 0.0)
visual_sensor._sensor_object.set_transformation_from_spec()
visual_sensor.spec.position = mn.Vector3(0.0, 0.0, 0.0)
visual_sensor.spec.orientation = mn.Vector3(0.0, 0.0, 0.0)
visual_sensor.sensor_object.set_transformation_from_spec()
# save ID of agent being modified
init_state["agent_ID"] = agent_ID
# save agent initial state
@@ -596,9 +596,9 @@ def restore_camera_track_config(sim, init_state):
visual_sensor = init_state["visual_sensor"]
agent_ID = init_state["agent_ID"]
# reset the sensor state for other examples
visual_sensor._spec.position = init_state["position"]
visual_sensor._spec.orientation = init_state["orientation"]
visual_sensor._sensor_object.set_transformation_from_spec()
visual_sensor.spec.position = init_state["position"]
visual_sensor.spec.orientation = init_state["orientation"]
visual_sensor.sensor_object.set_transformation_from_spec()
# restore the agent's state to what was saved in init_camera_track_config
sim.get_agent(agent_ID).set_state(init_state["agent_state"])

@@ -870,13 +870,13 @@ def build_widget_ui(obj_attr_mgr, prim_attr_mgr):
# @markdown This example demonstrates updating the agent state to follow the motion of an object during simulation.

rigid_obj_mgr.remove_all_objects()
visual_sensor = sim._sensors["color_sensor_1st_person"]
initial_sensor_position = np.array(visual_sensor._spec.position)
initial_sensor_orientation = np.array(visual_sensor._spec.orientation)
visual_sensor = sim.sensors["color_sensor_1st_person"]
initial_sensor_position = np.array(visual_sensor.spec.position)
initial_sensor_orientation = np.array(visual_sensor.spec.orientation)
# set the color sensor transform to be the agent transform
visual_sensor._spec.position = mn.Vector3(0.0, 0.0, 0.0)
visual_sensor._spec.orientation = mn.Vector3(0.0, 0.0, 0.0)
visual_sensor._sensor_object.set_transformation_from_spec()
visual_sensor.spec.position = mn.Vector3(0.0, 0.0, 0.0)
visual_sensor.spec.orientation = mn.Vector3(0.0, 0.0, 0.0)
visual_sensor.sensor_object.set_transformation_from_spec()

# boost the agent off the floor
sim.get_agent(0).scene_node.translation += np.array([0, 1.5, 0])
@@ -922,9 +922,9 @@ def build_widget_ui(obj_attr_mgr, prim_attr_mgr):
)

# reset the sensor state for other examples
visual_sensor._spec.position = initial_sensor_position
visual_sensor._spec.orientation = initial_sensor_orientation
visual_sensor._sensor_object.set_transformation_from_spec()
visual_sensor.spec.position = initial_sensor_position
visual_sensor.spec.orientation = initial_sensor_orientation
visual_sensor.sensor_object.set_transformation_from_spec()
# put the agent back
sim.reset()
rigid_obj_mgr.remove_all_objects()
@@ -944,7 +944,7 @@ def build_widget_ui(obj_attr_mgr, prim_attr_mgr):
# project a 3D point into 2D image space for a particular sensor
def get_2d_point(sim, sensor_name, point_3d):
# get the scene render camera and sensor object
render_camera = sim._sensors[sensor_name]._sensor_object.render_camera
render_camera = sim.sensors[sensor_name].sensor_object.render_camera

# use the camera and projection matrices to transform the point onto the near plane
projected_point_3d = render_camera.projection_matrix.transform_point(
2 changes: 1 addition & 1 deletion examples/tutorials/nb_python/ECCV_2020_Navigation.py
@@ -1059,7 +1059,7 @@ def display_map(topdown_map, key_points=None):
apply_filter=True,
)
else:
for _, v in agent._sensors.items():
for _, v in agent.sensors.items():
habitat_sim.errors.assert_obj_valid(v)
agent.controls.action(
v.object,
18 changes: 9 additions & 9 deletions examples/tutorials/nb_python/asset_viewer.py
@@ -762,9 +762,9 @@ def build_widget_ui(obj_attr_mgr, prim_attr_mgr):
# check if desired object actually exists
if os.path.exists(object_to_view_path) and os.path.isfile(object_to_view_path):
# Acquire the sensor being used
visual_sensor = sim._sensors["color_sensor_3rd_person"]
initial_sensor_position = np.array(visual_sensor._spec.position)
initial_sensor_orientation = np.array(visual_sensor._spec.orientation)
visual_sensor = sim.sensors["color_sensor_3rd_person"]
initial_sensor_position = np.array(visual_sensor.spec.position)
initial_sensor_orientation = np.array(visual_sensor.spec.orientation)

# load an object template and instantiate an object to view
object_template = obj_attr_mgr.create_new_template(str(object_to_view_path), False)
@@ -802,9 +802,9 @@ def build_widget_ui(obj_attr_mgr, prim_attr_mgr):

# set the sensor to be behind and above the agent's initial loc
# distance is scaled by size of largest object dimension
visual_sensor._spec.position = agent_state.position + sensor_pos
visual_sensor._spec.orientation = mn.Vector3(-0.5, 0.0, 0.0)
visual_sensor._sensor_object.set_transformation_from_spec()
visual_sensor.spec.position = agent_state.position + sensor_pos
visual_sensor.spec.orientation = mn.Vector3(-0.5, 0.0, 0.0)
visual_sensor.sensor_object.set_transformation_from_spec()

# Create observations array
observations = []
@@ -841,9 +841,9 @@ def build_widget_ui(obj_attr_mgr, prim_attr_mgr):
)

# reset the sensor state for other examples
visual_sensor._spec.position = initial_sensor_position
visual_sensor._spec.orientation = initial_sensor_orientation
visual_sensor._sensor_object.set_transformation_from_spec()
visual_sensor.spec.position = initial_sensor_position
visual_sensor.spec.orientation = initial_sensor_orientation
visual_sensor.sensor_object.set_transformation_from_spec()

# remove added objects
rigid_obj_mgr.remove_all_objects()
2 changes: 1 addition & 1 deletion examples/tutorials/nb_python/coordinate_frame_tutorial.py
@@ -136,7 +136,7 @@ def create_sim_helper(scene_id):
agent_node = sim.get_agent(0).body.object
agent_node.translation = [0.0, 0.0, 0.0]
agent_node.rotation = mn.Quaternion()
sensor_node = sim._sensors["color_sensor"]._sensor_object.object
sensor_node = sim.sensors["color_sensor"].node

lr = sim.get_debug_line_render()
lr.set_line_width(3)