This repository was archived by the owner on Feb 12, 2022. It is now read-only.

Commit 7c7df84

jpeddicord authored and mm318 committed

Initial commit

0 parents  commit 7c7df84

Showing 28 changed files with 2,248 additions and 0 deletions.

.github/PULL_REQUEST_TEMPLATE.md

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
*Issue #, if available:*

*Description of changes:*


By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

.travis.yml

Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
install:
- git clone https://github.com/ros-industrial/industrial_ci.git .ros_ci
script:
- .ros_ci/travis.sh

README.md

Lines changed: 191 additions & 0 deletions
@@ -0,0 +1,191 @@

# kinesis_video_streamer


## Overview
The Kinesis Video Streams ROS package enables robots to stream video to the cloud for analytics, playback, and archival use. Out of the box, the provided nodes make it possible to encode and stream image data (e.g. video feeds and LIDAR scans) from a ROS "Image" topic to the cloud, enabling you to view the live video feed through the Kinesis Video Console, consume the stream via other applications, or perform intelligent analysis such as face detection and face recognition using Amazon Rekognition.

The node transmits standard `sensor_msgs::Image` data from ROS topics to Kinesis Video streams, optionally encoding the images as h264 video frames along the way (using the included h264_video_encoder), and optionally fetching Amazon Rekognition results from corresponding Kinesis Data Streams and publishing them to local ROS topics.

Note: h.264 hardware encoding is supported out of the box for OMX encoders and has been tested to work on the Raspberry Pi 3. In all other cases, software encoding is used, which is significantly more compute-intensive and may affect overall system performance. If you wish to use a custom ffmpeg/libav encoder, you may pass a `codec` ROS parameter to the encoder node (the name provided must be discoverable by [avcodec_find_encoder_by_name]). Certain scenarios may require offline caching of video streams, which is not yet performed by this node.
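
For instance, you could select a specific encoder by setting that parameter in the encoder node's rosparam configuration. A minimal sketch, assuming the parameter sits at the top level of the encoder node's namespace and that your ffmpeg/libav build ships the `h264_omx` encoder:

    # Hypothetical rosparam snippet for the encoder node; the value must be
    # discoverable by avcodec_find_encoder_by_name.
    codec: "h264_omx"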

**Amazon Kinesis Video Streams**: Amazon Kinesis Video Streams makes it easy to securely stream video from connected devices to AWS for analytics, machine learning (ML), playback, and other processing. Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming video data from millions of devices. It also durably stores, encrypts, and indexes video data in your streams, and allows you to access your data through easy-to-use APIs. Kinesis Video Streams enables you to play back video for live and on-demand viewing, and quickly build applications that take advantage of computer vision and video analytics through integration with Amazon Rekognition Video, and libraries for ML frameworks such as Apache MXNet, TensorFlow, and OpenCV.
21+
**Amazon Rekognition**: The easy-to-use Rekognition API allows you to automatically identify objects, people, text, scenes, and activities, as well as detect any inappropriate content. Developers can quickly build a searchable
22+
content library to optimize media workflows, enrich recommendation engines by extracting text in images, or integrate secondary authentication into existing applications to enhance end-user security. With a wide variety of use
23+
cases, Amazon Rekognition enables you to easily add the benefits of computer vision to your business.
24+
25+
**Keywords**: ROS, AWS, Kinesis Video Streams

### License
The source code is released under [Apache 2.0].

**Author**: AWS RoboMaker<br/>
**Affiliation**: [Amazon Web Services (AWS)]<br/>
**Maintainer**: AWS RoboMaker, [email protected]

### Supported ROS Distributions
- Kinetic
- Lunar
- Melodic


## Installation

### AWS Credentials
You will need to create an AWS account and configure the credentials to be able to communicate with AWS services. You may find [AWS Configuration and Credential Files] helpful; a minimal environment-variable sketch follows the permission lists below.

The IAM user will need permissions for the following actions:
- `kinesisvideo:CreateStream`
- `kinesisvideo:TagStream`
- `kinesisvideo:DescribeStream`
- `kinesisvideo:GetDataEndpoint`
- `kinesisvideo:PutMedia`

For [Amazon Rekognition] integration, the user will also need permissions for these actions:
- `kinesis:ListShards`
- `kinesis:GetShardIterator`
- `kinesis:GetRecords`
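
Credentials can be supplied through the standard [AWS Configuration and Credential Files] or through environment variables (see the Configuration File and Parameters section below). A minimal sketch using environment variables, with placeholder values to replace with the access key pair created for your IAM user:

    # Placeholder values -- substitute the access key pair of your IAM user.
    export AWS_ACCESS_KEY_ID=<your-access-key-id>
    export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>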

### Building from Source

Create a ROS workspace and a source directory

    mkdir -p ~/ros-workspace/src

To build from source, clone the latest version from the master branch and compile the packages.

- Clone the packages into the source directory

        cd ~/ros-workspace/src
        git clone https://github.com/aws-robotics/utils-common.git
        git clone https://github.com/aws-robotics/utils-ros1.git
        git clone https://github.com/aws-robotics/kinesisvideo-encoder-common.git
        git clone https://github.com/aws-robotics/kinesisvideo-encoder-ros1.git
        git clone https://github.com/aws-robotics/kinesisvideo-common.git
        git clone https://github.com/aws-robotics/kinesisvideo-ros1.git

- Install dependencies

        cd ~/ros-workspace && sudo apt-get update
        rosdep install --from-paths src --ignore-src -r -y

- Build the packages

        cd ~/ros-workspace && colcon build

- Configure the ROS library path

        source ~/ros-workspace/install/setup.bash

- Build and run the unit tests

        colcon build --packages-select kinesis_video_streamer --cmake-target tests
        colcon test --packages-select kinesis_video_streamer kinesis_manager && colcon test-result --all


## Launch Files

A launch file called `kinesis_video_streamer.launch` is included in this package; it shows how to include a stream configuration file when configuring the parameter server for this node. The launch file uses the following arguments:

| Arg Name | Description |
| -------- | ----------- |
| stream_config | A path to a rosparam config file for the (first) stream. If not provided, the launch file defaults to the `sample_configuration.yaml` provided with this package. |
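
For example, to point the node at your own stream configuration file (the path below is a hypothetical placeholder):

    roslaunch kinesis_video_streamer kinesis_video_streamer.launch stream_config:=/path/to/my_stream_configuration.yaml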

An example launch file called `sample_application.launch` is also included in this project; it shows how you can include this node in your own project and provide it with arguments.


## Usage

### Run the node
1. Configure the nodes (for more details, see the extended configuration section below).
   - Set up your AWS credentials and make sure you have the required IAM permissions.
   - Encoding: review the [H264 Video Encoder sample configuration file] and pay attention to `subscription_topic` (the camera output, which expects a `sensor_msgs::Image` topic) and `publication_topic`.
   - Streaming: review the [Kinesis Video Streamer sample configuration file] and make sure `subscription_topic` matches the encoder's `publication_topic`.
2. To use Amazon Rekognition for face detection and face recognition, follow the steps in the Rekognition guide (skip steps 8 & 9 as they are already performed by this node): https://docs.aws.amazon.com/rekognition/latest/dg/recognize-faces-in-a-video-stream.html
3. Example: running on a Raspberry Pi
   - `roslaunch `[`raspicam_node`]` camerav2_410x308_30fps.launch`
   - `roslaunch h264_video_encoder sample_application.launch`
   - `roslaunch kinesis_video_streamer sample_application.launch`
   - Log into your AWS console to see the available Kinesis Video stream (or verify it from the command line, as sketched below).
   - For other platforms, replace the first command with an equivalent command to launch your camera node, and reconfigure the topic names accordingly.
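
If you have the AWS CLI installed and configured, you can also verify the stream from the command line instead of the console. A sketch, assuming a hypothetical stream named `my-robot-stream` (use the `stream_name` you configured):

    aws kinesisvideo describe-stream --stream-name my-robot-stream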


## Configuration File and Parameters

This section applies to the `kinesis_video_streamer` node. For configuring the encoder node, please see the README for the [H264 Video Encoder node]. An example configuration file called `stream0.yaml` is provided. When parameters are absent from the ROS parameter server, default values are used. Since this node makes HTTP requests to AWS endpoints, valid AWS credentials must be provided (this can be done via the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`; see https://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html).

### Node-wide configuration parameters
The parameters below apply to the node as a whole and are not specific to any one stream.

| Parameter Name | Description | Type |
| -------------- | ----------- | ---- |
| aws_client_configuration/region | The AWS region to which the video should be streamed. | *string* |
| kinesis_video/stream_count | The number of streams you wish to load and transmit. Each stream should have its corresponding parameters set as described below. | *int* |
| kinesis_video/log4cplus_config | (optional) Config file path for the log4cplus logger, which is used by the Kinesis Video Producer SDK. | *string* |

### Stream-specific configuration parameters
The parameters below should be provided per stream, with the prefix being `kinesis_video/stream<id>/<parameter name>`.

| Parameter Name | Description | Type |
| -------------- | ----------- | ---- |
| subscription_queue_size | (optional) The maximum number of incoming and outgoing messages to be queued towards the subscribed and publishing topics. | *int* |
| subscription_topic | Topic name to subscribe to for the stream's input. | *string* |
| topic_type | Specifier for the transport protocol (message type) used: '1' for KinesisVideoFrame (supports h264 streaming), '2' for sensor_msgs::Image transport, '3' for KinesisVideoFrame with AWS Rekognition support. | *int* |
| stream_name | The name of the stream resource in AWS Kinesis Video Streams. | *string* |
| rekognition_data_stream | (optional; required if topic_type == 3) The name of the Kinesis Data Stream from which AWS Rekognition analysis output should be read. | *string* |
| rekognition_topic_name | (optional; required if topic_type == 3) The ROS topic to which the analysis results should be published. | *string* |

Additional stream-specific parameters such as `frame_rate` can be provided to further customize the stream definition structure. See [Kinesis header stream definition] for the remaining parameters and their default values.
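
Putting the node-wide and stream-specific parameters together, a minimal single-stream rosparam file might look like the sketch below. All values are illustrative placeholders; the `sample_configuration.yaml` shipped with this package remains the authoritative example.

    aws_client_configuration:
      region: "us-west-2"
    kinesis_video:
      stream_count: 1
      stream0:
        subscription_topic: "/h264_video_encoder/video"  # must match the encoder's publication_topic
        topic_type: 1                                    # KinesisVideoFrame (supports h264 streaming)
        stream_name: "my-robot-stream"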


## Performance and Benchmark Results

We evaluated the performance of this node by running the following scenario on a Raspberry Pi 3 Model B Plus connected to a Raspberry Pi camera module. The camera output was set up at a rate of 30 fps and a resolution of 410x308 pixels, and encoded at a bitrate of 2 Mbps.
- Launch a baseline graph containing the talker and listener nodes from the [roscpp_tutorials package](https://wiki.ros.org/roscpp_tutorials), plus two additional nodes that collect CPU and memory usage statistics. Allow the nodes to run for 60 seconds.
- Following the instructions in the "Usage" section above, launch a `raspicam_node` node to get the images from the camera module, then launch an `h264_video_encoder` node to encode the images, and finally launch a `kinesis_video_streamer` node to send the image frames to the Amazon Kinesis Video Streams service. Allow the nodes to run for 180 seconds.
- Terminate the `raspicam_node`, `h264_video_encoder` and `kinesis_video_streamer` nodes, and allow the remaining nodes to run for 60 seconds.

The following graph shows the CPU usage during that scenario. After we start launching the kinesis nodes at second 60, the 1-minute average CPU usage increases from an initial 5.5% for the baseline graph up to a peak of 20.25%, and stabilizes around 15% until we stop the nodes around second 260.

![cpu usage](wiki/images/cpu.svg)

The following graph shows the memory usage during that scenario. Free memory also accounts for additional memory available through a swap partition. After launching the kinesis nodes around second 60, memory usage increases from 292 MB for the baseline graph up to a peak of 392 MB (+34.25%), and stabilizes around 374 MB (+28.08% with respect to the baseline graph). The memory usage goes down to 318 MB after stopping the kinesis nodes.

![memory usage](wiki/images/memory.svg)


## Node Details

This section applies to the `kinesis_video_streamer` node; please see the following README for encoder-specific configuration:
- [H264 Video Encoder node]

### Subscribed Topics
The number of subscriptions is configurable and is determined by the `kinesis_video/stream_count` parameter. Each subscription is of the following form:

| Topic Name | Message Type | Description |
| ---------- | ------------ | ----------- |
| *Configurable* | *Configurable* (kinesis_video_msgs/KinesisVideoFrame or sensor_msgs/Image) | The node will subscribe to a topic of a given name. The data is expected to be either images (such as from a camera node publishing Image messages), or video frames (such as from an encoder node publishing KinesisVideoFrame messages). |


## Bugs & Feature Requests

Please contact the team directly if you would like to request a feature.

Please report bugs in the [Issue Tracker].

[`raspicam_node`]: https://github.com/UbiquityRobotics/raspicam_node
[Amazon Rekognition]: https://docs.aws.amazon.com/rekognition/latest/dg/streaming-video.html
[Amazon Web Services (AWS)]: https://aws.amazon.com/
[Apache 2.0]: https://aws.amazon.com/apache-2-0/
[avcodec_find_encoder_by_name]: https://ffmpeg.org/doxygen/2.7/group__lavc__encoding.html#gaa614ffc38511c104bdff4a3afa086d37
[AWS Configuration and Credential Files]: https://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html
[H264 Video Encoder node]: https://github.com/aws-robotics/kinesisvideo-encoder-ros1/blob/master/README.md
[H264 Video Encoder sample configuration file]: https://github.com/aws-robotics/kinesisvideo-encoder-ros1/blob/master/config/sample_configuration.yaml
[Issue Tracker]: https://github.com/aws-robotics/kinesisvideo-ros1/issues
[Kinesis header stream definition]: https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/how-data.html#how-data-header-streamdefinition
[Kinesis Video Streamer sample configuration file]: kinesis_video_streamer/config/sample_configuration.yaml
[ROS]: http://www.ros.org

kinesis_video_msgs/CMakeLists.txt

Lines changed: 27 additions & 0 deletions
@@ -0,0 +1,27 @@
cmake_minimum_required(VERSION 2.8.3)
project(kinesis_video_msgs)

find_package(catkin REQUIRED COMPONENTS
  message_generation
  message_runtime
  diagnostic_msgs
)

## Generate messages in the 'msg' folder
add_message_files(
  FILES
  KinesisVideoFrame.msg
  KinesisImageMetadata.msg
)

## Generate added messages and services with any dependencies listed here
generate_messages(
  DEPENDENCIES diagnostic_msgs
)

include_directories(
  # include
  ${catkin_INCLUDE_DIRS}
)

catkin_package()
