moved from aws-samples/aws-iot-greengrass-deploy-nvidia-deepstream-on-edge#4
On the Jetson, the app is normally executed with:
deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/deepstream_app_source1_peoplenet.txt
after first downloading the model files with:
mkdir -p ../../models/tlt_pretrained_models/peoplenet && \
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_peoplenet/versions/pruned_v1.0/files/resnet34_peoplenet_pruned.etlt \
-O ../../models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt
I am looking to get a USB camera working with a custom DeepStream configuration file that loads the PeopleNet model from the DeepStream samples for Jetson.
ref: https://forums.developer.nvidia.com/t/output-tensor-in-facedetectir/166311/9
The tutorial I tried, https://aws.amazon.com/blogs/iot/how-to-integrate-nvidia-deepstream-on-jetson-modules-with-aws-iot-core-and-aws-iot-greengrass/, let me run the deepstream-test4 and deepstream-test5 apps, but I would like to take input from the USB camera and also use the pretrained PeopleNet model, which has its own model and config file.
How can I customize it to do that?
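For context, here is a minimal sketch of the kind of change I believe is needed, assuming the stock deepstream_app_source1_peoplenet.txt config and a USB camera that enumerates as /dev/video0 (group and key names are from the standard deepstream-app config format; please correct me if the PeopleNet sample needs something different):

```ini
# Hypothetical edit to deepstream_app_source1_peoplenet.txt:
# replace the file/URI source with a V4L2 (USB camera) source.
[source0]
enable=1
# type 1 = CameraV4L2 (USB camera); the sample ships with a URI/file source
type=1
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
# assumes the camera is /dev/video0
camera-v4l2-dev-node=0
```

The [primary-gie] group would presumably stay pointed at the PeopleNet model/config files downloaded above, but I am not sure whether anything else must change for a live camera source.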
Moreover, the tutorial I used pointed out:
"Once you see messages coming into AWS IoT Core, there are a lot of options to further process them or store them on the AWS Cloud. One simple example would be to use AWS IoT Rules to push these messages to a customized AWS Lambda function, which parses the messages and puts them in Amazon DynamoDB. You may find the following documents helpful in setting up this IoT rule to storage pipeline:"
Could you provide a more complete example of how to do this, please?
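To make the question concrete, here is roughly what I imagine the Lambda side would look like: an AWS IoT Rule (e.g. `SELECT * FROM 'your/deepstream/topic'`) invoking a Lambda that writes each message into DynamoDB. This is only a sketch under my own assumptions; the table name `DeepStreamDetections` and the message field names (based on what deepstream-test4 appears to publish) are guesses, not taken from the tutorial:

```python
# Hypothetical Lambda function for an IoT Rule -> DynamoDB pipeline.
# Assumes the rule forwards the parsed JSON payload as the `event` dict.

TABLE_NAME = "DeepStreamDetections"  # hypothetical table name


def build_item(message: dict) -> dict:
    """Flatten a DeepStream event message into a DynamoDB item.

    The field names ("messageid", "@timestamp", "sensorId", "object")
    are assumptions based on the deepstream-test4 payload; adjust them
    to match what your app actually publishes.
    """
    obj = message.get("object", {})
    return {
        "messageid": {"S": str(message.get("messageid", "unknown"))},
        "timestamp": {"S": str(message.get("@timestamp", ""))},
        "sensor": {"S": str(message.get("sensorId", ""))},
        "object_id": {"S": str(obj.get("id", ""))},
    }


def lambda_handler(event, context):
    # boto3 is imported lazily so build_item stays testable offline.
    import boto3

    client = boto3.client("dynamodb")
    client.put_item(TableName=TABLE_NAME, Item=build_item(event))
    return {"status": "stored"}
```

Is this roughly the intended setup, or is there a recommended end-to-end example?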