Add Emotion Analyzer (new) #527
Waiting for some rosdep keys to be merged.
@a-ichikura @sawada10 |
Fix emotion analyzer
@iory @mqcmd196 Thanks for your comments. I fixed it:
I still need to do the following:
I'm sorry, but I edited the rosdep key incorrectly. This patch should fix the issue.
Sorry for the late reply, and thank you for all the maintenance work. @ayaha-n As you mentioned during today's lab meeting, is the current issue that when voice information is used as input, the analysis is based on audio from two seconds earlier (i.e., it's not real-time)? @a-ichikura and I used it during the mamoru experiment, but since we were using text data at the time, I don't think that specific use case is particularly relevant here.
@sawada10 Thanks for your reply.
Yes, I thought it would be better if it started analyzing audio only after the request, but when analyzing audio from a microphone, it is unlikely that there would be only 2 seconds of audio. In that case a streaming style will be used, so the present implementation, which analyzes the audio after the request AND the 2 seconds before the request, is fine.
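A minimal sketch of that pre-roll behaviour, assuming a 16 kHz mono stream (the names, rate, and callback shape here are hypothetical, not the actual implementation in this PR): a `deque` with `maxlen` acts as a ring buffer that always holds the last 2 seconds, so an analysis request can be seeded with audio captured before the request arrived.

```python
from collections import deque

SAMPLE_RATE = 16000  # Hz, assumed for illustration
PRE_ROLL_SEC = 2     # keep this much audio from before the request

# deque with maxlen acts as a ring buffer: old samples fall off the front
pre_roll = deque(maxlen=SAMPLE_RATE * PRE_ROLL_SEC)

def on_audio_chunk(chunk):
    """Called for every incoming audio chunk (a list of samples)."""
    pre_roll.extend(chunk)

def start_analysis():
    """On a request, seed the analysis stream with the buffered 2 s.

    Subsequent chunks would then be streamed to the analyzer directly.
    """
    return list(pre_roll)

# ---- tiny demo with fake samples (sample value == sample index) ----
for i in range(5 * SAMPLE_RATE):   # 5 s of fake audio
    on_audio_chunk([i])

seed = start_analysis()
print(len(seed))          # exactly 2 s worth of samples
print(seed[0], seed[-1])  # the most recent 2 s, oldest first
```

With 5 s of input, only the final 2 s survive in the buffer, which is exactly the "audio from before the request" that the current implementation includes.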
@sawada10 @a-ichikura |
I checked the text_to_emotion function, and it exactly meets our needs.
@mqcmd196
My apologies, it seems eye_status and some other things were accidentally included.
I have opened a new pull request, so I would appreciate it if you could take a look.
I created an emotion_analyzer using Hume AI.
Usage is as described in the README, and it can do the following.
However, the behavior in the audio (record from /audio) to emotion case is a bit strange.
When I use the ReSpeaker and run
the result is
and recording → analysis appears to work. However, when I try to use the PC's built-in microphone and run
I get
instead, and when I check the saved recording at /home/leus/tmp/hoge.wav, it either cannot be played back, or even when it can, it sounds like noise rather than the speech I actually recorded.
When I run arecord -l, the output looks like this; for devices other than (1,0) and (0,6) (and sometimes for (0,6) as well), running capture.launch produces an error like this.