Conversation
cli/src/main.c
 " discover    Discover Consoles.\n"
-" wakeup      Send Wakeup Packet.\n";
+" wakeup      Send Wakeup Packet.\n"
+" audiostream Fetch the audiostream.\n";
Imo this should just be called "stream" since you might also want to get video and pipe it somewhere.
I've been thinking about this.
If it is possible to fetch an audio-only stream from the PS4 (meaning the PS4 delivers only the audio stream, not the video stream), then IMO it makes some sense to have separate audiostream and stream commands, because then the audiostream command won't download the additional video data only to discard it later.
If the PS4 force-sends us the video stream even when we only need the audio, then maybe we should just have the single stream command like you said, which would deliver us both video and audio.
I think that even if there was an option to only get an audio stream (which as far as I know there is not, unfortunately) it will still make more sense to have both things under a single stream command and switch by an additional option like --audio-only or something, since both operations would do very similar things for the most part.
cli/src/audiostream.c
chiaki_opus_decoder_set_cb(&opus_decoder, NULL, AudioFrameCb, NULL);

ChiakiAudioSink audio_sink;
chiaki_opus_decoder_get_sink(&opus_decoder, &audio_sink);
Instead of using ChiakiOpusDecoder, which would decode the opus itself, you will want to implement the ChiakiAudioSink interface yourself and then output the raw opus data in ChiakiAudioSink.frame_cb. You might have to also craft a header for ogg or something in ChiakiAudioSink.header_cb, so the reading program will be able to determine how to deal with the data.
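The suggestion above could be sketched roughly like this. The real ChiakiAudioSink type lives in chiaki's headers; the stand-in struct below only mirrors the member names mentioned here (header_cb, frame_cb plus a user pointer), and its callback signatures are assumptions, not the actual API:

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-in for chiaki's ChiakiAudioSink; member names follow the
 * discussion above, but the exact callback signatures are assumed. */
typedef struct sketch_audio_sink_t
{
	void *user;
	void (*header_cb)(uint8_t *header, size_t header_size, void *user);
	void (*frame_cb)(uint8_t *buf, size_t buf_size, void *user);
} SketchAudioSink;

/* Called once with stream setup data: the place to emit a container
 * header (e.g. Ogg/Opus), so the reading program can parse what
 * follows. */
static void HeaderCb(uint8_t *header, size_t header_size, void *user)
{
	fwrite(header, 1, header_size, (FILE *)user);
}

/* Called per packet: forward the undecoded Opus bytes as-is. fwrite
 * handles binary data, unlike fprintf with %s. */
static void FrameCb(uint8_t *buf, size_t buf_size, void *user)
{
	fwrite(buf, 1, buf_size, (FILE *)user);
}

static void setup_sink(SketchAudioSink *sink, FILE *out)
{
	sink->user = out;
	sink->header_cb = HeaderCb;
	sink->frame_cb = FrameCb;
}
```

The point of the sketch is just the shape: no ChiakiOpusDecoder involved, and both callbacks push raw bytes to whatever stream the user pointer carries.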
There was a problem hiding this comment.
Alright, thanks! I'll give this a shot.
There was a problem hiding this comment.
I can't seem to craft the audio stream header properly. It looks like the header for the audio stream is crafted in the Android build in:
chiaki/android/app/src/main/cpp/audio-decoder.c
Lines 149 to 175 in d4b4681
So I mostly copy-pasted the same thing here, but I still can't get the audio working through pipes.
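For reference, the Opus identification header ("OpusHead") is specified by RFC 7845. Here is a minimal sketch that builds the 19-byte packet; note that a bare OpusHead alone is usually not enough for a player reading from a pipe, which generally also expects Ogg page framing ("OggS" capture pattern, segment table, CRC) around it. The 312-sample pre-skip is a typical encoder lookahead value and an assumption here:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Build the 19-byte "OpusHead" identification header from RFC 7845.
 * Returns the number of bytes written (always 19). The pre-skip value
 * of 312 samples is a common encoder lookahead and an assumption. */
static size_t write_opus_head(uint8_t out[19], uint8_t channels, uint32_t sample_rate)
{
	memcpy(out, "OpusHead", 8);      /* magic signature */
	out[8] = 1;                      /* version */
	out[9] = channels;               /* channel count */
	out[10] = 0x38;                  /* pre-skip = 312, little endian */
	out[11] = 0x01;
	out[12] = (uint8_t)(sample_rate & 0xff);         /* input sample rate, LE */
	out[13] = (uint8_t)((sample_rate >> 8) & 0xff);
	out[14] = (uint8_t)((sample_rate >> 16) & 0xff);
	out[15] = (uint8_t)((sample_rate >> 24) & 0xff);
	out[16] = 0;                     /* output gain = 0 dB, LE int16 */
	out[17] = 0;
	out[18] = 0;                     /* channel mapping family 0 (mono/stereo) */
	return 19;
}
```

If the pipe still isn't recognized after emitting this, the missing Ogg page framing is the most likely culprit.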
cli/src/audiostream.c
static void AudioFrameCb(int16_t *buf, size_t samples_count, void *user)
{
	fprintf(stderr, "%s", (char *)buf);
This does not work: %s is for zero-terminated strings. You will want to use fwrite instead.
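A corrected callback along those lines might look like the sketch below. Routing the output through the user pointer (falling back to stderr) is an illustrative tweak, not what the draft PR does; the PR registers the callback with a NULL user and writes straight to stderr:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* buf holds samples_count raw PCM samples, not a NUL-terminated
 * string, so the bytes must go out via fwrite; fprintf with %s would
 * stop at the first zero byte. Treating user as an optional FILE* is
 * an illustrative choice for this sketch. */
static void AudioFrameCb(int16_t *buf, size_t samples_count, void *user)
{
	FILE *out = user ? (FILE *)user : stderr;
	fwrite(buf, sizeof(int16_t), samples_count, out);
}
```

fwrite writes exactly samples_count * sizeof(int16_t) bytes regardless of their values, which is what a binary audio pipe needs.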
I got the video working on mpv with next to no lag with:

$ chiaki-cli stream --host=192.168.1.2 --registkey=123abc12 --morning=abcdABCDabcdABCDabcdAB== 2>&1 > /dev/null | mpv --no-cache --untimed --no-demuxer-thread --vd-lavc-threads=1 -

However, this still results in this warning from mpv (and in turn ffmpeg):

[lavf] This format is marked by FFmpeg as having no timestamps!
[lavf] FFmpeg will likely make up its own broken timestamps. For
[lavf] video streams you can correct this with:
[lavf] --no-correct-pts --fps=VALUE
[lavf] with VALUE being the real framerate of the stream. You can
[lavf] expect seeking and buffering estimation to be generally
[lavf] broken as well.

I wonder if it is possible to write some kind of header information to the pipe so these warnings do not show up?
Sorry, I haven't been able to work on this. This stuff is still on the voodoo side for me. If someone else wants to work on supporting headless mode, feel free to take stuff from this PR!
I think I'll finish it when I have time. The timestamp warning is probably expected given the nature of the video data.
Force-pushed from 3693db7 to 9c1dfb6
What platform does your feature request apply to?
Linux (and maybe Windows)
Is your feature request related to a problem? Please describe.
I want to stream only the audio from my PS4 to my headless Raspberry Pi using Chiaki. This will allow me to listen to real-time audio from my PS4 from the external speakers that are connected to my Raspberry Pi, which would be very cool!
Describe the solution you'd like
I want to be able to run something like this on my Raspberry Pi, which will write the audio data to STDOUT, and other tools like ffplay or mpv would be able to play this audio stream:

$ chiaki-cli audiostream --host=192.168.1.2 --registkey=123abc12 --morning=abcdABCDabcdABCDabcdAB== | ffplay -

This feature would also allow users to redirect this audio output to a file, which would allow them to record the audio from the PS4.
Describe alternatives you've considered
It is already possible to listen to the audio stream by running the Chiaki GUI, but this requires an X server to be available and uses unnecessarily more resources, as the video stream from the PS4 is decoded and displayed too.
Additional context
I've tried to implement something like this in this draft PR myself and I think I'm pretty close. To try out this draft PR:

The option to build the CLI needs to be ON in:
chiaki/CMakeLists.txt
Line 13 in 736b483
Then compile and run:

This should write what I believe is the Opus-encoded audio to stderr. It writes to stderr because currently all the chiaki log output is written to stdout, so the output would mix up if we were to write the audio data to stdout too (it would be a good idea to write logs to stderr, which AFAIK is the convention, but that is a thing for another day).

However, when I pipe this audio output from stderr to ffplay or mpv, the player fails to recognize it as valid audio data and nothing can be heard. I'm stuck here and need help figuring out why the player doesn't recognize and play the audio data it receives.

Also, I'm a novice in C and low-level stuff, so there is a good chance this draft PR can segfault in places or allocate unnecessary memory.