
Initial implementation #1

Closed

varsill wants to merge 1 commit into master from initial_implementation

Conversation

@varsill (Collaborator) commented Dec 8, 2025

No description provided.

@varsill varsill marked this pull request as ready for review December 11, 2025 11:20
@varsill varsill requested review from jerzywilczek and mat-hek and removed request for jerzywilczek and mat-hek December 11, 2025 13:13
let mut decoder = resource
.decoder_mutex
.try_lock()
.map_err(|_| Error::Term(Box::new(decoder_lock_failure())))?;
Member
Do I understand correctly that we're ignoring the error here? If so, can we convert it to a string and raise instead?

Collaborator Author

I've just realized that these errors are quite helpful (I noticed when I started working on the encoder in #2), so I will return them as a String along with the error.

@mat-hek (Member) Dec 12, 2025

Then I'd just raise, at least if Rustler allows it
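
For reference, a minimal sketch of what that could look like with Rustler's Error::RaiseTerm, which raises an Erlang exception carrying the encoded term (the message wording here is illustrative, not this PR's code):

let mut decoder = resource
    .decoder_mutex
    .try_lock()
    // Sketch: convert the lock error to a String and raise it as an Erlang
    // exception instead of returning an error tuple.
    .map_err(|e| Error::RaiseTerm(Box::new(format!("failed to lock decoder: {e}"))))?;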

- elixir/hex_publish:
requires:
- elixir/build_test
- elixir/test
Member

Instead of disabling tests here, I'd add a disable_on_ci tag to each test and exclude it in test_helper.

) -> Result<(Atom, Vec<RawFrame<'a>>), Error> {
let mut decoder = resource
.decoder_mutex
.try_lock()
Contributor

Why try_lock here? If another thread has locked the mutex, we'll return an error from the function and (presumably) never decode the frame. Is this what you want?

@varsill (Collaborator, Author) Dec 13, 2025

If I understand correctly, Rustler requires the resource to be Sync + Send. The decoder is already Send, but I needed to wrap it in a Mutex to make it Sync.
At the same time, I don't expect the decoder to be used concurrently by different Erlang processes (and therefore different OS threads at the same time): decode and flush are supposed to be called by a single Filter's process. If for whatever reason these NIFs are called from different Erlang processes and the lock attempt fails, it's fine that we return an error.
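
For context, a minimal sketch of that wrapping (Decoder below is a stand-in for the actual decoder type; resource registration is omitted):

use std::sync::Mutex;

// Stand-in for the real decoder type, which is Send but not Sync.
struct Decoder;

// Wrapping the decoder in a Mutex makes the resource Sync as well, which is
// what Rustler requires of resource types.
struct DecoderResource {
    decoder_mutex: Mutex<Decoder>,
}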

Contributor

If the NIF will be DirtyIo anyway, why not just lock instead of try_lock?

@varsill (Collaborator, Author) Dec 13, 2025

Oh, since it's just a matter of using lock vs try_lock, I can surely make it lock.
I decided to use try_lock because the Mutex's purpose is solely to make Rustler happy about the resource being Sync, so I felt it would be safer to fail immediately if the resource were used in a way it wasn't supposed to be used (i.e. concurrently).
If Rustler allowed it, I would be happy with the resource being just Send, as that is the only thing I need to make the element work.
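
For illustration, the blocking variant would look roughly like this (a sketch assuming a dirty-scheduled NIF, so blocking on the mutex is acceptable; the poisoned-lock message is illustrative):

let mut decoder = resource
    .decoder_mutex
    .lock()
    // A poisoned mutex means another thread panicked while holding the lock;
    // turning it into a term-carrying error here is just a sketch.
    .map_err(|e| Error::Term(Box::new(format!("decoder mutex poisoned: {e}"))))?;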

pub height: u32,
}

#[rustler::nif(schedule = "DirtyCpu")]
Contributor

I don't know if DirtyCpu is correct here. The actual blocking in the decoder happens while waiting for the GPU to return frames, and there shouldn't be much CPU processing, but I don't really understand the difference between dirty IO and dirty CPU in NIFs.

Collaborator Author

Yeah, it sounds like DirtyIO then
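
For reference, the change boils down to the scheduler hint on the NIF attribute. A minimal sketch with an illustrative function, not this PR's actual signature:

use rustler::{Atom, Error};

// Schedule on a dirty IO scheduler: the NIF spends most of its time blocked
// waiting for the GPU rather than doing CPU-bound work.
#[rustler::nif(schedule = "DirtyIo")]
fn flush() -> Result<Atom, Error> {
    // illustrative body
    Ok(rustler::types::atom::ok())
}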

lib/decoder.ex Outdated
%Membrane.RawVideo{
height: frame.height,
width: frame.width,
pixel_format: :I420,
Contributor

The frames are probably in NV12, not I420

@jerzywilczek (Contributor)

One more thing I forgot about: Devices should be shared among all decoders.

Creating multiple decoders, each with a separate instance and device, may work, but I don't think it's the recommended way to do this.

@varsill (Collaborator, Author) commented Dec 13, 2025

One more thing I forgot about, Devices should be shared among all decoders.

Hmm, it sounds as if we need some kind of a "Device server" which would obtain the Device and provide decoders with a reference to it. That would solve the problem of sharing the Device within a single BEAM OS process, but I don't think we can do anything about it if the Device was already created in some other OS process.

@jerzywilczek (Contributor)

Hmm, it sounds as if we need some kind of a "Device server" which would obtain the Device and provide decoders with a reference to it. That would solve the problem of sharing the Device within a single BEAM OS process, but I don't think we can do anything about it if the Device was already created in some other OS process.

It's fine to make multiple of them, I just think it's wasteful, since we don't need more than one.

Definitely, different OS processes can have their own instances and that will be fine; we don't need to worry about that.

I think a device server is the correct solution here.
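
On the Rust side, one way to share a single Device per OS process would be a lazily-initialized global that every decoder clones. A minimal sketch; Device and Device::new are stand-ins for whatever the decoder crate actually exposes, not this PR's API:

use std::sync::{Arc, OnceLock};

// Stand-in for the real device type used by the decoders.
struct Device;

impl Device {
    fn new() -> Self {
        Device
    }
}

// One shared Device per OS process; decoders clone the Arc instead of
// creating their own Device.
fn shared_device() -> Arc<Device> {
    static DEVICE: OnceLock<Arc<Device>> = OnceLock::new();
    DEVICE.get_or_init(|| Arc::new(Device::new())).clone()
}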

@varsill varsill mentioned this pull request Dec 17, 2025
@varsill varsill requested a review from jerzywilczek December 22, 2025 14:56
@varsill (Collaborator, Author) commented Dec 22, 2025

Hi @jerzywilczek - I've implemented some of the requested changes in #3 and #4, you can take a look there as well ;)

@varsill varsill requested a review from mat-hek December 22, 2025 14:57
@jerzywilczek (Contributor) left a comment

looks good, hope to get these segfaults sorted soon

Comment on lines 30 to 31
let decoder_mutex = Mutex::new(decoder);
let decoder_resource = DecoderResource { decoder_mutex };
Contributor

Suggested change
let decoder_mutex = Mutex::new(decoder);
let decoder_resource = DecoderResource { decoder_mutex };
let decoder = Mutex::new(decoder);
let decoder = DecoderResource { decoder_mutex: decoder };

(nit) shadowing is encouraged

bytes: Binary,
pts_ns: Option<u64>,
) -> Result<(Atom, EncodedFrame<'a>), Error> {
let encoder_resource = resource.encoder().ok_or_else(|| Error::BadArg)?;
Contributor

Suggested change
let encoder_resource = resource.encoder().ok_or_else(|| Error::BadArg)?;
let Resource::Encoder(encoder_resource) = resource else {
return Err(Error::BadArg);
};

I don't know if you know you can do that.

I'm not saying you should change this, I just wanted to make sure you know this exists.

Collaborator Author

Oh, that's nice, but I think I will leave it with ok_or_else for now

@varsill varsill force-pushed the initial_implementation branch from 9d097e9 to 1fac062 Compare January 15, 2026 13:26
@varsill (Collaborator, Author) commented Jan 15, 2026

closing in favour of #5

@varsill varsill closed this Jan 15, 2026