Hi @ksew1, and thanks for the pull request! Looks great at first glance, and I will do a detailed review soon.
```elixir
    },
    x
  ) do
    {_, _, _, jll} =
```
As you don't use the first three variables, you can wrap them in a tuple and then use the more convenient pattern match `{_, jll}`.
```elixir
  {i + 1, feature_log_probability, x, jll}
end
```
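The reviewer's suggestion above is an Elixir pattern-matching idiom, but the underlying idea translates to any language: group the accumulator fields the caller never reads into a nested tuple, so the call site destructures only what it uses. A hypothetical Python analogue (the names `i`, `flp`, `x`, `jll` mirror the loop state in the diff; the data is made up):

```python
from functools import reduce

# Accumulator shaped as ((i, feature_log_probability, x), jll):
# the first three fields are grouped into an inner tuple so the
# caller can unpack just `_, jll` instead of `_, _, _, jll`.
def step(acc, value):
    (i, flp, x), jll = acc
    return ((i + 1, flp, x), jll + value)

_, jll = reduce(step, [1.0, 2.0, 3.0], ((0, None, None), 0.0))
print(jll)  # 6.0
```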
```elixir
total_jll = jll + class_log_priors
```
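For context, this line is the standard Naive Bayes scoring step: add the class log priors to the per-class joint log-likelihood of the features, then predict the argmax. A minimal NumPy sketch of the same arithmetic (the toy numbers are hypothetical, not from the PR):

```python
import numpy as np

# Hypothetical toy setup: scoring one sample against 2 classes.
# jll[c] = sum over features j of log P(x_j = v_j | class c).
jll = np.array([-2.0, -1.0])
# class_log_priors[c] = log P(class c), here uniform priors.
class_log_priors = np.log(np.array([0.5, 0.5]))

# Total joint log-likelihood per class; the prediction is the argmax.
total_jll = jll + class_log_priors
prediction = int(np.argmax(total_jll))
print(prediction)  # 1 -- class 1 has the higher joint log-likelihood
```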
```elixir
feature_count = Nx.broadcast(0.0, {num_features, num_classes, num_categories})

{_, _, _, _, feature_count} =
```
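The zero tensor initialised above has shape `{num_features, num_classes, num_categories}` and is filled by the subsequent loop with the per-feature, per-class category counts that CategoricalNB fits on. A rough NumPy equivalent of that counting step (shapes, names, and data here are assumptions for illustration, not the PR's implementation):

```python
import numpy as np

# Hypothetical toy data: 3 samples, 2 categorical features, values in {0, 1, 2}.
x = np.array([[0, 1],
              [2, 1],
              [0, 0]])
y = np.array([0, 1, 0])  # class labels

num_features = x.shape[1]
num_classes = 2
num_categories = 3

# feature_count[j, c, v] = number of samples of class c with x[:, j] == v.
feature_count = np.zeros((num_features, num_classes, num_categories))
for xi, yi in zip(x, y):
    for j, v in enumerate(xi):
        feature_count[j, yi, v] += 1

print(feature_count[0, 0])  # category counts for feature 0 within class 0
```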
msluszniak left a comment:

I've dropped some minor comments, but LGTM. There are some parts that are common to all of the NB algorithms. I think it would be cool to separate them into `utils.ex` inside the `naive_bayes` dir, but that's not necessary within the scope of this PR.
Force-pushed from ad213ec to e38812e.
I've addressed all the comments 😄. I agree that it would be a good idea to extract the common parts into a separate file. I'll do this in a separate PR.
Any reason why
@josevalim @msluszniak I am still having a look. It seems very good, but please give me some time before merging. I might have some improvements to suggest (e.g. maybe the code can be vectorised; this could be valuable for @ksew1 to learn in case they don't know it already). Happy New Year everyone!
@josevalim Let's merge this one; I might submit another PR in case I see a convenient way to vectorise parts of the procedure. I apologise for the delay, in particular to @ksew1.
💚 💙 💜 💛 ❤️
Added CategoricalNB :)