Fix inference bug when there are NULL in columns #175
hjk1030 wants to merge 5 commits into ddkang:main from
Conversation
@ttt-77 how come the test case doesn't reflect the issue?
I removed the rows containing NULL values for the previous tests. Can you use the data file from the provided link to see if you can reproduce the error? To expedite the process, you can initially retain only a few normal rows plus all abnormal rows. @hjk1030 https://drive.google.com/file/d/19lbMHnAPVs41iHlZXukRT6j2jUvJ7se8/view?usp=sharing
I still can't reproduce the same error. It seems that the program aborts at a type check before the request is sent (though the fix works for that). Could you provide the test script where the error happened?
It appears that JSON now allows 'None' values, so this is no longer an issue. However, rows containing 'None' values will be dropped. Could you check whether we can remove '.dropna()' from the following code? @continue-revolution
I think it's fine to remove "dropna()"
I believe dropna is still needed, as there are lines containing only null values that correspond to no outputs. I changed the parameters to drop only these lines, and it seems to pass the tests for now.
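A minimal sketch of what I mean (assuming pandas; the toy dataframe here is made up, not from the real dataset): `dropna(how="all")` removes only rows where every column is null, while the default drops any row containing a null.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "a": [1.0, np.nan, np.nan],
    "b": ["x", "y", np.nan],
})

# Default dropna() removes any row with a null value, losing the partial row.
print(len(df.dropna()))           # only the fully populated row survives

# how="all" keeps rows with partial data and drops only all-null rows.
print(len(df.dropna(how="all")))  # the partial row is retained
```

This keeps the rows that still carry some data while discarding the empty lines that have no corresponding outputs.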
The problem in issue #104 seems to be caused by having NaN in the dataframe, which cannot be transformed into JSON. Replacing the NaN values in the dataframe with an empty string should fix the issue.
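A small sketch of the failure mode and the proposed fix, assuming pandas and the standard `json` module (the dataframe below is illustrative, not the real data). Strict JSON has no NaN literal, so a strict serializer rejects it until the NaN values are replaced:

```python
import json

import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan], "b": ["x", np.nan]})

# NaN is not valid JSON; a strict serializer raises an error on it.
try:
    json.dumps(df.to_dict(orient="records"), allow_nan=False)
except ValueError:
    print("NaN cannot be serialized as strict JSON")

# Replacing NaN with an empty string makes the payload valid JSON.
payload = json.dumps(df.fillna("").to_dict(orient="records"), allow_nan=False)
print(payload)
```

Note that Python's `json.dumps` emits a non-standard `NaN` token by default; it only raises with `allow_nan=False`, which is why a stricter consumer downstream can still choke on the payload.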
P.S. I cannot really reproduce the problem, since the blank cells in the given table are not actually null; they are wrongly recognized strings starting with the character '='. However, after searching on Stack Overflow, I believe the fix should resolve the issue.