I would like to discuss the ramifications of adding support for complex arrays in `ulab`.
Initially, the Fourier transform was the only function that would have required complex arrays, but it was possible to simply pass and return two real arrays standing for the real and imaginary parts. This was not very elegant, but still acceptable. However, there have recently been at least two feature requests that would result in functions leading out of the domain of real numbers.
Namely,

- eigenvalues/eigenvectors of generic matrices: [FEATURE REQUEST] `linalg.eig` for asymmetric matrices (and SVD?) #363
- roots of polynomials: [FEATURE REQUEST] Add `numpy.polynomial.polynomial.Polynomial.roots()` #355
The problem is that these functions could potentially return complex arrays, even if the input was real.
Adding a new `dtype` increases the firmware size, because the `dtype`s have to be resolved in C; in particular, the biggest contributor is probably the set of binary operators, because there one has to handle all possible combinations of input types, so this contribution to the firmware size scales with the square of the number of `dtype`s. At the moment, we have five and a half `dtype`s (`(u)int8`, `(u)int16`, `float`/`double`, plus `bool`, which is actually a `uint8` with a special flag), hence we deal with 25 combinations. With the addition of `complex`, we would end up with 36 combinations.
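To make the quadratic scaling concrete, here is a minimal, hypothetical sketch of how a binary operator has to be dispatched over pairs of input types. The names and structure are illustrative only and are not taken from the `ulab` source (the real code generates these loops with nested macros), but the combinatorial count is the same: every additional `dtype` adds a whole new row and column of cases.

```c
/* Hypothetical, simplified sketch (not the actual ulab source): each
 * (lhs, rhs) dtype pair needs its own inner loop, so the number of cases
 * grows with the square of the number of dtypes: 5x5 = 25 today,
 * 6x6 = 36 with a complex dtype added. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { DTYPE_UINT8, DTYPE_INT8, DTYPE_UINT16, DTYPE_INT16, DTYPE_FLOAT } dtype_t;

/* One inner loop per dtype combination. */
static void add_uint8_int16(const uint8_t *a, const int16_t *b, float *out, size_t n) {
    for (size_t i = 0; i < n; i++) out[i] = (float)a[i] + (float)b[i];
}
/* ... 24 more such loops for the remaining dtype pairs ... */

static void binary_add(const void *a, dtype_t ta, const void *b, dtype_t tb,
                       float *out, size_t n) {
    switch (ta) {
        case DTYPE_UINT8:
            switch (tb) {
                case DTYPE_INT16: add_uint8_int16(a, b, out, n); break;
                /* ... one case per remaining rhs dtype ... */
                default: break;
            }
            break;
        /* ... one outer case per remaining lhs dtype, each with its own inner switch ... */
        default: break;
    }
}

int main(void) {
    uint8_t a[3] = {1, 2, 3};
    int16_t b[3] = {10, 20, 30};
    float out[3];
    binary_add(a, DTYPE_UINT8, b, DTYPE_INT16, out, 3);
    printf("%g %g %g\n", out[0], out[1], out[2]);
    return 0;
}
```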
There are a number of functions that would not have to support `complex`, e.g., `(arg)min`, `(arg)max`, `(arg)sort`, `median`, `clip`, `fmin`, etc. With these considerations, a rough estimate is 30% extra firmware size, if we add `complex` as a `dtype`.
Before jumping off the deep end, here are a couple of questions:
- Is 30% extra in firmware size worth it? (This is about 30 kB in two dimensions.) Of course, we could add this as an option, selectable via a pre-processor switch, but the code still has to be implemented, and we would have to do something about the documentation: the most-sought feature is the Fourier transform, and if `complex` can be excluded from the firmware, then there would be two tiers, where the behaviour of the `fft` function depends on the switch. That is highly undesirable.
- Could the above-mentioned two feature requests be addressed in a different way, without the introduction of `complex`, as is done, e.g., in `fft`?
- What should happen with the function pointers that provide an optimisation route? (These basically move all operations into the floating-point domain, even if the input is an integer, thereby bypassing the problem of type resolution discussed above. This results in smaller firmware at the expense of execution speed. If `complex` is available, then that is the largest set of numbers, so the function pointed to by the function pointer would have to return a complex number for all types. However, complex numbers can't just be passed to certain math functions in a haphazard manner; that is why `micropython` implements its own `cmath` module. See the sketch after this list.)
- Would it make sense to support `complex` only as a return type, i.e., to allow a function to return a complex array, but not to accept one as an argument, except for `real`, `imag`, and `fft`?
- Along the same lines, is partial support (only some functions) acceptable? Could we say that a binary operator works with real arrays only, and we bail out if an array is complex?
- Are there hardware considerations? Some MCUs don't even have an FPU. Can we expect that complex math works on them? Having to sort out hardware limitations via pre-processor macros would definitely be a show-stopper.
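For the question about function pointers, here is a minimal, hypothetical sketch of that optimisation route (again, illustrative names, not the actual `ulab` source): a per-`dtype` getter converts each element to `float`, so a single loop serves every input type and the quadratic dispatch shown earlier disappears, at the cost of doing everything in floating point.

```c
/* Hypothetical sketch of the function-pointer route (not the actual ulab
 * source): one getter per dtype converts the element to float, so a single
 * loop serves all dtypes and no pairwise type resolution is needed. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <math.h>

static float get_uint8(const void *p, size_t i) { return ((const uint8_t *)p)[i]; }
static float get_int16(const void *p, size_t i) { return ((const int16_t *)p)[i]; }
static float get_float(const void *p, size_t i) { return ((const float *)p)[i]; }

/* A unary function such as sqrt, written once for any input dtype. */
static void apply_sqrt(const void *in, float (*get)(const void *, size_t),
                       float *out, size_t n) {
    for (size_t i = 0; i < n; i++) {
        out[i] = sqrtf(get(in, i));
    }
}

int main(void) {
    int16_t a[3] = {1, 4, 9};
    float out[3];
    apply_sqrt(a, get_int16, out, 3);
    printf("%g %g %g\n", out[0], out[1], out[2]);
    /* If a complex dtype were added, the getters would have to return the
     * widest type, i.e. float complex, and the real-valued sqrtf() above
     * would have to become csqrtf() from <complex.h>; this is exactly why
     * micropython ships a separate cmath module rather than reusing the
     * real-valued math functions. */
    return 0;
}
```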