Are there any plans to support 16-bit float (fp16 and bfloat16) quantization for embeddings? I would assume it's an easier option, since it doesn't require training any codebooks separately, and it gives some headroom for scaling indexes without compromising recall quality.
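To illustrate what I mean, here is a minimal sketch in plain NumPy (synthetic data, made-up sizes): the fp16 "quantization" is just a dtype cast with no training step, storage per dimension drops from 4 bytes to 2, and the recall impact can be checked directly against exact float32 search. bfloat16 isn't a native NumPy dtype, so that path would presumably need an extra dtype library.

```python
import numpy as np

# Hypothetical example: synthetic embeddings and queries.
rng = np.random.default_rng(0)
n, d, n_queries, k = 10_000, 768, 100, 10
base = rng.standard_normal((n, d)).astype(np.float32)
queries = rng.standard_normal((n_queries, d)).astype(np.float32)

# fp16 "quantization" is a plain cast: no codebooks to train,
# and the index footprint is halved.
base_fp16 = base.astype(np.float16)

def topk(q, db, k):
    # Brute-force inner-product search; scores computed in float32
    # either way, only the stored vectors differ in precision.
    scores = q @ db.astype(np.float32).T
    return np.argsort(-scores, axis=1)[:, :k]

exact = topk(queries, base, k)
approx = topk(queries, base_fp16, k)

# Recall@k: fraction of true top-k neighbors kept after the cast.
recall = np.mean([len(set(a) & set(b)) / k for a, b in zip(exact, approx)])
print(f"recall@{k} after fp16 cast: {recall:.4f}")
```

On typical normalized embeddings the recall loss from such a cast tends to be small, which is why it seems like low-hanging fruit compared to codebook-based quantization.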