Hi, I was wondering what precision these models were trained in: float32, float16, or bfloat16? And was mixed-precision training used? Thanks.