16 bit float precision
-
wrote on 6 May 2020, 19:34 last edited by Q139 5 Jun 2020, 19:47
Hi,
I am thinking of using half-precision floats to reduce memory usage, as my software has large vectors of floats. Can I expect faster performance, since fewer data requests go to RAM?
Am I correct to assume that the decimal precision of float16 is about 3-4 decimal places?
Is this the fastest implementation of this data type, or should I look for other libraries?
https://doc.qt.io/qt-5/qfloat16.html
-
wrote on 6 May 2020, 19:55 last edited by
@Q139 said in 16 bit float precision:
Can I expect faster performance, since fewer data requests go to RAM?
I note the docs say:
This implies that any arithmetic operation on a qfloat16 instance results in the value first being converted to a float. This conversion to and from float is performed by hardware when possible, but on processors that do not natively support half-precision, the conversion is performed through a sequence of lookup table operations.
I don't know, does your hardware support half-precision floats?
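In case it helps, here is a minimal sketch of the usual pattern: store as qfloat16, compute in float. This assumes Qt 5.9 or later, where qfloat16 was introduced:

```cpp
// Minimal sketch: store as qfloat16 (2 bytes), compute in float (4 bytes).
// Assumes Qt 5.9 or later, where qfloat16 was introduced.
#include <QtCore/qfloat16.h>
#include <QVector>
#include <QDebug>

int main()
{
    QVector<qfloat16> values(8, qfloat16(1.5f));

    float sum = 0.0f;
    for (qfloat16 v : values)
        sum += float(v); // each operation converts to float first, as the docs say

    qDebug() << "sizeof(qfloat16):" << int(sizeof(qfloat16)) // 2
             << "sum:" << sum;                               // 12
    return 0;
}
```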
-
Hi @Q139,
just to add to @JonB's good answer:
An introduction to the 16-bit half-precision float format is here: https://en.wikipedia.org/wiki/Half-precision_floating-point_format
With an 11-bit significand you get roughly 3 significant decimal digits (log10(2^11) is about 3.3), so your 3-4 digit estimate is in the right ballpark.
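A quick sketch to see the rounding in practice (again assuming Qt 5.9+; the printed value is approximate):

```cpp
// Sketch: watch a float round to the nearest representable half-precision
// value (10 explicit significand bits + 1 implicit).
#include <QtCore/qfloat16.h>
#include <QDebug>

int main()
{
    const float pi = 3.14159265f;
    const qfloat16 h(pi);
    qDebug() << pi << "stored as qfloat16 reads back as" << float(h); // ~3.14062
    return 0;
}
```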
@Q139 said in 16 bit float precision:
Can I expect faster performance, since fewer data requests go to RAM?
The only correct answer to this question is: profile your program and decide afterwards.
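As a rough starting point for such a measurement, something along these lines might do; QElapsedTimer and QVector are real Qt classes, but the element count and the summation workload are arbitrary placeholders, not a claim about your actual access pattern:

```cpp
// Rough profiling sketch: compares summing a large vector of float against
// a vector of qfloat16. The element count is an arbitrary placeholder;
// measure with your real data sizes and access patterns.
#include <QtCore/qfloat16.h>
#include <QElapsedTimer>
#include <QVector>
#include <QDebug>

template <typename T>
static void timeSum(const char *label, const QVector<T> &data)
{
    QElapsedTimer timer;
    timer.start();
    float sum = 0.0f;
    for (const T &v : data)
        sum += float(v); // qfloat16 converts to float for the addition
    // Printing sum keeps the compiler from optimizing the loop away.
    qDebug() << label << "took" << timer.elapsed() << "ms, sum =" << sum;
}

int main()
{
    const int n = 50 * 1000 * 1000; // placeholder size
    timeSum("float   ", QVector<float>(n, 1.0f));
    timeSum("qfloat16", QVector<qfloat16>(n, qfloat16(1.0f)));
    return 0;
}
```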
Regards
-
wrote on 7 May 2020, 06:07 last edited by
Half precision has been supported on GPUs for a while now; I thought it was a relatively new feature on CPUs. However, here is an article from 2012 by Intel explaining hardware support for loading/storing (including conversion of) half-precision floats: https://software.intel.com/content/www/us/en/develop/articles/performance-benefits-of-half-precision-floats.html
So, I guess you can assume half-precision float support on comparatively recent computers.
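If you want to check your own machine, something like this could work with GCC or Clang on x86; note that __builtin_cpu_supports is a compiler builtin, not a Qt API:

```cpp
// Sketch: runtime check for the F16C instruction set, which provides the
// hardware float<->half conversions the qfloat16 docs mention.
// __builtin_cpu_supports is a GCC/Clang builtin (x86 only), not Qt API.
#include <cstdio>

int main()
{
    if (__builtin_cpu_supports("f16c"))
        std::printf("F16C available: conversions run in hardware\n");
    else
        std::printf("No F16C: conversions fall back to lookup tables\n");
    return 0;
}
```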
-
wrote on 10 May 2020, 17:24 last edited by Q139 5 Oct 2020, 17:24
It worked out as a good idea and improved performance in my case (accessing 15+ GB float arrays from RAM).
Probably each RAM access pulls twice as many values into the CPU cache, so the CPU spends less time waiting on RAM.
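For reference, the cache-line arithmetic behind that guess, as a small sketch (the 64-byte line size is the typical x86 value, not measured on my machine):

```cpp
// Sketch of the cache arithmetic: with 2-byte qfloat16 instead of 4-byte
// float, a typical 64-byte cache line carries twice as many elements,
// so the same scan touches half as many lines.
#include <QtCore/qfloat16.h>
#include <cstdio>

int main()
{
    const int cacheLine = 64; // bytes, typical x86 cache line
    std::printf("floats per line:    %zu\n", cacheLine / sizeof(float));    // 16
    std::printf("qfloat16s per line: %zu\n", cacheLine / sizeof(qfloat16)); // 32
    return 0;
}
```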
-
wrote on 12 May 2020, 14:15 last edited by
@Q139 said in 16 bit float precision:
It worked out as a good idea
Nice, are you able to mark your post as solved then? Thanks.