QByteArray and char type
-
@J-Hilk said in QByteArray and char type:
Let me bring even more confusion into this and point to Timur Doumler's excellent talk at CppCon 2019 about type punning, where he outlines that this:
void printBitRepresentation(float f) {
    auto buf = reinterpret_cast<unsigned char*>(&f);
    for (int i(0); i < sizeof(float); i++) {
        std::cout << buf[i];
    }
}
is actually undefined behavior.
https://youtu.be/_qzMpk-22cc?t=2626
Wow, that's wild.
The same kind of thing happens in law -- hence why lawyers have job security!
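Coming back to the technical point: if I remember the talk right, the well-defined alternative it recommends is std::memcpy, i.e. copy the object representation into a real unsigned char array instead of reinterpreting the pointer. A minimal sketch:

#include <cstdio>
#include <cstring>

void printBitRepresentation(float f)
{
    unsigned char buf[sizeof(float)];
    std::memcpy(buf, &f, sizeof(float)); // copying bytes out is always allowed
    for (std::size_t i = 0; i < sizeof(float); i++)
        std::printf("%02x", buf[i]);
    std::printf("\n");
}

int main() { printBitRepresentation(1.0f); }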
-
@JKSH
We'll have to be careful. I realize this discussion could get out of hand; you know more than I do about correct definitions.
What is your detailed definition of a byte?
About twice a "nibble" ;-) Also, if I get a mosquito nibble it doesn't hurt so much, but if I get a mosquito byte it really itches.
In a nutshell, I see, for example, in Python:
Return a new "bytes" object, which is an immutable sequence of small integers in the range 0 <= x < 256
Wikipedia:
The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte
Assuming 8 bits to keep it simple, I have always taken "byte" as meaning an unsigned quantity 0--255, as opposed to a signed one, -128--127. That is the nub. It's just how I see it used elsewhere.
Can you provide a concrete example where you'd want to check that a byte is greater than 200 or whatever? (And I mean a byte, not a number, not an ASCII character)
Nope, nothing practical :) I have an imaginary piece of hardware sending me a stream of byte values. For whatever reason (the joystick is faulty in one direction), I wish to ignore the ones larger than 200. I don't want to worry about casting/sign extension.
QByteArray b; if (b.at(0) > 200) ...
Does unsigned char fit your definition in #1?
Yep. And I don't have to worry about sign!
Does std::byte fit your definition in #1?
It does when I don't look at the content. It's a bit useless when I do want to look at it (as I have to cast all over the place), so all in all it turns out it's a bit like a quantum object :)
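As a minimal sketch of the "cast all over the place" point (reusing the 200 threshold from my joystick example): std::byte has no arithmetic or comparison with integers, so every look at the content goes through std::to_integer or an explicit cast.

#include <cstddef>
#include <iostream>

int main()
{
    std::byte b{0xDC}; // 220

    // Won't compile: if (b > 200) ...
    if (std::to_integer<unsigned>(b) > 200)
        std::cout << "faulty reading\n";
}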
Do you think in common parlance that a "byte" implies a value between 0--255 (just assume 8-bit)? Or does it just as much suggest -128--127 to you?
-
@JonB said in QByteArray and char type:
Do you think in common parlance that a "byte" implies a value between 0--255 (just assume 8-bit)? Or does it just as much suggest -128--127 to you?
Byte doesn't imply a value per se, it's a storage unit. The same goes if you talk about a Word: depending on your architecture, a word may be of a different size (usually one defines the word through the register's width). The punchline is that we've used these terms so interchangeably through the years for integers of a specific width that it became ubiquitous to equate them; hence they defined the qbit (albeit it's still a regular bit) for the quantum bit.
PS. If you're wondering: in information theory, a bit is the atom (in the sense of being the smallest distinguishable, indivisible piece) of information.
-
@fcarney
Yes, a byte = 8 bits.
The problem is whether you are going to treat that as unsigned char or signed char, because if you are going to be performing mathematical operations on them, the sign matters. If it is just text, it does not matter (see the sketch below).
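A minimal illustration of "the sign matters" (assuming a platform where plain char is signed, which is common but not guaranteed):

#include <cstdio>

int main()
{
    const char          c = char(0x90); // -112 where plain char is signed
    const unsigned char u = 0x90;       // always 144

    // Same bit pattern, different arithmetic results:
    std::printf("%d %d\n", c / 2, u / 2); // "-56 72" on signed-char platforms
}

-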
@JonB said in QByteArray and char type:
Nope, nothing practical :) I have an imaginary piece of hardware sending me a stream of byte values. For whatever reason (the joystick is faulty in one direction), I wish to ignore the ones larger than 200. I don't want to worry about casting/sign extension. QByteArray b; if (b.at(0) > 200) ....
This is wrong (as QByteArray::at() will return a signed value):
QByteArray b = <something>; if (b.at(0) > 200) ....
This is the right way to do it:
QByteArray b = <something>; if (quint8(b.at(0)) > 200) ....
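A self-contained sketch of the difference, using 0xDC (220 unsigned, -36 as a signed char) as the sample byte:

#include <QByteArray>
#include <QDebug>

int main()
{
    QByteArray b("\xDC", 1); // one byte

    // at() returns char; where plain char is signed this compares -36 > 200.
    qDebug() << (b.at(0) > 200);         // false on signed-char platforms

    // Reinterpreted as unsigned, the comparison is 220 > 200.
    qDebug() << (quint8(b.at(0)) > 200); // true everywhere
}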
Just my 2 cts,
-
@stretchthebits said in QByteArray and char type:
The problem is whether you are going to treat that as unsigned char or signed char, because if you are going to be performing mathematical operations on them, the sign matters. If it is just text, it does not matter.
Or as I'd said:
Byte doesn't imply a value per se, it's a storage unit.
Take 4 consecutive bytes in memory: does that imply a value between -2^31 and 2^31 - 1? Surely not, you can have at least several separate interpretations off the top of my head (packed struct assumed): int, unsigned int, struct { short a, b; }, char x[4], and so on. All of this is four bytes, and it's the same for the single byte; the interpretation is not tied to the actual storage, strictly speaking.
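For instance, a quick sketch of those reinterpretations (assuming 32-bit int and 16-bit short; memcpy keeps the aliasing rules happy):

#include <cstdio>
#include <cstring>

int main()
{
    const unsigned char raw[4] = {0x01, 0x02, 0x03, 0x04};

    int i;
    unsigned int u;
    struct { short a, b; } s;
    char x[4];

    std::memcpy(&i, raw, sizeof raw);
    std::memcpy(&u, raw, sizeof raw);
    std::memcpy(&s, raw, sizeof raw);
    std::memcpy(x, raw, sizeof raw);

    // On a little-endian machine: 04030201 04030201 0201 0403
    std::printf("%08x %08x %04hx %04hx\n", unsigned(i), u, s.a, s.b);
}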
@KroMignon said in QByteArray and char type:
This is the right way to do it:
QByteArray b = <something>; if (quint8(b.at(0)) > 200) ....
I suggest:
if (quint8(b.at(0)) > quint8(200))
so you don't get the value promoted to int for no good reason.
-
@kshegunov said in QByteArray and char type:
I suggest:
if (quint8(b.at(0)) > quint8(200))
so you don't get the value promoted to int for no good reason.
I don't see an issue with if (quint8(b.at(0)) > 200), but if (b.at(0) > 200) is wrong and will never work.
-
@stretchthebits said in QByteArray and char type:
The problem is whether you are going to treat that as unsigned char or signed char.
I am going to treat it as whatever storage type I need. I will cast it to what is needed for that particular piece of code. Is this discussion about having to cast the pointer? I do casting all the time, from base objects to derived types; how is this any different? I am not even promoting the type, just saying it's unsigned char* now. Why is this an issue?
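A quick sketch of that pointer cast over a QByteArray (filterReadings and the > 200 check are just the joystick example again, not anyone's real API):

#include <QByteArray>

void filterReadings(const QByteArray &b)
{
    // Same storage viewed as unsigned char: no copy, no promotion.
    const unsigned char *p =
        reinterpret_cast<const unsigned char *>(b.constData());

    for (int i = 0; i < b.size(); ++i) {
        if (p[i] > 200) {
            // ignore the faulty reading
        }
    }
}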
-
@KroMignon said in QByteArray and char type:
I don't see an issue with if (quint8(b.at(0)) > 200), but if (b.at(0) > 200) is wrong and will never work.
It will work, of course, and the compiler is smart enough to optimize it out, it appears. In C/C++ this return value should've been promoted to int, as 200 is an int literal, but I didn't take into account that the ax registers are already integers, so this is going to be pruned when optimizing. Note the finer details here: https://godbolt.org/z/6hb8bv
-
@KroMignon said in QByteArray and char type:
This is wrong (as QByteArray::at() will return a signed value)
I know it doesn't work, that's why I wrote it. This whole thread is (supposed to be) a discussion of why that is the case in something named a QByteArray.
-
@fcarney said in QByteArray and char type:
The definition of byte is that it is 8 bits
No, it's not. A byte is the smallest unit addressable by the CPU.
On most architectures it is 8 bits, but not on all.
https://en.wikipedia.org/wiki/Byte
-
@JonB said in QByteArray and char type:
I know it doesn't work, that's why I wrote it. This whole thread is (supposed to be) a discussion of why that is the case in something named a QByteArray.
As noted, it is called QByteArray and not QUnsignedByteArray or QSignedByteArray, so there is nothing in the name which implies signed or unsigned.
And I find returning signed octets a natural choice, because when you use char, short or long in your code, they are by default signed. You always have to specify unsigned to get an unsigned value.
It also makes sense to me because QByteArray is designed to work in combination with strings, which are signed chars.
@jsulm You are right, "byte" at the beginning was not the definition of a data structure, but for decades byte and octet have had the same meaning in the programming world.
-
@KroMignon said in QByteArray and char type:
but for decades byte and octet have had the same meaning in the programming world
Yes, but there is no "official" specification that it always has to be 8 bits. It is a "de facto standard".
-
@jsulm said in QByteArray and char type:
Yes, but there is no "official" specification that it always has to be 8 bits. It is a "de facto standard".
Yes, I agree with you, but, as so often, it is the "de facto standard" which prevails.
When you look at many binary protocol specifications, in most cases "byte" is used instead of "octet".
It is wrong, but it is the reality.
-
@KroMignon said in QByteArray and char type:
And I find returning signed octets a natural choice, because when you use char, short or long in your code, they are by default signed.
Note that char may be signed or unsigned; this is implementation-defined.
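If you want to check what your toolchain picked, a quick sketch:

#include <climits>
#include <iostream>
#include <limits>

int main()
{
    // Both lines report whether plain char is signed on this platform.
    std::cout << std::numeric_limits<char>::is_signed << '\n';
    std::cout << (CHAR_MIN < 0) << '\n';
}

-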
@KroMignon said in QByteArray and char type:
As noted, it is called QByteArray and not QUnsignedByteArray or QSignedByteArray, so there is nothing in the name which implies signed or unsigned.
That is what this thread is about. I have offered a couple of examples (I could have sought out more) which I believe illustrate that, in common parlance and in other programming languages/libraries, the word "byte" does imply unsigned. The examples quoted a range of "0--255" where they might equally well have quoted "-128--127", but in practice they did not.
Maybe that's my opinion, or the opinion of some, but not shared by others.
At which point we have probably exhausted the debate.
-
OK, we'll stick with 1 byte == 8 bits for simplicity
@JonB said in QByteArray and char type:
In a nutshell, I see for example in Python
Return a new "bytes" object, which is an immutable sequence of small integers in the range 0 <= x < 256
...
I have always taken "byte" as meaning an unsigned quantity 0--255, as opposed to a signed one, -128--127. That is the nub. It's just how I see it used elsewhere.
...
Do you think in common parlance that a "byte" implies a value between 0--255 (just assume 8-bit)? Or does it just as much suggest -128--127 to you?
We both agree that a byte should not be treated as a signed number -128 -- 127.
After this discussion and after some extra reading, I realize now that it's common for a byte to be treated as an unsigned number 0 -- 255.
I understand now that your definition of a byte is "an unsigned 8-bit integer". In this light, your original post makes sense: char is not a suitable datatype to store an unsigned 8-bit integer, and I agree with you on this point.
Personally though, I prefer to think of a byte as an 8-bit blob of data, distinct from an 8-bit number. That's why I have no problem with QByteArray storing chars -- because the signedness of the implementation has no effect on the meaning of the blob. It only affects people who want to implicitly convert the blob into a number (which you do).
There is no unanimous consensus, however:
Language | Byte-ish Datatype | What is it?
--- | --- | ---
C | unsigned char | unsigned 8-bit integer
C++ | unsigned char | unsigned 8-bit integer
C++ | std::byte | 8-bit blob
C# | byte | unsigned 8-bit integer
Go | byte | unsigned 8-bit integer
Java | byte | signed 8-bit integer
JavaScript | (element of an ArrayBuffer) | 8-bit blob
Python | (element of a bytes-like object) | unsigned 8-bit integer
R | (element of a raw vector) | 8-bit blob
Swift | (element of Data) | unsigned 8-bit integer
Visual Basic | Byte | unsigned 8-bit integer
Web IDL | byte | signed 8-bit integer
Web IDL | octet | unsigned 8-bit integer

(4 of the languages above don't let you create a singular variable with a byte type; the bytes always come in an array, and extracting a byte involves conversion.)
What is your detailed definition of a byte?
About twice a "nibble" ;-) Also, if I get a mosquito nibble it doesn't hurt so much, but if I get a mosquito byte it really itches.
Haha, good one!
-
@kshegunov said in QByteArray and char type:
... In C/C++ this return value should've been promoted to int, as 200 is an int literal, but I didn't take into account that the ax registers are already integers, so this is going to be pruned when optimizing.
Again, I don't see any issue here: as the value is unsigned, promoting it to int will not propagate the sign bit.
Suppose b.at(0) == 0x81, which is 129 in base 10 when unsigned, or -127 in base 10 when the value is signed. If promoted to a 32-bit int, the unsigned value stays 0x00000081 (129), while the signed value is sign-extended to 0xFFFFFF81 (-127).
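A minimal sketch of that promotion, printing both bit patterns:

#include <cstdio>

int main()
{
    const unsigned char u = 0x81; // 129
    const signed char   s = -127; // same bit pattern, 0x81

    const int from_unsigned = u;  // zero-extended
    const int from_signed   = s;  // sign-extended

    std::printf("%d 0x%08x\n", from_unsigned, unsigned(from_unsigned)); // 129 0x00000081
    std::printf("%d 0x%08x\n", from_signed, unsigned(from_signed));     // -127 0xffffff81
}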