
Converting bitset to signed integer



  • Hi,

    I use a generic 32-bit bitset to perform some bit manipulation, before converting the bitset into a usable integer. The process works as intended for unsigned integers, but I'm unable to get the desired result if the intended output integer is signed.

    I use a 32-bit bitset as the data will never be larger than 32 bits, but it can be anywhere from 1 bit up to 32 bits. Code as follows:

    //Headers needed for this snippet
    #include <bitset>
    #include <cstdint>
    #include <QDebug>
    
    //Let's use the hexadecimal value 0xE008 here as an example
    std::bitset<32> bitset(0xE008);
    
    //Converting this to a uint32_t works as expected
    //(note: 'unsigned' and 'signed' are C++ keywords, so they can't be used as variable names)
    uint32_t unsignedValue = bitset.to_ulong();
    
    qDebug() << "unsigned expected: 57352";
    qDebug() << "unsigned actual: " << unsignedValue;
    
    //Attempts to instead convert it to a signed integer don't work as expected:
    int32_t signedValue = static_cast<int32_t>(bitset.to_ulong());
    
    qDebug() << "signed expected: -8184";
    qDebug() << "signed actual: " << signedValue;
    
    

    The resulting debug output:

    unsigned expected: 57352
    unsigned actual: 57352
    signed expected: -8184
    signed actual: 57352
    

    I've read a couple of threads regarding the use of std::bitset with signed integers but haven't made any progress. Is there something simple I'm missing here?



  • I think 0xE008 only equals -8184 when it is interpreted as a signed 16-bit integer?



  • Ah, of course, you're right.

    I just changed the signed variable from int32_t to int16_t and it gives -8184 as expected.

    I guess I need to check whether the unsigned value is greater than 32,767 (0x7FFF), and if so, set the most significant bits accordingly to make the value negative.

    For example:

    If the value is 0x7FFF (32,767), it can be converted to an int32_t directly, as shown above. However, if the value is 0x8000 (32,768), the upper 16 bits need to be set to 1 so that the resultant int32_t is -32768 as expected.
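
    A rough sketch of what I mean (untested, and assuming the raw value always sits in the low 16 bits of the bitset):

    //Manually sign-extend a 16-bit value held in a 32-bit bitset
    std::bitset<32> bitset(0xE008);
    uint32_t raw = static_cast<uint32_t>(bitset.to_ulong());
    
    int32_t result;
    if (raw > 0x7FFF) {
        //The 16-bit sign bit is set, so fill the upper 16 bits with 1s
        result = static_cast<int32_t>(raw | 0xFFFF0000u);
    } else {
        //The value is in the positive 16-bit range, so a straight cast is fine
        result = static_cast<int32_t>(raw);
    }
    
    qDebug() << "sign-extended result:" << result; //-8184 for 0xE008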

    Thanks for pointing me in the right direction!



  • @jars121
    I don't fully understand you, but I think you can just convert it to a signed 16-bit integer and then assign that 16-bit value to a 32-bit integer.
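
    Something like this (just a quick sketch, not tested):

    //Cast to int16_t first, then let the assignment sign-extend to 32 bits
    std::bitset<32> bitset(0xE008);
    int16_t value16 = static_cast<int16_t>(bitset.to_ulong());
    int32_t value32 = value16; //implicit sign extension
    
    qDebug() << value32; //prints -8184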



  • Yes, you're right. I suppose I was just considering instances where the value is wider than 16 bits, in which case some additional manual processing would be required.
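
    In case it's useful, a rough sketch of how that manual processing might look for an arbitrary bit width (the signExtend name and the bit-width parameter are just illustrative, and this isn't tested):

    //Sign-extend an N-bit value stored in a 32-bit bitset to int32_t.
    //'bits' is the width of the stored value (1 to 32).
    int32_t signExtend(const std::bitset<32> &bitset, unsigned bits)
    {
        uint32_t raw = static_cast<uint32_t>(bitset.to_ulong());
        if (bits < 32 && (raw & (1u << (bits - 1)))) {
            raw |= ~((1u << bits) - 1u); //fill everything above the value's sign bit with 1s
        }
        return static_cast<int32_t>(raw);
    }
    
    //Example: signExtend(std::bitset<32>(0xE008), 16) gives -8184,
    //         signExtend(std::bitset<32>(0xE008), 32) gives 57352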

