
Converting bitset to signed integer

jars121 (#1)

Hi,

I use a generic 32-bit bitset to perform some bit manipulation before converting the bitset into a usable integer. The process works as intended for unsigned integers, but I'm unable to get the desired result when the intended output integer is signed.

I use a 32-bit bitset as the data will never be larger than 32 bits, but it can be anywhere from 1 to 32 bits. Code as follows:

    #include <bitset>
    #include <cstdint>

    #include <QDebug>

    //Let's use the hexadecimal value 0xE008 here as an example
    std::bitset<32> bitset(0xE008);

    //Converting this to a uint32_t works as expected:
    uint32_t unsignedValue = bitset.to_ulong();

    qDebug() << "unsigned expected: 57352";
    qDebug() << "unsigned actual: " << unsignedValue;

    //Attempts to instead convert it to a signed integer don't work as expected:
    int32_t signedValue = static_cast<int32_t>(bitset.to_ulong());

    qDebug() << "signed expected: -8184";
    qDebug() << "signed actual: " << signedValue;

The resulting debug output:

    unsigned expected: 57352
    unsigned actual: 57352
    signed expected: -8184
    signed actual: 57352

I've read a couple of threads regarding the use of std::bitset with signed integers but haven't made any progress. Is there something simple I'm missing here?


Bonnie (#2)

I think 0xE008 only equals -8184 when interpreted as a signed 16-bit integer?
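
A minimal sketch illustrating this point (variable names are illustrative, not from the thread): the same bit pattern 0xE008 is positive when read as a 32-bit signed integer, but negative once truncated to 16 bits.

    #include <cstdint>

    #include <QDebug>

    int main()
    {
        uint32_t raw = 0xE008;   // bit pattern: 1110 0000 0000 1000

        // Bit 31 (the 32-bit sign bit) is 0, so the value stays positive:
        qDebug() << static_cast<int32_t>(raw);   // 57352

        // Bit 15 (the 16-bit sign bit) is 1, so the truncated value is negative:
        qDebug() << static_cast<int16_t>(raw);   // -8184
        return 0;
    }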

jars121 (#3)

Ah, of course, you're right.

I just changed 'signedValue' from int32_t to int16_t and it gives -8184 as expected.

I guess I need to check whether the unsigned value is greater than 32767 and, if so, set the left-most (most significant) bits accordingly to convert the value to a negative.

For example:

If the value is 0x7FFF (32,767), it can be converted to a uint32_t as shown above. However, if the value is 0x8000 (32,768), set the left-most 16 bits to 1 and the resulting int32_t will be -32768 as expected.
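
A minimal sketch of that manual approach, assuming the value of interest is 16 bits wide:

    #include <bitset>
    #include <cstdint>

    #include <QDebug>

    int main()
    {
        std::bitset<32> bits(0x8000);
        uint32_t raw = bits.to_ulong();

        // If bit 15 (the 16-bit sign bit) is set, fill the upper 16 bits
        // with 1s so the 32-bit value represents the same negative number:
        if (raw > 0x7FFF) {
            raw |= 0xFFFF0000u;
        }

        qDebug() << static_cast<int32_t>(raw);   // -32768
        return 0;
    }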

Thanks for pointing me in the right direction!


Bonnie (#4)
@jars121
I don't fully understand you, but I think you can just convert it to a signed 16-bit integer and then assign the 16-bit one to a 32-bit integer.
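
A sketch of what Bonnie suggests: once the value is narrowed to int16_t, the implicit conversion to int32_t sign-extends it automatically, with no manual bit setting needed.

    #include <bitset>
    #include <cstdint>

    #include <QDebug>

    int main()
    {
        std::bitset<32> bits(0xE008);

        // Narrow to a signed 16-bit value first (-8184)...
        int16_t narrow = static_cast<int16_t>(bits.to_ulong());

        // ...then the assignment to a 32-bit integer sign-extends:
        int32_t wide = narrow;

        qDebug() << wide;   // -8184
        return 0;
    }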

jars121 (#5)

Yes, you're right. I was just considering instances where the value is wider than 16 bits, in which case some additional manual processing would be required.
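
For the general case left open here (a signed value of any width from 1 to 32 bits), a sketch of a sign-extension helper; the function name, and the assumption that the bit width is known at the call site, are illustrative rather than from the thread:

    #include <bitset>
    #include <cstdint>

    #include <QDebug>

    // Sign-extend the low `width` bits of `raw` to a full int32_t.
    // Assumes 1 <= width <= 32.
    int32_t signExtend(uint32_t raw, unsigned width)
    {
        const uint32_t signBit = 1u << (width - 1);
        if (raw & signBit) {
            raw |= ~(signBit | (signBit - 1));   // set every bit above `width`
        }
        return static_cast<int32_t>(raw);
    }

    int main()
    {
        std::bitset<32> bits(0xE008);
        qDebug() << signExtend(bits.to_ulong(), 16);   // -8184
        qDebug() << signExtend(bits.to_ulong(), 32);   //  57352
        return 0;
    }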
