Linux - Installing libs via "make install" versus linking to build folder
-
wrote on 22 Jul 2016, 17:17 last edited by Yosemite
I have started developing on Linux with Qt 5.7. I am familiar with installing new libraries to the system PATH by using make/make install. I have also imported a CMake project in Qt Creator, built it, and then linked to the library in the resulting build directory.
Forgive my ignorance, but are there any implications to choosing one way or the other, beyond the location of the lib? I would be inclined to say no, but I don't feel like I am experienced enough to say for sure.
-
@Yosemite
Hello,
I think you're confusing linking with loading. When you compile your program (or library), providing a correct link line (i.e. linking against the dependencies) is needed to tell the linker where it can find the undefined symbols it encounters. On Windows you have a special "import library" for that and you don't even need the actual binary (the dll) to do the linking. On Linux this information is taken from the ELF binary itself.
When you ask for your application to be executed (on any platform), a special program (the loader) loads the code into memory. It also loads the dependencies one by one, and if any dependency has dependencies of its own, those are loaded as well. So what does loading consist of? Simply copying the relevant code from disk to memory, then going through each unresolved symbol and putting the correct address into it (i.e. resolving it). This is done for each dynamic library you use.
For example, suppose you use QApplication::QApplication(int &, char **) - the application object's constructor in Qt. When you compile your program, the compiler puts a "reference" to that method when generating the translation units; the same goes for anything you use that's not in your cpp file. After that the linker is run to generate the final binary image. The linker takes the whole bunch of object files and puts them together, then goes over each "reference" and tries to find it somewhere in the image. If it's in the image, the linker puts an address into the reference and voila, everything is okay. If it doesn't find it, you need to provide an external source to search - the external libraries you supply on the link line. Now, because the addresses of symbols are determined only after the library is loaded into memory, the linker doesn't actually put an address into the external unresolved symbol; it only does the checking (name, arguments, etc.). The actual resolution (assigning addresses to external symbols) is done by the loader (which is also called the "dynamic linker" on Linux). So, going back to the QApplication constructor, you'll get a valid address only after the loader has mapped the QtWidgets.so library and resolved the symbols.
Kind regards.
-
wrote on 24 Jul 2016, 00:12 last edited by
Thank you for the reply! I see that my question was ambiguous, so I will try to be more precise.
I am talking about the choice of where the third party libraries actually reside once they are built. That is to say, I could choose to build them and provide my application a path to the library in place, without running "make install". I could also choose to build, then run "make install" and use the system default path when I link to it. I was wondering if there were any material difference between these two choices.
After a bunch of playing around I realized that running "make install" after building, so that the libraries are placed in default locations, makes for less work when you have other libraries to build that will link to the earlier ones. This was obvious after I started building a series of dependent libraries, but it was something I hadn't thought of before I began. Otherwise, does running "make install" do anything important?
-
Well, my previous answer does seem silly in that context. :)
I am talking about the choice of where the third party libraries actually reside once they are built. That is to say, I could choose to build them and provide my application a path to the library in place, without running "make install". I could also choose to build, then run "make install" and use the system default path when I link to it. I was wondering if there were any material difference to these two choices.
Usually installing in the system location is preferred to keeping a local copy - the whole point of a "shared library" is to be shared, right? Additionally, this way the OS can do a bit of magic sharing the mapped images between processes. That said, the disadvantages are that you may overwrite a newer version of the same library and consequently make some other program fail to run, and that you pollute the system location (i.e. no package management tool is tracking your library).
Depending on the supported distributions and the 3rd party libraries involved, I'd actually always opt for a third option - not distributing the said libraries at all. I'd require the user to pull them from the distribution's repository, and I would build/test with the version available there.
Suppose I want to distribute a Qt application (I'm using Debian, so the names reflect that). I would not distribute Qt's binaries at all; instead I'd pull the libqt5core5, libqt5gui5 and libqt5widgets5 packages (if I'm using the core, gui and widgets modules) on the client machine. For development I'd pull on my dev machine, in addition to the mentioned packages, the dev bundles - qtbase5-dev providing the headers and qtbase5-dbg providing the debug information.
I'd do that for any library that's available in the distribution's repository. This all works great when the libraries are binary compatible (like Qt); if that's not the case, see below.
If the library is not available in the repository (or isn't binary compatible), then I'd usually go the "local copy" way, because I can ensure the file is removed when the application is uninstalled (i.e. I don't pollute the user's default library path) and because I know the version of the library is compatible with my application.
Otherwise, does running "make install" do anything important?
Depends on the makefile, but it usually only copies the binaries to the default library path (ordinarily /usr/lib).
Kind regards.
-
wrote on 25 Jul 2016, 16:58 last edited by
Not silly! Still information worth having. :)
And your second answer was just the ticket. Thanks for your help.
-
@Yosemite
You're welcome.
Cheers!