Most large projects use a build process to generate their distribution files. The typical steps are: configure, create dependencies, build tools, build executables and documentation, and install and deploy.
The first step configures the software options and determines exactly what is to be built. The second step creates a dependency graph, which specifies the correct order in which the components of the project are to be built. The third step builds the project-specific tools that are used later in the build process. In the fourth step, the project-specific tools built in the third step, together with the standard tools, are used to preprocess, compile, and link the project executables; at the same time, the documentation is generated in its final form. Finally, the resulting executables and documentation are installed on the target system or prepared for larger-scale deployment and distribution.
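The five steps above can be sketched as a small driver script. The function and step names here are illustrative placeholders, not a real build system; a real pipeline would invoke a configure script, compilers, documentation generators, and an installer at each stage.

```python
# A minimal, hypothetical sketch of the five-step build pipeline.
# Each step is a stub that only records its name; a real build would
# run external tools at each stage.

def run_pipeline():
    log = []
    steps = [
        ("configure", "select options and decide what to build"),
        ("create dependencies", "compute the build order"),
        ("build tools", "build project-specific helper tools"),
        ("build executables & documentation", "compile, link, generate docs"),
        ("install & deploy", "copy results to the target system"),
    ]
    for name, description in steps:
        log.append(name)  # stand-in for actually running the step
    return log

print(run_pipeline())
```

The point of the sketch is only the ordering: each step consumes what the previous steps produced, so the sequence is fixed.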
"By far the most intricate part of a build process concerns the definition and management of the project's dependencies. These specify how different project parts depend on each other and, therefore, the order in which the project's parts are to be built." This is quite right. Typically, the distribution files depend on the executables and documentation, as well as on some libraries and components. The documentation depends on its source documents, while the executables depend on object files and on libraries and components, which in turn depend on object files of their own. Object files, of course, depend on source files, and on header files if the project is written in C or C++.
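The dependency relationships described above form a directed graph, and a valid build order is any ordering in which every part appears after everything it depends on, i.e. a topological sort. A minimal sketch, with the graph hard-coded from the description (no cycle detection, which a real tool would need):

```python
# Hypothetical dependency graph for the build described in the text:
# each key depends on the items in its list.
deps = {
    "distribution": ["executables", "documentation", "libraries"],
    "documentation": ["source documents"],
    "executables": ["object files", "libraries"],
    "libraries": ["object files"],
    "object files": ["source files", "header files"],
    "source documents": [],
    "source files": [],
    "header files": [],
}

def build_order(graph):
    """Depth-first topological sort: dependencies come first.

    Assumes the graph is acyclic; a real build tool would also
    detect and report dependency cycles.
    """
    order, done = [], set()

    def visit(node):
        if node in done:
            return
        for dep in graph.get(node, []):
            visit(dep)
        done.add(node)
        order.append(node)

    for node in graph:
        visit(node)
    return order

order = build_order(deps)
print(order)
```

Any node's dependencies are guaranteed to precede it in the resulting list, so building the parts in this order never references something that has not been built yet.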
It is not easy to derive a dependency graph from a concrete build process by hand, or vice versa. However, there are tools that manage this automatically; well-known examples include make, ant, and others.
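At the core of make's dependency handling is a simple rule: a target must be rebuilt if it does not exist, or if any of its prerequisites was modified more recently than the target. A minimal sketch of that rule (real make adds much more, such as implicit rules and pattern matching):

```python
import os

def out_of_date(target, prerequisites):
    """Return True if `target` must be (re)built.

    Mirrors make's core timestamp rule: rebuild when the target is
    missing, or when any prerequisite file is newer than it.
    """
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(p) > target_mtime for p in prerequisites)
```

Applied recursively over the dependency graph, this check is what lets make rebuild only the parts of a project affected by a change instead of everything.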