makefile(.abi) vs. autoconf/automake/libtool/etc.

Subject: makefile(.abi) vs. autoconf/automake/libtool/etc.
From: Paul Rohr
Date: Thu Feb 15 2001 - 13:03:11 CST

It looks like we have three kinds of folks on this list:

  A. people who understand (and like) our diving Makefile system
  B. people who like (and perhaps understand) the auto* system
  C. masochists willing to maintain parallel tool-specific build environments

I get the impression that these are disjoint sets, so to help out, here's an
incomplete translation aid, from the perspective of our current build
system. I'd love feedback and corrections from folks who grok the other
approaches better than I do.

Since we're talking about build systems, I've arbitrarily chosen the
following criteria for comparison purposes. Feel free to propose others,
along with an explanation of how they're handled in at least one of these
systems.

1. platform support, including ease of adding new ones
2. toolchain required
3. build targets
4. dependencies
5. ease of maintenance in abi tree
6. ease of maintenance in peer modules
7. rebuild speed
8. full build speed

A. diving Makefiles
The current diving make build system for AbiWord was designed by Jeff, a
Makefile guru, as a streamlined variant of the kind of recursive ("diving")
make system used on large multi-platform projects.

1. platform support
(strength) Runs using gmake and native compilers on every supported
platform except legacy Macs. All configuration info for platform-specific
tools is expressed using Makefile syntax in the following compact stubs:


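To give a feel for the idea, a platform stub of this sort might look like
the sketch below. The variable names are illustrative, not the actual
contents of the abi tree's *.mk files:

```makefile
# hypothetical platform stub, e.g. platforms/linux.mk
# (names invented for illustration)
OS_NAME    := LINUX
CC         := gcc
CXX        := g++
CFLAGS     += -Wall
RANLIB     := ranlib
```

Each platform gets one such stub; porting to a new platform means writing
one more.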
Note that because we use a carefully-constrained subset of the standard C
library (more or less ANSI), anything beyond that gets wrapped with our
own util functions here:

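For instance, a wrapper of this flavor might look like the sketch below.
The ut_stricmp name is hypothetical (chosen for illustration);
case-insensitive comparison is a classic example of a useful call that
isn't in ANSI C and is spelled differently on different platforms:

```c
#include <ctype.h>

/* Hypothetical portability wrapper: case-insensitive string compare.
 * strcasecmp/stricmp vary by platform, so an XP utility layer supplies
 * one well-defined spelling built only from ANSI calls. */
int ut_stricmp(const char *a, const char *b)
{
    int ca, cb;
    do {
        ca = tolower((unsigned char)*a++);
        cb = tolower((unsigned char)*b++);
    } while (ca != 0 && ca == cb);
    return ca - cb;   /* <0, 0, or >0, like strcmp */
}
```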

This separation of platform-specific weirdness (tools vs. libraries) is
worth noting.

2. toolchain required
(strength) Requires only gmake, sed, and a few other small tools. Most of
these have command-line analogs on other platforms, and any syntax
differences are easily learned.

3. build targets
(strength) Allows generation of multiple build variants in the same,
unmodified tree -- just run make with different environment settings, and
you get different build targets here.


Thus, to clean the tree, you just have to prune those directories.
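A minimal sketch of the idea, with invented variable names (the real
tree's settings differ):

```makefile
# hypothetical sketch: route objects into a per-variant directory so
# debug and release builds coexist in one unmodified source tree
OS_NAME := $(shell uname -s)
ABI_OPT ?= 0
VARIANT := $(if $(filter 1,$(ABI_OPT)),OBJ,DBG)
OBJDIR  := $(OS_NAME)_$(VARIANT)
```

Cleaning a variant is then just a recursive delete of that one directory.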

4. dependencies
(weakness) Doesn't attempt to track dependencies.

5. ease of maintenance in abi tree
(strength) As mentioned above, supporting new platforms consistently is
easy, and the work scales appropriately.

To add new files to the tree requires minimal Makefile maintenance at the
appropriate nodes of the tree. Most of the work happens by including common
*.mk stubs, so adding each new file is usually a one-line change. Each new
directory added requires a Makefile at that node, plus a reference in the
Makefile one level up. Again, the work scales appropriately.
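The shape of a typical leaf Makefile, with hypothetical file and stub
names, is roughly:

```makefile
# hypothetical leaf Makefile: list the sources, then pull in shared logic
OBJS = ut_string.o ut_vector.o ut_hash.o   # adding a file = one line here
include $(ABI_ROOT)/src/config/abi_rules.mk
```

All the real rule logic lives in the shared *.mk stub, so leaf Makefiles
stay tiny.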

6. ease of maintenance in peer modules
(mixed) The strength is that by adding a single XP Makefile.abi to the peer
module, we can guarantee that compatible object files and libraries are
dropped into our build system at the appropriate spots, without otherwise
affecting the source trees of the peer modules in any way.

In short, this means that peer modules inherit advantages #1, 3, and 5
above. Plus which, we can choose to build only the portions of peer modules
that we need, in very different ways than the original maintainers intended,
without affecting the integrity of their stuff.

The weakness is that those Makefile.abi files aren't usually maintained by
the owners of the respective modules. Thus, upstream changes to add or drop
files need to get mirrored in Makefile.abi by one of us. The work scales
appropriately, but it's annoying.

7. rebuild speed
(strength) Because this is a diving make system, rebuilds can be localized
by diving to the appropriate level of the tree and doing the appropriate
make variants (tidy, clean, realclean) there.
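The three cleanup depths might be sketched like this (target names from
the text above; the recipe details are invented):

```makefile
# hypothetical recipes for the three cleanup variants, shallow to deep
tidy:                       # remove editor droppings and cores
	$(RM) *~ core
clean: tidy                 # also remove this node's objects
	$(RM) -r $(OBJDIR)
realclean: clean            # also remove generated libs and binaries
	$(RM) -r $(LIBDIR) $(BINDIR)
```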

The scale factors are nice here, because this mirrors and reinforces the
modularity of the code. Localized API changes which only affect a small
part of the tree can be rebuilt quickly. API changes which affect the
entire tree require massive clean rebuilds of the tree (and usually get
mentioned as such during commits).

8. full build speed
(unknown) Most of the overhead of a diving make system comes from
repeatedly invoking make on yet another Makefile stub which includes the
same sets of common logic. A potential downside is that much of the time
is spent not triggering make rules, but doing sed calculations, etc. to
reestablish the path-relative build environment for yet another node of the
tree.
Still, the real test here is to do head-to-head comparisons.

B. autoconf + automake + libtool
I'm starting to understand how this whole paradigm is supposed to work, but
there may be plenty that I'm missing.

Autoconf and friends are unix-centric tools that do a lot of shell-scripting
magic to help abstract out the details of various platform-specific kinds of
weirdness -- historically, there was a *ton* of gratuitous vendor-specific
"differentiation" in the old-style Unix world -- and construct makefiles
that should build properly on those platforms.
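For readers in camp A, the minimal shape of an auto* setup is roughly the
following pair of input files (a generic sketch, not a proposed AbiWord
configuration):

```
# configure.in (autoconf input: m4 + shell)
AC_INIT(src/main.c)
AM_INIT_AUTOMAKE(myapp, 0.1)
AC_PROG_CC
AC_OUTPUT(Makefile)

# Makefile.am (automake input, one per directory)
bin_PROGRAMS = myapp
myapp_SOURCES = main.c util.c
```

autoconf turns configure.in into a portable configure script; automake
turns Makefile.am into Makefile.in, which configure then instantiates as a
real Makefile.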

1. platform support
(mixed) These tools are quite widely used, and somewhat well-understood, in
various Unix communities -- some more than others. They're used to abstract
out *both* of the following sources of platform variation:

  - toolchain stuff (compiler/linker names and options)
  - crufty C library stuff

These tools are almost never used anywhere else, where either raw Makefiles
or more tool-specific project files are preferred.
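Both kinds of abstraction show up directly as configure.in stanzas, e.g.
(generic examples):

```
AC_PROG_CC                    # toolchain: find a working C compiler
AC_PROG_RANLIB
AC_CHECK_HEADERS(unistd.h)    # library cruft: probe for headers...
AC_CHECK_FUNCS(strcasecmp)    # ...and functions, defining HAVE_* macros
```

Code then tests the HAVE_* macros instead of guessing per-vendor.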

2. toolchain required
(mixed) The recurrent claim is that these tools are no more difficult to
port than the lower-level toolchain used in the existing build system. This
claim usually goes unproven, presumably because the intersection of the
following two populations is pretty small:

  - auto* experts
  - people developing on non-Unix platforms

Brave Sam, who falls in neither category AFAIK, is trying to address these
problems anyhow. ;-)

3. build targets
(unknown) I have no idea whether or how an auto* toolchain can preserve the
flexibility and cleanliness of the existing system as mentioned above.

Since the final result of the auto* process is just a set of Makefiles, I
assume that this could eventually be done with sufficient work, but the
opaqueness of those tools (to me, at least) makes it hard to assess how
difficult this'd really be. It's not obvious at first glance that any of
these tools were designed to meet this goal.

4. dependencies
(mixed) These tools do support dependency tracking, but only for gcc users.
This is nice for them, but does nothing for the rest of us. In fact, it
could tend to reduce the awareness of locality fostered by the existing
system (see #7 above).
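The gcc-only trick in question boils down to flags like these -- a
hand-rolled sketch of what automake arranges automatically:

```makefile
# hypothetical: gcc emits a .d dependency fragment as a side effect of
# each compile; later makes include them to learn header dependencies
%.o: %.c
	$(CC) -MD $(CFLAGS) -c $< -o $@
-include $(OBJS:.o=.d)
```

Compilers without an equivalent of -MD get no dependency information at
all, which is the "does nothing for the rest of us" part.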

5. ease of maintenance in abi tree
(strength?) For the specific tasks mentioned in #5 above, I have no idea
what the required work is. However, I assume it scales as well as the
current approach, or we wouldn't be considering this at all.

I'm getting a vague sense that the complete set of static Makefiles is
built once using configure, and then automatically rebuilt as needed. Is
this correct?
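For what it's worth, the generated Makefiles carry self-maintenance rules
roughly like the following (simplified from what automake actually emits),
which is what makes the automatic rebuild work:

```makefile
# sketch of the self-regeneration rules automake places in its output
Makefile: Makefile.in config.status
	CONFIG_FILES=$@ CONFIG_HEADERS= ./config.status
Makefile.in: Makefile.am
	automake
```

So editing a Makefile.am or configure.in propagates on the next make run,
without rerunning configure by hand.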

6. ease of maintenance in peer modules
(strength?) If we can figure out how to use unmodified makefiles as
provided by the upstream maintainers, then that certainly minimizes
integration headaches.

The unknown here is how easy it will be to configure those modules to be
built in the ways we need. The current approach seems to be to pass the
contents of the following file as environment arguments to configure:

  abi/src/config/platforms/ (or the equivalent)

This feels busted, so I assume that the real fix is to move more of the
required platform-specific awareness to configure itself. Again, I have no
idea how much of what kind of work that entails.

7. rebuild speed
(mixed) In theory, the static makefiles generated by autoconf and friends
could be faster, since any and all platform-specific and path-specific
configuration is hardwired into the makefile each time it's rewritten.

However, this would mean that makefiles would have to be regenerated for
each variant configuration being built -- with the probable exception of
debug vs. release, if the makefiles are written properly.

8. full build speed
(unknown) Again, the real test is to let 'em both rip on a few platforms
(Unix and not) and see who wins.

C. parallel tool-specific build environments
Most of us will continue to use a common build system, so that changes
automatically show up on our platform, too. However, some folks love the
feature sets of their IDE so much that they're willing to assume the
maintenance burden of keeping a parallel build environment in sync -- for
example, MSVC Project files.

Anyone interested in discussing the merits of these alternatives can spawn
their own thread. :-)


This archive was generated by hypermail 2b25 : Thu Feb 15 2001 - 12:55:39 CST