[svlug] Which linux distro for production ?

Rick Moen rick at linuxmafia.com
Sat Dec 18 12:16:53 PST 2004

Quoting Ivan Sergio Borgonovo (mail at webthatworks.it):

> They make available the LiveCD quite shortly if not immediately after
> the release of the newest release.

This is reflected in last night's update of my SUSE Product Strategy 
HTML file, you'll notice.

> I never tried to install from LiveCD and than upgrade to semi-Pro via
> yast/apt. But I've the feeling it is not that comfortable.

More to the point, LiveCD isn't designed to be installed to the hard
drive at all.  At present, Novell/SUSE have given you no reasonable way
of installing 9.2 to HD other than the retail boxed sets -- by design,
to motivate you to buy the retail offering.  This changes some
weeks/months after release, when they increment Ftp Edition and Personal
CD Edition to match.

> Right. I never used those packages "consciously(?)" so I forgot them.
> They used to mix free and non free in the CD set so to make hard to
> make copies legally.

Let's please not confuse the open source / proprietary distinction with
that of lawfully redistributable versus not.  Last I checked, _all_ SUSE
editions included proprietary ("non-free") software, and I'm not just
referring to the prior releases of YaST. 

In addition to that proprietary software, the retail boxed sets also
include a significant amount of software that's not merely proprietary,
but also is not licensed by its owners/authors for public
redistribution.  Which is, of course, why it's omitted from Ftp Edition,
Personal CD Edition, and Live-CD Edition (which are downloadable).

> yast is now GPLed and I've heard unofficial rumours they were going to
> shuffle packages to make easier to copy the CD set.

I've just downloaded the SUSE 9.2 LiveCD KDE image, and intend to
examine its software to see what the licensing mix is.  LiveCD ISOs
prior to 9.2 could be lawfully redistributed in non-commercial settings
only, but I suspect that has now changed, and all software on LiveCD and
the other downloadable editions is now publicly redistributable.

> why? I had a couple of resolvable problems on desktops mainly for
> stupidity on my part. Otherwise I had a good experience with apt4rpm.

YaST Online Updater's (YOU's) repositories are probably more
systematically maintained.  Just a hunch, based on analogous situations
elsewhere.

> pinning seems to be the magic word.
> I'd exchange a pgsql course with an advanced Debian administration
> course at my LUG ;)

I don't qualify as any kind of expert on this.  I'm just a guy who
sporadically reads documentation and tries things out.  My main server
has this in its APT preferences file:

:r /etc/apt/preferences

Package: *
Pin: release a=unstable
Pin-Priority: 50

That's one of a couple of different ways to "pin" the system's package
priority to testing:  It specifies that unstable-branch packages will be
automatically assigned low priority during apt-get operations, so that 
unstable-branch packages won't be retrieved by default -- but will be
used to resolve dependencies if one _does_ specify retrieving a package
from that branch.  E.g.:

# apt-get  -t unstable  install mozilla-firefox

This would cause apt-get to retrieve the unstable branch's current Firefox
package (instead of the default testing branch's package), and also
fetch/update from the unstable branch any other packages it depends on.

And, of course, my /etc/apt/sources.list has "deb" lines for both
branches.  (Of course, I don't have Firefox installed on my server, but
I just used the name to illustrate my underlying point.)
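
For reference, here's roughly what such a sources.list looks like -- a
minimal sketch, not necessarily my exact file; the mirror hostname and
component list are illustrative:

```
# /etc/apt/sources.list -- "deb" lines for both branches
deb http://ftp.debian.org/debian testing main contrib non-free
deb http://ftp.debian.org/debian unstable main contrib non-free
```

With both lines present and the preferences stanza shown above, apt-get
sees both branches' package versions but defaults to testing's.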

> I never saw a discussion about pinning and the art of mixing Debian on
> a mailing list. I got the idea it is something you learn *just* from
> experience.
> Furthermore while I think it meets the target, keeping the balance
> requires much more attention, not just know-how. Having several boxes
> to manage make worth to be careful on one and then deploy on others,
> having 1 or 2 boxes make it a PITA.

In the separate post you made on this topic, you asked a related
question:

> Why should I bother to have packages from testing at all?

Let me cast my mind back to when I had my server on then-current Debian
stable (potato/2.2) and upgraded my workstation to Debian unstable
(sid).  On the one hand, the workstation gained a stunning array of (at
the time) 8000+ leading-edge packages.  On the other, occasionally a
newly uploaded, newly fetched unstable-branch package would have
problems and require some ingenuity to fix.  Either I hand-hacked the
package's existence out of /var/lib/dpkg/status (an officially
disapproved solution, but it worked), or I hand-retrieved a different
version and inserted it via dpkg -i, or something like that.  The
problems were various, not too common, and generally pretty trivial (if
likely to frighten newcomers).

Once in a blue moon, something very wrong would happen, like the time my
keymap got set to AZERTY instead of QWERTY (because the default keymap
for newly revised package console-data had accidentally been set to the
first keymap on the list, instead of one's prior default, and I hadn't
paid attention while picking keymaps).  That was the hardest problem to
fix (lacking a Belgian/French keyboard), and the most severe in its
effects.

And, of course, one had to live with the knowledge that today might be
the day that the glibc maintainer uploaded a completely broken libc to
the unstable branch, and the day that you happened to fetch and install
it before anyone fixed the glitch.  The name "unstable" means you don't
get to complain that you weren't put on notice that it could happen.
(This is also why I got in the habit of keeping a mini-installation on
500 MB partition /maint, normally unmounted.)

After the "package pools" scheme was introduced in the package mirrors
(a reorg/redesign of the ftp sites), it became feasible for the Debian
Project to have programmatically populated development branches, and 
"testing" was the first of those.  A cron "quarantining" script runs
nightly on (I think) the master release server, testing newly uploaded
unstable-branch packages to see if they meet certain canned criteria for
package quality.  If they pass, the script populates them into the
testing branch, as well.  In effect, "testing" is the unstable branch,
filtered through a quality check and a quarantining delay of (typically)
a few days.

I changed my workstation's /etc/apt/sources.list to follow the testing
branch instead of unstable, hoping to be a few steps back from the cliff
edge that is the unstable branch, and, well, see what it's like.

I found that the quarantining works pretty well -- but that there were
a couple of side-effects.  First, suites of related packages with
unusually tangled dependency trees (a set that approximately maps to
KDE, GNOME, Mozilla, and Galeon) sometimes had the problem of their
sundry components clearing quarantine at different rates.  You would
find that... eh...  Nautilus was not currently installable (upgradeable)
because libfoobarbaz version 1.2.3 on which it depends was not
available.  So, you'd grab libfoobarbaz_1.2.3_i386.deb from the package
mirrors manually, and "dpkg -i" it.

Second, there was the gap in Security Team coverage:  The Security Team
releases backported fixes for stable-branch packages only, and doesn't
guarantee releasing anything else (although they sometimes do so), and
unstable branch users are _usually_ covered by virtue of getting
upstream releases immediately as the Debian package maintainer uploads
them.  By contrast, people on testing get the latter updates only
when/if they clear quarantine.  Again, the immediate fix for this was to
read security announcement bulletins, and, if they applied to me, fetch
the necessary update package manually and "dpkg -i" it.

So, being on the testing branch is workable but for those two, largely
fixable glitches.  Adding "pinning" and thus convenient access to
unstable branch packages just improves my two fixes, both at the same
time and in the same fashion:  If (hypothetically) the testing-branch
package of mozilla-firefox v. 1.0 won't install because libgtk2.0-0
hasn't yet cleared quarantine, then I can just do "apt-get  -t unstable
install mozilla-firefox" and all of _those_ dependencies are resolved
from the cutting-edge branch, without touching the rest of my system.
And if a security advisory tells me I should upgrade to package ssh
version 1:3.9 but it's not yet available on testing, I can fix that the
same way.

> It seems surely more reasonable to mix unstable and testing rather
> than testing and stable because testing receives security updates
> later than sid.

No, it's more reasonable to "mix unstable and testing" (which is
inaccurate -- I was actually referring to gaining access to
unstable-branch packages on an otherwise testing-branch system) than
testing and stable because the package-version gap between testing and
stable is severe and the results are predictably dysfunctional.
Attempting the latter "pinning" configuration is a common bonehead error.

> Stable and sid may be too different to be worth to try
> to mix without major PITA.

(You mean stable and unstable.)

So are stable and testing.  If you understand what testing _is_ -- i.e.,
unstable with a (usually) small quarantining delay -- you will
understand why.

> But then why should I bother to have packages from testing at all?

Maybe, like me, you appreciate the benefits of quarantining.

> Furthermore I can't find any completely convincing criteria about
> which packages should come from testing and which one from unstable.

I'm not entirely sure you understood how pinning works, when you wrote
that.

First, please understand that pinning is a much more general mechanism
for setting package priority than I may have implied.[1]  The canonical
documentation is in the APT HOWTO, sections 3.8 through 3.10, as well as
in the apt_preferences(5) manpage.

But the way I was (and am) using it, _all_ packages will come from the
testing branch, other than (1) specific package updates that I request
to come from the later branch, and (2) the particular required update of
those particular updates' dependencies on that one occasion.

> I would consider to chose exposed packages from sid cos they will
> receive security packages earlier and to take not exposed packages
> from testing.

"Receive security packages"?  Huh?  Obviously, you are not correctly
understanding what is meant by the term "pinning" in this context.
(A particular package can be pinned to a particular version or range of
versions, but that's not the way I've used it.)
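
For the sake of illustration, here's what that other usage -- pinning
one named package to a version range -- looks like as a stanza in
/etc/apt/preferences.  The package name and version are hypothetical
examples, not a recommendation:

```
Package: mozilla-firefox
Pin: version 1.0*
Pin-Priority: 1001
```

A priority above 1000 tells apt to prefer those versions even if doing
so means a downgrade.  Again, that's not the configuration I've been
describing; I pin whole branches, not individual packages.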

I think I can guess what's wrong:  You probably assume that one
declares a particular named package such as (e.g.) mozilla-firefox to be
"pinned" to a specific development branch henceforth.  That is _not_ the

[1] Also, the method I used, of declaring a low pin-priority for a
branch I want to be non-default, is unusual.  The standard method is to
declare a default branch in /etc/apt/apt.conf , using the
"APT::Default-Release" keyword.  Quoting the apt-preferences(5) manpage:

       APT::Default-Release "stable";

       If  the  target  release has been specified then APT uses the following
       algorithm to set the priorities of the versions of a package.  Assign:

       priority 100
              to the version that is already installed (if any).

       priority 500
              to the versions that are not installed and do not belong to  the
              target release.

       priority 990
              to  the versions that are not installed and belong to the target
              release.
       If the target release has not been specified then  APT  simply assigns
       priority  100 to all installed package versions and priority 500 to all
       uninstalled package versions.

There is more in there.  You should read the manpage if interested, as
well as /usr/share/doc/apt/examples/configure-index.gz to see all
possible configuration options for /etc/apt/apt.conf .
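
Applied to my situation, the standard method would be a single line in
/etc/apt/apt.conf -- a sketch, replacing (not supplementing) the
low-priority stanza quoted earlier:

```
APT::Default-Release "testing";
```

With that in place, testing-branch versions get priority 990 and
unstable-branch versions 500, so "apt-get -t unstable install foo"
behaves just as described above.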

Rick Moen                                     Age, baro, fac ut gaudeam.
rick at linuxmafia.com
