rick at linuxmafia.com
Mon Jul 11 00:34:21 PDT 2016
Quoting Ivan Sergio Borgonovo (mail at webthatworks.it):
> And that's because I value simplicity much more than what it seems.
> I'm just aware that there is a mismatch between what I can manage and
I not only sympathise, but also am overbooked myself.
> The problem arises when people pretend the truth is simple just
> because simple things are the only ones they can/are willing to understand.
Well, if I take that position in the general case (not counting a few
edge cases where the truth really is simple), please kick me. I'll fly
to Italy to be properly booted in the posterior.
> The world is complex and the more complexity you can digest the better;
> the search for simplicity is just a necessity, to swallow the whole in
> small bites.
I hope you don't mean that, because the _world_ is complex, avoidable
and apparently needless complexity in computing systems, particularly
servers, is exempt from skeptical scrutiny. That would be conflating two
very different things, and essentially promoting needless system
complexity by appeal to the world's.
> Well-known software engineering techniques like delegation,
> encapsulation, design patterns... don't simplify reality; they simplify
> implementation of the model of reality. They are not even meant to
> simplify the model, even if they may be helpful to do so. You may have a
> more complicated model you're not able to implement due to other constraints.
> And this is not peculiar to CS.
This seems frankly non-responsive to what I said.
My usual inclination when discussions seem to be getting lost in the
clouds of abstract language is to discuss something specific. So,
consider a Linux-based Internet server. Its software presents one attack
surface to remote agents, and a second and broader attack surface to
local users and processes.
When I learned firewalls and Internet security (primarily from the
Cheswick & Bellovin text of that name, http://wilyhacker.com/1e/ -- but
also others), a key principle was that the more software functions are
accessible with elevated privilege, the greater the risk of critical
malfunction or security breach. _And_, incidentally, critical
malfunctions need not involve actual attack to result in harm, because
bugs are a thing even without attackers.
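One concrete way to apply that principle is simply to enumerate what the
machine exposes. The sketch below counts listening TCP sockets by reading
/proc/net/tcp directly (socket state 0A is LISTEN); on most boxes you would
reach for `ss -tlnp` or `netstat -tlnp` for friendlier output, but the
/proc parse works even on a minimal system.

```shell
# Count listening IPv4 TCP sockets by parsing /proc/net/tcp directly.
# Field 4 is the socket state; 0A means LISTEN. Every socket counted
# here is part of the remote attack surface and deserves a justification.
awk 'NR > 1 && $4 == "0A" { n++ } END { print n + 0 }' /proc/net/tcp
```

Run the same count after removing a daemon you suspect is unneeded; if the
number (and the machine's usefulness) are unchanged, that daemon was pure
attack surface.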
Thus, it is in the interest of the server sysadmin to use the minimally
featured code that suffices to accomplish the machine functions deemed
necessary: Mutatis mutandis, you would thus prefer, for each role, the
software alternative with the smallest-scoped feature set able to do the
job required. This is why, for example, SVLUG favours Lighttpd (nginx
would be good, too) over Apache httpd for its current software needs,
and NSD rather than BIND9 for authoritative-only DNS service.
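As a crude first-order proxy for feature-set scope, one can compare the
on-disk footprint of candidate daemons for the same role. The paths below
are assumptions for a Debian-style system; adjust for your distribution,
and treat the numbers as a conversation starter, not a verdict.

```shell
# Crude feature-scope comparison: installed binary size of two candidate
# web servers. Paths are assumptions for a Debian-style layout; a binary
# that is absent is silently skipped.
for bin in /usr/sbin/lighttpd /usr/sbin/apache2; do
  if [ -e "$bin" ]; then du -h "$bin"; fi
done
```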
So, when I see an extremely featureful dynamic /dev manager (udevd)
running with root authority on my server, and (AFAICT) nearly everything
it is capable of doing is irrelevant to my server's intended
functionality, I am led to wonder whether, in 2016, it is still
possible for _my_ use-case to use static /dev instead, or failing that
mdev. (I'm actually quite sure mdev will suit, just not sure about
static /dev with current kernels and hardware.)
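For what it's worth, the busybox-documented recipe for running mdev in
place of udevd is short. The boot-script fragment below is a sketch under
the assumption that busybox's mdev lives at /sbin/mdev and the kernel has
hotplug and devtmpfs support; I'm not claiming it as a drop-in for every
Debian 8 configuration, so test on a non-production box first.

```shell
# Boot-time fragment (sketch): use busybox mdev instead of udevd.
# Assumes /sbin/mdev exists; kernel needs hotplug + devtmpfs support.

# Mount devtmpfs so the kernel pre-populates /dev
# (harmless no-op if it is already mounted).
mount -t devtmpfs none /dev 2>/dev/null || true

# Scan sysfs once and create device nodes for hardware present at boot.
mdev -s

# Register mdev as the kernel's hotplug helper, so later device
# add/remove events are handled without any persistent daemon.
echo /sbin/mdev > /proc/sys/kernel/hotplug
```

The appeal is exactly the point above: mdev is a short-lived helper with a
tiny feature set, not a long-running featureful root daemon.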
> > To the contrary, improved system simplicity is the only hope.
> See above.
I did. You talked past what I said, and didn't address it.
I assume this happened despite good intentions. We're all friends here.
> > You do not 'manage' that by crowdsourcing it.
> You do.
> Because a big thick wall is a single point of failure.
> Too simple things may not be flexible enough. You make them more
> complicated, you increase attack surface.
> If you don't come to compromises people will try to circumvent your
> defenses etc...
> Value and complexity go together. You may argue that they may not
> increase according to the same law, but when you've finished exploring
> the boundaries, if you have to increase value, you'll have to increase
> complexity.
> Deterministic behavior is just one of the many properties you may want
> from a system.
> The most current theories say you have to be pretty careful about what
> you could expect from determinism ;)
I'm really sorry, but the above is so _very_ abstract that I really have
no idea what it means in the real world, or what connection it has with
the upthread discussion. Probably that's me being an irritatingly literal
person.
> > Let me tell you a story about mej (Michael E. Jennings).
> And your point is?
That _one guy_ beautifully maintained a major Linux distribution,
unaided, for multiple years. The _whole_ megillah. By himself.
And with very high quality.
You say 'If you get out of the herd and not enough people follow you,
you're doomed.' I cite mej as stunning evidence that this is completely
untrue, _and_ he did something many orders of magnitude more difficult
than merely collating, testing, and documenting how to successfully run
Debian 8 'Jessie' with your choice of init (system) rather than the
systemd default. If mej wasn't 'doomed' -- and he wasn't -- I sure as
Gehenna am not.
And I won't be 'doomed' if I document how to run Debian 8 servers using
mdev rather than udev, either.
(By the way, didn't you say somewhere that my Web page could vanish
tomorrow and be lost to the world? Au contraire: The Internet Archive
keeps copies.)
> wouldn't have been successful.
Then you missed the point of the mej anecdote completely.