On 12/7/19 10:36 AM, gouessej [via jogamp] wrote:
> Personally, I call the SystemV scripts in the SystemD services and it works
> correctly. I did it for Jetty, I contributed, it has worked as expected for
> months. If Debian drops SystemV, you'll still be able to call your SystemV
> scripts but it will require some redesign.
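For readers who haven't tried the pattern Julien describes, it can be sketched as a minimal unit file that delegates start/stop to an existing SysV script. This is an illustrative sketch only - the unit name, script path, and Type are my assumptions, not his actual Jetty unit:

```ini
# /etc/systemd/system/jetty.service -- hypothetical wrapper unit
# delegating lifecycle control to an existing SysV init script.
[Unit]
Description=Jetty (wrapped SysV init script)
After=network.target

[Service]
# The init script daemonizes, so let systemd track the forked child.
Type=forking
ExecStart=/etc/init.d/jetty start
ExecStop=/etc/init.d/jetty stop

[Install]
WantedBy=multi-user.target
```

After placing the file, `systemctl daemon-reload && systemctl enable --now jetty` would wire it in - which is roughly the redesign cost mentioned above if the SysV layer ever disappears.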
Disclaimer: My intention is to provoke a reasonable debate;
sometimes I might exaggerate a little bit.
As we say in Germany: Potatoes are not eaten as hot as they are cooked ;-)
Julien, my point as laid out
is not so much about init scripts, but about control over the
whole free software ecosystem.
"Risk: Who controls systemd will control the Linux desktop."
"... especially when the user land applications
start to make it a hard requirement."
Yes, there are alternatives like Gentoo and FreeBSD.
But will they have the manpower to impact and survive?
Gentoo resurrected an independent udev via eudev,
i.e. removed the hard dependency on systemd.
Worse: udev was merged into systemd and ceased to exist
as a standalone project for non-systemd platforms - AFAIK.
Now imagine not only GTK but many other upstream
source projects declaring systemd a hard requirement.
The majority will then move over to systemd-only platforms,
and other systems outside of this GNU/Linux/systemd
bubble will cease to exist.
Yes, a bit dramatic, but that is the risk.
Why is Red Hat pushing for this change?
It's their cloud biz, stupid! ;-)
Today we hear from the cloud protagonists,
and hence the cloud service providers, that
micro-services, containers and the management
of the very same are the new cool thing to do.
A problem or functional block (some call it monolith)
is split into parts and deployed across the network.
I remember such capabilities with Apache and Cocoon
(an early MVC XSLT web solution) as a great thing to
have - scaling your application and supporting good design
through separation and encapsulation.
However .. before doing so, you have to properly rework
the design to make it network-capable.
Stateful or REST-less, it is not about just using
tools - Docker'em, Kubernetes-fire and Ansible-forget -
it is about the old-school
software architecture needed to achieve this excellence.
Modularization and having fully functional building blocks
allows you to scale your application across the network.
But keep in mind that adding containers and so forth
will not lessen complexity but increase it.
Know what you are doing.
BTW, a Java'ish runtime package is also a good
platform agnostic container ;-)
So back to system dependencies: IMHO it is outrageous
to force a feature on the masses which only serves
the biz model of a few.
At least they should call it what it is: a brick in THEIR
wall to sell platforms for other companies' micro-services.
So when bringing in hard dependencies to the average
software package in the open source universe,
which potentially cuts off other systems (non-systemd, non-Linux),
it might have some costly impact, if not irreversible.
Yes, the core functionality of systemd is desired and
great - as with OpenRC, initng, runit, monit, s6, daemontools, and Shepherd.
If systemd kept it at that and also embraced
other platforms (cgroup alternatives on FreeBSD etc.),
one could thank Red Hat for doing so.
However, it seems the opposite is true.
They bash the old system design style, show an appetite
for systemd to take over even more functionality of the system, as its
mission statement says, and potentially leave others behind.
Red Hat's Daniel Riek also ran their show in this regard last weekend at
I am surprised that some others are starting to differentiate
in a reasonable manner now.
See Kelsey Hightower (GOOG Cloud):
Still running Debian on my machines one way (systemd)
or the other (systemd-shim).
As long as the ecosystem survives, capable of compiling 'em for other systems,
we should be good, and my many words here gladly 'just wasted'.
I am keeping an eye on Gentoo, Alpine Linux and also FreeBSD.