Discussion:
Ursa Major (modules in buildroot) enablement
Justin Forbes
2018-11-05 15:22:15 UTC
This is related to an open ticket to Release Engineering
(https://pagure.io/releng/issue/7840) which was brought to FESCo
(https://pagure.io/fesco/issue/2003). We understand the need to
enable this, but there is an impact to workflow for local builds. It
is possible that some of this could be alleviated with a fairly simple
change to mock.
The ability to install all builddeps with dnf builddep will be a bit
more difficult. Of course local builds where the build deps are
already installed will not be impacted. With this in mind, we wanted
to open this up to some discussion on list before we make a decision.
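For illustration, the extra step for local builds might end up looking
roughly like this (the module/stream names and the exact commands are
only a sketch, not a decided workflow):

  # today: resolve build dependencies straight from the spec file
  sudo dnf builddep my-package.spec

  # if a BuildRequires is only satisfied by a module, an explicit
  # enable step would likely be needed first (stream name illustrative)
  sudo dnf module enable nodejs:10
  sudo dnf builddep my-package.spec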

Thanks,
Justin
Stephen John Smoogen
2018-11-05 16:39:56 UTC
Post by Justin Forbes
This is related to an open ticket to Release Engineering
(https://pagure.io/releng/issue/7840) which was brought to FESCo
(https://pagure.io/fesco/issue/2003). We understand the need to
enable this, but there is an impact to workflow for local builds. It
is possible that some of this could be alleviated with a fairly simple
change to mock.
The ability to install all builddeps with dnf builddep will be a bit
more difficult. Of course local builds where the build deps are
already installed will not be impacted. With this in mind, we wanted
to open this up to some discussion on list before we make a decision.
There were a couple of things that weren't clear in the proposal or the meeting:

1. Can you use rpmbuild to rebuild packages with this change? Or do
you have to use mock or koji to do so?
2. How do sites which rebuild Fedora work with this?
3. Where do the bits which are getting added get built from and what
is their build history?
4. For buildsystems outside of koji how do they work?

I expect all the answers are benign, but spelling them out might make
the effects clearer.
Post by Justin Forbes
Thanks,
Justin
--
Stephen J Smoogen.
Fabio Valentini
2018-11-05 17:39:48 UTC
Post by Justin Forbes
This is related to an open ticket to Release Engineering
(https://pagure.io/releng/issue/7840) which was brought to FESCo
(https://pagure.io/fesco/issue/2003).
Until now, I've been mostly keeping quiet about the whole modularity
thing - in part because I disagree with the direction the concrete
implementation has taken;
I didn't see it as useful to me, and it has not impacted me as a
packager who only builds standard packages. TL;DR: I want to keep it
that way (at least for now).
Post by Justin Forbes
We understand the need to
enable this, but there is an impact to workflow for local builds. It
is possible that some of this could be alleviated with a fairly simple
change to mock.
What exactly does "could be alleviated" mean here - would that "simple
change" to mock make it just 90% more cumbersome, instead of 100%?
What would the new workflow look like?
Post by Justin Forbes
The ability to install all builddeps with dnf builddep will be a bit
more difficult.
A bit more difficult ... how, exactly?
Do I have to solve the Riemann Hypothesis before calling that command
- or what would be the added difficulty here?
Post by Justin Forbes
Of course local builds where the build deps are
already installed will not be impacted.
Do you mean host-system-local, or mock-local builds?
Because I don't even have the "-modular" repositories enabled on my
f29 system, and I'll keep it that way.
Post by Justin Forbes
With this in mind, we wanted
to open this up to some discussion on list before we make a decision.
I have to say, making core, non-leaf packages available as modules
only sounds like a *terrible* idea to me.
I don't want to have to deal with this uncooked mess if I just want to
do standard packaging.

Heck, as things stand right now, I'd even volunteer to maintain
"standard branches" of my dependencies which are "in danger" of being
converted to module-only, just to not have to deal with modules.

I understand that modularity can have benefits for some work-flows and
some specific packages, but this effort sure looks like jumping on the
band-wagon just because it's the new, shiny thing, without considering
the consequences.

Please don't take this criticism the wrong way - I acknowledge that
releng and FESCo are doing hard work here.
I'm just not convinced that this work is actually benefiting fedora as
a developer platform / platform to develop for.

Fabio
Post by Justin Forbes
Thanks,
Justin
Kevin Kofler
2018-11-06 00:54:54 UTC
Post by Fabio Valentini
I have to say, making core, non-leaf packages available as modules
only sounds like a *terrible* idea to me.
I don't want to have to deal with this uncooked mess if I just want to
do standard packaging.
+1. And, for that matter, that goes even for standard USING, as you implied
Post by Fabio Valentini
Because I don't even have the "-modular" repositories enabled on my
f29 system, and I'll keep it that way.
As you explained pretty well, it does not make sense to FORCE modules onto
users. Even less so if those packages are dependencies of other packages
outside of the module walled garden. Ursa Major is a crude hack to make that
broken setup work.

Kevin Kofler
Zbigniew Jędrzejewski-Szmek
2018-11-06 08:25:02 UTC
Post by Kevin Kofler
Post by Fabio Valentini
I have to say, making core, non-leaf packages available as modules
only sounds like a *terrible* idea to me.
I don't want to have to deal with this uncooked mess if I just want to
do standard packaging.
+1. And, for that matter, that goes even for standard USING, as you implied
Post by Fabio Valentini
Because I don't even have the "-modular" repositories enabled on my
f29 system, and I'll keep it that way.
As you explained pretty well, it does not make sense to FORCE modules onto
users. Even less so if those packages are dependencies of other packages
outside of the module walled garden. Ursa Major is a crude hack to make that
broken setup work.
This is not about forcing modules onto people. The drive comes from
the other direction: packages want to be available only as modules,
and this is a work-around to allow them to be used as build dependencies.
So this change is driven by packagers who want to use modules for
*their own packages*.

I'm with you in the sense that I too fail to see practical benefits of
modules so far. But e.g. the java-sig says it makes their life easier,
and it is their choice. The decision was made to proceed with
modularity in Fedora. Once that decision was made, we cannot forbid
packagers from making use of the new functionality. This further step
is only a natural consequence.

Zbyszek
Dridi Boukelmoune
2018-11-06 10:05:39 UTC
Post by Zbigniew Jędrzejewski-Szmek
I'm with you in the sense that I too fail to see practical benefits of
modules so far. But e.g. the java-sig says it makes their life easier,
and it is their choice. The decision was made to proceed with
modularity in Fedora. Once that decision was made, we cannot forbid
packagers from making use of the new functionality. This further step
is only a natural consequence.
Besides not seeing the benefits of modules, and while I do understand
the rationale, it's also something I disagree with. I feel like modules
go against the First principle, and I get a sense of bundling there too.
But if anything, it feels like we are going full circle again with Snaps
and Flatpaks and being pushed into an "innovation" corner by
Canonical's agenda, just because they have enough momentum to make
everyone believe there is something wrong with the current packaging
model. Snaps made sense for Ubuntu phones to deliver arbitrary apps
from a store, but they are not suited to a general-purpose OS like
Fedora. Shipping Snap and Flatpak support as packages, sure, but
shipping Snaps or Flatpaks themselves, I don't see the point besides
throwing away the progress made since the years of the so-called RPM hell.

We're already seeing examples of "portable packages" not keeping up
with upstream releases (the last I heard of was nextcloud). Either
a package is stable because its upstream project knows how not to break
things carelessly, or it really needs to live at head because $REASONS
(web browsers, probably things like nextcloud, etc.). We have the same
problem with lagging updates for "traditional" packaging, so I have yet
to be convinced that modules, Snaps or Flatpaks will solve that.

I'm not knowledgeable enough about how Fedora manages parallel
installation of streams, but at the end of the day, if I run "node" from
the command line, only one executable will be picked up from my $PATH.
How can I run multiple streams then? Are the packages tweaked so that
I run node8 or node10? How is that different from compat packages? I
guess the answer is the bundling of dependencies inside both modules,
but I don't know for sure, and I don't wish to dig further because I
only have so much time to dedicate to Fedora. I'd rather go for
OmniOS-like packaging guidelines for parallel installations, with module
switching basically being an update-alternatives step to put things on
the default $PATH, or letting users tweak their $*PATH when they need to
target a given stream. (And yes, I know it doesn't sound friendly to
non-tech-savvy users, but how often do they need parallel installations
of GUI apps?)
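To make that concrete, here is a rough sketch of the alternatives-style
switching I have in mind (package names and paths are invented for the
example):

  # register both interpreters as candidates for /usr/bin/node
  sudo alternatives --install /usr/bin/node node /usr/bin/node-8 80
  sudo alternatives --install /usr/bin/node node /usr/bin/node-10 100

  # pick the default interactively, or script it per machine
  sudo alternatives --config node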

I just hope modularity won't break mock as I know it. My $DAYJOB
kindly allows me to work on Fedora, and that makes me very productive
when I target RHEL; otherwise I need to spin up VMs whenever I need to
work on other platforms' packaging (mostly because we ship apt-rpm as
/usr/bin/apt). I happen to work both on an upstream of Fedora and many
other systems, and on Fedora itself, so I know how hard it is to
DoTheRightThing(tm) on both sides of the fence. To me this sounds like
something that should be built on top of mock and made transparently
available via fedpkg, but I have neither the will nor the resources to
look further than what I superficially read on the devel list (and even
then I only rarely manage to catch up with all the important threads).

Dridi

PS. examples taken off the top of my head, not pointing fingers here
Nicolas Mailhot
2018-11-06 12:30:54 UTC
Post by Dridi Boukelmoune
Post by Zbigniew Jędrzejewski-Szmek
I'm with you in the sense that I too fail to see practical benefits of
modules so far. But e.g. the java-sig says it makes their life easier,
and it is their choice. The decision was made to proceed with
modularity in Fedora. Once that decision was made, we cannot forbid
packagers from making use of the new functionality. This further step
is only a natural consequence.
Besides not seeing benefits of modules, while I do understand the
rationale it's also something I disagree with. I feel like modules go
against the First principle, I get a sense of bundling there too.
My current understanding of modules benefits is that they’re just
improved SCLs. ie something EL oriented that the average Fedora packager
has little interest or use for.

Practically, being improved SCLs just means:

1. rawhide has the latest version of each module enabled by default,
2. stable has the same version enabled by default if the module version
is completely baked, and the previous one otherwise
3. epel has the same module version as stable enabled by default

So the average Fedora packager ends up maintaining at most two streams
of packages in parallel.

That actually cuts down the number of versions a Fedora packager needs to
maintain from 3 (devel + 2 × stable) to 2. I suppose one could up it to
3 to get the same QA levels as the current system. Realistically, one
could even use Fedora release versions as module versions.

And every other combination is an explicit user choice, for the same
people that use or maintain SCLs today, with about the same level of
popularity or uptake. And everyone who hopes to see a flourishing of
module versions will hit the “no one’s interested in packaging and
QA-ing myriad versions of the same software” wall.

Regards,

--
Nicolas Mailhot
Florian Weimer
2018-11-06 15:46:10 UTC
Post by Nicolas Mailhot
My current understanding of modules benefits is that they’re just
improved SCLs. ie something EL oriented that the average Fedora packager
has little interest or use for.
1. rawhide has the latest version of each module enabled by default,
2. stable has the same version enabled by default if the module version
is completely baked, and the previous one otherwise
3. epel has the same module version as stable enabled by default
Modules do not support parallel installations of different module
versions. Many SCLs are constructed in such a way that this is
possible. So I'm not sure if modules are a clear improvement over SCLs.

Thanks,
Florian
Stephen Gallagher
2018-11-06 16:15:02 UTC
Post by Florian Weimer
Post by Nicolas Mailhot
My current understanding of modules benefits is that they’re just
improved SCLs. ie something EL oriented that the average Fedora packager
has little interest or use for.
1. rawhide has the latest version of each module enabled by default,
2. stable has the same version enabled by default if the module version
is completely baked, and the previous one otherwise
3. epel has the same module version as stable enabled by default
Modules do not support parallel installations of different module
versions. Many SCLs are constructed in such a way that this is
possible. So I'm not sure if modules are a clear improvement over SCLs.
I find myself repeating this reply over and over again in various places...

The feedback that we (Red Hat) got about SCLs that was filtered down
to Engineering was this:

1) Customers really like having the option to install the version of
software that their applications needs from a trusted source (the OS
vendor/distributor)
2) Customers really *dislike* needing to modify their software to
understand the SCL enablement process.
3) Customers very rarely need to install different versions of the
same software on the same system. They use containers or VMs for
separate applications.

So with Modularity, we opted to drop the parallel-installability
requirement in favor of parallel-*availability* and the ability to
keep the packages installing in the standard locations (/usr/bin,
/usr/lib64, etc.)

This *is* a net improvement for the vast majority of deployments.
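As a rough comparison, with collection and stream names that are only
examples:

  # SCL model: content lives under /opt and has to be explicitly enabled
  scl enable rh-nodejs8 -- node app.js

  # module model: pick a stream once, binaries land in /usr/bin as usual
  sudo dnf module install nodejs:8
  node app.js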
Dridi Boukelmoune
2018-11-06 21:42:34 UTC
On Tue, Nov 6, 2018 at 6:06 PM Stephen Gallagher <***@redhat.com> wrote:
<snip>
Post by Stephen Gallagher
I find myself repeating this reply over and over again in various places...
Sorry about that.
Post by Stephen Gallagher
The feedback that we (Red Hat) got about SCLs that was filtered down
1) Customers really like having the option to install the version of
software that their applications need from a trusted source (the OS
vendor/distributor)
Not surprising, especially when it comes to RHEL and its quite long life cycle.
Post by Stephen Gallagher
2) Customers really *dislike* needing to modify their software to
understand the SCL enablement process.
Really not surprising. Not that I find SCLs dislikeable, but they
require active involvement (like virtualenv and other similar things),
while the trend is to make things JustWork(tm) (off the top of my
head, "vagrant up", "docker run"...).
Post by Stephen Gallagher
3) Customers very rarely need to install different versions of the
same software on the same system. They use containers or VMs for
separate applications.
So with Modularity, we opted to drop the parallel-installability
requirement in favor of parallel-*availability* and the ability to
keep the packages installing in the standard locations (/usr/bin,
/usr/lib64, etc.)
I missed that change, so that's one less peeve for me.
Post by Stephen Gallagher
This *is* a net improvement for the vast majority of deployments.
I sort of put Fedora modules, Snaps and Flatpaks in the same bag, and
at least for Snaps and Flatpaks it has always been a disaster for me
(only AppImage ever worked for me with that flavor of packaging).
Granted, it's not often that I need them, so I haven't tested for a
while now, but it has never ever worked for me.

Regarding modules, I superficially followed the early discussions
(things like the modulemd format, initial goals etc) but as soon as
the SIG was set up I lost track. I'm actually happy that the DNF
plugin materialized as a "dnf module [...]" sub-command, and I only
hope it won't break Fedora as I love it: First (and then stable for 13
months every 6 months).

Dridi
Kevin Kofler
2018-11-09 02:27:02 UTC
Post by Stephen Gallagher
The feedback that we (Red Hat) got about SCLs that was filtered down
But is that feedback relevant for Fedora, as opposed to RHEL?
Post by Stephen Gallagher
1) Customers really like having the option to install the version of
software that their applications need from a trusted source (the OS
vendor/distributor)
Fedora is normally about having the latest version of everything available.
(That's what the "First" in the 4 'F's stands for.) So it rarely happens
that the default version is too old. And if the default version is too new
for the user, that is generally filed in the PEBKAC category. ;-)
Post by Stephen Gallagher
2) Customers really *dislike* needing to modify their software to
understand the SCL enablement process.
That is a non-issue if everything uses the latest version, as it is supposed
to. And we never allowed SCLs in Fedora to begin with.
Post by Stephen Gallagher
3) Customers very rarely need to install different versions of the
same software on the same system. They use containers or VMs for
separate applications.
I don't see this being the case for Fedora users at all. A container for
every single application is something some large-scale deployments of RHEL
may be doing. But the average Fedora user is a home user with one or two
(desktop and notebook) computers running a single Fedora installation each.
They do not want to have to deal with the added complexity of containers,
and container technologies such as Flatpak or Snap that try to hide the
container from the user do not allow the user to upgrade the libraries in
the container and so often suffer from delayed or absent security updates
(because the whole container or the whole runtime has to be respun by the
maintainer to provide them, and then the user has to upgrade to the respun
image).

So I think that having things in Fedora that are not parallel-installable is
not a good idea, at all. There is a reason that the guideline on Conflicts
says to avoid it at all costs, and that compatibility libraries, in
particular, are NOT allowed to conflict at runtime (Conflicts are only
tolerated in the -devel packages, and discouraged even there).

It really worries me that we are allowing Fedora-provided packages to depend
on arbitrary branches of modular packages (modular Fedora packages can
already do that, Ursa Major would apparently extend that possibility even to
normal packages), because that can easily lead to Module Hell (RPM Hell
2.0), where application A requires module Foo version n whereas application
B requires module Foo version m (and obviously versions n and m of Foo are
not compatible). So the user is stuck being unable to install applications A
and B from our repository. That is something that should NEVER happen in a
consistent distribution. (Avoiding that is the main job of a distribution!)
Post by Stephen Gallagher
So with Modularity, we opted to drop the parallel-installability
requirement in favor of parallel-*availability* and the ability to
keep the packages installing in the standard locations (/usr/bin,
/usr/lib64, etc.)
This *is* a net improvement for the vast majority of deployments.
Sure, the SCL hack with its non-FHS-compliance is a bad idea, too, but that
was never allowed in Fedora for a reason.

Kevin Kofler
Jason L Tibbitts III
2018-11-06 16:20:14 UTC
FW> Modules do not support parallel installations of different module
FW> versions. Many SCLs are constructed in such a way that this is
FW> possible. So I'm not sure if modules are a clear improvement over
FW> SCLs.

And the really fun thing is that once the different versions are
installable in parallel, you could just... have them in different
packages. So SCLs aren't really an improvement over plain old packages,
either.

So it seems to me that modules are useful specifically in the "not
parallel installable" case; they seem to simply be a framework for
handling sets of mutually exclusive packages (and the combinatorial
dependency explosion which results). Which I guess is reasonable,
though I always thought they would be the last resort for when
you can't make two versions installable in parallel. Instead
it seems like they're being pushed as the default, which just seems
backwards to me.

- J<
Stephen Gallagher
2018-11-06 16:35:06 UTC
Post by Jason L Tibbitts III
FW> Modules do not support parallel installations of different module
FW> versions. Many SCLs are constructed in such a way that this is
FW> possible. So I'm not sure if modules are a clear improvement over
FW> SCLs.
And the really fun thing is that once the different versions are
installable in parallel you could just.... have them in different
packages. So SCLs aren't really an improvement over plain old packages,
either.
So it seems to me that modules are useful specifically in the "not
parallel installable" case; they seem to be to simply be a framework for
handling sets of mutually exclusive packages (and the combinatorial
dependency explosion which results). Which I guess is reasonable,
though I always thought they would be the last resort when
you can't make two versions able to be installed in parallel. Instead
it seems like they're being pushed as the default, which just seems
backwards to me.
I think it only seems that way because there's a non-trivial number of
useful packages (e.g. Node.js) that can't be trivially installed in
parallel like Python can and which have regular,
backwards-incompatible jumps and multiple supported upstream versions.
This has always been a problem for Fedora; either users would hold
their systems back on unsupported Fedora releases to maintain older
versions, or else they'd stop using our packaged versions at all,
which devalues us. Modules give us the ability to ship
whatever versions the maintainer is willing to maintain.

Now, one thing that I think hasn't been made clear in this thread is
this: Ursa Major is net-new functionality. With or without modules
today, you can only have in the buildroot the set of things that you
could get from DNF without being aware of module-specific commands.
Modules with a default stream Just Work for buildroots. The
improvement with Ursa Major is the ability to have a non-default
version of software available *only at build-time*. As a hypothetical
example, maybe python-sphinx has a major backwards-incompatible update
that becomes the default in Fedora 30. The package you maintain will
only build its docs with the older Sphinx. Without Ursa Major, you
basically have two choices: 1) Stop building the docs until upstream
catches up to Sphinx, or 2) Try to write a patch to support the new
version of Sphinx. With Ursa Major, you potentially gain 3)
BuildRequires the previous version of Sphinx for your package.
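In spec terms, option 3 might look something like the following (the
version bound is invented for the example; how the older stream actually
lands in the buildroot is exactly the part Ursa Major would provide):

  # hypothetical pin to the older Sphinx stream made available at build time
  BuildRequires:  python3-sphinx < 2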
Neal Gompa
2018-11-06 17:56:13 UTC
As a hypothetical example, maybe python-sphinx has a major
backwards-incompatible update that becomes the default in Fedora 30.
The package you maintain will only build its docs with the older Sphinx.
Without Ursa Major, you basically have two choices: 1) Stop building
the docs until upstream catches up to Sphinx, or 2) Try to write a patch
to support the new version of Sphinx. With Ursa Major, you potentially
gain 3) BuildRequires the previous version of Sphinx for your package.
So, this statement is the core of what I don't like about modularity.
Pitching it as a means of allowing people to "keep back" even in
packages for the distribution is bad for a distribution that pushes
forward. One of the major reasons I prefer Fedora to other
distributions is that we contribute to the advancement of FOSS through
our developers and packagers. This goes to the extent of helping
upstreams port forward and leverage new versions and new
functionality.

Now, I don't hate modularity as a concept, but I have personally felt
that the design and approach to modules in Fedora is horribly
misguided. From my perspective, it seems to be pitched as a way for
Fedora to move slower, and that's not what I want from a distribution
like Fedora.

Moreover, as it stands, I don't think modularity provides any quality
of life improvements for packagers within Fedora (it adds extra steps
and makes it confusing to figure out what is maintained), and
currently is a huge impairment for packagers outside of Fedora. I've
brought up the issues I have with the "modularization" of things
within Fedora from the context of a third-party packager, and I
haven't yet seen a solution outlined with my concerns fully handled.
And as I've also pointed out privately and publicly in other
instances, the extra "foreign" metadata is difficult or impossible for
most tools today to handle. There is some hope that those issues will
be addressed, but I'm unsure if anyone cares enough to prioritize
these issues.

It's very clear that modules as they currently stand aren't designed
for Fedora. They're designed for Red Hat Enterprise Linux. And that's
not good, because we're trying to use it in Fedora.

Personally, I see the value proposition for modules as such in the
context of Fedora:
* Providing non-default, older version packages for backwards
compatibility and supporting stepped upgrade processes. Common
examples of this are ownCloud/Nextcloud, OpenShift/Kubernetes, and so
on.
* Offering alternative variants of language runtime stacks from the
system version. The tooling around modules automates the chain
building process and could actually be used to generate alternate
versions of language stacks very easily. This can be something like
having Python 2 being managed as a module that can be built on demand
for people who need it, or supporting PHP 5 when PHP 7 won't work, or
something like that.

What I am annoyed about is that there's been almost zero interest in
actually improving the quality of life of packagers who handle the
bulk of packages in Fedora, the so-called "ursine" packages (a name I'm
not terribly pleased with...). I've outlined some improvements we could
make here for a couple of years now.

I also continue to wonder why we aren't pushing for a merger of Koji,
Koschei, and COPR to provide better workflows across the board. One of
our biggest problems is that it's _impossible_ to stage any change in
a suitably useful way and do things like install checks, media
creation, OpenQA runs, and so on. This is the critical difference
between our development process and openSUSE's, as an example.

We also seem to have some kind of fear about having extra optional
repositories for people to enable for non-default stuff, which is why
modules are wired up the way they are (modules look like repos to the
solver, and enabling and disabling them triggers that base logic).

I also feel that some of the tooling we developed for modules actually
would equally apply well for regular packages. For example, MBS
implements a giant hack for Koji so that it's actually possible to
generate a side-tag, build all the packages and their incorporated
dependencies, and then export it to be included in a module. But why
not adapt that model for everything else? Why not allow someone to
trigger a build of something, check for reverse dependencies, include
them automatically, and then build it in a side tag in the exactly
correct order (as determined by the solver)? The reverse dependencies
could get a normal rebuild spec bump and then have that committed to
dist-git (or not, depending on how we do it!), and then merge it back
into the distro after it all succeeds. The advantage of this is that
if something does fail, it can be independently handled and merged
into the side tag, and once all the builds in a tag are green, it
could auto-merge into the main distribution tag (rawhide,
updates-candidate, or whatever). And of course, building and
auto-pushing for all supported distro targets is a huge simplification
that would help a lot.

In summary, I feel like modules as a concept makes sense, but I don't
understand how the current implementation is good for Fedora as a
distribution. A lot of what we've built makes a lot of sense for
regular packages, so I don't understand why we don't do that.

--
真実はいつも一つ!/ Always, there's only one truth!
Kevin Kofler
2018-11-09 02:33:44 UTC
Post by Neal Gompa
Moreover, as it stands, I don't think modularity provides any quality
of life improvements for packagers within Fedora (it adds extra steps
and makes it confusing to figure out what is maintained),
There is one I can see in that it allows packagers to make their packages
depend on incompatible library versions, which makes their job easier (by
not having to care about compatibility, patching for different library
versions, etc.) at the expense of the users that are left with an unsolvable
Module Hell and the total impossibility to install the software they need.

But obviously, I think this is a very poor tradeoff. Helping packagers must
not happen at the end users' expense!

Kevin Kofler
Raphael Groner
2018-11-09 06:07:01 UTC

Post by Kevin Kofler
But obviously, I think this is a very poor tradeoff. Helping packagers must
not happen at the end users' expense!
Kevin Kofler
+1
Can you think of a time when modules can or will (hopefully) bring benefits to our users? Right now, it's just seen as an additional feature among the many Features we expect from Fedora. Where's Friends, First, Freedom?!
Vít Ondruch
2018-11-09 09:22:34 UTC
Post by Kevin Kofler
Post by Neal Gompa
Moreover, as it stands, I don't think modularity provides any quality
of life improvements for packagers within Fedora (it adds extra steps
and makes it confusing to figure out what is maintained),
There is one I can see in that it allows packagers to make their packages
depend on incompatible library versions, which makes their job easier (by
not having to care about compatibility, patching for different library
versions, etc.)
The advantage for packagers is just temporary, as long as the
(supposedly) older library they still use is maintained. One day, they
will need to move forward. This is just postponing the inevitable.


V.
Post by Kevin Kofler
at the expense of the users that are left with an unsolvable
Module Hell and the total impossibility to install the software they need.
But obviously, I think this is a very poor tradeoff. Helping packagers must
not happen at the end users' expense!
Kevin Kofler
Dridi Boukelmoune
2018-11-09 11:43:32 UTC
Post by Vít Ondruch
The advantage for packagers is just temporary, as long as the
(supposedly) older library they still use is maintained. One day, they
will need to move forward. This is just postponing the inevitable.
And for the same reason we have compat packages, we can't always honor
the First principle: upstream projects update their dependency
requirements at different paces, and not all dependencies are maintained
with forward compatibility in mind.

Software projects in general jump the gun too easily when it comes
to adding unneeded dependencies, and dependency graphs grow with
quadratic complexity.

I like the idea of package maintainers getting involved with their
upstream projects, and I also understand that trying to add a single
package may sometimes (often?) result in having to also maintain dozens
of other packages and that getting involved with all upstreams doesn't
scale. I consider compat packages to be a last resort solution and
don't see the value of modules. But again I understand the rationale
and appreciate the effort (I simply disagree).

Better tooling won't cut it; we also need more maintainers, and ideally
maintainers from upstream projects who understand the challenges of
downstream software distribution. But again, that doesn't scale when
upstream projects face dozens of distributions.

What's really inevitable is conflicting agendas (and limited
resources) of so many parties.

Dridi
Tomasz Torcz
2018-11-09 16:09:24 UTC
Post by Neal Gompa
As a hypothetical example, maybe python-sphinx has a major
backwards-incompatible update that becomes the default in Fedora 30.
The package you maintain will only build its docs with the older Sphinx.
Without Ursa Major, you basically have two choices: 1) Stop building
the docs until upstream catches up to Sphinx, or 2) Try to write a patch
to support the new version of Sphinx. With Ursa Major, you potentially
gain 3) BuildRequires the previous version of Sphinx for your package.
So, this statement is the core of what I don't like about modularity.
Pitching it as a means of allowing people to "keep back" even in
packages for the distribution is bad for a distribution that pushes
forward. One of the major reasons I prefer Fedora to other
distributions is that we contribute to the advancement of FOSS through
our developers and packagers. This goes to the extent of helping
upstreams port forward and leverage new versions and new
functionality.
Surprisingly, recently I've found a use for modularity. It's a crutch
for bad software (OpenShift breaking backwards compatibility) but it
worked.
Specifically, the OpenShift CLI command 'oc' in version 3.11 no longer
works with a cluster running version 3.7. I had already downloaded RPMs
for the previous version, and was ready for dnf downgrade + exclude= in
dnf.conf when I thought about modules.
And yes, there's a module with openshift-3.10. I've enabled it,
installed the older client and was able to access our clusters once
again. And I hope to get updates and fixes provided by this module
(which would be troublesome in the standard exclude= scenario).
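Roughly the commands involved (module and stream names are from memory
and may not be exact):

  # see which streams are available
  dnf module list origin

  # enable the older stream and sync the client to it
  sudo dnf module enable origin:3.10
  sudo dnf distro-sync origin-clients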
That's as a user. I have yet to discover the need for modularity
as a packager, but I'm not eager to.


--
Tomasz Torcz "Funeral in the morning, IDE hacking
xmpp: ***@chrome.pl in the afternoon and evening." - Alan Cox
Matthew Miller
2018-11-09 17:09:14 UTC
Post by Tomasz Torcz
Surprisingly, recently I've found a use for modularity. It's a crutch
for bad software (OpenShift breaking backwards compatibility) but it
worked.
I mean, software is software. :)
Post by Tomasz Torcz
That's as a user. I have yet to discover the need for modularity
as a packager, but I'm not eager to.
Solving problems for users should count as something!

The thing I really want from it is: automatic builds across bases from one
(or two) stream branches. That's partly there, but isn't as magical as I'd
like it to be.

--
Matthew Miller
<***@fedoraproject.org>
Fedora Project Leader
Kevin Kofler
2018-11-08 04:59:26 UTC
Post by Zbigniew Jędrzejewski-Szmek
This is not about forcing modules onto people. The drive comes from
the other direction: packages want to be available only as modules,
But that is exactly what I mean by "forcing modules onto people"!
Post by Zbigniew Jędrzejewski-Szmek
and this is a work-around to allow them to be used as build dependencies.
So this change is driven by packagers who want to use modules for
*their own packages*.
But I am speaking from a *user*'s standpoint, both end users of the package
and maintainers of dependent packages. For them, if the maintainer of
package foo decides to make package foo module-only, the maintainer *forces*
modules onto everyone wanting to use foo on their system or for their
package.

So making a package module-only is by definition forcing modules onto
people. If you claim otherwise, you have a too maintainer-centric view of
the issue and are not getting the whole picture.
Post by Zbigniew Jędrzejewski-Szmek
I'm with you in the sense that I too fail to see practical benefits of
modules so far. But e.g. the java-sig says it makes their life easier,
and it is their choice. The decision was made to proceed with
modularity in Fedora.
And that was a mistake! But…
Post by Zbigniew Jędrzejewski-Szmek
Once that decision was made, we cannot forbid packagers from making use of
the new functionality. This further step is only a natural consequence.
… that does not mean we need to go down that slippery slope. It is perfectly
possible to allow modules only with some restrictions, e.g.:
* that packages on which other packages depend at build time or at runtime
MUST NOT be module-only, or even
* that no package may ever be module-only, but modules can only be used for
non-default versions.

But if Fedora thinks it does not make sense to have modules under such
common-sense rules, then the decision to allow modules in the first place
needs to be rethought and they should be deprecated immediately (i.e., no
more modules in F30, no new modules in F29, and all module-only packages
must return to the non-modular F29 updates repository).

Kevin Kofler
Raphael Groner
2018-11-09 06:10:20 UTC
Kevin,
Post by Kevin Kofler
* that no package may ever be module-only, but
modules can only be used for non-default
versions.
That statement doesn't make any sense to me. Can you explain, please? How should modules live without packages in the background? We'd already discussed this in another thread.
Kevin Kofler
2018-11-09 14:51:35 UTC
Post by Raphael Groner
Kevin,
Post by Kevin Kofler
* that no package may ever be module-only, but
modules can only be used for non-default
versions.
That statement doesn't make any sense to me. Can you explain, please? How
should modules live without packages in the background? We'd already discussed
this in another thread.
I don't think you understood the sentence I wrote.

The current state is that we can have:
main repo: no package foo, no package libfoo (but many other packages)
module foo-1: foo-1.8.10, libfoo-1.8.12
module foo-2: foo-2.0.0, libfoo-2.0.1
but the "main repo: no package foo, no package libfoo" part is what I am
objecting to, especially if libfoo is used by more packages than just foo.

I want to require the main repo to contain some version of libfoo, and other
packages (from the main repo or from modules other than foo) should be
required to use the version in the main repo and not in some non-default
module.

Though I think that ideally, we would have only the main repo and pick one
version of foo to ship there instead of offloading this distribution job to
the user through arbitrarily-branched modules.

Kevin Kofler
Stephen Gallagher
2018-11-09 15:28:06 UTC
Post by Kevin Kofler
Post by Raphael Groner
Kevin,
Post by Kevin Kofler
* that no package may ever be module-only, but
modules can only be used for non-default
versions.
That statement doesn't make any sense for me. Can you explain, please? How
should modules live without packages in background? We'd already discussed
this in another thread.
I don't think you understood the sentence I wrote.
main repo: no package foo, no package libfoo (but many other packages)
module foo-1: foo-1.8.10, libfoo-1.8.12
module foo-2: foo-2.0.0, libfoo-2.0.1
but the "main repo: no package foo, no package libfoo" part is what I am
objecting to, especially if libfoo is used by more packages than just foo.
I want to require the main repo to contain some version of libfoo, and other
packages (from the main repo or from modules other than foo) should be
required to use the version in the main repo and not in some non-default
module.
This is literally the exact way things work today, except that instead
of "the main repo", we treat it as "the main repo OR the default
stream of the module".

Nothing in the main repo is permitted to use anything that is not
available in the main repo or a default module stream at runtime. Full
stop.

The case of Ursa Major is special: it's addressing the case where we
may have some *build-time* requirements that are not in the default
repo. All of their runtime requirements must still meet the above
criteria, but perhaps their build requires a too-new (or
old-and-more-stable) build-time requirement. In this case, it is far
easier on the packager to be allowed to use that other version to
build.

Consider the Go case: we know that most Go packages will be statically
linked (issues with that are a different topic), so we know they will
work fine once built. However, if the application upstream cannot
build with the latest "stable" version because of
backwards-incompatible changes, it seems to me that it's perfectly
reasonable to allow them to use a slightly older Go compiler from a
module stream. Strictly speaking, this is no different from offering
an SCL or a compat-* package of the compiler, except that having it as
a module means that its executables and other tools will be in
standard locations, so the build process doesn't need to be patched to
locate them elsewhere.
Post by Kevin Kofler
Though I think that ideally, we would have only the main repo and pick one
version of foo to ship there instead of offloading this distribution job to
the user through arbitrarily-branched modules.
And if we lived in a proprietary world where we had dictatorial
control over what our users are allowed to install, that might work.
In 1998, this approach made sense. At that time, your two choices for
any software were "install a distro package" or "try to compile it
yourself". Upstream projects themselves used to strive to build
packages for the distros so they could make sure they reached users.
That is simply not how much of today's software works, and in the rare
cases where upstreams *do* provide a distro package, it's usually for
the distros with a (real or perceived) long-term stability guarantee.

What we are doing is providing additional tools. If you do not wish to
use them to build your packages, don't! That's fine. For others, it's
a matter of putting a price on their time: is it worth spending an
extra two months hacking on a package in the name of ideological
purity, or is that two months better spent doing other work? The
Fedora of a few years ago would have *required* the former approach.
Fedora today is more welcoming.
Dridi Boukelmoune
2018-11-09 17:44:33 UTC
Post by Stephen Gallagher
Consider the Go case: we know that most Go packages will be statically
linked (issues with that are a different topic), so we know they will
work fine once built. However, if the application upstream cannot
build with the latest "stable" version because of
backwards-incompatible changes, it seems to me that it's perfectly
reasonable to allow them to use a slightly older Go compiler from a
module stream. Strictly speaking, this is no different from offering
an SCL or a compat-* package of the compiler, except that having it as
a module means that its executables and other tools will be in
standard locations, so the build process doesn't need to be patched to
locate them elsewhere.
<snip>
Post by Stephen Gallagher
What we are doing is providing additional tools. If you do not wish to
use them to build your packages, don't! That's fine. For others, it's
a matter of putting a price on their time: is it worth spending an
extra two months hacking on a package in the name of ideological
purity, or is that two months better spent doing other work? The
Fedora of a few years ago would have *required* the former approach.
Fedora today is more welcoming.
If you take this compromise to an extreme then let's solve the Java
problem (or <insert similar stack here>) and grant internet access
to builds. This way we can use vanilla maven/gradle/ivy to fetch
dependencies at build time and make sure that we can upgrade to the
latest versions of any leaf package.

For the Go case (and we can include Rust too) it is indeed very likely
that, because the model is almost exclusively static linking, a leaf
package will force the creation of dozens of devel packages only for
the sake of BuildRequir'ing them. What about changing guidelines to
allow such packages to have multiple SourceX archives and list their
dependencies as bundled(xxx) in the final RPM? I would argue that Go
and Rust leaf packages should already advertise their dependencies as
bundled because of their very nature.
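A sketch of what such a leaf package could declare under that scheme
(names and versions are made up):

  # main application source plus a vendored-dependencies tarball
  Source0:        https://example.com/myapp-1.0.tar.gz
  Source1:        https://example.com/myapp-vendor-1.0.tar.gz

  # advertise the bundled libraries so CVE tracking still works
  Provides:       bundled(golang(github.com/pkg/errors)) = 0.8.0
  Provides:       bundled(golang(github.com/spf13/cobra)) = 0.0.1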

This way adding a Go or Rust application could lessen the burden on
package maintainers and still provide the metadata needed to keep
track of what bundles what when an update is needed (CVE or other).
Ideally helped with tools to avoid doing everything by hand...

Again, I'm part of neither Java, Go or Rust SIGs and don't have time
to follow modules closely. Apologies if that has already been brought
up in the past. And I'm pretty sure that would hardly be a lead for
NodeJS applications but I'm far less familiar with the latter.

Dridi
Nicolas Mailhot
2018-11-10 12:03:55 UTC
Post by Dridi Boukelmoune
For the Go case (and we can include Rust too) it is indeed very likely
that, because the model is almost exclusively static linking, a leaf
package will force the creation of dozens of devel packages only for
the sake of BuildRequir'ing them. What about changing guidelines to
allow such packages to have multiple SourceX archives and list their
dependencies as bundled(xxx) in the final RPM?
That does not work because bundled code is in a terrible state,
multi-archive rpms suck to create and maintain, and you lose all the
version tracking framework rpm provides (which Go upstream has not
managed to replicate yet in their own package manager, BTW).

What works is to make rust or Go package creation and review as simple
and streamlined as possible, so packagers can focus on upstream's poor
code state, instead of fighting rpm or the review process.

That's basically what
https://pagure.io/fesco/issue/2004
and
https://github.com/rpm-software-management/rpm/issues/104
are for the technical parts: automate away the mapping between
language-specific dep mechanisms and rpm dep mechanisms, so packagers
can work on what those deps mean technically, not on how to rewrite
language deps as rpm deps, and so they are helped, not hindered, by the
rpm layer.

For the org part, the review process needs a major rework so it does not
provide incentives to publish one frankenpackage that mixes lots of
unrelated, stale, unaudited lumps of code, over several dozen clean,
modular packages that are easier to maintain and audit and can be shared
between leaf users.

And if you want packagers that ignore upstream’s code state, you'll
eventually get huge highly publicised security holes. Code does not
self-maintain.

Regards,

--
Nicolas Mailhot
Kevin Kofler
2018-11-11 01:52:41 UTC
Post by Dridi Boukelmoune
If you take this compromise to an extreme then let's solve the Java
problem (or <insert similar stack here>) and grant an internet access
to builds. This way we can use vanilla maven/gradle/ivy to fetch
dependencies at build time and make sure that we can upgrade to the
latest versions of any leaf package.
For Java, this does not work because Maven fetches precompiled JARs, whereas
we need our software to be built from source. (You are not allowed to bundle
precompiled JARs even if you download them beforehand or they are even
included in the upstream tarball.) It is an essential requirement for a Free
Software distribution that all software it ships is built from source.
Post by Dridi Boukelmoune
For the Go case (and we can include Rust too)
For those, please see Nicolas Mailhot's reply.

Kevin Kofler
Nico Kadel-Garcia
2018-11-11 04:32:04 UTC
Permalink
Post by Kevin Kofler
Post by Dridi Boukelmoune
If you take this compromise to an extreme then let's solve the Java
problem (or <insert similar stack here>) and grant an internet access
to builds. This way we can use vanilla maven/gradle/ivy to fetch
dependencies at build time and make sure that we can upgrade to the
latest versions of any leaf package.
For Java, this does not work because Maven fetches precompiled JARs, whereas
we need our software to be built from source. (You are not allowed to bundle
precompiled JARs even if you download them beforehand or they are even
included in the upstream tarball.) It is an essential requirement for a Free
Software distribution that all software it ships is built from source.
Post by Dridi Boukelmoune
For the Go case (and we can include Rust too)
For those, please see Nicolas Mailhot's reply.
Kevin Kofler
It's a very sensible requirement. It's not a legal one, as long as the
"free software" has the source available somewhere. But for the legal
protection of users who need to assure the provenance of the code, and for
elementary security reasons, it's critical. It's one of the great risks of
rubygems and of all the Java build tools: installing binaries without robust
provenance. It's a risk, as well, for CPAN- and pip-based installations.
Ben Rosser
2018-11-09 18:02:08 UTC
Permalink
Post by Stephen Gallagher
Consider the Go case: we know that most Go packages will be statically
linked (issues with that are a different topic), so we know they will
work fine once built.
How does this scale to ecosystems that *aren't* statically linked,
though? Suppose I turn a C++ library, or set of libraries, into a
module, and ship incompatible versions in different streams (different
soname versions, say). Then suppose there are non-module packages in
the distribution that depend on this library. What happens when someone
tries to switch the module to the non-default stream on their system?

It doesn't sound like Ursa Major can solve this problem. As far as I
understand, the only solution is to turn those dependent packages into
modules too, and somehow keep the streams synchronized? Is there
planned tooling to do this?

It's all very well to add default streams of modules to the buildroot
automatically-- I think that makes sense, if it can be done in a way
that's transparent to end users and packagers. But-- unless I'm
missing something obvious-- this isn't enough, unless everything is
statically linked.

Ben Rosser
Nicolas Mailhot
2018-11-10 10:19:14 UTC
Permalink
Post by Stephen Gallagher
Consider the Go case: we know that most Go packages will be statically
linked (issues with that are a different topic), so we know they will
work fine once built. However, if the application upstream cannot
build with the latest "stable" version because of
backwards-incompatible changes, it seems to me that it's perfectly
reasonable to allow them to use a slightly older Go compiler from a
module stream. Strictly speaking, this is no different from offering
an SCL or a compat-* package of the compiler, except that having it as
a module means that its executables and other tools will be in
standard locations, so the build process doesn't need to be patched to
locate them elsewhere.
Please do not drag Go into this if you want to handwave away Go's
problems. Yes, modules will be useful for Go, but only to blow away in EPEL
the rotten Go codebase RHEL ships.

But anyway, since you referred to Go.

Go is the perfect example of why bundling as a general approach does not
work and does not scale. In case you haven't noticed, years of bundling on
the Go side have resulted in such deep, widespread rot that Google is
scrambling to write a Go v2 language version that will force Go projects to
version and update.

All the people who claim bundling allows “using a slightly older
version” (implying it's a good, safe, maintained older version) are lying
through their teeth. Yes, it allows doing that, but that's not how people
use it. And it does not matter whether you bundle via self-provided Windows
DLLs, containers, flatpaks, modules or RHEL versions.

Bundling basically allows reusing third party code blindly without any
form of audit or maintenance. You take third party code, you adapt your
code to its current API, and you forget about it.

You definitely do *not* check it for security, legal or other problems, you
definitely do *not* check regularly whether CVEs or updates have been
released, and you definitely do *not* try to maintain it yourself. Any
bundler dev that tells you otherwise lies. The average bundler dev will tell
you “Look at my wonderful, up-to-date, award-winning modern code. Security
problems? Ah, that's not my code, I just bundle it, not my problem”.

It is however a *huge* problem for the people on the receiving end of
the resulting software, static builds or not. Static builds do not add
missing new features or fix security holes. They just remove the shared
libs that the sysadmin could use to track them. And since malware authors
do not bother identifying how software was compiled before attempting to
exploit it, static builds do not hinder them in the slightest.

While trying to improve Go packaging in Fedora by myself, I found serious
old security issues in first-class Go code. First-class as in benefiting
from huge, publicised, ongoing dev investment from major companies like
Google, Red Hat or Microsoft. It's not hard: you do not even need to
write Go code, just take the pile of components those projects bundle
and read the upstream changelogs of those components for later versions.
You will hit pearls like “emergency release because of *** CVE”, or
“need to change the API to fix a race in auth token processing”. And the
answer of the projects that bundled a previous state of this code was
never “we have a problem” or “we have fixed it some other way” but “go
away, we haven't planned to look at or touch this code before <remote
future>”.

And, again, I'm no Go dev, or dev in general, and I didn't even attempt any
form of systematic audit; those were just the bits jumping to attention when
I hit API changes and had to look at the code history to try to figure out
when they occurred. The day any bundled codebase is subjected to the kind of
herd security research Java got some years ago and CPUs get today, sparks
are going to fly all over the place.

And this is a natural phenomenon that is trivial to explain. Maintaining old
code versions is hard. Upstreams are not interested in supporting you.
You have to redo their work by yourself, while forbidding yourself API
changes (if you were ready to accept them you wouldn't have bundled in
the first place). And modern code is so deeply interdependent that freezing
one link in the dependency web causes cascading effects all over the
place. You quickly end up maintaining old versions of every single
link in this web. If you try to do it seriously, you effectively have to
fork and maintain the whole codebase. I.e. all the no-support problems of
barebones free software, with none of the friendly community help that
should come with it.

That's what RH tries to do for EL versions. It takes a *huge* dev
investment to do in a semi-secure no-new features way. And after a
decade, RH just dumps the result, because even with all those efforts,
it reaches terminal state and has no future.

There is only one way to cheaply maintain lots of software components
that call each other all over the place. That's to standardise on the
latest stable release of each of them (and, when upstream does not
release, the latest commit), and to aggressively port everything to the
updates of those versions when they occur. If you are rich, maintain a
couple of those baselines, maximum. The Flatpak people do not say otherwise
with their frameworks (except I think they deeply underestimate the
required framework scope).

And sure, every once in a while porting takes considerable effort, it
cannot be done instantaneously, devs are mobilized elsewhere, etc. That's
when you use targeted compat packages: to organise the porting effort,
to push the bits already ported while holding back the ones not ready yet,
and to *trace* this exception, to remind yourself that if you do not fix
it you're going to be in deep trouble and end up in old-code maintenance
hell.

Not porting is not an “optimization”. Not porting is pure unadulterated
technical debt. Porting to upstream API changes *is* cheaper
than freezing on an old version of upstream’s code you get to maintain
in upstream’s stead.

If you try to use modules as a general release mechanism, and not as
temporary compat mechanisms, you *will* hit this old-code maintenance
hell sooner than you think. Not a problem for RH, since old-code maintenance
is basically the reason people pay for RHEL, but a huge problem for Fedora.

Because bundling has never been a magic solution. It's only a magic
solution when you are the average dev who does not want to maintain
other people's code, nor adapt to changes in other people's code.

One bonus of bundling is removal of any kind of nagging that would
incite the dev to take a look at what happens upstream, so he can sleep
soundly at night.

But the real bundling perk is that, because container and static build
introspection tech is immature, you get to *not* *maintain* the code you
ship to users, with bosses, security auditors, etc being none the wiser.
Force any bundler dev to actually maintain all the code he ships, and I
can assure you, his love affair with bundling will end at once.

Regards,

--
Nicolas Mailhot
Jakub Cajka
2018-11-13 12:07:51 UTC
Permalink
Post by Nicolas Mailhot
[ ...full quote of Nicolas Mailhot's message above trimmed... ]
++ Awesome write-up of things that we have been facing in Go packaging for a few years already. I hope that with joint forces the Go SIG will be able to improve the situation.

JC
Vít Ondruch
2018-11-12 09:48:53 UTC
Permalink
Post by Stephen Gallagher
Post by Kevin Kofler
Post by Raphael Groner
Kevin,
Post by Kevin Kofler
* that no package may ever be module-only, but
modules can only be used for non-default
versions.
That statement doesn't make any sense for me. Can you explain, please? How
should modules live without packages in background? We'd already discussed
this in another thread.
I don't think you understood the sentence I wrote.
main repo: no package foo, no package libfoo (but many other packages)
module foo-1: foo-1.8.10, libfoo-1.8.12
module foo-2: foo-2.0.0, libfoo-2.0.1
but the "main repo: no package foo, no package libfoo" part is what I am
objecting to, especially if libfoo is used by more packages than just foo.
I want to require the main repo to contain some version of libfoo, and other
packages (from the main repo or from modules other than foo) should be
required to use the version in the main repo and not in some non-default
module.
This is literally the exact way things work today, except that instead
of "the main repo", we treat it as "the main repo OR the default
stream of the module".
Nothing in the main repo is permitted to use anything that is not
available in the main repo or a default module stream at runtime. Full
stop.
The case of Ursa Major is special: it's addressing the case where we
may have some *build-time* requirements that are not in the default
repo.
I might be missing something, but how do you want to enforce this ^^?
This sounds like, although the build succeeds, runtime might fail later
because of missing dependencies. This might not happen for Go, which you
used as an example, because it is statically linked, but it must be the
case for other, dynamically linked libraries.


V.
Post by Stephen Gallagher
All of their runtime requirements must still meet the above
criteria, but perhaps their build requires a too-new (or
old-and-more-stable) build-time requirement. In this case, it is far
easier on the packager to be able for them to be allowed to use that
other version to build.
Consider the Go case: we know that most Go packages will be statically
linked (issues with that are a different topic), so we know they will
work fine once built. However, if the application upstream cannot
build with the latest "stable" version because of
backwards-incompatible changes, it seems to me that it's perfectly
reasonable to allow them to use a slightly older Go compiler from a
module stream. Strictly speaking, this is no different from offering
an SCL or a compat-* package of the compiler, except that having it as
a module means that its executables and other tools will be in
standard locations, so the build process doesn't need to be patched to
locate them elsewhere.
Post by Kevin Kofler
Though I think that ideally, we would have only the main repo and pick one
version of foo to ship there instead of offloading this distribution job to
the user through arbitrarily-branched modules.
And if we lived in a proprietary world where we had dictatorial
control over what our users are allowed to install, that might work.
In 1998, this approach made sense. At that time, your two choices for
any software were "install a distro package" or "try to compile it
yourself". Upstream projects themselves used to strive to build
packages for the distros so they could make sure they reached users.
That is simply not how much of today's software works and on the rare
cases they *do* provide a distro package, it's usually for the distros
with a (real or perceived) long-term stability guarantee.
What we are doing is providing additional tools. If you do not wish to
use them to build your packages, don't! That's fine. For others, it's
a matter of putting a price on their time: is it worth spending an
extra two months hacking on a package in the name of ideological
purity, or is that two months better spent doing other work? The
Fedora of a few years ago would have *required* the former approach.
Fedora today is more welcoming.
Stephen Gallagher
2018-11-12 12:43:01 UTC
Permalink
Post by Vít Ondruch
Post by Stephen Gallagher
Post by Kevin Kofler
Post by Raphael Groner
Kevin,
Post by Kevin Kofler
* that no package may ever be module-only, but
modules can only be used for non-default
versions.
That statement doesn't make any sense for me. Can you explain, please? How
should modules live without packages in background? We'd already discussed
this in another thread.
I don't think you understood the sentence I wrote.
main repo: no package foo, no package libfoo (but many other packages)
module foo-1: foo-1.8.10, libfoo-1.8.12
module foo-2: foo-2.0.0, libfoo-2.0.1
but the "main repo: no package foo, no package libfoo" part is what I am
objecting to, especially if libfoo is used by more packages than just foo.
I want to require the main repo to contain some version of libfoo, and other
packages (from the main repo or from modules other than foo) should be
required to use the version in the main repo and not in some non-default
module.
This is literally the exact way things work today, except that instead
of "the main repo", we treat it as "the main repo OR the default
stream of the module".
Nothing in the main repo is permitted to use anything that is not
available in the main repo or a default module stream at runtime. Full
stop.
The case of Ursa Major is special: it's addressing the case where we
may have some *build-time* requirements that are not in the default
repo.
I might be missing something, but how do you want to enforce this ^^?
This sounds that although build succeeds, runtime might fail later,
because of missing dependencies. This might not happen for Go you used
as an example, because it is statically linked, but it must be the case
for other dynamically linked libraries.
Well, it *should* be enforced in Bodhi with the dependency-check test
(dist.rpmdeplint). It should see that the packages won't be
installable and once we get gating turned back on, it will enforce
that the package cannot go to stable.
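
For what it's worth, the same kind of check can be run by hand with the
rpmdeplint tool before submitting an update; a rough sketch (the repo URL is
a placeholder, see the rpmdeplint help for the exact options):

# Verify that a freshly built RPM is still installable against a repo
rpmdeplint check-sat \
    --repo fedora,https://example.org/fedora/29/Everything/x86_64/os/ \
    my-package-1.0-1.fc29.x86_64.rpm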
Vít Ondruch
2018-11-12 13:08:11 UTC
Permalink
Post by Stephen Gallagher
Post by Vít Ondruch
Post by Stephen Gallagher
Post by Kevin Kofler
Post by Raphael Groner
Kevin,
Post by Kevin Kofler
* that no package may ever be module-only, but
modules can only be used for non-default
versions.
That statement doesn't make any sense for me. Can you explain, please? How
should modules live without packages in background? We'd already discussed
this in another thread.
I don't think you understood the sentence I wrote.
main repo: no package foo, no package libfoo (but many other packages)
module foo-1: foo-1.8.10, libfoo-1.8.12
module foo-2: foo-2.0.0, libfoo-2.0.1
but the "main repo: no package foo, no package libfoo" part is what I am
objecting to, especially if libfoo is used by more packages than just foo.
I want to require the main repo to contain some version of libfoo, and other
packages (from the main repo or from modules other than foo) should be
required to use the version in the main repo and not in some non-default
module.
This is literally the exact way things work today, except that instead
of "the main repo", we treat it as "the main repo OR the default
stream of the module".
Nothing in the main repo is permitted to use anything that is not
available in the main repo or a default module stream at runtime. Full
stop.
The case of Ursa Major is special: it's addressing the case where we
may have some *build-time* requirements that are not in the default
repo.
I might be missing something, but how do you want to enforce this ^^?
This sounds that although build succeeds, runtime might fail later,
because of missing dependencies. This might not happen for Go you used
as an example, because it is statically linked, but it must be the case
for other dynamically linked libraries.
Well, it *should* be enforced in Bodhi
This rather important detail is not mentioned anywhere (at least a quick
grep for 'bodhi' and 'dep' over the two tickets from the initial email did
not reveal anything).
Post by Stephen Gallagher
with the dependency-check test
(dist.rpmdeplint). It should see that the packages won't be
installable and once we get gating turned back on, it will enforce
that the package cannot go to stable.
The dependency check is not blocking ATM, is it?


V.

Stephen Gallagher
2018-11-12 13:20:40 UTC
Permalink
Post by Vít Ondruch
Post by Stephen Gallagher
Post by Vít Ondruch
Post by Stephen Gallagher
Post by Kevin Kofler
Post by Raphael Groner
Kevin,
Post by Kevin Kofler
* that no package may ever be module-only, but
modules can only be used for non-default
versions.
That statement doesn't make any sense for me. Can you explain, please? How
should modules live without packages in background? We'd already discussed
this in another thread.
I don't think you understood the sentence I wrote.
main repo: no package foo, no package libfoo (but many other packages)
module foo-1: foo-1.8.10, libfoo-1.8.12
module foo-2: foo-2.0.0, libfoo-2.0.1
but the "main repo: no package foo, no package libfoo" part is what I am
objecting to, especially if libfoo is used by more packages than just foo.
I want to require the main repo to contain some version of libfoo, and other
packages (from the main repo or from modules other than foo) should be
required to use the version in the main repo and not in some non-default
module.
This is literally the exact way things work today, except that instead
of "the main repo", we treat it as "the main repo OR the default
stream of the module".
Nothing in the main repo is permitted to use anything that is not
available in the main repo or a default module stream at runtime. Full
stop.
The case of Ursa Major is special: it's addressing the case where we
may have some *build-time* requirements that are not in the default
repo.
I might be missing something, but how do you want to enforce this ^^?
This sounds that although build succeeds, runtime might fail later,
because of missing dependencies. This might not happen for Go you used
as an example, because it is statically linked, but it must be the case
for other dynamically linked libraries.
Well, it *should* be enforced in Bodhi
This rather important detail is not mentioned anywhere (at least quick
grep for 'bodhi' and 'dep' over the two tickets from initial email did
not revealed anything).
Post by Stephen Gallagher
with the dependency-check test
(dist.rpmdeplint). It should see that the packages won't be
installable and once we get gating turned back on, it will enforce
that the package cannot go to stable.
The dependency check is not blocking ATM, is it?
To quote myself "once we get gating turned back on, it will enforce
that the package cannot go to stable."

I'd prefer to assume that anyone who knows to request a build-time dep
would be sufficiently informed about their package to also know if it
would need that as a runtime dep and wouldn't blindly submit it,
though.
Neal Gompa
2018-11-12 14:04:09 UTC
Permalink
Post by Vít Ondruch
Post by Stephen Gallagher
Post by Vít Ondruch
Post by Stephen Gallagher
Post by Kevin Kofler
Post by Raphael Groner
Kevin,
Post by Kevin Kofler
* that no package may ever be module-only, but
modules can only be used for non-default
versions.
That statement doesn't make any sense for me. Can you explain, please? How
should modules live without packages in background? We'd already discussed
this in another thread.
I don't think you understood the sentence I wrote.
main repo: no package foo, no package libfoo (but many other packages)
module foo-1: foo-1.8.10, libfoo-1.8.12
module foo-2: foo-2.0.0, libfoo-2.0.1
but the "main repo: no package foo, no package libfoo" part is what I am
objecting to, especially if libfoo is used by more packages than just foo.
I want to require the main repo to contain some version of libfoo, and other
packages (from the main repo or from modules other than foo) should be
required to use the version in the main repo and not in some non-default
module.
This is literally the exact way things work today, except that instead
of "the main repo", we treat it as "the main repo OR the default
stream of the module".
Nothing in the main repo is permitted to use anything that is not
available in the main repo or a default module stream at runtime. Full
stop.
The case of Ursa Major is special: it's addressing the case where we
may have some *build-time* requirements that are not in the default
repo.
I might be missing something, but how do you want to enforce this ^^?
This sounds that although build succeeds, runtime might fail later,
because of missing dependencies. This might not happen for Go you used
as an example, because it is statically linked, but it must be the case
for other dynamically linked libraries.
Well, it *should* be enforced in Bodhi
This rather important detail is not mentioned anywhere (at least quick
grep for 'bodhi' and 'dep' over the two tickets from initial email did
not revealed anything).
Post by Stephen Gallagher
with the dependency-check test
(dist.rpmdeplint). It should see that the packages won't be
installable and once we get gating turned back on, it will enforce
that the package cannot go to stable.
The dependency check is not blocking ATM, is it?
It is not. Arguably, this check should be blocking across the board. I
personally would rather have this check earlier than Bodhi (mark
builds in Koji as failed if they aren't installable), but that appears
to be a thing we can't do.



--
真実はいつも一つ!/ Always, there's only one truth!
Randy Barlow
2018-11-12 15:58:27 UTC
Permalink
Post by Neal Gompa
It is not. Arguably, this check should be blocking across the board. I
personally would rather have this check earlier than Bodhi (mark
builds in Koji as failed if they aren't installable), but that
appears
to be a thing we can't do.
Sometimes builds depend on other builds, so this would not always be
possible. Bodhi is a good place to check things like this, because it
is the first time you have an opportunity to express "these builds ship
together".

It would be nice if there were a way to express this earlier, such as
Pagure PRs.
Neal Gompa
2018-11-12 17:12:13 UTC
Permalink
On Mon, Nov 12, 2018 at 11:50 AM Randy Barlow
Post by Randy Barlow
Post by Neal Gompa
It is not. Arguably, this check should be blocking across the board. I
personally would rather have this check earlier than Bodhi (mark
builds in Koji as failed if they aren't installable), but that appears
to be a thing we can't do.
Sometimes builds depend on other builds, so this would not always be
possible. Bodhi is a good place to check things like this, because it
is the first time you have an opportunity to express "these builds ship
together".
That's not actually possible. Builds depending on other builds already
fail today without Koji overrides being done first.
Post by Randy Barlow
It would be nice if there were a way to express this earlier, such as
Pagure PRs.
How would we do this? Scratch build IDs would need to be identified
somehow, and the builds would need to be captured for this use. This
is actually something I've been doing for packages I personally build
on COPR, so it's not particularly difficult to implement, provided we
can grab repository configs from koji for a scratch build and pull the
RPMs in and do the dep check and the install check.
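
Roughly something like this, I imagine (the task ID and chroot are
placeholders, and koji download-task is assumed to be available):

# Fetch the RPMs from a Koji scratch build and try installing them into
# a clean mock chroot; a failure here means the deps do not resolve.
koji download-task --arch=x86_64 --arch=noarch 12345678
mock -r fedora-29-x86_64 --init
mock -r fedora-29-x86_64 --install ./*.rpm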



--
真実はいつも一つ!/ Always, there's only one truth!
Randy Barlow
2018-11-12 15:56:17 UTC
Permalink
Post by Stephen Gallagher
It should see that the packages won't be
installable and once we get gating turned back on, it will enforce
that the package cannot go to stable.
It is now possible and encouraged to voluntarily opt-in to test gating
in Bodhi again:

https://docs.pagure.org/greenwave/package-specific-policies.html

The FESCo decision about gating was that we need some feedback from
packagers about the UI improvements that have been made before we turn
it back on distro-wide, so I encourage you to try it out and let us
know what you think!
Mat Booth
2018-11-09 11:21:46 UTC
Permalink
Post by Kevin Kofler
Post by Zbigniew Jędrzejewski-Szmek
This is not about forcing modules unto people. The drive comes from
the other direction: packages want to be available only as modules,
But that is exactly what I mean by "forcing modules onto people"!
If you want to keep non-module versions of packages around then you (or any
interested party) need to step up and help with the maintenance of them.

Someone said further up that it makes the Java SIG's life easier. These
days the Java SIG is pretty much one guy maintaining hundreds of packages:

https://lists.fedoraproject.org/archives/list/java-***@lists.fedoraproject.org/message/MQMRQVENBLDRS67WLNQ7EOCMSDI5WIET/

So if we want a Java stack in the distro at all and you are not willing or
able to lend a hand, then by all means let him maintain those packages in
the most efficient way he can.

It's not about forcing modules onto users, it's about not forcing more work
than necessary onto already overstretched maintainers.
--
Mat Booth
http://fedoraproject.org/get-fedora
Nicolas Mailhot
2018-11-09 12:34:17 UTC
Permalink
Post by Mat Booth
It's not about forcing modules onto users, it's about not forcing more
work than necessary onto already overstretched maintainers.
Then help finish
https://pagure.io/fesco/issue/2004
and
https://github.com/rpm-software-management/rpm/issues/104

That would have a lot more effect on maintainer productivity. Not
coincidentally, it started with a helper the Java SIG wrote and no one
ever bothered to integrate infra-side.

Regards,

--
Nicolas Mailhot
Miroslav Suchý
2018-11-13 16:50:34 UTC
Permalink
Post by Justin Forbes
It
is possible that some of this could be alleviated with a fairly simple
change to mock.
There is no need for a change in Mock. Mock has been able to consume modules for a long time. You can put something like this in the mock config:

# This is executed just before 'chroot_setup_cmd'.
config_opts['module_enable'] = ['list', 'of', 'modules']
config_opts['module_install'] = ['module1/profile', 'module2/profile']

This will enable and install the modules in the buildroot and make their RPMs available to the build.
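
With such a config in place, a local rebuild stays the usual invocation, for
example (the config name is just a placeholder):

mock -r fedora-29-x86_64-modular --rebuild foo-1.0-1.fc29.src.rpm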

Miroslav
