Discussion:
Proposed F18 feature: MiniDebugInfo
Alexander Larsson
2012-05-07 13:07:20 UTC
I just wrote a new Feature proposal for shipping minimal debug info by
default:
https://fedoraproject.org/wiki/Features/MiniDebugInfo

The feature page lists some of the background and statistics. It also
lists some options for how to implement it, all of which have various
pros and cons. I'd like to hear people's opinions on these.

My personal opinion is that we should go with compressed data in the
original files, without the line number information. This means we use
minimal space (i.e. an installation size increase of only 0.5%) while being
completely transparent to users. It does, however, make the normal
packages larger in a non-optional way, which some people disagree with.
Jan Kratochvil
2012-05-07 14:25:46 UTC
Post by Alexander Larsson
I just wrote a new Feature proposal for shipping minimal debug info by
https://fedoraproject.org/wiki/Features/MiniDebugInfo
The "several choices" list is missing the primary possibility: no debug info
needed on the client side at all, thanks to the already implemented
https://fedoraproject.org/wiki/Features/RetraceServer

Why not use the ABRT Retrace Server for the bug reports instead?

I am only aware that the upload of the core file may be slow, but that can be
solved (by gdbserver for core files, which was already implemented once). I do
not know whether it is a real problem or not; core files do not have to be large.


Regards,
Jan
Jakub Jelinek
2012-05-07 14:34:27 UTC
Post by Jan Kratochvil
Post by Alexander Larsson
I just wrote a new Feature proposal for shipping minimal debug info by
https://fedoraproject.org/wiki/Features/MiniDebugInfo
The "several choices" list is missing the primary possibility: no debug info
needed on the client side at all, thanks to the already implemented
https://fedoraproject.org/wiki/Features/RetraceServer
Why not use the ABRT Retrace Server for the bug reports instead?
I am only aware that the upload of the core file may be slow, but that can be
solved (by gdbserver for core files, which was already implemented once). I do
not know whether it is a real problem or not; core files do not have to be large.
For bug reporting, you don't need to upload core files. If all you want
is to augment backtraces with symbol info and perhaps line info, then
all you need to do is upload backtraces without symbol/line info,
supply the relevant build-ids for the addresses seen in the backtrace, and
let some server with access to the debuginfo packages finish the job.
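As a sketch, that server-side step could look like the following. The build-id
and address are hypothetical placeholders, and the script assumes Fedora's
/usr/lib/debug/.build-id layout plus elfutils' eu-addr2line; it only prints
the command it would run, since no real debuginfo is installed here:

```shell
#!/bin/sh -e
# Hypothetical values, as they would arrive in an uploaded raw backtrace.
build_id=b5381a457906d279073822a5ceb24c4bfef94ddb
addr=0x1234

# Fedora debuginfo packages install detached debug files indexed by
# build-id: the first two hex digits form a directory, the rest the name.
prefix=$(printf '%s' "$build_id" | cut -c1-2)
rest=$(printf '%s' "$build_id" | cut -c3-)
debugfile="/usr/lib/debug/.build-id/$prefix/$rest.debug"

# The server would then resolve symbol and line info with elfutils, e.g.:
echo "would run: eu-addr2line -f -e $debugfile $addr"
```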

Jakub
Jan Kratochvil
2012-05-07 15:25:29 UTC
Post by Jakub Jelinek
For bug reporting, you don't need to upload core files. If all you want
is to augment backtraces with symbol info and perhaps line info, then
all you need to do is upload backtraces without symbol/line info,
supply the relevant build-ids for the addresses seen in the backtrace, and
let some server with access to the debuginfo packages finish the job.
This will lose a lot of info: local variables, function parameters, even
inlined functions, etc.


Thanks,
Jan
Jakub Jelinek
2012-05-07 15:29:12 UTC
Post by Jan Kratochvil
Post by Jakub Jelinek
For bug reporting, you don't need to upload core files. If all you want
is to augment backtraces with symbol info and perhaps line info, then
all you need to do is upload backtraces without symbol/line info,
supply the relevant build-ids for the addresses seen in the backtrace, and
let some server with access to the debuginfo packages finish the job.
This will lose a lot of info: local variables, function parameters, even
inlined functions, etc.
Lose info compared to what? If debuginfo isn't installed on the client
side and only this minidebuginfo is there, then that info wouldn't be
provided either, so just adding the symbol (and/or line info) on the server
shouldn't make much difference, unless the backtracer uses symbol or line
info internally for heuristics etc. (but we really should be emitting
async unwind info anyway already).

Jakub
Jan Kratochvil
2012-05-07 15:36:41 UTC
Post by Jakub Jelinek
Post by Jan Kratochvil
Post by Jakub Jelinek
For bug reporting, you don't need to upload core files. If all you want
is to augment backtraces with symbol info and perhaps line info, then
all you need to do is upload backtraces without symbol/line info,
supply the relevant build-ids for the addresses seen in the backtrace, and
let some server with access to the debuginfo packages finish the job.
This will lose a lot of info: local variables, function parameters, even
inlined functions, etc.
Lose info compared to what?
To ABRT Retrace Server.


Thanks,
Jan
Alexander Larsson
2012-05-07 15:15:17 UTC
Post by Jan Kratochvil
Post by Alexander Larsson
I just wrote a new Feature proposal for shipping minimal debug info by
https://fedoraproject.org/wiki/Features/MiniDebugInfo
The "several choices" list is missing the primary possibility: no debug info
needed on the client side at all, thanks to the already implemented
https://fedoraproject.org/wiki/Features/RetraceServer
Why not use the ABRT Retrace Server for the bug reports instead?
I am only aware that the upload of the core file may be slow, but that can be
solved (by gdbserver for core files, which was already implemented once). I do
not know whether it is a real problem or not; core files do not have to be large.
Well, it's not listed as an option because that would mean there is no
feature to implement at all.

However, I don't think a retrace server is always what you want. Retrace
servers have several disadvantages:

* They don't work offline, or before/after the network is up
* There are privacy issues with sending the users' coredumps to some
server on the internet
* They don't work for site-local packages, or scratch builds of Fedora
packages.
* They require some server to store every build of every Fedora package
forever, and to sync new builds from the build system there.
* If some organization doesn't want to send reports to the Fedora
servers, they need to duplicate all debuginfo packages on their
retrace server
* They only work for ABRT, not if you're e.g. debugging something
locally, or a user is reporting a backtrace with gdb
* They can only be used for crash reporting, not e.g. tracing
or profiling
* It's problematic to use a retrace server during early boot, or e.g. in
non-session apps like a daemon

I think retrace servers are interesting, because when applicable they do
allow you to get a higher quality backtrace, with full debug info.
However, I think we should *always* have a baseline backtrace with at
least function names, which is there when retracing isn't.

This is very similar to the backtraces shown on kernel oopses: they are
low-quality backtraces, generated at the time of the oops, not later. They
get you the most out of the machine at a time when the machine is
otherwise already pretty useless. We want that for userspace too,
regardless of whether it is early boot, late shutdown or any other state
of the system.

So, I don't think the existence of retracing servers is contrary to
having minidebuginfo.
Jan Kratochvil
2012-05-07 15:36:07 UTC
Post by Alexander Larsson
* They don't work offline, or before/after the network is up
+
Post by Alexander Larsson
* It's problematic to use a retrace server during early boot, or e.g. in
non-session apps like a daemon
/var/spool/abrt/ stores them for later GUI upload; it already works this way.
Post by Alexander Larsson
* There are privacy issues with sending the users' coredumps to some
server on the internet
As the whole of Fedora is built by the Fedora Project and the Retrace Server
is also run by the Fedora Project, this is a non-issue. A trojan uploading
any private info could already have been inserted into the shipped binaries.

That said, I agree it is a valid point.

But the people concerned about this level of security are so few that they
can afford downloading the full debuginfos (like I do).
Post by Alexander Larsson
* They don't work for site-local packages, or scratch builds of Fedora
packages.
With locally built packages I believe the person is already a developer with
the full debuginfos installed.

But I find this a valid use case missed by the Retrace Server.
Post by Alexander Larsson
* They require some server to store every build of every Fedora package
forever, and to sync new builds from the build system there.
This is already done anyway; it is already provided.
Post by Alexander Larsson
* If some organization doesn't want to send reports to the Fedora
servers, they need to duplicate all debuginfo packages on their
retrace server
Not so valid a point; I believe the current Retrace Server does not, but it
could also download packages on demand.

This is more a problem of the existing Fedora infrastructure not storing
old builds, so there is a need for persistent storage of everything on the
Retrace Server primarily for that reason anyway.
Post by Alexander Larsson
* They only work for ABRT, not if you're e.g. debugging something
locally, or a user is reporting a backtrace with gdb
+
Post by Alexander Larsson
* They can only be used for crash reporting, not e.g. tracing
or profiling
Yes, I target only the ABRT use case. We can talk about non-ABRT use cases
elsewhere.
Post by Alexander Larsson
I think retrace servers are interesting, because when applicable they do
allow you to get a higher quality backtrace, with full debug info.
However, I think we should *always* have a baseline backtrace with at
least function names, which is there when retracing isn't.
I do not find many reasons why "retracing isn't". If it really is not, we
should rather fix that.
Post by Alexander Larsson
So, I don't think the existence of retracing servers is contrary to
having minidebuginfo.
I agree, but I see these minidebuginfo use cases (where the ABRT Retrace
Server is not applicable) as very limited (*).

(*) and IMHO therefore not worth the distro size increase.


Thanks,
Jan
Bruno Wolff III
2012-05-09 18:51:30 UTC
On Mon, May 07, 2012 at 17:36:07 +0200,
Post by Jan Kratochvil
Post by Alexander Larsson
* There are privacy issues with sending the users' coredumps to some
server on the internet
As the whole of Fedora is built by the Fedora Project and the Retrace Server
is also run by the Fedora Project, this is a non-issue. A trojan uploading
any private info could already have been inserted into the shipped binaries.
While this applies for some scenarios, it doesn't cover everything. Once
the data is on those servers it is vulnerable to things (e.g. subpoenas,
mistakes in configuration) at that location that it otherwise wouldn't be.
Lennart Poettering
2012-05-07 15:40:18 UTC
Post by Jan Kratochvil
Post by Alexander Larsson
I just wrote a new Feature proposal for shipping minimal debug info by
https://fedoraproject.org/wiki/Features/MiniDebugInfo
The "several choices" list is missing the primary possibility: no debug info
needed on the client side at all, thanks to the already implemented
https://fedoraproject.org/wiki/Features/RetraceServer
Why not use the ABRT Retrace Server for the bug reports instead?
I am only aware that the upload of the core file may be slow, but that can be
solved (by gdbserver for core files, which was already implemented once). I do
not know whether it is a real problem or not; core files do not have to be large.
For a start, it's simply a scalability issue. Generating useful backtraces is
not something that is only and exclusively done when reporting a
bug interactively; it is something that should be done automatically,
without user input, on the individual machine, and should just be there,
without having to keep coredumps, install abrt or anything. The same
way kernel oops backtraces are always shown inline with other
loggable information and are just there, we should log process
backtraces in userspace too. Always, and on all systems, regardless of
what state they are in. And currently we can't do this.

Having a centralized service that is bombarded every time any user of
Fedora anywhere in the world runs into a coredump is just not a sensible
way to design a system. You don't want to centralize the dispatching
of something that can happen a million times a second all around the
world.

Right now, it is easier to make sense of a kernel oops without any
special tools than it is to make sense of a userspace segfault. And
that's something that needs fixing, and it's what Alex is helping us do.

Lennart
--
Lennart Poettering - Red Hat, Inc.
Jan Kratochvil
2012-05-07 15:50:56 UTC
Post by Lennart Poettering
on the individual machine,
There is no backing reason for this requirement. It does not matter where.
Post by Lennart Poettering
without having to keep coredumps,
Core dumps currently always have to be stored briefly on disk anyway, even
when using minidebuginfo. Backtracing a crashed program without dumping its
core would be a different project.
Post by Lennart Poettering
And currently we can't do this.
Unfortunately this is not possible with all your requirements due to:

* Even the optimal full debuginfo size (Jakub's dwz) is still only IIRC ~50%
of the current debuginfo size, which is still not suitable to install for
every package on every machine.

* Opinions on this item differ significantly, but minidebuginfo-only
backtraces are in many (IMO most) cases not usable for problem analysis.
Post by Lennart Poettering
Having a centralized service that is bombarded every time any user of
Fedora anywhere in the world runs into a coredump
There are some efforts from the ABRT team to discard duplicate crashes
already before uploading them.
Post by Lennart Poettering
You don't want to centralize dispatching of something that can happen
a million times a second all around the world.
Unique crashes do not happen so often.
Post by Lennart Poettering
Right now, it is easier to make sense of a kernel oops without any
special tools,
The kernel is a very specific package, and its behavior and coding style
cannot be automatically generalized to other packages.
Post by Lennart Poettering
And that's something that needs fixing, and what Alex helps us to do.
I agree we should fix it but I believe there are better options to do so.

Primarily, the ABRT Retrace Server is already deployed, and I still have not
heard why not to use its significantly better full-debuginfo backtraces
instead of these poor symtab-only backtraces.


Thanks,
Jan
Colin Walters
2012-05-07 16:10:17 UTC
Post by Jan Kratochvil
* Opinions on this item differ significantly, but minidebuginfo-only
backtraces are in many (IMO most) cases not usable for problem analysis.
My experience is otherwise; just look at how the kernel works in
practice. People often post stack traces to the list, and it's
certainly not uncommon for problems to be fixed just based on this data.

The key phrase here is "many (IMO most) cases", which is YOUR experience;
but it's important to at least consider that your experience isn't
necessarily representative of everyone's. When was the last time you had
to diagnose a crash in gnome-shell, with our 100+ DSOs loaded?

There's a world of difference between "no stack trace" and "basic stack
trace".
Jan Kratochvil
2012-05-07 17:50:56 UTC
Post by Colin Walters
My experience is otherwise; just look at how the kernel works in
practice. People often post stack traces to the list, and it's
certainly not uncommon for problems to be fixed just based on this data.
The Linux kernel is not generalizable to all packages, as I already wrote.
Post by Colin Walters
but it's important to at least consider that your experience isn't
necessarily representative of everyone.
I agree, therefore I hope FESCo will consider this feature objectively.
Post by Colin Walters
When was the last time you had
to diagnose a crash in gnome-shell with with our 100+ DSOs loaded?
firefox, *office, from several times a month to several times a year.
3.4G /usr/lib/debug/
Post by Colin Walters
There's a world of difference between "no stack trace" and "basic stack
trace".
I repeat: "no stack trace" should never happen, and I believe it does not
happen. At least it never happens for me. I never prefer "no stack trace" to
anything. If "no stack trace" ever happens, it should be bug-reported and the
ABRT Retrace Server should be fixed. It is there for proper backtraces.


Thanks,
Jan
Lennart Poettering
2012-05-07 20:16:02 UTC
Post by Jan Kratochvil
Post by Lennart Poettering
on the individual machine,
There is no backing reason for this requirement. It does not matter where.
That's my requirement, and actually that of many others, too.

Everybody who builds OSes or appliances, and who needs to supervise a
large number of systems and hosts, wants stack traces that just work
and don't require network access or any complex infrastructure.
Post by Jan Kratochvil
Post by Lennart Poettering
without having to keep coredumps,
Core dumps currently always have to be stored briefly on disk anyway, even
when using minidebuginfo. Backtracing a crashed program without dumping its
core would be a different project.
Temporarily and briefly storing things on disk is not a problem. Whether
something is in a temporary file or in memory is pretty much an
implementation detail. What matters is that we don't have to keep them
all the time.
Post by Jan Kratochvil
Post by Lennart Poettering
And currently we can't do this.
* Even the optimal full debuginfo size (Jakub's dwz) is still only IIRC ~50%
of the current debuginfo size, which is still not suitable to install for
every package on every machine.
We don't need the full backtrace in all cases. There's a lot of room
between "no backtrace" and "best backtrace ever". For client-side
backtraces, the "low quality" Alex suggests is perfectly OK. It's
a reasonable tradeoff between disk usage and usefulness.
Post by Jan Kratochvil
* Opinions to this item significantly differ but minidebuginfo-only
backtraces are in many (IMO most) cases not usable for problem analysis.
Well, that's somewhere where we simply don't agree. The kernel folks only
have these low-quality backtraces, and they are mostly OK with it and never
asked for more. At least I couldn't find any mention on Google of people
complaining loudly that the kernel backtraces were too limited.

Also note that a couple of projects over the years have been patched to do
in-process backtraces with backtrace(3). For example D-Bus did. The fact
that people did that makes clear that people want client side backtraces
that just work (even though these are not too useful, since they only
use exported symbol names, instead of real debuginfo).

The Intel folks are actually leaving full debug info around for their
mobile distros, because they want client side backtraces that just
work, and their systems are big enough to not care too much about the
extra waste.

I mean, people all around us go for client side backtraces, and it
has shown itself to be a valuable tool at very little cost (if done properly
the way Alex suggests).
Post by Jan Kratochvil
Post by Lennart Poettering
You don't want to centralize dispatching of something that can happen
a million times a second all around the world.
Unique crashes do not happen so often.
Well, but how do you figure out that a crash is unique? You extract a
backtrace in some form and match it against some central database of
some form. That's kinda hard to do without, well, centralization.

And anyway: so I want my backtrace resolved, and by your suggestions I'd
hence have to talk to your server. But the server then tells me: nah, no
can do, yours has been seen before, go away! That makes little sense. I
just wanted my backtrace resolved, nothing else.

I mean, there are certain things that should just work, without any
complex centralized infrastructure, without having Fedora even know
about it. And one of those things is getting a frickin' stacktrace for
the crashes on your systems.
Post by Jan Kratochvil
Post by Lennart Poettering
Right now, it is easier to make sense of a kernel oops without any
special tools,
The kernel is a very specific package, and its behavior and coding style
cannot be automatically generalized to other packages.
The kernel is a C program, with a stack and symbols, and is in this
regard very much the same as any other C program.

I mean, I can tell you: I want client-side backtraces that just work,
and I know a lot of people and companies who also do (see two examples
above). You tell me: "nah, nobody wants that". I mean, it's obvious that
you aren't right, are you?

Lennart
--
Lennart Poettering - Red Hat, Inc.
Jan Kratochvil
2012-05-07 21:02:01 UTC
Post by Lennart Poettering
Everybody who builds OSes or appliances, and who needs to supervise a
large number of systems and hosts, wants stack traces that just work
and don't require network access or any complex infrastructure.
Yes, they work for me.
3.9G /usr/lib/debug/
People who build OSes or appliances can definitely afford several GBs of HDD.

The goal of minidebuginfo and/or the Retrace Server is to provide a good
enough service for regular users, who crash/bugreport only occasionally and
have no other use for the debuginfo files.
Post by Lennart Poettering
Temporarily and briefly storing things on disk is not a problem. Whether
something is in a temporary file or in memory is pretty much an
implementation detail. What matters is that we don't have to keep them
all the time.
Your objection was "without having to keep coredumps".

So we agree we need to keep it at least for several seconds.

For the ABRT Retrace Server we need to keep it at most for several minutes,
before it gets uploaded (either whole or in a gdbserver-optimized way).

I do not find seconds vs. minutes such a critical difference.
Post by Lennart Poettering
We don't need the full backtrace in all cases. There's a lot of room
between "no backtrace" and "best backtrace ever". For client-side
backtraces, the "low quality" Alex suggests is perfectly OK.
For whom and for which purposes is it "perfectly OK"? At least not for ABRT
backtraces. We should define the scope of usefulness for minidebuginfo.
If it is only for non-ABRT uses, I can stop complaining, as I do not know
about those.
Post by Lennart Poettering
Also note that a couple of projects over the years have been patched to do
in-process backtraces with backtrace(3). For example D-Bus did. The fact
that people did that makes clear that people want client side backtraces
that just work.
These people probably do not have an ABRT Retrace Server, so at least poor
solutions are sufficient for them. Fedora already has a better solution.

Fedora should improve, not degrade.
Post by Lennart Poettering
I mean, people all around us go for client side backtraces,
It is the simplest way to implement backtracing functionality. That does
not mean it is optimal (for users - performance; for developers
- backtrace quality).
Post by Lennart Poettering
Post by Jan Kratochvil
Unique crashes do not happen so often.
Well, but how do you figure out that a crash is unique? You extract a
backtrace in some form and match it against some central database of
some form. That's kinda hard to do without, well, centralization.
Yes, this is already being developed by the ABRT team. I do not welcome it,
as it will occasionally give wrong decisions, but if the Retrace Server farm
gets into some real capacity trouble, this solution is at least available.
Post by Lennart Poettering
And anyway: so I want my backtrace resolved, and by your suggestions I'd
hence have to talk to your server. But the server then tells me: nah, no
can do, yours has been seen before, go away!
Not "go away"; it says "here are the results, already backtraced before".

But here is a misunderstanding of the target user again. If you are
interested in the backtrace for reasons other than just an ABRT bugreport,
you are a developer. You can afford several GBs of /usr/lib/debug for the
high quality local backtraces.
Post by Lennart Poettering
I mean, there are certain things that should just work, without any
complex centralized infrastructure,
Yes, it is called /usr/lib/debug. But it should not be required for ABRT
bugreports. And ABRT bugreports require the infrastructure where the bug is
filed anyway. So without available infrastructure, neither bugreporting nor
backtracing makes sense for ABRT.
Post by Lennart Poettering
I mean, I can tell you: I want client-side backtraces that just work,
So why don't you just install 3-4GB of /usr/lib/debug locally, instead of
pushing a 2% distro size increase onto people who have no use for it, as
they are happy with the ABRT Retrace Server?


Thanks,
Jan
Lennart Poettering
2012-05-07 21:36:04 UTC
Post by Jan Kratochvil
Post by Lennart Poettering
Everybody who builds OSes or appliances, and who needs to supervise a
large number of systems and hosts, wants stack traces that just work
and don't require network access or any complex infrastructure.
Yes, they work for me.
3.9G /usr/lib/debug/
People who build OSes or appliances can definitely afford several GBs of HDD.
Some certainly can, not all want. And it's not just disk space, it's
also downloading all that data in the first place...

I mean, just think of this: you have a pool of workstations to
administer. They are all the same machines, with the same prepared OS
image. As the admin, you want to know about the backtraces. Now, since the
OS images are all the same, some errors will happen across all the machines
at the same time. With your logic this would either result in all
of them downloading a couple of GB of debuginfo for glibc and the like,
or all of them bombarding the retrace server, if they can.

But anyway, I don't think it's worth continuing this discussion, this is
a bit like a dialogue between two wet towels...
Post by Jan Kratochvil
Your objection was "without having to keep coredumps".
So we agree we need to keep it at least for several seconds.
For the ABRT Retrace Server we need to keep it at most for several minutes,
before it gets uploaded (either whole or in a gdbserver-optimized way).
Well, assuming that the network works, and I am connected to one, and I
am happy to pay for 3G data for it. And so on.

Here's another thing that you should think about: the stack of things
that need to work to get a remote retrace done properly is immense: you
need abrt working, you need NM working (and all the stuff it pulls in)
and you need your ISP working, and your cabling and everything
else. With Alex' work you need very very little working, just a small
unwinder. Full stop.
Post by Jan Kratochvil
Post by Lennart Poettering
Also note that a couple of projects over the years have been patched to do
in-process backtraces with backtrace(3). For example D-Bus did. The fact
that people did that makes clear that people want client side backtraces
that just work.
These people probably do not have an ABRT Retrace Server, so at least poor
solutions are sufficient for them. Fedora already has a better solution.
Fedora should improve, not degrade.
I am pretty sure I don't want my local developer machine always talking to
the Fedora server while I develop and something crashes. Jeez. I want to
hack on trains and on planes, and I want my data to stay private.
Post by Jan Kratochvil
Post by Lennart Poettering
Well, but how do you figure out that a crash is unique? You extract a
backtrace in some form and match it against some central database of
some form. That's kinda hard to do without, well, centralization.
Yes, this is already being developed by the ABRT team. I do not welcome it,
as it will occasionally give wrong decisions, but if the Retrace Server farm
gets into some real capacity trouble, this solution is at least available.
Look at this data from Mozilla:

https://crash-stats.mozilla.com/topcrasher/byversion/Firefox/12.0/1/all

For Firefox 12.0 alone they get 110726 crashes per day. That's one package,
and one version of it. Admittedly they have a much bigger user base than
us, but we have an entire distribution to care for. An *entire*
*distribution*. And so far this is all done for complete coredumps, not
the minidumps Mozilla uses. With Mozilla's stats this is already 77
crashes per minute. If abrt is ever to be usefully used, you'll
probably get into much higher ranges. That's a huge number of requests.

Sure, one can make the retrace server scale to this, for example by
being Google and hosting a datacenter just for this. But there's a much
smarter way: do client side backtraces and be done with it.
Post by Jan Kratochvil
Not "go away" but it says "here is the results already backtraced before".
But I want it for my data, not anybody else's!

Lennart
--
Lennart Poettering - Red Hat, Inc.
Jan Kratochvil
2012-05-07 21:54:38 UTC
Post by Lennart Poettering
I mean, just think of this: you have a pool of workstations to
administer. It's all the same machines, with the same prepared OS
image.
Then I would probably use a read-only /usr/lib/debug over NFS.
Post by Lennart Poettering
Now, with your logic this would either result in all
of them downloading
They would download it only once; such farms use an HTTP proxy.
Post by Lennart Poettering
a couple of GB of debuginfo for glibc and stuff like
that, or all of them bombarding the retrace server, if they can.
I would choose local debuginfo for such a specialized farm myself.
Post by Lennart Poettering
But anyway, I don't think it's worth continuing this discussion, this is
a bit like a dialogue between two wet towels...
I also do not think we can ever reach an agreement. I only wanted to post
here the opposite side of opinions on this formal feature request.
Post by Lennart Poettering
With Alex' work you need very very little working, just a small unwinder.
Yes, just an unwinder. Not a backtrace suitable for debugging the problem.
Post by Lennart Poettering
I am pretty sure I don't want my local developer machine always talk to
the fedora server
Again, as a developer you can afford several GBs of debuginfo.

I can't believe that - as a developer - you would really be satisfied
debugging all the system components with just the bare unwinder. Without
ever looking at any function parameters or local variables?

You would have to debug everything from the disassembly, wouldn't you? In
the last 10 years debugging has improved, and we can analyse at the source
level, no longer just reverse engineer the disassembly.
Post by Lennart Poettering
smarter way: do client side backtraces and be done with it.
And have just the stats without ever being able to fix it.


Regards,
Jan
Alexander Larsson
2012-05-08 06:09:04 UTC
Post by Jan Kratochvil
Post by Lennart Poettering
But anyway, I don't think it's worth continuing this discussion, this is
a bit like a dialogue between two wet towels...
I also do not think we can ever reach an agreement. I only wanted to post
here the opposite side of opinions on this formal feature request.
I think this is the one thing we can agree on.
Post by Jan Kratochvil
Post by Lennart Poettering
With Alex' work you need very very little working, just a small unwinder.
Yes, just an unwinder. Not backtrace for debugging the problem.
This is your opinion. I rarely need the full backtrace in a bug report,
because if you can get one, it's generally something that's easily
reproduced and I can just run it in gdb myself. When you need it is when
something weird is happening and you have to rely on the bug report only.
This is sometimes doable even without debug info; I even wrote a blog
post about this:

http://blogs.gnome.org/alexl/2005/08/26/the-art-of-decoding-backtraces-without-debug-info/

But having the full symbol names for all libraries and apps in all
backtraces I'll ever see in the future would help me immensely. Even if
it's "just an unwinder".
Post by Jan Kratochvil
Post by Lennart Poettering
I am pretty sure I don't want my local developer machine always talk to
the fedora server
Again, as a developer you can afford several GBs of debuginfo.
Not only developers are interested in backtraces, and not only on their
development machines. Administrators are too, and developers are
interested in backtraces from live systems in deployment etc. It just
makes more sense to have solid, reliable client-side backtraces.
Jakub Jelinek
2012-05-08 06:30:55 UTC
Permalink
Post by Alexander Larsson
This is your opinion. I rarely need the full backtrace in a bug report,
because it you can get one its generally something thats easily
reproduced and I can just run it in gdb myself. When you need it is when
something weird is happening and you have to rely on the bugreport only.
This is sometimes doable even without debug info, I even wrote a blog
http://blogs.gnome.org/alexl/2005/08/26/the-art-of-decoding-backtraces-without-debug-info/
But, having the full symbol names for all libraries and apps in all
backtraces I'll ever see in the future would help me immensely. Even if
its "just an unwinder".
But for that you really don't need the symtabs stored in the binaries/shared
libraries, you can just have the backtrace without symbols printed + print
relevant build-ids at the beginning, a script at any time can reconstruct
that into not just the symbol names, but also lineinfo. And the build-ids
will help even if you want to look at further details (.debug_info, source
files).
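In sketch form, such a reconstruction script could look like this - a minimal illustration with made-up names (not any existing tool's API), assuming the matching symbol tables have already been fetched by build-id:

```python
import bisect

def resolve_frames(frames, symtabs):
    """Resolve raw backtrace frames into symbol names offline.

    frames:  list of (build_id, address) pairs, as a crash handler could
             log them without any symbols attached.
    symtabs: dict mapping build_id -> list of (start_address, name)
             pairs sorted by address, recovered later from the matching
             debuginfo located by build-id.
    """
    resolved = []
    for build_id, addr in frames:
        table = symtabs.get(build_id)
        if not table:
            resolved.append((addr, "??"))
            continue
        starts = [start for start, _name in table]
        # Find the last symbol starting at or below addr.
        i = bisect.bisect_right(starts, addr) - 1
        if i < 0:
            resolved.append((addr, "??"))
        else:
            start, name = table[i]
            resolved.append((addr, "%s+0x%x" % (name, addr - start)))
    return resolved
```

Lineinfo resolution would work the same way, only against the line table instead of the symbol list.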

Jakub
Alexander Larsson
2012-05-08 06:34:57 UTC
Permalink
Post by Jakub Jelinek
Post by Alexander Larsson
This is your opinion. I rarely need the full backtrace in a bug report,
because it you can get one its generally something thats easily
reproduced and I can just run it in gdb myself. When you need it is when
something weird is happening and you have to rely on the bugreport only.
This is sometimes doable even without debug info, I even wrote a blog
http://blogs.gnome.org/alexl/2005/08/26/the-art-of-decoding-backtraces-without-debug-info/
But, having the full symbol names for all libraries and apps in all
backtraces I'll ever see in the future would help me immensely. Even if
its "just an unwinder".
But for that you really don't need the symtabs stored in the binaries/shared
libraries, you can just have the backtrace without symbols printed + print
relevant build-ids at the beginning, a script at any time can reconstruct
that into not just the symbol names, but also lineinfo. And the build-ids
will help even if you want to look at further details (.debug_info, source
files).
It's true that that is all the information you need from the
process/core. But you need to have the rest of the information available
*somewhere*, such as on a global retrace server or just having it
locally in the minidebuginfo. The latter is far more robust and simple.
It lets you directly get a reasonable backtrace given *only* the actual
binaries in the running process.
Jakub Jelinek
2012-05-08 06:41:17 UTC
Permalink
Post by Alexander Larsson
Its true that that is all the information you need from the
process/core. But you need to have the rest of the information availible
*somewhere*, such as on a global retrace server or just having it
Yes.
Post by Alexander Larsson
locally in the minidebuginfo. The later is far more robust and simple.
It lets you directly get a reasonable backtrace given *only* the actual
binaries in the running process.
What is far more robust and simple is something we simply have to agree to
disagree on.

Jakub
Jan Kratochvil
2012-05-08 08:02:47 UTC
Permalink
Post by Alexander Larsson
Its true that that is all the information you need from the
process/core. But you need to have the rest of the information availible
*somewhere*, such as on a global retrace server or just having it
locally in the minidebuginfo. The later is far more robust and simple.
It lets you directly get a reasonable backtrace given *only* the actual
binaries in the running process.
Also, with local crashes, the daemon/process may have been automatically
updated on disk in the meantime while the older binary is still running - and
then it crashes. Only a few packages restart the daemon on update (openssh
does); most packages do not (*).

In such a case of a stale running binary even local /usr/lib/debug is not
enough (and minidebuginfo certainly does not work either); with ABRT Retrace
Server it always just works.

The unavailability of infrastructure is a myth; people have moved from local
programs to Google services and there are no complaints about "unavailability".

A partial counterexample to my own argument: minidebuginfo could still work
for the main executable, as during the crash dump it is still readable as
/proc/PID/exe and could be extracted from there. But for .so libraries there
is no associated fd provided by the kernel, so in practice it is not
applicable anyway.
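For illustration: the staleness is visible on Linux because the kernel marks the /proc/PID/exe symlink of a replaced binary. A minimal sketch (the helper names are made up):

```python
import os

def looks_deleted(link_target):
    """On Linux, /proc/PID/exe points at the file the process was
    executed from; once that file has been replaced on disk, the
    kernel appends a ' (deleted)' suffix to the symlink target."""
    return link_target.endswith(" (deleted)")

def exe_is_stale(pid):
    """True if the process is running a binary that no longer exists
    on disk under its original name (e.g. replaced by an rpm update)."""
    return looks_deleted(os.readlink("/proc/%d/exe" % pid))
```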


Regards,
Jan


(*) OT: Is not restarting a daemon on its update a packaging bug or not?
Miloslav Trmač
2012-05-08 11:08:06 UTC
Permalink
On Mon, May 7, 2012 at 11:36 PM, Lennart Poettering
Post by Lennart Poettering
I mean, just think of this: you have a pool of workstations to
administer. It's all the same machines, with the same prepared OS
image. You want to know about the backtraces as admin. Now, since the OS
images are all the same some errors will happen across all the machines
at the same time. Now, with your logic this would either result in all
of them downloading a couple of GB of debuginfo for glibc and stuff like
that, or all of them bombarding the retrace server, if they can.
No, someone administering a pool of machines would also want to
collect the crash information centrally instead of running tools
manually on every machine in the pool - and it turns out ABRT was from
the start designed to support such data collection; all core files can
be configured to end up at a single analysis machine.

The analysis machine can either have debuginfo installed locally (if
the single OS image is always the same) and just run gdb with full
information, or there can be a company-wide retrace server - it's an
open source project as well. At no moment is a system administrator
of a large pool of machines forced to send data to Fedora
infrastructure if they don't want to.
Post by Lennart Poettering
But anyway, I don't think it's worth continuing this discussion, this is
a bit like a dialogue between two wet towels...
Let me try it one more time anyway; it seems to me that various use
cases were being mixed together in the reply quoting. There are about
three major different classes of users, depending on who is the "first
responder" to a particular crash (not on who will ultimately
want to review the information):
1) Developers of the software in question
2) Non-programming end-users.
3) System administrators who do not routinely deal with this
particular program, but may need to get as much data as possible.
and the discussion was mixing 1 and 2, and 2 and 3 fairly often.


My take:

1) Developers of the software in question: Bluntly, the ~1-100 users
in the whole world shouldn't matter in our discussion - if they are
even running the RPM, they can and probably will install complete
debuginfo, enable logging and do other non-default things to make
their job easier; The Fedora defaults don't matter that much for them,
and the mini debuginfo is not that useful either.

2) Non-programming end-users. _This_ is the case that we need to get
right by default. In many cases, a developer is lucky if the end
user ever sends any crash report, they often don't respond to
follow-up questions, and the problem does not have to be reproducible
at all. From such users we definitely want as full crash information
as possible (IOW, including the variable contents information) because
there won't be a second chance to get it. The mini debuginfo is
therefore irrelevant, we need to steer users to the retrace server (or
to attaching full core files to reports, which has much worse privacy
impact).

3) System administrators who do not routinely deal with this
particular program, but may need to get as much data as possible.
Now, this is _not_ the case that we need to get right by default -
although it would be nice if we could. And the question is what will
the system administrators do with the information?

3a) The system administrator will try to fully debug the crash,
perhaps even preparing a patch. In that case they need the full source
code, an understanding of the program, etc., and having to install full
debuginfo is really not too much to ask; mini debuginfo would be
marginally useful.

3b) The system administrator will only attempt to roughly understand
the problem (_this_ is what a typical kernel user does, e.g. "ah, SCSI
error handling is in the backtrace, so there must be something wrong
with the disk subsystem"). This is where mini debuginfo comes in
useful.


Can we agree on the above, at least that 1) and 2) are not noticeably
improved by mini debuginfo, and that 3b benefits from mini debuginfo?
(There may be disagreement about 3a, but I'm not inclined to worry
about it too much - it's fairly similar to 1) anyway).

If so, good - let's talk about whether we want the additional code
complexity and packaging complexity and space usage, to benefit 3b)
only. (I'd say that it's not something I would work on, and not
something FESCo should mandate that it must be supported, but if
someone else writes the code and either upstreams or the respective
Fedora package owners want to accept it, why not?).

BTW, the feature suggests mini debuginfo would be useful for userspace
tracing - AFAIK such uses, e.g. systemtap, use the variable location
information very extensively, and would thus not benefit from mini
debuginfo.
Mirek
Alexander Larsson
2012-05-08 13:03:28 UTC
Permalink
Post by Miloslav Trmač
1) Developers of the software in question: Bluntly, the ~1-100 users
in the whole world shouldn't matter in our discussion - if they are
even running the RPM, they can and probably will install complete
debuginfo, enable logging and do other non-default things to make
their job easier; The Fedora defaults don't matter that much for them,
and the mini debuginfo is not that useful either.
I generally agree with this. When I'm working on an app I generally have
custom builds of it and its dependencies. However, at some point down
the dependency chain I start relying on distro packages, and it would be
kind of nice to have some info for that when e.g. profiling or
something.
Post by Miloslav Trmač
2) Non-programming end-users. _This_ is the case that we need to get
right by default. In many cases, a developer is lucky if the end
user ever sends any crash report, they often don't respond to
follow-up questions, and the problem does not have to be reproducible
at all. From such users we definitely want as full crash information
as possible (IOW, including the variable contents information) because
there won't be a second change to get it. The mini debuginfo is
therefore irrelevant, we need to steer users to the retrace server (or
to attaching full core files to reports, which has much worse privacy
impact).
I agree that we need to get this right, and that it's the most important
problem. However, I don't agree with your reasoning. It's true that it
would be nice to have as much information as possible, and having the
retraced data available when it works is nice. However, the details of
retracing - having to show the full backtrace so you can ack it for
privacy issues, waiting for the retracing to happen, etc. - risk scaring
the user away, resulting in nothing being reported at all.

Take this post for instance:

https://plus.google.com/110933625728671692704/posts/iFXggK7Q8KJ

It shows exactly why this is a problem. We try to get more information,
but the result is less information.

A report based on the minidebuginfo already existing on the system will
give you a basic backtrace that is quite useful, and this should be
reportable with a single, fast operation just sending the data to the
server (as well as logging it to the system logs). The developer can
then do the retrace from that on the server side to get line numbers if
they are needed. We can also have an optional method of reporting that
gives the "full" stacktrace information, does the retracing over the
network and whatnot, but I don't think it's a good idea to do by default.
Post by Miloslav Trmač
BTW, the feature suggests mini debuginfo would be useful for userspace
tracing - AFAIK such uses, e.g. systemtap, use the variable location
information very extensively, and would thus not benefit from mini
debuginfo.
True.
Gerd Hoffmann
2012-05-08 13:10:28 UTC
Permalink
Post by Miloslav Trmač
On Mon, May 7, 2012 at 11:36 PM, Lennart Poettering
Post by Lennart Poettering
I mean, just think of this: you have a pool of workstations to
administer. It's all the same machines, with the same prepared OS
image. You want to know about the backtraces as admin. Now, since the OS
images are all the same some errors will happen across all the machines
at the same time. Now, with your logic this would either result in all
of them downloading a couple of GB of debuginfo for glibc and stuff like
that, or all of them bombarding the retrace server, if they can.
No, someone administering a pool of machines would also want to
collect the crash information centrally instead of running tools
manually on every machine in the pool
Who talks about running stuff manually? I'd expect we'll have some
service (abrt?) doing it automagically and sending the trace to syslog, so
that userspace traces end up in the logs like the kernel oopses do today.
Post by Miloslav Trmač
- and it turns out ABRT was from
the start designed to support such data collection; all core files can
be configured to end up at a single analysis machine.
The minidebuginfo traces can easily go to a central logserver too.
Post by Miloslav Trmač
1) Developers of the software in question: Bluntly, the ~1-100 users
in the whole world shouldn't matter in our discussion - if they are
even running the RPM, they can and probably will install complete
debuginfo, enable logging and do other non-default things to make
their job easier; The Fedora defaults don't matter that much for them,
and the mini debuginfo is not that useful either.
Depends. My internet link isn't exactly fast. For stuff I'm working on
I have the debuginfo packages locally mirrored / installed. For other
stuff I haven't, and it can easily take hours to fetch it. Having at
least a basic trace without delay has its value. Often this is enough
to track it down.

Or when debugging your own program (with full debuginfo) it is useful to
have at least the symbols of the libraries used in the trace too.
Post by Miloslav Trmač
2) Non-programming end-users. _This_ is the case that we need to get
right by default. In many cases, a developer is lucky if the end
user ever sends any crash report, they often don't respond to
follow-up questions, and the problem does not have to be reproducible
at all. From such users we definitely want as full crash information
as possible (IOW, including the variable contents information) because
there won't be a second change to get it. The mini debuginfo is
therefore irrelevant, we need to steer users to the retrace server (or
to attaching full core files to reports, which has much worse privacy
impact).
Wrong. From /me you don't get abrt reports at all, because abrt simply
is a pain with a slow internet link due to the tons of data it wants to
transmit. Also it doesn't say what it is going to do (download ?? MB
debuginfo / upload ?? MB core). And there is no progress bar. Ok, that
might have changed meanwhile; it's a while since I last tried.
Post by Miloslav Trmač
Can we agree on the above, at least that 1) and 2) are not noticeably
improved by mini debuginfo,
No.
Post by Miloslav Trmač
BTW, the feature suggests mini debuginfo would be useful for userspace
tracing - AFAIK such uses, e.g. systemtap, use the variable location
information very extensively, and would thus not benefit from mini
debuginfo.
How about 'perf top -p $pid'?

cheers,
Gerd
Peter Robinson
2012-05-07 17:55:53 UTC
Permalink
Post by Alexander Larsson
I just wrote a new Feature proposal for shipping minimal debug info by
https://fedoraproject.org/wiki/Features/MiniDebugInfo
The feature page lists some of the background and statistics. It also
lists some options in how to implement this, which all have various
different pros and cons. I'd like to hear what peoples opinions on these
are.
My personal opinion is that we should go with compressed data, in the
original files without the line number information. This means we use
minimal space (i.e. an installation increase by only 0.5%) while being
completely transparent to users. It does however make the normal
packages larger in a non-optional way which some people disagree with.
What sort of size impact are we talking about here? There are a lot of
devices that people are starting to use Fedora on, such as ARM devices
that don't have a lot of storage space. One of the most widely
deployed devices running Fedora, for example, is the OLPC XO-1, which
only has 1 GB of space, so every size increase is a hit, and Fedora
already has quite a large muffin top to deal with.

Peter
Alexander Larsson
2012-05-08 06:15:54 UTC
Permalink
Post by Peter Robinson
Post by Alexander Larsson
I just wrote a new Feature proposal for shipping minimal debug info by
https://fedoraproject.org/wiki/Features/MiniDebugInfo
The feature page lists some of the background and statistics. It also
lists some options in how to implement this, which all have various
different pros and cons. I'd like to hear what peoples opinions on these
are.
My personal opinion is that we should go with compressed data, in the
original files without the line number information. This means we use
minimal space (i.e. an installation increase by only 0.5%) while being
completely transparent to users. It does however make the normal
packages larger in a non-optional way which some people disagree with.
What sort of size impact are we talking about here, there's a lot of
devices that people are starting to use Fedora on such as ARM devices
that don't have a lot of storage space. One of the most widely
deployed devices running Fedora for example is the OLPC XO-1 which
only has 1gb of space so every size increase is a hit and Fedora is
already starting to have quite a large muffin top to deal with.
See the feature page for details on the space use. On my F17 desktop
install with an 8 gigabyte /usr it would add 43 megabytes of data.
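For context, those numbers work out to roughly the 0.5% figure from the feature page (a back-of-the-envelope check, using only the sizes cited above):

```python
# Overhead of minidebuginfo on the install cited above.
usr_size_mb = 8 * 1024      # ~8 GB /usr, as measured on the F17 desktop
minidebug_mb = 43           # added minidebuginfo data

overhead = 100.0 * minidebug_mb / usr_size_mb
print("%.2f%%" % overhead)  # prints "0.52%", i.e. about 0.5%
```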
Bill Nottingham
2012-05-07 20:24:15 UTC
Permalink
Post by Alexander Larsson
I just wrote a new Feature proposal for shipping minimal debug info by
https://fedoraproject.org/wiki/Features/MiniDebugInfo
The feature page lists some of the background and statistics. It also
lists some options in how to implement this, which all have various
different pros and cons. I'd like to hear what peoples opinions on these
are.
My personal opinion is that we should go with compressed data, in the
original files without the line number information. This means we use
minimal space (i.e. an installation increase by only 0.5%) while being
completely transparent to users. It does however make the normal
packages larger in a non-optional way which some people disagree with.
1) minidebuginfo.rpm is silly. Either it's small enough (and 0.5% is
certainly that, IMO) that it goes in the main package, or it's too big and
we should just do regular debuginfo packages.

2) "It will also make it easier to do things like system wide profiling,
userspace dynamic probes and casual debugging."

However, the Scope: is only gdb and rpm. Wouldn't said tools also need
changes? Would this be done in libdwarf, or similar?

3) You mention this being done in find-debuginfo.sh, via injection(?). Can
this be done automatically even for non-rpm-packaged code?

4) I disagree with the contention that this should all be done via the
retrace server. For the retrace server to work, you have to have
all of the following:

- all relevant binaries and DSOs built in Fedora
- all relevant binary and DSO information imported into the retrace server
- a working connection to fedoraproject.org
- sufficient bandwidth to transmit the core information
- retrace server capacity and availability

For this to provide a reasonable amount of information, all you need is:
- an unwinder

Simpler is usually better.

Bill
Jan Kratochvil
2012-05-07 20:44:51 UTC
Permalink
Post by Bill Nottingham
4) I disagree with the contention that this should all be done via the
retrace server. For the retrace server to work, you have to have
- all relevant binaries and DSOs built in Fedora
When we are considering Fedora Bugzilla bug reports this is valid.
Custom downloaded binaries will not have this compressed .symtab anyway.
Post by Bill Nottingham
- all relevant binary and DSO information imported into the retrace server
It is present there.
Post by Bill Nottingham
- a working connection to fedoraproject.org
- sufficient bandwidth to transmit the core information
This can be further optimized down to several KBs, if it is a concern.
Post by Bill Nottingham
- retrace server capacity and availability
AFAIK Retrace Server has not yet reached its capacity. If it does, Retrace
Server is easily replicable to an arbitrary number of machines.


You seem to think Retrace Server is impossible to implement.
Do you find its current functionality lacking somehow?
Post by Bill Nottingham
- an unwinder
The problem is .symtab is not sufficient information for a backtrace.

It depends on what you mean by the term 'backtrace'.


Thanks,
Jan
Alexander Larsson
2012-05-08 06:24:11 UTC
Permalink
Post by Jan Kratochvil
Post by Bill Nottingham
4) I disagree with the contention that this should all be done via the
retrace server. For the retrace server to work, you have to have
- all relevant binaries and DSOs built in Fedora
When we are considering Fedora Bugzilla bugreports then it is valid.
Custom downloaded binaries will not have this compressed-.symtab anyway.
Any rpm built by anyone with this feature will have this information in
it, be it a locally rebuilt Fedora rpm such as a scratch build or a
totally custom rpm. Just like we build debuginfo rpms for such rpms.
Post by Jan Kratochvil
Post by Bill Nottingham
- an unwinder
The problem is .symtab is not sufficient information for a backtrace.
You keep saying this, but I and several others think that having it is
sufficient for a great many things.
Alexander Larsson
2012-05-08 06:22:03 UTC
Permalink
Post by Bill Nottingham
Post by Alexander Larsson
I just wrote a new Feature proposal for shipping minimal debug info by
https://fedoraproject.org/wiki/Features/MiniDebugInfo
The feature page lists some of the background and statistics. It also
lists some options in how to implement this, which all have various
different pros and cons. I'd like to hear what peoples opinions on these
are.
My personal opinion is that we should go with compressed data, in the
original files without the line number information. This means we use
minimal space (i.e. an installation increase by only 0.5%) while being
completely transparent to users. It does however make the normal
packages larger in a non-optional way which some people disagree with.
1) minidebuginfo.rpm is silly. Either it's small enough (and 0.5% is
certainly that, IMO) that it goes in the main package, or it's too big and
we should just do regular debuginfo packages.
I completely agree.
Post by Bill Nottingham
2) "It will also make it easier to do things like system wide profiling,
userspace dynamic probes and casual debugging."
However, the Scope: is only gdb and rpm. Wouldn't said tools also need
changes? Would this be done in libdwarf, or similar?
I'm not sure what these tools use to unwind; I expect that we'd have to
implement it in libunwind too (added it to the deps) at the very least.
However, anything that already supports separate debug info should be
able to also load this with very little work, as it is very similar.
Post by Bill Nottingham
3) You mention this being done in find-debuginfo.sh, via injection(?). Is
this possible to be done automatically even for non-rpm-packaged code?
It surely is; the actual change is just a few lines of added shell code.

Basically, when you've separated out the "normal" separate debug info
you make a copy of it, then run some strip operations on the copy to
remove all but the minimal debug info, then you do:
xz $debuginfofile
objcopy --add-section .gnu_debugdata=$debuginfofile.xz $executable
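For illustration, the section those two commands embed can be read back with nothing but the Python standard library. A minimal sketch, assuming a 64-bit little-endian ELF with an ordinary section header table (the function name is made up):

```python
import lzma
import struct

def read_gnu_debugdata(path):
    """Extract and decompress the .gnu_debugdata section (the
    xz-compressed minimal debug info) from a 64-bit little-endian
    ELF file.  Returns the embedded ELF image, or None if absent."""
    with open(path, "rb") as f:
        data = f.read()
    assert data[:4] == b"\x7fELF", "not an ELF file"
    # ELF64 header: section header table offset at 0x28, then entry
    # size, entry count and section-name string table index at 0x3a.
    (e_shoff,) = struct.unpack_from("<Q", data, 0x28)
    e_shentsize, e_shnum, e_shstrndx = struct.unpack_from("<HHH", data, 0x3A)

    def section(i):
        # sh_name, sh_type, sh_flags, sh_addr, sh_offset, sh_size
        fields = struct.unpack_from("<IIQQQQ", data, e_shoff + i * e_shentsize)
        return fields[0], fields[4], fields[5]

    # The string table holding the section names.
    _, str_off, str_size = section(e_shstrndx)
    strtab = data[str_off:str_off + str_size]
    for i in range(e_shnum):
        name_off, sh_offset, sh_size = section(i)
        name = strtab[name_off:strtab.index(b"\x00", name_off)]
        if name == b".gnu_debugdata":
            return lzma.decompress(data[sh_offset:sh_offset + sh_size])
    return None
```

The point of the sketch is only that the data is plain xz behind an ordinary section header; a consumer like gdb would look up this section when no separate debuginfo is found.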
Post by Bill Nottingham
4) I disagree with the contention that this should all be done via the
retrace server.
- an unwinder
Simpler is usually better.
Agree.
Frank Ch. Eigler
2012-05-09 14:30:41 UTC
Permalink
Post by Bill Nottingham
[...]
2) "It will also make it easier to do things like system wide profiling,
userspace dynamic probes and casual debugging."
However, the Scope: is only gdb and rpm. Wouldn't said tools also need
changes? Would this be done in libdwarf, or similar?
[...]
Profiling configurations of systemtap (and perf) would definitely use
this extra local data if it were available. Ideally for stap's case,
support for these sections would likely reside in elfutils and not
require actual systemtap changes.

- FChE
Alexander Larsson
2012-05-08 14:37:31 UTC
Permalink
Post by Alexander Larsson
I just wrote a new Feature proposal for shipping minimal debug info by
https://fedoraproject.org/wiki/Features/MiniDebugInfo
The feature page lists some of the background and statistics. It also
lists some options in how to implement this, which all have various
different pros and cons. I'd like to hear what peoples opinions on these
are.
My personal opinion is that we should go with compressed data, in the
original files without the line number information. This means we use
minimal space (i.e. an installation increase by only 0.5%) while being
completely transparent to users. It does however make the normal
packages larger in a non-optional way which some people disagree with.
I'd like to point out that I'm not actually proposing that we remove the
full debug info or the ability to do stack unwinding on the server, as
some people seem to worry. This is really about increasing
the minimal quality of bug reports and debugging information.
Jan Kratochvil
2012-05-09 06:23:38 UTC
Permalink
Post by Alexander Larsson
https://plus.google.com/110933625728671692704/posts/iFXggK7Q8KJ
+
Post by Alexander Larsson
Wrong. From /me you don't get abrt reports at all, because abrt simply
is a pain with a slow internet link due to the tons of data it wants
transmit. Also it doesn't say what it is going to do (download ?? MB
debuginfo / upload ?? MB core). And there is no progress bar. Ok,
might have changed meanwhile, its a while back I tried last.
Great, these were the first useful posts in this thread.

Therefore IIUC the problem is ABRT is not good enough.

I was also told in the meantime ABRT Retrace Server is not the default
/ automatic option of ABRT, which is also wrong.

So we should restate this Feature as:

Because ABRT has not yet met its expectations we should provide at
least this temporary solution before ABRT gets fixed.

So you agree?


Thanks,
Jan
Alexander Larsson
2012-05-09 07:27:57 UTC
Permalink
Post by Jan Kratochvil
Because ABRT has not yet met its expectations we should provide at
least this temporary solution before ABRT gets fixed.
So you agree?
I agree that ABRT should be better, but I don't agree that having local
minimal debug info is only a temporary solution. Various problems that
ABRT has are inherent to its requiring a network connection and a
centralized server setup, which is not something you can fix. So, having
at least some level of quality in local backtraces will still be good even
if ABRT becomes better.
Jan Kratochvil
2012-05-09 07:41:00 UTC
Permalink
So, having at least some level of quality local backtraces will still be
good even if ABRT becomes better.
Some new option is always good.

The question is what should be the default. IMO ABRT Retrace Server should be
the default one (once its issues - such as the reported UI ones - are fixed).

Then remains the question of whether .symtab unwinds are worth the 0.5-2%
size cost; I think they are not, but in reality I do not mind at all.


Thanks,
Jan
Gerd Hoffmann
2012-05-09 08:35:16 UTC
Permalink
Post by Jan Kratochvil
Post by Alexander Larsson
https://plus.google.com/110933625728671692704/posts/iFXggK7Q8KJ
+
Post by Alexander Larsson
Wrong. From /me you don't get abrt reports at all, because abrt simply
is a pain with a slow internet link due to the tons of data it wants
transmit. Also it doesn't say what it is going to do (download ?? MB
debuginfo / upload ?? MB core). And there is no progress bar. Ok,
might have changed meanwhile, its a while back I tried last.
Great, these were the first useful posts in this thread.
Therefore IIUC the problem is ABRT is not good enough.
There is indeed room for improvement. It is one problem, but not the only one.
Post by Jan Kratochvil
Because ABRT has not yet met its expectations we should provide at
least this temporary solution before ABRT gets fixed.
No. There will always be cases where the current[1] abrt model fails:

Local trace generation requires downloading lots of debuginfo. Might
not work / work badly because:
(1) you are offline.
(2) your internet link is slow.
(3) you are on 3G and don't want to pay the volume.
(4) you don't have the disk space to store debuginfo.

Server-based trace generation requires uploading a potentially large
core file (which probably can be reduced using mozilla-like minidumps).
Bandwidth requirements aside there are still issues with that:
(1) works only when online.
(2) you might not want upload to the fedora server for privacy
or company policy reasons.
(3) private / company-wide retrace server needs extra effort (both
hardware and work time), you can't count on it being available.

cheers,
Gerd

[1] I expect abrt to support minidebuginfo traces too once the feature
is there.
Jan Kratochvil
2012-05-09 08:45:24 UTC
Permalink
Post by Gerd Hoffmann
Server-based trace generation requires uploading a potentially large
core file (which probably can be reduced using mozilla-like minidumps).
"mozilla-like minidumps" would bring us unusable backtraces due to other
reasons such as GDB Pretty Printers support for C++ classes.

For better speed of ABRT Retrace Server core files upload we should
re-implement
gdbserver for core files
https://fedorahosted.org/pipermail/crash-catcher/2010-December/001441.html

But so far I have no idea how / if / why not ABRT Retrace Server is being
used, or whether the core file upload speed is a real concern etc., so I did
not work more on the gdbserver core file access feature for ABRT Retrace
Server.
Post by Gerd Hoffmann
(1) works only when online.
(2) you might not want upload to the fedora server for privacy
or company policy reasons.
(3) private / company-wide retrace server needs extra effort (both
hardware and work time), you can't count on it being available.
These are all valid concerns, but the question is whether they are relevant
to the 99% of users who install Mozilla binary plugins and even Adobe Flash,
already losing any security anyway.


Thanks,
Jan
Jaroslav Reznik
2012-05-09 08:07:13 UTC
Permalink
----- Original Message -----
Post by Jan Kratochvil
So, having at least some level of quality local backtraces will
still be
good even if ABRT becomes better.
Some new option is always good.
Questionable is what should be the default. IMO ABRT Retrace Server
should be
the default one (if it has its issues - such as reported UI - fixed).
Yep, I second this - it's the right way to move the hassle of the
debug info stuff away from the client side and our users. The current
problem is more the unfinished ABRT (the UI is hardly usable; the retrace
server should be the default) and no other requirements should be put on
users. I like mitr's analysis - it covers most of the users. Back to
objective reasons - see below.
Post by Jan Kratochvil
Then remains the question whether .symtab-unwinds are worth the 0.5-2% size
cost, which I think they are not, but in reality I do not mind at all.
For us, as the KDE SIG taking care of the Fedora KDE offering, it's quite a
lot, as we've unfortunately been balancing on the CD capacity edge for
several releases (these 3.5 MB, and probably more, are quite a huge
problem...).

On the other hand, it can help DrKonqi to have at least some debug
information (and yeah, the debuginfo installer in DrKonqi is an ugly hack).
But as far as I know, the ABRT and DrKonqi guys are in touch regarding
the retrace server. So again, moving the debug side of things away from
users' machines.

R.
Post by Jan Kratochvil
Thanks,
Jan
--
devel mailing list
devel at lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
Miroslav Lichvar
2012-05-09 08:36:01 UTC
Permalink
Post by Alexander Larsson
I just wrote a new Feature proposal for shipping minimal debug info by
https://fedoraproject.org/wiki/Features/MiniDebugInfo
The feature page lists some of the background and statistics. It also
lists some options in how to implement this, which all have various
different pros and cons. I'd like to hear what peoples opinions on these
are.
What is the overall effect on the rpm size? On installation media
every percent counts, if it's close to 3%, that might be too much for
some spins.
--
Miroslav Lichvar
Alexander Larsson
2012-05-09 11:33:29 UTC
Permalink
Post by Miroslav Lichvar
Post by Alexander Larsson
I just wrote a new Feature proposal for shipping minimal debug info by
https://fedoraproject.org/wiki/Features/MiniDebugInfo
The feature page lists some of the background and statistics. It also
lists some options in how to implement this, which all have various
different pros and cons. I'd like to hear what peoples opinions on these
are.
What is the overall effect on the rpm size? On installation media
every percent counts, if it's close to 3%, that might be too much for
some spins.
The debuginfo is xz compressed, so it's unlikely to be compressed much
further by rpm. This means the absolute size measurements from the
feature page are likely to be about right. That would mean a 43Mb larger
footprint for all the packages installed on my system, which is a pretty
heavy F17 desktop install with KDE, GNOME and a lot of other
packages.

I don't know what that means in percent though, as I don't know how to
figure out the size of the rpms I have installed.
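Alexander's compression point is easy to sanity-check: xz'ing data that has
already been xz'd barely changes its size. A toy sketch in Python (the
payload is invented symbol-table-like text, not real debuginfo):

```python
import lzma

# Invented stand-in for minidebuginfo: highly compressible symbol-table-ish text.
payload = b"symbol_table_entry_" * 4096

once = lzma.compress(payload)    # what the feature ships: xz-compressed debug data
twice = lzma.compress(once)      # what rpm's payload compression would add on top

print(len(payload), len(once), len(twice))
# The second pass saves (almost) nothing - it can even grow slightly -
# so the absolute sizes from the feature page survive rpm compression.
```

This is why the on-disk numbers quoted above should carry over roughly
unchanged into the compressed rpms.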
Jan Kratochvil
2012-05-09 11:42:16 UTC
Permalink
Post by Alexander Larsson
That would mean a 43Mb larger
I guess you mean 43MB and not 5MB.


Jan
Jiri Moskovcak
2012-05-09 11:32:08 UTC
Permalink
Post by Alexander Larsson
I just wrote a new Feature proposal for shipping minimal debug info by
https://fedoraproject.org/wiki/Features/MiniDebugInfo
The feature page lists some of the background and statistics. It also
lists some options in how to implement this, which all have various
different pros and cons. I'd like to hear what peoples opinions on these
are.
My personal opinion is that we should go with compressed data, in the
original files without the line number information. This means we use
minimal space (i.e. an installation increase by only 0.5%) while being
completely transparent to users. It does however make the normal
packages larger in a non-optional way which some people disagree with.
There are so many questions and opinions in this thread that I can't
reply to all of them, so I'll try to sum it up here:

The privacy concerns: yes, uploading a core file might be dangerous, which is
why there is the other possibility of downloading the debuginfo and
generating the backtrace locally. But it seems many users trust us
and use the retrace server, as you can see from the stats:
http://retrace.fedoraproject.org/stats

Apart from that I see two questions here:
1. Whether to add the minidebuginfo in Fedora
2. Whether to use this stripped backtrace when reporting a bug.


For 1: The decision to use it or not should be based on some real-life
tests like "how it impacts the current gnome/kde live cd" or other
spins. If the additional payload is really small then I don't see a
problem here (but I'm glad the decision is not mine ;))

For 2: At this point (F18 timeframe) probably not. From the ABRT point of
view the minidebuginfo is not a problem at all if we can use gdb to generate
some backtrace from it. But what matters are the developers
who will need to deal with this stripped backtrace, and I can guarantee
that there will be many unhappy devels. The ABRT server
projects also rely on the coredumps:
http://git.fedorahosted.org/git?p=abrt.git;a=blob_plain;f=doc/project/abrt.pdf;hb=HEAD
And once put into life these server side projects will be a great help
in bugfixing.

As for the bandwidth limitations when using ABRT - I hope Lennart's core
stripping library might help here.

--Jirka
Jan Kratochvil
2012-05-09 11:41:15 UTC
Permalink
Post by Jiri Moskovcak
As for the bandwidth limitations when using ABRT - I hope Lennart's
core stripping library might help here.
But this degrades backtrace quality again as I have shown in:
https://fedorahosted.org/pipermail/crash-catcher/2010-September/000984.html

C++ classes no longer display any read/write strings/data.

If there is a real concern about core file upload speed and ABRT is going to
deploy the full-quality core file transfer via gdbserver as I have shown in:
gdbserver for core files
https://fedorahosted.org/pipermail/crash-catcher/2010-December/001441.html

I can reimplement it (from elfutils based gdbserver to FSF/bfd gdbserver).
But so far I have had no response to either question, so I did not spend time
on something with no demonstrated need and/or no users.


Thanks,
Jan
Alexander Larsson
2012-05-09 11:44:03 UTC
Permalink
Post by Jiri Moskovcak
1. Whether to add the minidebuginfo in Fedora
2. Whether to use this stripped backtrace when reporting a bug.
For 1: The decision to use it or not should be based on some real-life
tests like "how it impacts the current gnome/kde live cd" or other
spins. If the additional payload is really small then I don't see a
problem here (but I'm glad the decision is not mine ;))
That requires us to rebuild the entire distro to get the minidebuginfo
rpms. It's certainly doable, but some work. I can produce a patch to
rpm-build that does this, but I can't really do the rebuild stuff; that
would need help from someone on the build team.
Post by Jiri Moskovcak
For 2: At this point (F18 timeframe) probably not. From ABRT point of
view the minidebug is not a problem at all if we can use gdb to generate
some backtrace using the mindebuginfo. But what matters are developers
who will need to deal with this stripped backtrace and I can guarantee
that there will be many unhappy devels. And also the ABRT server
http://git.fedorahosted.org/git?p=abrt.git;a=blob_plain;f=doc/project/abrt.pdf;hb=HEAD
And once put into life these server side projects will be a great help
in bugfixing.
I'm not proposing that we drop the existing backtraces with full debug
info, but (apart from the other places where backtraces are also
useful) I'd like it if ABRT could somehow catch all the cases where
people abort a bug report because things are scary/slow/bad
network/whatever and at least report the low quality backtrace, which
should be very quick and require little work from the user.

I don't have a full design in mind, but I'm thinking that as soon as the
user acks that he wants to report the bug we would start by just
uploading the low quality backtrace, and *then* start retracing the bug
and show the user the backtrace with full data etc, asking them if it's
ok to submit the data. That way we get at least *something* for all
crashes, and perfect reports for users who go all the way.
Jakub Jelinek
2012-05-09 11:51:15 UTC
Permalink
Post by Alexander Larsson
I'm not proposing that we drop the existing backtraces with full debug
info, but (appart from the other places where backtraces are also
useful) I'd like it if ABRT could somehow catch all the cases where
people abort a bugreport because things are scary/slow/bad
network/whatever and at least report the low quality backtrace, which
should be very quick and require little work from the user.
But you don't need any kind of minidebuginfo for that first step;
you can make it even faster by just uploading the backtrace + build-ids,
and on the server side the rest - transforming that into a low-quality
backtrace - can be handled automatically, without further user
intervention, in case the user didn't go through with uploading the
high quality thing to the retrace server.

Jakub
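The server-side step Jakub describes - turning raw frame addresses plus
build-ids into a symbolic low-quality backtrace - can be sketched in a few
lines of Python. Everything here (the build-ids, symbol table, and addresses)
is invented toy data, not the actual retrace-server implementation:

```python
# Hypothetical server-side store: symbol tables keyed by build-id.
# The client ships only (build_id, address) pairs, no debuginfo at all.
SYMBOLS = {
    "abc123": [(0x1000, "main"), (0x1400, "parse_args"), (0x2000, "fatal")],
}

def resolve(build_id, addr):
    """Map an address to the last symbol starting at or before it."""
    table = SYMBOLS.get(build_id)
    if not table:
        return "??"          # unknown binary: frame stays unresolved
    name = "??"
    for start, sym in table:  # table is sorted by start address
        if start <= addr:
            name = sym
        else:
            break
    return name

frames = [("abc123", 0x1432), ("abc123", 0x100c)]
print([resolve(b, a) for b, a in frames])  # → ['parse_args', 'main']
```

The point of the sketch: none of this requires anything on the client beyond
the build-id, which every Fedora binary already carries.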
Jiri Moskovcak
2012-05-09 13:11:23 UTC
Permalink
Post by Jakub Jelinek
Post by Alexander Larsson
I'm not proposing that we drop the existing backtraces with full debug
info, but (appart from the other places where backtraces are also
useful) I'd like it if ABRT could somehow catch all the cases where
people abort a bugreport because things are scary/slow/bad
network/whatever and at least report the low quality backtrace, which
should be very quick and require little work from the user.
But you don't need any kind of minidebuginfo for that first step,
you can make it even faster by just uploading the backtrace + build-ids
and on the server side the rest of transforming that to a low-quality
backtrace can be handled automatically, without
further user intervention, in case the user didn't go through to uploading
the high quality thing from retrace server.
Jakub
- that's something we have right now (should be in F18): we have a
server which accepts something we call a microreport (kind of a backtrace
without debuginfo, plus build_id and some other information - yes, it's
similar to minidump). This is small enough to be uploaded on almost any
internet connection and contains enough information to find duplicates.

The workflow we would like to use is something like this:
1. ABRT detects a crash
2. User clicks report which sends a microreport (few kilobytes)
3. is it a dupe?
YES: send back response with the ticket url, increase the dupe counter
NO: ask user to upload the core or full backtrace

But I think that's a totally different use case from what minidebuginfo
is trying to solve.

From what I understand, the use case for minidebuginfo is when something
crashes on a machine where the full debuginfo is not available and for
some reason the machine is configured to not store coredumps, but we
still want to have something more in the log than just "process foo has
crashed".

--Jirka
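The microreport workflow above (detect crash, send a tiny report, branch on
duplicate) can be sketched as follows. The signature scheme, data layout and
URL are made up for illustration - this is not what ABRT actually does:

```python
import hashlib

known_reports = {}  # crash signature -> ticket URL (toy in-memory store)

def crash_signature(microreport):
    # Duplicate detection keyed on the symbolic frames only, so the same
    # crash reported from different machines hashes identically.
    frames = "\n".join(f"{f['build_id']}+{f['fn']}" for f in microreport["frames"])
    return hashlib.sha256(frames.encode()).hexdigest()

def handle_microreport(microreport):
    sig = crash_signature(microreport)
    if sig in known_reports:                       # step 3, YES branch:
        return ("duplicate", known_reports[sig])   # send back the ticket URL
    known_reports[sig] = f"https://bugs.example/{sig[:8]}"  # hypothetical URL
    return ("new", "please upload the core or full backtrace")  # NO branch

r = {"frames": [{"build_id": "abc123", "fn": "parse_args"},
                {"build_id": "abc123", "fn": "main"}]}
print(handle_microreport(r)[0])  # → new
print(handle_microreport(r)[0])  # → duplicate
```

A few kilobytes of frames plus build-ids is all the server needs for the
dupe check; the expensive core upload only happens on the NO branch.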
Kevin Kofler
2012-05-09 11:35:20 UTC
Permalink
Post by Alexander Larsson
The feature page lists some of the background and statistics. It also
lists some options in how to implement this, which all have various
different pros and cons. I'd like to hear what peoples opinions on these
are.
There is no room left on the KDE live image for installing any sort of
debugging information by default.
Post by Alexander Larsson
My personal opinion is that we should go with compressed data,
Compression does not help at all, because the live images are xz-compressed
and the maximum size is 700 MiB compressed. If you pre-compress the data, it
will likely only make the xz compression rate worse.
Post by Alexander Larsson
This means we use minimal space (i.e. an installation increase by only
0.5%)
Even that is too much, even more so if that's the compressed size!

Kevin Kofler
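Kevin's pre-compression concern can also be demonstrated: embedding
already-xz'd data in an image that is later xz'd as a whole costs more than
leaving that data raw. A toy comparison (invented data, not a real live
image):

```python
import lzma

base = b"live image contents " * 20000   # stand-in for the rest of the image
debug = b"debug symbol data " * 20000    # stand-in for the debug payload

# Debug data left raw, whole image compressed once:
image_raw = lzma.compress(base + debug)
# Debug data pre-xz'd (opaque to the outer pass), then the image compressed:
image_pre = lzma.compress(base + lzma.compress(debug))

print(len(image_raw), len(image_pre))
# image_pre comes out larger: the pre-compressed blob looks like random
# bytes to the outer xz pass, so the overall ratio gets worse.
```

This is the trade-off between per-file xz (good for installed-size) and
whole-image xz (good for live media): the two compression layers don't
compose for free.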
Frank Ch. Eigler
2012-05-09 14:22:38 UTC
Permalink
[...] There is no room left on the KDE live image for installing
any sort of debugging information by default. [...]
What are the live-image spins' plans as to management of future
growth? At what point, if ever, do they intend to abandon the CD-ROM
format limits?

- FChE
Rex Dieter
2012-05-09 14:32:11 UTC
Permalink
Post by Frank Ch. Eigler
[...] There is no room left on the KDE live image for installing
any sort of debugging information by default. [...]
What are the live-image spins' plans as to management of future
growth? At what point, if ever, do they intend to abandon the CD-ROM
format limits?
some folks seem to want that to happen asap (ie, for f18 or f19)

count me as 'some folk'

-- rex
Matthias Clasen
2012-05-09 15:45:03 UTC
Permalink
Post by Kevin Kofler
Post by Alexander Larsson
The feature page lists some of the background and statistics. It also
lists some options in how to implement this, which all have various
different pros and cons. I'd like to hear what peoples opinions on these
are.
There is no room left on the KDE live image for installing any sort of
debugging information by default.
We could easily drop some of less-than-half-complete translations to
make room for a bit of minidebuginfo. Last time I looked, translations,
fonts, etc made up upwards of 25% of the livecd. Or we could just drop
the obsolescent cdrom size limitation...
Adam Jackson
2012-05-09 15:57:28 UTC
Permalink
Post by Matthias Clasen
Post by Kevin Kofler
Post by Alexander Larsson
The feature page lists some of the background and statistics. It also
lists some options in how to implement this, which all have various
different pros and cons. I'd like to hear what peoples opinions on these
are.
There is no room left on the KDE live image for installing any sort of
debugging information by default.
We could easily drop some of less-than-half-complete translations to
make room for a bit of minidebuginfo. Last time I looked, translations,
fonts, etc made up upwards of 25% of the livecd. Or we could just drop
the obsolescent cdrom size limitation...
I know I've said this before, but: we should break the CD size barrier
precisely so people can't burn things to CDs. If you must burn to
optical media, do yourself a favor and burn a DVD, the reduced seek time
is entirely worth it.

1G fits on both the smallest MiniDVD format and most extant USB sticks.
Let's do it already.

- ajax
John Reiser
2012-05-09 18:20:43 UTC
Permalink
Post by Adam Jackson
I know I've said this before, but: we should break the CD size barrier
precisely so people can't burn things to CDs. If you must burn to
optical media, do yourself a favor and burn a DVD, the reduced seek time
is entirely worth it.
1G fits on both the smallest MiniDVD format and most extant USB sticks.
Let's do it already.
If so, then please acknowledge explicitly that Fedora would be discarding
some 4% of running, otherwise-capable machines (especially old laptops)
that can read only CD and not DVD, some 7% of working USB sticks that are
512MB or less, and some 5% of working boxes that cannot boot from USB.

Adam Jackson
2012-05-09 18:46:22 UTC
Permalink
Post by John Reiser
Post by Adam Jackson
I know I've said this before, but: we should break the CD size barrier
precisely so people can't burn things to CDs. If you must burn to
optical media, do yourself a favor and burn a DVD, the reduced seek time
is entirely worth it.
1G fits on both the smallest MiniDVD format and most extant USB sticks.
Let's do it already.
If so, then please acknowledge explicitly that Fedora would be discarding
some 4% of running, otherwise-capable machines (especially old laptops)
that can read only CD and not DVD, some 7% of working USB sticks that are
512MB or less, and some 5% of working boxes that cannot boot from USB.
Those are wonderful numbers. How ever did you arrive at them?

Also: Live image still not the only install method, hyperbole is not
necessary.

- ajax

John Reiser
2012-05-09 19:00:15 UTC
Permalink
Post by Adam Jackson
Post by John Reiser
If so, then please acknowledge explicitly that Fedora would be discarding
some 4% of running, otherwise-capable machines (especially old laptops)
that can read only CD and not DVD, some 7% of working USB sticks that are
512MB or less, and some 5% of working boxes that cannot boot from USB.
Those are wonderful numbers. How ever did you arrive at them?
They're from my own laboratory of 20 boxes and 15 USB sticks accumulated
slowly and semi-regularly over the last decade or so. That omits
6 really ancient boxes (>15 years old each) that have been discarded
along the way.

Adam Jackson
2012-05-09 19:02:33 UTC
Permalink
Post by John Reiser
Post by Adam Jackson
Post by John Reiser
If so, then please acknowledge explicitly that Fedora would be discarding
some 4% of running, otherwise-capable machines (especially old laptops)
that can read only CD and not DVD, some 7% of working USB sticks that are
512MB or less, and some 5% of working boxes that cannot boot from USB.
Those are wonderful numbers. How ever did you arrive at them?
They're from my own laboratory of 20 boxes and 15 USB sticks accumulated
slowly and semi-regularly over the last decade or so. That omits
6 really ancient boxes (>15 years old each) that have been discarded
along the way.
Forgive me for not considering that a representative sample.

- ajax
Chris Adams
2012-05-09 18:56:48 UTC
Permalink
Post by John Reiser
If so, then please acknowledge explicitly that Fedora would be discarding
some 4% of running, otherwise-capable machines (especially old laptops)
that can read only CD and not DVD, some 7% of working USB sticks that are
512MB or less, and some 5% of working boxes that cannot boot from USB.
[Citation needed]

Also: "some 7% of working USB sticks that are 512MB or less" - when have
any of the standard Live images _ever_ fit on a 512M media?
--
Chris Adams <cmadams at hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
Matej Cepl
2012-05-10 05:19:29 UTC
Permalink
Post by Chris Adams
Also: "some 7% of working USB sticks that are 512MB or less" - when have
any of the standard Live images _ever_ fit on a 512M media?
It of course depends on your definition of “standard”, but Tiny Core
Linux is less than 12MB ...
http://distro.ibiblio.org/tinycorelinux/welcome.html ;)

Matěj
Adam Williamson
2012-05-10 05:28:38 UTC
Permalink
Post by Matej Cepl
Post by Chris Adams
Also: "some 7% of working USB sticks that are 512MB or less" - when have
any of the standard Live images _ever_ fit on a 512M media?
It of course depends on your definition of “standard”, but Tiny Core
Linux is less than 12MB ...
http://distro.ibiblio.org/tinycorelinux/welcome.html ;)
I think he's talking about Fedora ones, and it's a reasonable point.
None of our main live images are under 512MB in size. So given that
no-one makes 700MB or 768MB USB sticks, by going from 700MB to 1GB in
size we would effectively lose precisely zero USB sticks.
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net
Dave Jones
2012-05-09 19:14:03 UTC
Permalink
Post by John Reiser
Post by Adam Jackson
1G fits on both the smallest MiniDVD format and most extant USB sticks.
Let's do it already.
If so, then please acknowledge explicitly that Fedora would be discarding
some 4% of running, otherwise-capable machines (especially old laptops)
that can read only CD and not DVD
As someone who frequently sees bugs that are attributed to old half-dead hardware
that belongs in a recycling center, I'll happily acknowledge anything that leaves
ancient junk behind.

I'd like to see us introduce more explicit cut-offs for support of older hardware.

Dave
Christoph Wickert
2012-05-11 14:42:56 UTC
Permalink
Post by Adam Jackson
Post by Matthias Clasen
Post by Kevin Kofler
Post by Alexander Larsson
The feature page lists some of the background and statistics. It also
lists some options in how to implement this, which all have various
different pros and cons. I'd like to hear what peoples opinions on these
are.
There is no room left on the KDE live image for installing any sort of
debugging information by default.
We could easily drop some of less-than-half-complete translations to
make room for a bit of minidebuginfo. Last time I looked, translations,
fonts, etc made up upwards of 25% of the livecd. Or we could just drop
the obsolescent cdrom size limitation...
I know I've said this before, but: we should break the CD size barrier
precisely so people can't burn things to CDs. If you must burn to
optical media, do yourself a favor and burn a DVD, the reduced seek time
is entirely worth it.
1G fits on both the smallest MiniDVD format and most extant USB sticks.
Let's do it already.
As an ambassador and former EMEA media wrangler I tend to agree.

Currently both EMEA and NA only do dual layer DVDs, both for live and
the installer. EMEA did separate installers for i386 and x86_64, but
after NA had no problems with exclusively providing dual layer, we
decided to do the same.

This being said I don't care how big we grow as long as we can still fit
all 4 desktops (GNOME, KDE, Xfce, LXDE) in 2 arches each on one multi
desktop live image. A dual layer DVD has a maximum capacity of 8.5 GB,
so fitting 8 x 1 GB is not a problem.

We might have to drop Sugar, but if only GNOME and KDE go for 1 GB and
Xfce and LXDE still target 700 MB or less, we should even be able to
keep it.

This being said I am +1 for 1 GB, but please note that I only speak for
myself or the NA and EMEA ambassadors.

Kind regards,
Christoph

Kevin Kofler
2012-05-10 07:14:39 UTC
Permalink
Post by Matthias Clasen
We could easily drop some of less-than-half-complete translations to
make room for a bit of minidebuginfo. Last time I looked, translations,
fonts, etc made up upwards of 25% of the livecd. Or we could just drop
the obsolescent cdrom size limitation...
There are (almost) no translations on the KDE spin. They're all in
kde-l10n-* packages which add up to almost the size of a CD on their own.

Kevin Kofler
Alexander Larsson
2012-05-09 21:33:35 UTC
Permalink
Post by Kevin Kofler
Post by Alexander Larsson
The feature page lists some of the background and statistics. It also
lists some options in how to implement this, which all have various
different pros and cons. I'd like to hear what peoples opinions on these
are.
There is no room left on the KDE live image for installing any sort of
debugging information by default.
It's not particularly hard to strip the debuginfo when constructing the
live image, although then installation from it will not really work, as
the rpms' checksums will be wrong.
Kevin Kofler
2012-05-10 07:20:38 UTC
Permalink
Post by Alexander Larsson
Its not particularly hard to strip the debuginfo when constructing the
live image, although then installation from it will not really work as
the rpms checksums will be wrong.
Indeed, that doesn't sound like a sane solution to me.

I'd rather we just don't add yet another size overhead to every package. Our
packages keep growing and growing even without that. A few KiB here, a few
KiB there, in many packages, adding up to a few MiB, and we keep running
into size issues with our live image at every single release. Size matters!

Kevin Kofler
drago01
2012-05-10 08:02:34 UTC
Permalink
Post by Kevin Kofler
Post by Alexander Larsson
Its not particularly hard to strip the debuginfo when constructing the
live image, although then installation from it will not really work as
the rpms checksums will be wrong.
Indeed, that doesn't sound like a sane solution to me.
Getting out of the CD cage is.
Post by Kevin Kofler
I'd rather we just don't add yet another size overhead to every package. Our
packages keep growing and growing even without that. A few KiB here, a few
KiB there, in many packages, adding up to a few MiB, and we keep running
into size issues with our live image at every single release. Size matters!
Not really, you are restricting yourself by the artificial CD size limit.
You don't have to use the full size of whatever bigger medium you
choose (DVD, 1 or 2GB stick) but you are currently providing a poorer
user experience because you insist on a medium from the last century.
Alexander Larsson
2012-05-10 08:52:27 UTC
Permalink
Post by drago01
Post by Kevin Kofler
I'd rather we just don't add yet another size overhead to every package. Our
packages keep growing and growing even without that. A few KiB here, a few
KiB there, in many packages, adding up to a few MiB, and we keep running
into size issues with our live image at every single release. Size matters!
Not really, you are restricting yourself by the artificial CD size limit.
You don't have to use the full size of whatever bigger medium you
choose (DVD, 1 or 2GB stick) but you are currently providing a poorer
user experience because you insist on a medium from the last century.
I agree. I think bumping the image size to 1GB and using
DVD/mini-DVD/USB stick is the sane way forward, since we consistently
run into the CD limit and are forced to make changes that negatively
affect the user experience in various ways.
Kevin Kofler
2012-05-10 10:08:47 UTC
Permalink
Post by drago01
Not really, you are restricting yourself by the artificial CD size limit.
You don't have to use the full size of whatever bigger medium you
choose (DVD, 1 or 2GB stick) but you are currently providing a poorer
user experience because you insist on a medium from the last century.
If every live image gets larger, that will also negatively affect the nice
Multi Desktop Live DVDs the Ambassadors are now mass-producing. Those
contain all our live CDs (all desktops in both 32-bit and 64-bit versions,
where the bitness is autodetected at boot, but can be manually overridden)
on one DVD, which is a great thing to hand out at events.

We really shouldn't bloat our images just because we can.

Downloading debugging information (and complete debugging information!) on
demand is really the best solution. Or use the retrace server if you'd
rather have a web service do the work for you.

Kevin Kofler
Adam Jackson
2012-05-10 14:56:31 UTC
Permalink
Post by Kevin Kofler
Post by drago01
Not really, you are restricting yourself by the artificial CD size limit.
You don't have to use the full size of whatever bigger medium you
choose (DVD, 1 or 2GB stick) but you are currently providing a poorer
user experience because you insist on a medium from the last century.
If every live image gets larger, that will also negatively affect the nice
Multi Desktop Live DVDs the Ambassadors are now mass-producing. Those
contain all our live CDs (all desktops in both 32-bit and 64-bit versions,
where the bitness is autodetected at boot, but can be manually overridden)
on one DVD, which is a great thing to hand out at events.
I am unable to find any ISOs of that media. It appears this is somewhat
intentional:

http://lists.fedoraproject.org/pipermail/devel/2011-June/152520.html

Therefore I have difficulty evaluating just how much impact this would
be. Do you have a link to the recipe for building such an image? I
suspect the incremental cost of each additional desktop environment
would be successively lower, but without data...

- ajax
Tom Callaway
2012-05-10 15:43:57 UTC
Permalink
Post by Adam Jackson
Post by Kevin Kofler
Post by drago01
Not really, you are restricting yourself by the artificial CD size limit.
You don't have to use the full size of whatever bigger medium you
choose (DVD, 1 or 2GB stick) but you are currently providing a poorer
user experience because you insist on a medium from the last century.
If every live image gets larger, that will also negatively affect the nice
Multi Desktop Live DVDs the Ambassadors are now mass-producing. Those
contain all our live CDs (all desktops in both 32-bit and 64-bit versions,
where the bitness is autodetected at boot, but can be manually overridden)
on one DVD, which is a great thing to hand out at events.
I am unable to find any ISOs of that media. It appears this is somewhat intentional:
http://lists.fedoraproject.org/pipermail/devel/2011-June/152520.html
Therefore I have difficulty evaluating just how much impact this would
be. Do you have a link to the recipe for building such an image? I
suspect the incremental cost of each additional desktop environment
would be successively lower, but without data...
http://fedoraproject.org/wiki/Multi_Boot_Media_SOP

Caveat: I wrote the tooling there. It does use the generated Desktop
Live ISOs as a base for making the large "super" ISO.

~tom

==
Fedora Project
Kevin Kofler
2012-05-10 22:36:38 UTC
Permalink
Post by Adam Jackson
Therefore I have difficulty evaluating just how much impact this would
be. Do you have a link to the recipe for building such an image? I
suspect the incremental cost of each additional desktop environment
would be successively lower, but without data...
The DVD is composed by gluing together the independent live images plus a
boot menu, so no, the cost will not be lower for each additional desktop
environment; the size is exactly the sum of the sizes of all the CD ISOs
(plus a negligible overhead).

Kevin Kofler
Gerd Hoffmann
2012-05-11 09:04:08 UTC
Permalink
Post by Kevin Kofler
Post by Adam Jackson
Therefore I have difficulty evaluating just how much impact this would
be. Do you have a link to the recipe for building such an image? I
suspect the incremental cost of each additional desktop environment
would be successively lower, but without data...
The DVD is composed by gluing together the independent live images plus a
boot menu, so no, the cost will not be lower for each additional desktop
environment, the size is exactly the sum of the sizes of all the CD ISOs
(plus a negligible overhead).
Sounds more useful to me to just have a single live image which has
multiple desktop environments included, so you don't have the common
bits multiple times on the DVD ...

cheers,
Gerd
Kevin Kofler
2012-05-11 10:11:17 UTC
Permalink
Post by Gerd Hoffmann
Sounds more useful to me to just have a single live image which has
multiple desktop environments included, so you don't have the common
bits multiple times at the dvd ...
That sounds nice in theory, but is just not practical:
* The per-desktop live images are what we develop and test. Nobody is
testing an "everything" live DVD (with all merged into one image, as opposed
to the Multi DVD we're doing now).
* The live images also have different display managers. An "everything" live
DVD would most likely end up with GDM, which makes it particularly hard to
select a non-GNOME session. (Many users complain about not finding the
session type selection in GDM.) Using GDM might also otherwise degrade the
experience for the non-GNOME desktops. (We try hard to make things like
shutdown, restart or user switching work for KDE Plasma sessions in GDM, but
upstream always lags behind the current GDM/ConsoleKit/systemd/... in support,
and it doesn't get the kind of testing KDM gets. I still don't know whether
user switching from KDE Plasma Desktop with GDM actually works now in F17,
and if it doesn't work, whether it's a bug in my code or in systemd/GDM/...)
* The menus would get crowded with up to 4 (or even 5 if Sugar activities
start registering in menus as well) applications for each task. And no,
OnlyShowIn is not a solution because some users WANT to use the "foreign"
apps. It's just that installing them all by default would lead to a poor
user experience.

I think the current system is a much better solution. It also allows doing
both CDs and the Multi DVD with the same development effort.

Kevin Kofler
Jaroslav Reznik
2012-05-09 20:07:19 UTC
Permalink
Post by Adam Jackson
I know I've said this before, but: we should break the CD size barrier
precisely so people can't burn things to CDs. If you must burn to
optical media, do yourself a favor and burn a DVD, the reduced seek time
is entirely worth it.
I'd like to break the CD limit too, but we should not forget there are users
for whom a CD is still the top technology they can dream of, and we have a
lot of these users in some countries... For me personally the CD is history,
even the DVD, and the same goes for a 1 GB flash drive. We can afford it.
But some people can't, and they are our users thanks to the ability to get
a cheap OS that can run on cheap HW and is still modern.

The question is: how many people will be affected? Or should we
provide some fallback option, a stripped-down CD-media-size image, and
make the bigger one the primary one?

R.
Post by Adam Jackson
1G fits on both the smallest MiniDVD format and most extant USB
sticks.
Let's do it already.
- ajax
--
devel mailing list
devel at lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
drago01
2012-05-09 20:33:47 UTC
Permalink
Post by Jaroslav Reznik
Post by Adam Jackson
I know I've said this before, but: we should break the CD size barrier
precisely so people can't burn things to CDs.  If you must burn to
optical media, do yourself a favor and burn a DVD, the reduced seek time
is entirely worth it.
I'd like to break CD limit too but we should not forgot there are users
for which CD is top technology from dreams and we have a lot of these
users among some countries...
Where are the numbers to back this nonsense up?
A DVD burner costs ~12 € ... and any computer that old isn't really
that capable of running fedora reasonably anyway.
Post by Jaroslav Reznik
For me personally CD is history, even
DVD, same 1 GB flash drive. We can afford it. But some people can't
and are our users thanks to the ability to get a cheap OS, that can
run on cheap HW and is still modern.
See above.
Post by Jaroslav Reznik
The question is - how many people will be affected? Or should we
provide some fallback option - stripped down CD media size image? And
make the bigger one primary one?
Well anyone can create a specialized spin for ancient hardware, but we
should not restrict ourselves because of ancient hardware.
Gerry Reno
2012-05-09 20:40:37 UTC
Permalink
If you watch, you can get DVD burners for about $15 USD.

eg: http://slickdeals.net/permadeal/62972/newegg-liteon-external-cddvd-burner-w-lightscribe-support

Or used for about $5-$10 at any flea market.
Post by drago01
Post by Jaroslav Reznik
Post by Adam Jackson
I know I've said this before, but: we should break the CD size barrier
precisely so people can't burn things to CDs. If you must burn to
optical media, do yourself a favor and burn a DVD, the reduced seek time
is entirely worth it.
I'd like to break CD limit too but we should not forgot there are users
for which CD is top technology from dreams and we have a lot of these
users among some countries...
Where are the numbers to back this nonsense up?
A DVD burner costs ~12 € ... and any computer that old isn't really
that capable of running fedora reasonably anyway.
Post by Jaroslav Reznik
For me personally CD is history, even
DVD, same 1 GB flash drive. We can afford it. But some people can't
and are our users thanks to the ability to get a cheap OS, that can
run on cheap HW and is still modern.
See above.
Post by Jaroslav Reznik
The question is - how many people will be affected? Or should we
provide some fallback option - stripped down CD media size image? And
make the bigger one primary one?
Well anyone can create a specialized spin for ancient hardware, but we
should not restrict ourselves because of ancient hardware.
John Reiser
2012-05-09 21:34:21 UTC
Permalink
Post by drago01
Post by Jaroslav Reznik
I'd like to break CD limit too but we should not forgot there are users
for which CD is top technology from dreams and we have a lot of these
users among some countries...
Where are the numbers to back this nonsense up?
A DVD burner costs ~12 € ... and any computer that old isn't really
that capable of running fedora reasonably anyway.
Such a claim is FALSE. My 700MHz PentiumIII with 384MB RAM runs Fedora 11
just fine. OpenOffice is eminently usable, for example. It's a 2001
laptop that has only CD-ROM and USB1.1, and the BIOS cannot boot from USB.
I have added USB2.0 via PCMCIA card, and somewhere around Fedora 12
could boot from external DVD via USB2.0 (via trampoline from the harddrive)
because the PCMCIA drivers for the bridge that enables the USB2.0 card
were in the initrd. But then the PCMCIA drivers were dropped from initrd,
so it no longer boots newer Fedora from DVD. Meanwhile deteriorating
support for RagePro graphics has nudged me back to Fedora 11. Fedora 11
is only 3 years old.

Gerry Reno
2012-05-09 21:39:12 UTC
Permalink
Post by John Reiser
Post by drago01
Post by Jaroslav Reznik
I'd like to break CD limit too but we should not forgot there are users
for which CD is top technology from dreams and we have a lot of these
users among some countries...
Where are the numbers to back this nonsense up?
A DVD burner costs ~12 € ... and any computer that old isn't really
that capable of running fedora reasonably anyway.
Such a claim is FALSE. My 700MHz PentiumIII with 384MB RAM runs Fedora 11
just fine. OpenOffice is eminently usable, for example. It's a 2001
laptop that has only CD-ROM and USB1.1, and the BIOS cannot boot from USB.
I have added USB2.0 via PCMCIA card, and somewhere around Fedora 12
could boot from external DVD via USB2.0 (via trampoline from the harddrive)
because the PCMCIA drivers for the bridge that enables the USB2.0 card
were in the initrd. But then the PCMCIA drivers were dropped from initrd,
so it no longer boots newer Fedora from DVD. Meanwhile deteriorating
support for RagePro graphics has nudged me back to Fedora 11. Fedora 11
is only 3 years old.
Just install over the network instead of staying stuck on Fedora 11.
Michael Cronenworth
2012-05-09 21:54:06 UTC
Permalink
Post by John Reiser
My 700MHz PentiumIII with 384MB RAM
If Fedora Live media is going to be held back due to your requirements
then I'm going to find myself a new distro to contribute to.

Yes, Fedora Live media should support a *reasonable* set of hardware.
Your hardware is no longer *reasonable*. It is time to move on. End of
discussion - as you will end up dragging this on until the horse is a
ghost (it is already a skeleton).

If the infrastructure team wants to increase default Live image sizes to
1GB then they should do it.

If you want to create your own custom Live image on that P3 you can
easily do so[1]. I'd expect it will take about a week to complete. It
takes my year-old Core i5 about 15 minutes to perform the same operation.

[1] http://fedoraproject.org/wiki/How_to_create_and_use_a_Live_CD
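(To make that concrete: as the wiki page in [1] describes, a live image is
built by feeding a kickstart file to livecd-creator from the livecd-tools
package. Below is a hypothetical minimal kickstart sketch; the repo URL,
package groups, and sizes are illustrative, not the official spin's:

```
# my-live.ks -- hypothetical minimal kickstart for a custom live image;
# repo URL and package groups are illustrative only.
lang en_US.UTF-8
keyboard us
timezone US/Eastern
selinux --enforcing
part / --size 4096 --fstype ext4
repo --name=fedora --mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-17&arch=$basearch

%packages
@core
@base-x
%end
```

Building is then one long-running command, roughly
`sudo livecd-creator --config=my-live.ks --fslabel=My-Live`, and trimming
the %packages list is exactly how a spin stays under the CD limit.)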
Przemek Klosowski
2012-05-09 22:06:09 UTC
Permalink
Post by John Reiser
Post by drago01
A DVD burner costs ~12 € ... and any computer that old isn't really
that capable of running fedora reasonably anyway.
Such a claim is FALSE. My 700MHz PentiumIII with 384MB RAM runs Fedora 11
just fine. OpenOffice is eminently usable, for example. It's a 2001
laptop that has only CD-ROM and USB1.1, and the BIOS cannot boot from USB.
I have added USB2.0 via PCMCIA card, and somewhere around Fedora 12
could boot from external DVD via USB2.0 (via trampoline from the harddrive)
because the PCMCIA drivers for the bridge that enables the USB2.0 card
were in the initrd. But then the PCMCIA drivers were dropped from initrd,
so it no longer boots newer Fedora from DVD. Meanwhile deteriorating
support for RagePro graphics has nudged me back to Fedora 11. Fedora 11
is only 3 years old.
Would that laptop not have a floppy disk that'd let you boot in
combination with an external USB flash/CD/DVD drive? 1-2GB flash drives
cost $2-3, so they would be the cheapest/simplest thing to use if your
system doesn't already have a DVD. The next best thing would be an
external USB DVD burner, which should be less than $30 or so and is
actually a good thing to have around the den anyway.

This reminds me of the old days in the mid- to late 90s when we were
running DCLUG Linux Installfests in Washington DC and the Red Hat crew
drove up from North Carolina to test their install process. People would
bring the strangest hardware, and we'd give it our best, sometimes
working the entire afternoon on the most recalcitrant systems.

That experience taught me to make a judgement call: while, on one hand,
hardware constraints are good because they keep things honest and
simple, and make things fast on modern hardware, at the same time some
limitations are just too onerous. I would say that BIOS inability to
boot off USB devices crosses that line.
Vít Ondruch
2012-05-10 05:41:29 UTC
Permalink
Post by John Reiser
Post by drago01
Post by Jaroslav Reznik
I'd like to break CD limit too but we should not forgot there are users
for which CD is top technology from dreams and we have a lot of these
users among some countries...
Where are the numbers to back this nonsense up?
A DVD burner costs ~12 € ... and any computer that old isn't really
that capable of running fedora reasonably anyway.
Such a claim is FALSE. My 700MHz PentiumIII with 384MB RAM runs Fedora 11
just fine. OpenOffice is eminently usable, for example. It's a 2001
laptop that has only CD-ROM and USB1.1, and the BIOS cannot boot from USB.
I have added USB2.0 via PCMCIA card, and somewhere around Fedora 12
could boot from external DVD via USB2.0 (via trampoline from the harddrive)
because the PCMCIA drivers for the bridge that enables the USB2.0 card
were in the initrd. But then the PCMCIA drivers were dropped from initrd,
so it no longer boots newer Fedora from DVD. Meanwhile deteriorating
support for RagePro graphics has nudged me back to Fedora 11. Fedora 11
is only 3 years old.
This discussion is about the Live CD of F18+, so don't worry, nobody will
increase the Live CD size of F11 ;)


Vit
Chris Murphy
2012-05-09 21:17:13 UTC
Permalink
Post by Jaroslav Reznik
I'd like to break CD limit too but we should not forgot there are users
for which CD is top technology from dreams and we have a lot of these
users among some countries... For me personally CD is history, even
DVD, same 1 GB flash drive. We can afford it. But some people can't
and are our users thanks to the ability to get a cheap OS, that can
run on cheap HW and is still modern.
The question is - how many people will be affected? Or should we
provide some fallback option - stripped down CD media size image? And
make the bigger one primary one?
Is it marginally easier to stay below the CD size limit with 32-bit builds vs 64-bit? i.e. could Fedora retain Live CD for i386, and move to a Live DVD for x86_64?

Or what problems are there in abandoning the Live CD for < 10% (by estimates thus far) while retaining the ability to use netinst.iso for that hardware? I think the loss of this hardware for Live Desktop trials is minimal compared to the gain from dropping the limit.

But if it's almost trivial to have two Live Desktop builds: CD and DVD, then I'd suggest that route.

Chris Murphy
Adam Williamson
2012-05-10 04:23:11 UTC
Permalink
Post by Chris Murphy
But if it's almost trivial to have two Live Desktop builds: CD and DVD, then I'd suggest that route.
I can tell you it's very unlikely they'd both get comprehensively QA'ed.
And the more spins we have, the more likely some of them are to fail to
build.

We actually already nominally *have* a 1G sized desktop spin, but it's
rarely actually spun so it's often broken. See fedora-live-desktop.ks
vs. fedora-livecd-desktop.ks.
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net
Adam Jackson
2012-05-09 21:17:15 UTC
Permalink
Post by Jaroslav Reznik
Post by Adam Jackson
I know I've said this before, but: we should break the CD size barrier
precisely so people can't burn things to CDs. If you must burn to
optical media, do yourself a favor and burn a DVD, the reduced seek time
is entirely worth it.
I'd like to break CD limit too but we should not forgot there are users
for which CD is top technology from dreams and we have a lot of these
users among some countries... For me personally CD is history, even
DVD, same 1 GB flash drive. We can afford it. But some people can't
and are our users thanks to the ability to get a cheap OS, that can
run on cheap HW and is still modern.
The question is - how many people will be affected? Or should we
provide some fallback option - stripped down CD media size image? And
make the bigger one primary one?
Even if all of your objections are true, and who knows, they might be:
we already do provide alternatives. The Live media is not the only
install media.

- ajax
Kevin Kofler
2012-05-10 07:16:42 UTC
Permalink
Post by Adam Jackson
we already do provide alternatives. The Live media is not the only
install media.
The other alternatives are either already DVDs or netinstall CDs which
require a fast Internet connection (which people who don't even have a DVD
drive are unlikely to have).

Kevin Kofler
Adam Jackson
2012-05-10 15:00:48 UTC
Permalink
Post by Kevin Kofler
Post by Adam Jackson
we already do provide alternatives. The Live media is not the only
install media.
The other alternatives are either already DVDs or netinstall CDs which
require a fast Internet connection (which people who don't even have a DVD
drive are unlikely to have).
So the set of people we'd be inconveniencing is exactly the set of
people with no bandwidth and the inability to boot from anything larger
than a CD.

Do we think that's a statistically significant number of people, or are
we just arguing?

- ajax
Johannes Lips
2012-05-10 15:05:13 UTC
Permalink
Post by Adam Jackson
Post by Kevin Kofler
Post by Adam Jackson
we already do provide alternatives. The Live media is not the only
install media.
The other alternatives are either already DVDs or netinstall CDs which
require a fast Internet connection (which people who don't even have a
DVD drive are unlikely to have).
So the set of people we'd be inconveniencing is exactly the set of
people with no bandwidth and the inability to boot from anything larger
than a CD.
Do we think that's a statistically significant number of people, or are
we just arguing?
It would be interesting to get some input from lower-income countries.
Ambassadors from those countries could perhaps tell us about the hardware
that is most common there.

Johannes
Post by Adam Jackson
- ajax
Chris Murphy
2012-05-10 16:50:10 UTC
Permalink
Post by Adam Jackson
So the set of people we'd be inconveniencing is exactly the set of
people with no bandwidth and the inability to boot from anything larger
than a CD.
Do we think that's a statistically significant number of people, or are
we just arguing?
Isn't it also true the Live CD is English only? English + ancient hardware + middle of nowhere. Quite honestly, this sounds like rural America (we have piss poor bandwidth in this country).

Chris Murphy
Kevin Kofler
2012-05-10 22:40:58 UTC
Permalink
Post by Chris Murphy
Isn't it also true the Live CD is English only?
Most of the CDs carry translations; the KDE one does not, though, due to how
KDE translations work (they sit in huge kde-l10n-* packages).
 
The idea is that you install from the live CD and then you install the
translation for your language(s) only. I have no need for every single
kde-l10n-* langpack shipped by upstream. Hardly anybody does. Most people
need only one or two languages.

Kevin Kofler
Gerd Hoffmann
2012-05-11 08:58:33 UTC
Permalink
Post by Adam Jackson
Post by Kevin Kofler
Post by Adam Jackson
we already do provide alternatives. The Live media is not the only
install media.
The other alternatives are either already DVDs or netinstall CDs which
require a fast Internet connection (which people who don't even have a DVD
drive are unlikely to have).
So the set of people we'd be inconveniencing is exactly the set of
people with no bandwidth and the inability to boot from anything larger
than a CD.
Do we think that's a statistically significant number of people, or are
we just arguing?
I suspect the number is pretty small if non-zero at all.

Fedora raises the hardware requirements now and then. The minimum cpu
required for i386 was changed a few versions back. Likewise very old
gfx cards tend to not be supported very well (see the guy running F11
for that reason). You need a not too small amout of memory to run the
livecd and the anaconda installer. I guess it is pretty hard to find
hardware which runs f18 well and can not boot from dvd or usb ...

Also, can the netinst.iso install from local media too? A USB key for
example? Then you could use netinst.iso on a CD and the install DVD on a
USB key if your box can boot from CD only ...

cheers,
Gerd
Kevin Kofler
2012-05-11 10:12:36 UTC
Permalink
Post by Gerd Hoffmann
Also, can the netinst.iso install from local media too? A usb key for
install if your box can boot from cd only ...
Why do we have to complicate things so much instead of just stopping the
creeping biggerism?

Kevin Kofler
drago01
2012-05-11 10:26:57 UTC
Permalink
Post by Kevin Kofler
Also, can the netinst.iso install from local media too?  A usb key for
install if your box can boot from cd only ...
Why do we have to complicate things so much instead of just stopping the
creeping biggerism?
Even without MiniDebugInfo we already don't have enough space.
No office suite on the desktop spin; no translations on the KDE spin ....

We complicate things by insisting that a CD is the upper limit. Which
might have been true in the 90s but sure isn't in 2012.
Kushal Das
2012-05-11 10:31:45 UTC
Permalink
Post by drago01
Even without minidebug info we already don't have enough space.
No office suite on the desktop spin; no translations on the kde spin  ....
We complicate things by insisting that a CD is the upper limit. Which
might have been true in the 90s but sure isn't in 2012.
Even in 2012 I can see many systems people still use without any DVD
drive or network connection. The LiveCD still helps to install the latest
Fedora on those systems and has been very useful in general.

Kushal
--
http://fedoraproject.org
http://kushaldas.in
drago01
2012-05-11 11:05:21 UTC
Permalink
Post by Kushal Das
Post by drago01
Even without minidebug info we already don't have enough space.
No office suite on the desktop spin; no translations on the kde spin  ....
We complicate things by insisting that a CD is the upper limit. Which
might have been true in the 90s but sure isn't in 2012.
Even in 2012 I can see many systems people still use without any DVD
drive or network connections.
Where do you see them? How many? Can they just use USB?
Kushal Das
2012-05-11 11:14:22 UTC
Permalink
Post by drago01
Post by Kushal Das
Post by drago01
Even without minidebug info we already don't have enough space.
No office suite on the desktop spin; no translations on the kde spin  ....
We complicate things by insisting that a CD is the upper limit. Which
might have been true in the 90s but sure isn't in 2012.
Even in 2012 I can see many systems people still use without any DVD
drive or network connections.
Where do you see them? How many? Can they just use USB?
I see them regularly in India; people don't upgrade their hardware
that frequently.
They can use a LiveUSB, but sending out/copying/burning a LiveCD is a
much easier solution in most cases.


Kushal
--
http://fedoraproject.org
http://kushaldas.in
Troy Dawson
2012-05-10 13:06:07 UTC
Permalink
Post by Jaroslav Reznik
Post by Adam Jackson
I know I've said this before, but: we should break the CD size barrier
precisely so people can't burn things to CDs. If you must burn to
optical media, do yourself a favor and burn a DVD, the reduced seek time
is entirely worth it.
I'd like to break CD limit too but we should not forgot there are users
for which CD is top technology from dreams and we have a lot of these
users among some countries... For me personally CD is history, even
DVD, same 1 GB flash drive. We can afford it. But some people can't
and are our users thanks to the ability to get a cheap OS, that can
run on cheap HW and is still modern.
The question is - how many people will be affected? Or should we
provide some fallback option - stripped down CD media size image? And
make the bigger one primary one?
R.
I like the idea of a separate stripped-down live CD image.
But it doesn't have to be too stripped down. What if we made the LXDE
and/or Xfce spins CD size, while the GNOME and KDE live images were
DVD size?

*braces for the Gnome is our default desktop replies*

Troy
Jon VanAlten
2012-05-09 20:23:38 UTC
Permalink
----- Original Message -----
From: "Adam Jackson" <ajax at redhat.com>
To: "Development discussions related to Fedora" <devel at lists.fedoraproject.org>
Sent: Wednesday, May 9, 2012 3:02:33 PM
Subject: Re: Proposed F18 feature: MiniDebugInfo
Post by John Reiser
Post by Adam Jackson
Post by John Reiser
If so, then please acknowledge explicitly that Fedora would be discarding
some 4% of running, otherwise-capable machines (especially old laptops)
that can read only CD and not DVD, some 7% of working USB sticks that are
512MB or less, and some 5% of working boxes that cannot boot from USB.
Those are wonderful numbers. How ever did you arrive at them?
They're from my own laboratory of 20 boxes and 15 USB sticks accumulated
slowly and semi-regularly over the last decade or so. That omits
6 really ancient boxes (>15 years old each) that have been discarded
along the way.
Forgive me for not considering that a representative sample.
- ajax
Isn't there some hardware profile report thingo? Would it be
possible to use that data to quantify the potential effect of
growing live media beyond CD size limit? (I would support
breaking the limit, but would prefer the decision be made with
all available information).

cheers,
jon
Adam Jackson
2012-05-09 21:14:13 UTC
Permalink
Post by Jon VanAlten
Isn't there some hardware profile report thingo? Would it be
possible to use that data to quantify the potential effect of
growing live media beyond CD size limit? (I would support
breaking the limit, but would prefer the decision be made with
all available information).
Yeah, someone always says this, and then I go try to get data out of
smolt, and then I drink myself into a coma. It is (and always has been)
infuriatingly difficult to get usable numbers out of it.

At least right now it's just throwing varnish cache errors at me, so I
don't waste my time.

- ajax

Jaroslav Reznik
2012-05-09 21:22:35 UTC
Permalink
----- Original Message -----
Post by Adam Jackson
Post by Jaroslav Reznik
Post by Adam Jackson
I know I've said this before, but: we should break the CD size barrier
precisely so people can't burn things to CDs. If you must burn to
optical media, do yourself a favor and burn a DVD, the reduced seek time
is entirely worth it.
I'd like to break CD limit too but we should not forgot there are users
for which CD is top technology from dreams and we have a lot of these
users among some countries... For me personally CD is history, even
DVD, same 1 GB flash drive. We can afford it. But some people can't
and are our users thanks to the ability to get a cheap OS, that can
run on cheap HW and is still modern.
The question is - how many people will be affected? Or should we
provide some fallback option - stripped down CD media size image? And
make the bigger one primary one?
Even if all of your objections are true, and who knows, they might be:
we already do provide alternatives. The Live media is not the only
install media.
Yep, it's not the only way; we even have our bigger offering already.
And yeah, let's break the CD rule, but first let's ask whether it still
applies or not. Maybe it's my imagination and the 3rd world is no longer
interested in this :)
 
For example, to Africa we do not even ship CDs but DVDs - so at least
most people there have a DVD-ROM drive :) The reason is network bandwidth,
and the Installation DVD fits more packages...

R.
Post by Adam Jackson
- ajax
Orcan Ogetbil
2012-05-11 00:15:26 UTC
Permalink
Post by Jaroslav Reznik
----- Original Message -----
Post by Jaroslav Reznik
Post by Adam Jackson
I know I've said this before, but: we should break the CD size barrier
precisely so people can't burn things to CDs. If you must burn to
optical media, do yourself a favor and burn a DVD, the reduced seek time
is entirely worth it.
I'd like to break CD limit too but we should not forgot there are users
for which CD is top technology from dreams and we have a lot of these
users among some countries... For me personally CD is history, even
DVD, same 1 GB flash drive. We can afford it. But some people can't
and are our users thanks to the ability to get a cheap OS, that can
run on cheap HW and is still modern.
The question is - how many people will be affected? Or should we
provide some fallback option - stripped down CD media size image? And
make the bigger one primary one?
we already do provide alternatives.  The Live media is not the only
install media.
Yep, it's not the only way, we even have our bigger offering already.
And yeah, let's break CD rule but first - let ask if it still apply
or not. Maybe it's my imagination and 3rd world is not anymore
interested in this :)
For example to Africa, we even do not ship CDs but DVDs - so at least,
most people have a DVD-ROM drive :) The reason is - network bandwidth
and Installation DVD fits more packages...
An alternative would be to ship a live DVD, right? How hard is it to
create a live DVD? Why do we not leave the decision of choosing
between a live CD and a live DVD as the live image to the spin
maintainers? Even better, (hypothetically) a spin could choose to have
both a live CD and a live DVD.

Orcan
Jaroslav Reznik
2012-05-09 21:27:03 UTC
Permalink
----- Original Message -----
Post by Adam Jackson
Post by Jon VanAlten
Isn't there some hardware profile report thingo? Would it be
possible to use that data to quantify the potential effect of
growing live media beyond CD size limit? (I would support
breaking the limit, but would prefer the decision be made with
all available information).
Yeah, someone always says this, and then I go try to get data out of
smolt, and then I drink myself into a coma. It is (and always has been)
infuriatingly difficult to get usable numbers out of it.
Smolt is a good idea! But it looks like I should send you a bottle
of our home-distilled schnapps - the legal one, Slivovice :)

R.
Post by Adam Jackson
At least right now it's just throwing varnish cache errors at me, so I
don't waste my time.
- ajax