Date:      Fri, 29 Apr 2005 10:50:13 +0200
From:      Alexander Leidinger <Alexander@Leidinger.net>
To:        Mathew Kanner <mat@cnd.mcgill.ca>
Cc:        Julian Elischer <julian@elischer.org>
Subject:   Re: uaudio patch,
Message-ID:  <20050429105013.t6r88igogs44gsk8@netchild.homeip.net>
In-Reply-To: <20050428170154.GG14507@cnd.mcgill.ca>
References:  <20050306184416.5603976c@Magellan.Leidinger.net> <20050307030419.GC951@kt-is.co.kr> <20050308.121415.847025091.kazuhito@ph.noda.tus.ac.jp> <426F409D.6010007@elischer.org> <426F4280.9030206@elischer.org> <426F49C3.1020009@elischer.org> <20050427184115.GC11709@cnd.mcgill.ca> <20050428110656.wqnp94nnwosc80ck@netchild.homeip.net> <20050428112754.GB14507@cnd.mcgill.ca> <20050428142007.11hjbs1pcgws4g0w@netchild.homeip.net> <20050428170154.GG14507@cnd.mcgill.ca>

Mathew Kanner <mat@cnd.mcgill.ca> wrote:

> On Apr 28, Alexander Leidinger wrote:
>> Mathew Kanner <mat@cnd.mcgill.ca> wrote:
>>
>> >	I realise I'm the only one who's taken this position, so I'll
>> >withdraw it.  But for the record, this is my reasoning: what the heck
>> >are you going to do with this information?  It doesn't help you.
>>
>> If you want to go to the soundcard directly, without any conversion, you
>> need to know what you can use.
>
> 	But this doesn't help.  At all.

Think about latency. Or teach me that our in-kernel conversion routines
are invoked even when there's nothing to do. Or tell me that it isn't
wasting clock cycles if a sound-generating app without any connection to the
soundcard produces a format which needs conversion instead of a format
which doesn't.
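
To make the point concrete: an app which does talk to the device can already
ask which sample formats it may feed it. A minimal sketch using the standard
OSS SNDCTL_DSP_GETFMTS ioctl (the device path and the two example formats are
just for illustration; and note that the returned mask doesn't tell you which
formats are hardware-native and which are converted, which is exactly the
missing piece discussed above):

  /* sketch: ask the DSP device which sample formats it accepts */
  #include <sys/soundcard.h>
  #include <sys/ioctl.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int
  main(void)
  {
      int fd, mask;

      fd = open("/dev/dsp", O_WRONLY);
      if (fd < 0)
          return (1);
      if (ioctl(fd, SNDCTL_DSP_GETFMTS, &mask) == 0) {
          if (mask & AFMT_S16_LE)
              printf("signed 16 bit little endian supported\n");
          if (mask & AFMT_U8)
              printf("unsigned 8 bit supported\n");
      }
      close(fd);
      return (0);
  }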

>> > Other
>> >interesting but often useless information is printed in boot_verbose.
>> >The listed capabilities are often way beyond what our sound interface
>> >can offer.
>>
>> Just because it can't offer it _yet_ doesn't mean we won't teach the
>> interface to use it later. And parts of this improvement can be addressed
>> from both sides (step by step).
>
> 	But this doesn't help.  At all.  (Because it isn't an
> interface, it's only text output).  Expanding our abilities for
> different formats and flagging which ones are hardware-based is a
> *completely* separate project.

Yes. And as long as nobody invests time into this project, the text output is
much better than nothing at all. And for the above-mentioned case (an app
generates sound but has no connection to the soundcard, maybe because it runs
on a different system) the user needs to know the capabilities. This means
at some point he needs to see text. /dev/sndstat is our "sound device status"
interface, and I think the capabilities of the soundcard are part of that
status.

>> > If you want to know what's available then connect to the
>> >sound device and issue an ioctl like every other app.
>>
>> Say I have an app which plays some audio files, and I have the opportunity
>> to generate the right file with a conversion program which converts
>> "something" (e.g. text to speech) but doesn't know about an output device.
>> If I want to generate the right output, I have to know what I can use. As a
>> human being who just knows about tools, but not about ioctl(), I need
>> something which tells me what I can use. Do we have an app which does this?
>> Do we need such an app, or would it be convenient to just look at "cat
>> /dev/sndstat"?
>
> 	I'm not sure I 100% understand here, but I think we should have
> an app (say mixer) which will IOCTL for the caps and report to the
> user.  This is a reporting tool only.  Apps that need to know will
> IOCTL as usual.

Any way of presenting the capabilities to the user is OK with me. One way may
be better than another. I don't think the mixer app is OK for this, as
it traditionally is used to change the volume or the recording source.
I think of it as a channel control utility. Yes, it reports some
properties, the volume and the recording channel, but those
1. are variable properties
2. fall within an ergonomic interface (report back what you do)

/dev/sndstat traditionally reports static properties (let's please treat the
number of virtual channels as a semi-static property; it's subject
to sysctl modification, and you have to be root to change it). The volume
isn't a static property, but the capabilities of a sound device are static
properties.
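
For comparison, the variable properties are already a simple ioctl away on
/dev/mixer, while the static properties are plain text in /dev/sndstat. A
minimal sketch of reading the (variable) master volume with the standard OSS
mixer ioctl (device path and the lack of error reporting are just for
illustration):

  /* sketch: read the current (variable) master volume from the mixer */
  #include <sys/soundcard.h>
  #include <sys/ioctl.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int
  main(void)
  {
      int fd, vol;

      fd = open("/dev/mixer", O_RDONLY);
      if (fd < 0)
          return (1);
      /* left channel in the low byte, right channel in the next byte */
      if (ioctl(fd, SOUND_MIXER_READ_VOLUME, &vol) == 0)
          printf("volume: left %d%% right %d%%\n",
              vol & 0x7f, (vol >> 8) & 0x7f);
      close(fd);
      return (0);
  }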

>> >	Anyway, as a general concept, I think we should start
>> >expanding our use of sysctls.
>>
>> I think it depends. For status/capabilities/static-like output, we should
>> look at enhancing existing interfaces (if they fit into the big picture of
>> what we want to add), like our "sound status device".
>>
>> For general "mode switching" (whatever this means) which doesn't fit into the
>> 4Front-OSS model, sysctl looks like a nice candidate. But another nice
>> candidate would be a "sndctl" program which may interact with the device
>> over /dev/dspX.ctl or something like this.
>
> 	Oo, a nice thought.

Thanks. It's really preferable to the sysctl approach, since it allows a
user to change various aspects. A sysctl should only be used for "non user
serviceable parts". I haven't thoroughly thought about what this means, but
e.g. the vchans affect the behavior of the entire sound system regardless of
who uses it, so a user shouldn't be allowed to change it.
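
(For the record, even such a "non user serviceable" knob is easy to read from
a program. A minimal sketch using sysctl(3); the name hw.snd.pcm0.vchans is an
assumption on my side and may differ between versions:)

  /* sketch: read the number of virtual channels of pcm0 via sysctl(3) */
  #include <sys/types.h>
  #include <sys/sysctl.h>
  #include <stdio.h>

  int
  main(void)
  {
      int vchans;
      size_t len = sizeof(vchans);

      /* sysctl name is an assumption; it may differ between versions */
      if (sysctlbyname("hw.snd.pcm0.vchans", &vchans, &len, NULL, 0) == 0)
          printf("pcm0 virtual channels: %d\n", vchans);
      return (0);
  }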

>> Is there something specific you have in mind regarding the sysctl proposal?
>
> 	Yes, for now we need to commit Kazuhito HONDA's patch for uaudio
> that provides sysctls for all available mixer/device settings.

I think this is the wrong approach. The mixer settings belong in the mixer
(and I've already thought about enhancing our mixer... it's just the lack of
time).

> 	I have a dream (just a dream, I doubt I'll ever achieve it)
> that our sound model is based on what the uaudio standard provides.

I've thought about this already. The above-mentioned enhancement of our mixer
was based upon looking at what channels uaudio is able to differentiate
and at what our mixer displays.

> When an app wants to know the caps, it gets a descriptor just like
> what uaudio gives.  Their format has quite a bit of thought behind it
> and it's very flexible.  Really, one could take that uaudio description,
> convert it, run it through 'dot' and get a block diagram of the
> soundcard.  That block diagram is usually only available in comments
> in the source.

Sounds interesting... such a descriptor export could be done with a sysctl
or as part of sndctl. Even if it is static status, it doesn't belong in
/dev/sndstat, since I think of the uaudio description as a blob... or as
XML data if you like.


Should we start joining our ideas in the FreeBSD wiki and turn them into a
design document or TODO list?

Bye,
Alexander.

-- 
http://www.Leidinger.net  Alexander @ Leidinger.net: PGP ID = B0063FE7
http://www.FreeBSD.org     netchild @ FreeBSD.org  : PGP ID = 72077137
You cannot achieve the impossible without attempting the absurd.