Screen readers and chat or instant messaging


Screen readers and chat or instant messaging

Aaron Leventhal-3
Question for all the screen reader users out there ...

How should a screen reader behave in a chat
application? Are there any special commands or
changes in the way review mode works?

Or is it just like anything else? One reviews the
chat log like it's a document. That makes me
wonder whether there's a quick command to navigate
to the end of the chat log, which is typically the
most recent message.

- Aaron
_______________________________________________
dev-accessibility mailing list
[hidden email]
https://lists.mozilla.org/listinfo/dev-accessibility

Re: Screen readers and chat or instant messaging

Jason White
On Fri, Jan 26, 2007 at 01:38:41AM -0500, Aaron Leventhal wrote:
 
> Or is it just like anything else? One reviews the chat log like it's a
> document. That makes me wonder whether there's a quick command to navigate to
> the end of the chat log, which is typically the most recent message.

Yes, as long as that means the beginning of the most recent message, not the
end of it. If I'm reading the chat log and start typing, does the focus switch
to the window in which I am entering text? This would be a good default
(assistive technologies generally allow the review position, e.g., the
location of the braille window, to be disconnected from the system cursor if
desired, and this can be useful at times in chat applications).

Also, if a new message is appended to the chat log while I'm reading an
earlier message, will the focus shift to the new message? My preference would
be for "no shift" in this situation, by default, but maybe some users would
like the new message to receive focus as soon as it appears.

I've used IRC and Jabber chat applications in console mode, and cursor
movement has always presented problems, particularly with a braille display.


Re: Screen readers and chat or instant messaging

David hilbert Poehlman
In reply to this post by Aaron Leventhal-3
Hi Aaron and all,

My screen reader uses two windows for IRC: one for input and one for
output, and I suspect this would be true for each of however many
channels I have open.  I'd like to see lots of changes.  One good
thing I've noticed is that if I focus on the output window, I am
placed where I left off in the log.  This is fine, but if I am only
focused on the input window, I'd like a quick way to read the newest
information.  It would also be helpful to be able to identify any
message I want to read in the log.  I'd be able to list them by name
or time stamp or whatever and move among them.  I might want to read
more than the newest content.  I might also want to just hear the new
stuff as it comes in.

All I can do now is move to the output window and arrow through it.

I hope this helps.

Thanks!


Re: Screen readers and chat or instant messaging

T.V Raman
In reply to this post by Jason White

Here are things I've found useful with the various speech-enabled
chat apps I've built/used in  the emacspeak environment:

0)        In general, you want to hear a message as it comes in irrespective
          of your context,

1)        You want the ability to turn  the above off in the case
          of excessively chatty buddies.

2)        More generally,  the combination of "automatically
          speak messages" and "selectively turn off buddies" needs to be
          customizable together; as an example, an alternative
          work habit might be to turn off auto speaking of *all*
          messages, and selectively turn on special buddies.

3)        In all cases, you need an auditory alert as a message
          comes in (separate from speaking the message).

4)        You need a global kbd shortcut to jump to the last
          buddy who talked to you and whom you haven't yet
          responded to.

5)        In general the above commands work off of the ring of
          pending buddies, pushing the buddy you just jumped to onto the
          end of the queue.

6)        Finally, you need a means of quickly finding out, when you
          return, all the buddies who messaged you while you were
          AFK; in Emacspeak this information is placed at the
          end of the mode-line output.

So to summarize: if you treat chat like any other piece of
scrolling content, the user will be able to read the chat log --
but it won't lead to an interface that lets the chatter be
very responsive or effective.
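Points 4) and 5) amount to keeping a ring of buddies who are awaiting a reply. A rough sketch of that idea (hypothetical code, not Emacspeak's actual implementation; all names are invented):

```python
from collections import deque

class PendingBuddies:
    """Ring of buddies who have messaged you and not yet been answered.
    Jumping rotates the buddy you just visited to the back (point 5)."""

    def __init__(self):
        self.ring = deque()

    def message_from(self, buddy):
        # One slot per buddy, however many messages they send.
        if buddy not in self.ring:
            self.ring.append(buddy)

    def jump_to_pending(self):
        """The global-shortcut action: return the next pending buddy
        and push them to the end of the queue."""
        if not self.ring:
            return None
        buddy = self.ring.popleft()
        self.ring.append(buddy)
        return buddy

    def afk_summary(self):
        """Point 6: who messaged while you were away."""
        return list(self.ring)
```

Repeated presses of the shortcut would then cycle through everyone still waiting on a reply.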


--
Best Regards,
--raman

Title:  Research Scientist      
Email:  [hidden email]
WWW:    http://emacspeak.sf.net/raman/
Google: tv+raman
GTalk:  [hidden email], [hidden email]
PGP:    http://emacspeak.sf.net/raman/raman-almaden.asc


Re: Screen readers and chat or instant messaging

Aaron Leventhal-3
In reply to this post by Jason White
Raman,

You're a good person to answer this.

Should ARIA contain a specific role="log" for
regions like chat logs, where each new line of
information is a new event, in time order? It
could be used for other kinds of logs too: IM
logs, error logs, and "game logs" for online
sports game trackers. If we have "log", do we
need "logitem" as well? (We might be able to just
use aaa:atomic="true" when more than one DOM node
is used per log item.)

Or do logs not really need their own role(s)?
Instead we'd just use aaa:live="polite" or
aaa:live="assertive". Some context for this
question -- at the ARIA meeting yesterday it was
proposed that log is unnecessary and that there
are too many roles. I tend to think logs really
need special functionality in ATs, which shouldn't
require per-application scripts, but should be
enableable from the semantics.
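For concreteness, markup for such a log region might look something like this (a sketch only -- the role name and the aaa: attribute syntax follow the draft ARIA vocabulary under discussion in this thread and are not final):

```html
<!-- Hypothetical chat log region: each child is one event, in
     time order; "polite" means new items are announced without
     interrupting the user's current speech. -->
<div role="log" aaa:live="polite">
  <div><span>10:31</span> <span>jsmith:</span> hi all</div>
  <div><span>10:32</span> <span>raman:</span> hello!</div>
</div>
```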

- Aaron


Re: Screen readers and chat or instant messaging

Michael Curran
In reply to this post by Aaron Leventhal-3
Hi Aaron,

Personally, I like my chat program / screen reader combination to allow me
to hear the latest message coming in. I don't think it needs to interrupt
other speech or anything like that -- just cause an event that allows the
screen reader to speak the new text when it can. I would want this new text
to be spoken whether I am in an input window or an output window.
I'm not too bothered about other features; I sometimes feel that
accessibility can get a bit over the top when it does all these things it
thinks are good for the user. I have always liked systems where it's up to
the user to actively seek information at the time they need it.
Having said that, although the software may not have all the wonderful
hooks to make the task as seamless and easy as possible, it still must be
as accessible as possible. E.g. the log of messages must be navigable by
arrowing with a cursor, time stamps should be able to be turned on and off,
the input window must be navigable by arrowing with a cursor, plus possibly
an accessible input history (if indeed the application has this).

One other thing that might be good is the ability to pause the log of
messages, so that the user can safely arrow around without new messages
mucking up their position.
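A pause feature of this kind can be sketched as a buffer that queues incoming messages while the user reviews the log (a minimal, hypothetical illustration; no actual screen reader's behavior is being described):

```python
class PausableChatLog:
    """Chat log that can be frozen for review; messages arriving
    while paused are queued and appended on resume."""

    def __init__(self):
        self.lines = []      # what the user arrows through
        self._pending = []   # messages held while paused
        self.paused = False

    def append(self, message):
        if self.paused:
            self._pending.append(message)  # review position stays put
        else:
            self.lines.append(message)

    def pause(self):
        self.paused = True

    def resume(self):
        # Flush held messages in arrival order, then unpause.
        self.lines.extend(self._pending)
        self._pending.clear()
        self.paused = False

log = PausableChatLog()
log.append("one")
log.pause()
log.append("two")   # held; the user's cursor is undisturbed
log.resume()        # log.lines is now ["one", "two"]
```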

Mick




Re: Screen readers and chat or instant messaging

Gijs Kruitbosch ("Hannibal")
In reply to this post by Jason White

I like Raman's list a great deal; however, I'm curious about point 6 --
wouldn't you (also?) want to be able to have all the messages addressed
to you read out loud? I suppose this is more of a group chat problem,
where it's considered normal to use chat as a background activity, and
it would be normal to want to check in on messages from anyone, anywhere,
that mentioned your name (or particular topics you're interested in). Just
getting the names of the people sending these messages would in general
not really be useful. Of course, in IM you will have a one-on-one
conversation with all those buddies, so perhaps it will be enough -- but
in group chat there could be multiple contexts in which someone could
contact you, or talk about a topic that interests you.
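Having messages that mention you read out loud amounts to a simple filter over the incoming log. A sketch (the function name and matching rules here are invented for illustration):

```python
import re

def mentions(message, nick, topics=()):
    """True if a chat message should be read aloud for this user:
    it names their nick, or touches a topic they watch."""
    text = message.lower()
    if re.search(r"\b" + re.escape(nick.lower()) + r"\b", text):
        return True
    return any(t.lower() in text for t in topics)

backlog = [
    "gijs: did you see the patch?",
    "lunch anyone?",
    "the aria log role seems useful",
]
# Pick out what to speak for a user named Gijs who watches "aria":
to_speak = [m for m in backlog if mentions(m, "Gijs", topics=["aria"])]
# to_speak -> the first and third messages
```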

~ Gijs

Re: Screen readers and chat or instant messaging

Peter-206
In reply to this post by Aaron Leventhal-3
Good question, I'm still trying to work this out as I work on
developing another accessible chat program. I've thought about this
post for a few days though.

My first instinct was to lean toward simplicity and, in the case of the
ARIA role=*, keep it as simple as possible. However, I'm not sure
about that anymore.

I'm currently struggling a bit with context issues in my chat. For
example, I'd like the chat program to aid users when they fall behind.
My current idea for this is a pause feature that allows a user to
travel up and down the paused chat thread. So for example, suppose a
screen reader user presses the pause button; at this point some basic
information would be helpful, such as: number of lines behind in the
current chat, filter options (e.g. buddies), chat message summaries
(e.g. the first 2 words from each chat message), and so on. All of this
could be accomplished (I think - still working on it :) on the
server side, by displaying only the relevant information to the screen reader.
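That pause information -- lines behind plus a short preview of each missed message -- could be sketched like this (purely illustrative; the function name and output shape are invented):

```python
def pause_summary(missed):
    """Summarize messages the user has fallen behind on:
    a count, plus the first two words of each as a preview."""
    previews = [" ".join(m.split()[:2]) for m in missed]
    return {"lines_behind": len(missed), "previews": previews}

missed = ["aaron: should ARIA have a log role?",
          "raman: you need an auditory alert"]
# pause_summary(missed) -> {"lines_behind": 2,
#                           "previews": ["aaron: should", "raman: you"]}
```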

The above would involve some hack-ish work on the chat log, traversing
up and down and gathering information creatively. Having chat events
(each "new line of information is a new event, in time order" - Aaron)
would make me jump for joy! I imagine at least that it would make
traversing a log easier and cleaner. Also, future unknown uses of Ajax,
if this were implemented, would probably creep up in ways we couldn't
now predict. Think of HTML tables -- using a data table for layout; I
doubt anyone back in the day predicted that.


As an off-this-thread topic:

I'm kind of stuck on a theoretical question about a chat program.
Which is "better":
a) a chat program that includes all features/options: accessible/non/*

-or-

b) a chat program that has separate interfaces for different types of
users: accessible/non/*

This actually started out as an argument between a colleague and me. On
one hand, fewer options keeps things simple for the user, and the chat
can arguably be more customized to, for example, an assistive
technology. On the other hand, I would argue that if a chat program
doesn't incorporate the different features/options/interfaces well,
then either the code or the UI is flawed and should be redesigned.

Which is "better"?

-PeterT



Re: Screen readers and chat or instant messaging

Tom Brunet
I've heard a similar topic come up more than once, and I have been
momentarily convinced once or twice that separate interfaces would be a
good solution.  Each time I was convinced, however, I realized that my
focus had gotten too narrow.

Separate interfaces are a great idea if you assume that there are
specific classes of abilities.  If this were the case, you could, for
example, tailor one interface for a sighted individual, and one for a
blind individual.  As long as you keep the features in sync, no big
deal, right?

The 'problem' is that for any classification you make, you are likely to
forget about the people who lie somewhere in between.  For example, one
of my coworkers is severely visually impaired, but is not blind.  He
uses a computer like most sighted users, but he also uses a screen
reader to supplement what he's seeing.  If there were one interface for
blind individuals, and one for individuals with 20/20 eyesight, I
believe he would be unproductive in either environment.

So, I would argue that a single interface is best because it addresses a
larger number of individuals.  Users can mix and match ATs and figure
out what works for them, rather than being told what works for them.

Tom


Re: Screen readers and chat or instant messaging

Jason White
On Tue, Jan 30, 2007 at 06:25:34PM -0600, Tom Brunet wrote:
> I've heard a similar topic come up more than once, and I have been momentarily
> convinced once or twice that separate interfaces would be a good solution.  
> Each time I was convinced, however, I realized that my focus had gotten too
> narrow.

The ultimate solution to this, I think, will be delivery context profiling
that includes details of the user's interface preferences. Of course, this
needs to operate across multiple Web sites in order to be effective: having to
declare preferences separately in each case would rapidly become tedious. The
content can then be adapted to the user's delivery context. This is what lies
behind CC/PP and related technologies. It also frees the content developer
from trying to accommodate everyone's needs in a single user interface,
running the risk of an unsuccessful compromise.


Application design, was RE: Screen readers and chat or instant messaging

Sina Bahram
In reply to this post by Peter-206
I have copied the blind programming list on this single response, just
because I think this is a topic that comes up a lot, and I thought they
might enjoy the discussion.

Well, speaking from the pro abstraction, pro generic, and pro design camp,
my thoughts are below.

I think that an interface is nothing more than a methodology of accessing
the heart of what we are calling an application. Several design patterns
actually go along with this concept; for example, a model view controller
design splits off the application into a model which contains the business
logic of the application, a view which is responsible for controlling what,
how, and in what way the user receives information from the model, and a
controller which is responsible for interacting with the model and
instructing, requesting, or in any other way facilitating the request for an
action to take place which may or may not be reflected in the view.

There are other designs which integrate the view and the controller, because
sometimes this is simply easier. For example, it is quite easy and
convenient to store the methods of accessing a menu along with the menu, but
now if you want to use the option in that menu with a speech activated
command, single switch device, or via some programmatic means: you will have
to get around the fact that it is hardcoded as a menu item. The alternative
method of designing something along the lines of separation of the model and
the view, as well as the controller, would be to design the application in
such a way that you have actions in the model which can be acted upon, act
on their own, or in any other way affect the internal representation of the
program's state. You then have a model which wraps these actions and exposes
these behaviors, functionality, and information to the view for display and
the controller for interaction. In this way, multiple views can be written
with absolutely no change to the model.

Let's take this back to the menu item that was hardcoded before. Now a
new speech-input-based view could simply be placed into the application,
calling the same feature in the model that the menu item did in the
visual view.

A new view sounds complicated, but it could be as easy as a single method
that had to be written, a new class, or as complex as a new package being
added to the application.

So, to wrap up. I think that abstraction is going to be your friend if you
are going to struggle with issues of multiple interfaces. Simply abstract
out the actions into a group which can be wrapped by a model, provide
multiple forms of accessing this model via various controllers, and then
provide multiple views if needed.

So for example, you can have the same model representing that action which
that menu item pointed to, but now you could have a visual UI that displayed
the menu, an audio based UI which read it out loud, a speech input
controller that worked on spoken commands, and a controller that is
keyboard/mouse driven.

We now have provided the user with four fundamental ways to use the
application, and we haven't touched a single line of model code. They can
either have a visual UI with keyboard/mouse control, a visual UI with speech
input control, an audio based UI with keyboard/mouse control, or an audio
based UI with speech input control.

Furthermore, they can have combinations of the above, so for example, they
could have a visual UI and audio based UI with both keyboard/mouse and
speech input control.

None of this has affected the model's code at all.

If you actually do the math on that, by abstracting it out this way, you
have provided four factorial ways of using the application which amounts to
24 possible methodologies of access, when we only started with two, because
we were hard coding.
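A toy sketch of this separation (all class and method names here are invented): the same model action is driven by either a keyboard or a speech controller, and rendered by either a visual or an audio view, with no change to the model:

```python
class ChatModel:
    """Business logic only: knows nothing about menus, speech, or keys."""
    def __init__(self):
        self.log = []

    def send(self, text):            # an "action" the controllers invoke
        self.log.append(text)

class VisualView:
    def render(self, model):
        return "\n".join(model.log)

class AudioView:
    def render(self, model):
        # Speak only the newest message.
        return "speak: " + model.log[-1] if model.log else "speak: (empty)"

class KeyboardController:
    def __init__(self, model):
        self.model = model
    def on_enter(self, text):
        self.model.send(text)

class SpeechController:
    def __init__(self, model):
        self.model = model
    def on_command(self, utterance):
        # A made-up grammar: "say hello" -> send("hello").
        if utterance.startswith("say "):
            self.model.send(utterance[4:])

model = ChatModel()
KeyboardController(model).on_enter("hi")
SpeechController(model).on_command("say hello")
# Either view can now render the same model state.
```

Any view/controller pairing works against the one model, which is the point of the abstraction.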

I hope this made sense? I don't usually type so much on this list, but
design and abstraction are really close to my heart, and I believe that is
the way we are going to enter the new generation of computing.

Take care,
Sina


Re: Application design, was RE: Screen readers and chat or instant messaging

Steve Lee-3
Hi Sina, that's interesting and has some overlap with my thoughts
about alternative input support allowing user-selected gestures across
various devices (we're calling this a capability palette).

As you describe it, it is the program's responsibility to support all
possible views/controllers. But with the current model of external AT
access via an a11y API, the user can choose, selecting the AT they want
to use with all their programs, independent of the application's design
decisions.

Where AT uses an a11y API, the assumption is that you go through the
UI/view/controller to the model, and that obviously works reasonably
well. For alternative input we'd really like a standard way to allow AT
to get at the actions independent of the application's particular UI
bindings.

Unfortunately we can't easily standardise this, as it would require
developers to adopt new design patterns; they will usually just design
for the common OS-supported input devices (mouse & keyboard) or add
specific support for other devices (joystick, speech).

If we could assume that every action has a keyboard shortcut, then the
shortcut could serve as the action ID for the AT to use, but that's not
ideal. And that's a big if.

I've seen patterns similar to those you describe used to allow testing
of business objects without going through the UI, which can be fragile
(UIs change often and break the tests). However, you still need to test
the UI.

--
Steve Lee
www.oatsoft.org
www.schoolforge.org.uk
www.fullmeasure.co.uk


On 1/31/07, Sina Bahram <[hidden email]> wrote:

> I have copied the blind programming list on this single response, just
> because I think this is a topic that comes up a lot, and I thought they
> might enjoy the discussion.
>
> Well, speaking from the pro abstraction, pro generic, and pro design camp,
> my thoughts are below.
>
> I think that an interface is nothing more than a methodology of accessing
> the heart of what we are calling an application. Several design patterns
> actually go along with this concept; for example, a model view controller
> design splits off the application into a model which contains the business
> logic of the application, a view which is responsible for controlling what,
> how, and in what way the user receives information from the model, and a
> controller which is responsible for interacting with the model and
> instructing, requesting, or in any other way facilitating the request for an
> action to take place which may or may not be reflected in the view.
>
> There are other designs which integrate the view and the controller, because
> sometimes this is simply easier. For example, it is quite easy and
> convenient to store the methods of accessing a menu along with the menu, but
> now if you want to use the option in that menu with a speech activated
> command, single switch device, or via some programmatic means: you will have
> to get around the fact that it is hardcoded as a menu item. The alternative
> method of designing something along the lines of separation of the model and
> the view, as well as the controller, would be to design the application in
> such a way that you have actions in the model which can be acted upon, act
> on their own, or in any other way affect the internal representation of the
> program's state. You then have a model which wraps these actions and exposes
> these behaviors, functionality, and information to the view for display and
> the controller for interaction. In this way, multiple views can be written
> with absolutely 0% change of the model.
>
> Let's take this back to the menu item that was hard coded before. Now what
> would have to happen is that a new speech input based view could simply be
> placed into the application, which simply called the same feature in the
> model that the menu item did in the visual view.
>
> A new view sounds complicated, but it could be as easy as a single method
> that had to be written, a new class, or as complex as a new package being
> added to the application.
>
> So, to wrap up. I think that abstraction is going to be your friend if you
> are going to struggle with issues of multiple interfaces. Simply abstract
> out the actions into a group which can be wrapped by a model, provide
> multiple forms of accessing this model via various controllers, and then
> provide multiple views if needed.
>
> So for example, you can have the same model representing that action which
> that menu item pointed to, but now you could have a visual UI that displayed
> the menu, an audio based UI which read it out loud, a speech input
> controller that worked on spoken commands, and a controller that is
> keyboard/mouse driven.
>
> We now have provided the user with four fundamental ways to use the
> application, and we haven't touched a single line of model code. They can
> either have a visual UI with keyboard/mouse control, a visual UI with speech
> input control, an audio based UI with keyboard/mouse control, or an audio
> based UI with speech input control.
>
> Furthermore, they can have combinations of the above, so for example, they
> could have a visual UI and audio based UI with both keyboard/mouse and
> speech input control.
>
> None of this has affected the model's code at all.
>
> If you actually do the math on that, by abstracting it out this way, you
> have provided four factorial ways of using the application which amounts to
> 24 possible methodologies of access, when we only started with two, because
> we were hard coding.
>
> I hope this made sense? I don't usually type so much on this list, but
> design and abstraction are really close to my heart, and I believe that is
> the way we are going to enter the new generation of computing.
>
> Take care,
> Sina
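[As an editorial sketch of the separation Sina describes above: one model action, two views, two controllers. All class and method names are hypothetical; this is not code from any actual chat program.]

```python
# One shared model action ("post") driven by two controllers and
# rendered by two views, with zero model changes per input method.

class ChatModel:
    def __init__(self):
        self.log = []
        self.views = []

    def post(self, text):  # the single shared action in the model
        self.log.append(text)
        for view in self.views:
            view.render(text)

class VisualView:
    def __init__(self):
        self.screen = []
    def render(self, text):
        self.screen.append(f"[window] {text}")

class AudioView:
    def __init__(self):
        self.spoken = []
    def render(self, text):
        self.spoken.append(f"[speech] {text}")

class KeyboardController:
    def __init__(self, model):
        self.model = model
    def key_enter(self, text):
        self.model.post(text)

class SpeechController:
    def __init__(self, model):
        self.model = model
    def command(self, utterance):
        self.model.post(utterance)

model = ChatModel()
visual, audio = VisualView(), AudioView()
model.views = [visual, audio]

KeyboardController(model).key_enter("hello")
SpeechController(model).command("how are you")
```

[Both controllers call the same model action, and both views render every message; swapping in a new view or controller touches no model code, which is the point of the pattern.]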
>
> -----Original Message-----
> From: [hidden email]
> [mailto:[hidden email]] On Behalf Of Peter
> Sent: Tuesday, January 30, 2007 6:35 PM
> To: [hidden email]
> Subject: Re: Screen readers and chat or instant messaging
>
> Good question, I'm still trying to work this out as I work on developing
> another accessible chat program. I've thought about this post for a few days
> though.
>
> My first instinct was to lean on simplicity and in the case of the aria
> role=*, keep it also as simple as possible. However, I'm not sure about that
> anymore.
>
> I'm currently struggling a bit with context issues in my chat. For example
> I'd like the chat program to aid users when they fall behind.
> My idea for this is currently a pause feature that allows a user to travel
> up and down the paused chat thread. So for example, suppose a screen reader
> user keys the pause button; at this point some basic information would be
> helpful, such as: the number of lines behind in the current chat, filter
> options (e.g. buddies), chat message summaries (e.g. the first
> 2 words from each chat message), and so on. All of this could be
> accomplished (I think - still working on it :) on the server-side and by
> only displaying the relevant information to the screen reader.
>
> The above would involve some hack-ish work on the chat log, traversing up
> and down and gathering information creatively. Having chat events (each "new
> line of information is a new event, in time order"-Aaron) would make me jump
> for joy! I imagine at least that it would make traversing a log easier and
> cleaner. Also, future unknown uses of Ajax, if this were implemented, would
> probably creep up in ways we couldn't now predict. Think of HTML tables:
> using a data table for layout - I doubt anyone back in the day predicted that.
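[As an editorial sketch of the pause feature Peter describes above (all names hypothetical; this is not his actual implementation): report how far behind the reader is and summarise each unread message by its first words.]

```python
# Rough sketch of a "pause" review mode for a chat log.

class PausedChat:
    def __init__(self):
        self.log = []        # all messages, in arrival order
        self.read_upto = 0   # index of the first unread message

    def receive(self, sender, text):
        self.log.append((sender, text))

    def lines_behind(self):
        # How many messages arrived that the user hasn't read yet.
        return len(self.log) - self.read_upto

    def summaries(self, words=2):
        # e.g. the first two words of each unread message
        return [
            f"{sender}: {' '.join(text.split()[:words])}"
            for sender, text in self.log[self.read_upto:]
        ]

chat = PausedChat()
chat.receive("alice", "hello everyone in the channel")
chat.receive("bob", "did you see the new build")
chat.read_upto = 1  # the user has read alice's message
```

[Here `chat.lines_behind()` returns 1 and `chat.summaries()` gives a two-word preview of bob's unread message, which is roughly the information a screen reader user would want announced on pause.]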
>
>
> As an off--this-thread-topic.
>
> I'm kind of stuck on a theoretical question about a chat program.
> Which is "better":
> a) a chat program that includes all features/options: accessible/non/*
>
> -or-
>
> b) a chat program that has separate interfaces for different types of
> users: accessible/non/*
>
> This actually started out as an argument between a colleague and me. On one
> hand, fewer options keep things simple for the user, and the chat can
> arguably be better customized to, for example, an assistive technology. On
> the other hand, I would argue that if a chat program doesn't incorporate the
> different features/options/interfaces well, then either the code or the UI is
> flawed and should be redesigned.
>
> Which is "better"?
>
> -PeterT
>
>
> On Jan 26, 1:38 am, Aaron Leventhal <[hidden email]>
> wrote:
> > Question for all the screen reader users out there ...
> >
> > How should a screen reader behave in a chat application? Are there any
> > special commands or changes in the way review mode works?
> >
> > Or is it just like anything else? One reviews the chat log like it's a
> > document. That makes me wonder whether there's a quick command to
> > navigate to the end of the chat log, which is typically the most
> > recent message.
> >
> > - Aaron
>
>
> _______________________________________________
> dev-accessibility mailing list
> [hidden email]
> https://lists.mozilla.org/listinfo/dev-accessibility
>


--
Steve Lee
www.oatsoft.org
www.schoolforge.org.uk
www.fullmeasure.co.uk
_______________________________________________
dev-accessibility mailing list
[hidden email]
https://lists.mozilla.org/listinfo/dev-accessibility

RE: Application design, was RE: Screen readers and chat or instant messaging

Sina Bahram
Testing is actually facilitated by this, if the pattern is carried out to
its fullest and done correctly.

My suggestion would be to use a common communication layer, such as a software
bus, to implement such an interface.

If all members/components are listening on this XML software bus, then the
application's need for hardcoded components is greatly reduced.
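[As a toy illustration of the software-bus idea (hypothetical names, standard library only; a real system would likely sit on an established bus such as D-Bus): components subscribe by topic and receive XML-encoded messages, so none of them needs hardcoded references to the others.]

```python
# Toy XML message bus: publishers and subscribers only know topic
# names, never each other.
import xml.etree.ElementTree as ET

class SoftwareBus:
    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # Encode the message as XML before delivery.
        msg = ET.Element("message", topic=topic)
        msg.text = payload
        xml_bytes = ET.tostring(msg)
        for callback in self.subscribers.get(topic, []):
            callback(xml_bytes)

received = []
bus = SoftwareBus()
bus.subscribe("chat.new-line",
              lambda raw: received.append(ET.fromstring(raw).text))
bus.publish("chat.new-line", "hello world")
```

[After the publish, `received` holds `["hello world"]`: the subscriber decoded the XML without knowing who sent it, which is the decoupling the bus buys you.]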

Take care,
Sina

-----Original Message-----
From: [hidden email]
[mailto:[hidden email]] On Behalf Of Steve Lee
Sent: Wednesday, January 31, 2007 6:28 AM
To: Sina Bahram
Cc: [hidden email]; Peter;
[hidden email]
Subject: Re: Application design, was RE: Screen readers and chat or instant
messaging


_______________________________________________
dev-accessibility mailing list
[hidden email]
https://lists.mozilla.org/listinfo/dev-accessibility