1 Introduction
The purpose of this American National Standard is to specify design guidance for human-system software interfaces to provide the highest level of accessibility for as many users as possible. A primary goal of these design guidelines is to produce human-system interfaces with increased usability that promote greater effectiveness, efficiency, and satisfaction for people who have a wide variety of capabilities and preferences. Accessibility is closely related to the concept of usability, and the properties of these concepts overlap and intertwine in so many ways among various applications, tasks, devices, and users, that the boundary between them is unavoidably fuzzy.
These design guidelines focus on user-system interfaces and interfaces between operating systems, middleware, application software, and other software layers, to facilitate the development of interfaces to systems and products that are intended for use by people with the widest range of capabilities. It is desirable to define accessibility goals and features for a particular product as early as possible in the design process so that development costs can be reduced compared to modifying products for accessibility after they have been designed.
The design guidelines in this document are primarily based on the current understanding of users with the widest range of capabilities and limitations. These users include individuals who: 1) have particular sensory, motor and/or cognitive impairments, 2) have limitations as a result of aging or disease processes, and 3) are affected by environments that may limit normal sensory, motor, or cognitive functioning.
Therefore, accessibility addresses a widely defined group of users including:
- people with physical, sensory and cognitive impairments present at birth or acquired during life,
- elderly people who can benefit from new products and services but experience reduced physical, sensory and cognitive capacities,
- people with temporary disabilities, such as a person with a broken arm or someone who has forgotten his/her glasses, and
- people who experience difficulties in particular situations, such as a person who works in a noisy environment or has both hands occupied by other work.
When designing and evaluating human-system interfaces, other terms associated with accessibility are often used.
The North American term “Universal Design” is similar to the European term “Design for All”; both identify the goal of enabling maximum access for the maximum number and diversity of users, irrespective of their skill level, language, culture, environment or disability. However, there will always be a minority of disabled people with severe or multiple impairments who will need adaptations or specialized products. The term accessibility as defined in this standard emphasizes the twin goals of: 1) maximizing the number of users and 2) striving to increase the level of usability that these users experience.
The design requirements and recommendations in this document are specified for user interface system design, appearance and behavior. The primary objective is to allow software to be used by as broad an audience as possible. It is recognized that some users of software will need assistive devices in order to use a system. Therefore, this Standard includes, in the concept of accessibility, the capability of a system to provide connections to and enable successful interaction with assistive technologies. Guidance is provided on designing software that integrates as effectively as possible with common assistive technologies (e.g. speech synthesizers, Braille input and output devices), when they are utilized.
In addition, this standard addresses the increasing need to consider social and legislative demands to ensure accessibility by removing barriers that prevent people with special requirements from participating in life activities including the use of environments, services, products and information. Designing software-user interfaces for accessibility increases the number of people who can use systems by taking into account the varying physical and sensory capabilities of user populations. Designing for accessibility benefits disabled users by providing features that enable them to use software that would otherwise be inaccessible, and also by making software easier for them to use.
Many accessibility features also benefit users who do not have a disability, by enhancing usability and providing additional individualization possibilities. They may also help to overcome a temporary impairment (e.g. a broken arm or hand). They benefit designers and suppliers by expanding the number of potential users (and thus sales for their products) and often by making the product compliant with legal requirements for accessibility. They benefit companies buying software by expanding the number of employees who may use the software.
It is important to note that accessibility may be provided by a combination of both hardware and software. Assistive technologies typically provide specialized input and output capabilities not provided by the system. Software examples include on-screen keyboards that replace physical keyboards, screen-magnification software that allows users to view their screen at various levels of magnification, and screen-reading software that allows blind users to navigate through applications, determine the state of controls, and read text via text-to-speech conversion. Hardware examples include head-mounted pointers that replace mice and Braille output devices that replace a video display. There are many other examples not listed here. When users provide add-on assistive software and/or hardware, usability is enhanced to the extent that systems and applications integrate with those technologies. For this reason, operating systems need to provide “hooks”, Application Programming Interfaces (APIs), programmatic access, or other features to allow software to operate effectively with add-on assistive software and hardware as recommended in this standard. If systems do not provide support for assistive technologies, the probability increases that users will encounter problems with compatibility, performance and usability.
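As a non-normative illustration of such programmatic access, the sketch below uses TypeScript and Web/DOM conventions (the ARIA role and name attributes), one widely implemented mapping between software user interfaces and assistive technologies; the element and function names are hypothetical.

    // Non-normative sketch: exposing name and role information that add-on
    // assistive technologies (e.g. screen readers) can query through the
    // platform accessibility API.
    function makeAccessiblePrintControl(img: HTMLImageElement): void {
      img.setAttribute('role', 'button');      // role: what the element does
      img.setAttribute('aria-label', 'Print'); // name: how users identify it
      img.tabIndex = 0;                        // reachable without a pointer
      // Keyboard equivalent so the control is not pointer-only.
      img.addEventListener('keydown', (e) => {
        if (e.key === 'Enter' || e.key === ' ') img.click();
      });
    }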
The ultimate beneficiary of this Standard will be the end user of the software. Although it is unlikely that the end-users will read this document, its application by designers, developers, buyers and evaluators should provide user interfaces that are more accessible. This standard concerns the development of software for user interfaces. However, those involved in designing the hardware aspects of user interfaces may also find the standard useful.
NOTE: In this document the term ‘developers’ is used as a shorthand for all those involved in the development of software design and creation which sometimes can span different collaborating or contracting organizations.
2 Scope
The scope of HFES 200.2 is primarily focused on the design of accessible software for personal, educational, business, and public use, most commonly implemented on a desktop computer or a client/server combination (e.g., see ANSI/HFS 100 DSTU, 2003). Most of the recommendations in this standard also apply to home and mobile computing and to interactive voice response applications. Many of the recommendations may also apply to other software. Although the recommendations in this standard do not definitively address areas such as high-risk applications, nuclear power plant control room environments, alarm/security applications, process control, and entertainment, many of the recommendations in HFES 200 can be used to improve the quality of applications in these environments. Developers in these areas are advised to obtain guidance from more directly applicable standards or guideline documents.
The standard promotes increased usability of systems for a wider range of users. While it does not cover the behavior or requirements for assistive technologies (including assistive software), it addresses the use of assistive technologies as an integrated component of interactive systems.
This part of the standard is intended for use by those responsible for the specification, design, development, evaluation and procurement of software operating systems and software applications. It is meant to be used in concert with the other parts of this standard.
3 Normative References
The following standards contain provisions that, through reference in this text, constitute provisions of this standard. At the time of publication, the editions indicated were valid. All standards are subject to revision, and parties to agreements based on this standard are encouraged to investigate the possibility of applying the most recent editions of the standards indicated below.
ISO 9241-10: Ergonomic requirements for office work with visual display terminals (VDTs) - Part 10: Dialogue principles
ISO 13407: Human-centered design processes for interactive systems
HFES 200.3: Human Factors Engineering of Software User Interfaces – Interaction Techniques
HFES 200.4: Human Factors Engineering of Software User Interfaces – Interactive Voice Response (IVR) and Telephony
HFES 200.5: Human Factors Engineering of Software User Interfaces – Visual Presentation and Use of Color
4 Terms and Definitions
For the purposes of this document, the following terms and definitions apply.
4.1 Accelerator Keys:
Keys, key sequences, or key combinations which invoke an action immediately without displaying intermediate information (such as menus) or requiring pointer movement or any other user activity.
NOTE: Also called shortcut keys and hot keys.
EXAMPLE: Users can often invoke the OK button by pressing ENTER, and the Cancel button by pressing ESC.
4.2 Accessibility Feature:
Feature that is specifically designed to increase the usability of products for those experiencing disabilities.
4.3 Activation:
Initiation of an action associated with a selected user interface element.
4.4 Assistive Technology:
Hardware or software that is added to or incorporated within a system that increases accessibility for an individual.
EXAMPLES: Braille displays, screen readers, screen magnification software and eye tracking devices are assistive technologies.
4.5 Chorded Key-press:
Keyboard key or pointing device button presses where more than one button is held down simultaneously to invoke an action.
NOTE: This includes both uses of modifier keys with other (non-modifier) keys as well as use of multiple non-modifier keys to enter data or invoke an action.
4.6 Closed System:
System that does not allow user connection or installation of assistive technology that would have programmatic access to the full user interface.
NOTE: This can be caused by policy, system architecture, physical constraints or any other reason.
4.7 Color Scheme:
Set of color assignments used for rendering user interface elements.
NOTE: Color refers to a combination of hue, saturation, and brightness.
4.8 Contrast:
In a perceptual sense, assessment of the difference in appearance of two or more parts of a field seen simultaneously or successively.
EXAMPLES: Brightness contrast, lightness contrast, color contrast, etc.
NOTE: Adapted from CIE 17.4 definition 845-02-47.
4.9 Cursor:
Visual indication of where the user interaction via keyboard (or keyboard emulator) will occur.
NOTE 1: Keyboard focus cursors and text cursors are types of cursors.
NOTE 2: Contrast with keyboard focus cursor (4.20), text cursor (4.32), and pointer (4.28).
4.10 Developers:
Individuals or organizations that design and/or create software.
4.11 Explicit Designator:
A code or abbreviation for a menu option or control label, set apart (usually to the left) from the name, and typed in for selection.
NOTE: Contrast with “implicit designator.”
EXAMPLE: In the following menu the explicit designators are “O”, “C”, “S”, and “P”:
O Open
C Close
S Save
P Print
4.12 Focus Cursor:
Indicator showing which user interface element has keyboard focus.
NOTE: The appearance of this indicator usually depends on the kind of user interface element that has focus. The user interface element with focus can be activated if it is a control (e.g. button, menu item) or selected if it is a selectable user interface element (e.g. icon, list item).
EXAMPLE: A box or highlighted area around a text field, button, list or menu options can serve as a focus cursor.
4.13 Icon:
A graphic on a visual display terminal that represents an object or action of the user's task.
4.14 Implicit Designator:
Portion of an option name or control label used for a keyboard selection.
NOTE: Also called access key.
EXAMPLE 1: In the portion of a menu shown in Figure 1, the implicit designators are the underlined letters T, S, E, W, g, m, L, and D in top-to-bottom order. (Note that in “Large Icons”, the “g” is underlined.)
Figure 1
EXAMPLE 2:
In the pushbuttons in Figure 2, the implicit designators are “D” and “P”.
Figure 2
4.15 Individualization:
Modification of interaction and presentation of information to suit individual capabilities and needs of users.
4.16 Input Focus:
Current assignment of the input from an input device to a user interface element.
EXAMPLES: Pointer focus and keyboard focus are input foci.
4.17 Keyboard Emulator:
Software or hardware that generates input that is identical to that which comes from a keyboard.
NOTE: A keyboard emulator may provide a representation of keys (e.g. on-screen keyboard) or it may not (e.g. voice recognition).
EXAMPLE: Platform-based on-screen keyboards, speech input, and handwriting are all examples of keyboard emulators if their output appears to applications as keystroke input.
4.18 Keyboard Equivalent:
Key or key combination that provides access to a function usually activated by a pointing device, voice input, or other input or control mechanism.
4.19 Keyboard Focus:
Current assignment of the input from the keyboard or equivalent to a user interface element.
NOTE: For an individual user interface element, focus is indicated by a focus cursor.
4.20 Keyboard Focus Cursor:
Visual indication of where the user interaction via keyboard (or keyboard emulator) will occur.
4.21 Label:
Short, descriptive title for a user interface element (object).
NOTE 1: Labels include, but are not limited to, headings, prompts for entry fields, text or graphics that accompany and identify controls (e.g. displayed on the face of buttons) and audible prompts used by interactive voice response systems.
NOTE 2: Contrast with “Name” (4.25). In this section, label refers to the presented title for a user interface element. It contrasts with the Name attribute, which may or may not be presented to users but is available to assistive technologies. Textual labels are often a visual display of the name.
EXAMPLE 1: On a screen used for initiating a print job, the control label is displayed as “Print”, and the underlined “P” indicates that “P” is the implicit designator for the “Print” command.
EXAMPLE 2:
Figure 3 Text field with label.
EXAMPLE 3:
Pagination
☑ Widow/Orphan Control | ☑ Keep with next |
☐ Keep lines together | ☐ Page break before |
Figure 4. Check box group, with a label for the group and a label for each check box.
EXAMPLE 4:
A window displays the image of a printer that the user can click to print the current document. This image’s label is the image, its name attribute is “Print”, its role attribute is “Push Button”, and its description attribute is “A printer”.
See also Name (4.25).
4.22 Latch:
Mode in which any modifier key remains logically pressed (active) in combination with a single subsequent non-modifier keypress or pointing device button action.
NOTE: Contrast with Lock (4.23).
4.23 Lock:
Persistent mode in which one or more modifier keys or pointing device buttons remain logically pressed (active) until lock mode for the key or button is turned off.
NOTE 1: Contrast with latch (4.22): unlike LATCH, which affects only keyboard and pointing device actions, LOCK would affect any software that uses the modifier key(s) to alter its behavior.
NOTE 2: Lock mode is usually turned off explicitly by the user, but may also be turned off at other times such as system shutdown or restart.
4.24 Modifier Key:
Keyboard key that changes the action or effect of another key or a pointing device.
NOTE: Modifier key states are sometimes used to alter software behavior in other ways as well, such as when inserting a CD to prevent the operating system from automatically playing the CD.
EXAMPLE 1: Moving the keyboard focus with the shift key held down extends the current selection in the direction of cursor movement, rather than merely moving the position of the cursor.
EXAMPLE 2: Pressing “C” results in the input of that character, and pressing “Ctrl+C” results in a “Copy” function.
4.25 Name:
Word or phrase that is associated with a user interface element and is used to identify the element to the user.
NOTE 1: Names are most useful when they are the primary word or phrase by which the on-screen instructions, software documentation, and its users refer to the element, and do not contain the type or status of the user interface element.
NOTE 2: Contrast with label (4.21). The Name attribute may or may not be presented to users but is available to assistive technologies. It contrasts with label, which, in this standard, refers to the presented title for a user interface element. Textual labels are often a visual display of the name.
NOTE 3: When a textual label is provided, it would generally present the name or a shortened version of the name. Not all user interface elements have labels, however. In those cases the names would be available to assistive technologies (or sometimes by pop-up tool tips, etc.).
NOTE 4: Names should not be confused with an internal identifier (ID), which may be used by software and which may not be designed to be understood by a human.
EXAMPLE: A window displays the image of a printer that the user can click to print the current document. This image’s label is the image, its name attribute is “Print”, its role attribute is “Push Button”, and its description attribute is “A printer”.
4.26 Natural Language:
Language which is or was in active use in a community of people, and the rules of which are mainly deduced from the usage.
4.27 Platform Software:
Software that interacts with hardware or provides services for other software.
NOTE 1: A browser can function both as an application and as platform software.
NOTE 2: In this document the use of the word software by itself means both platform software and application software.
EXAMPLES: An operating system, device drivers, windowing systems, and software toolkits.
4.28 Pointer:
Graphical symbol that is moved on the screen according to operations with a pointing device.
NOTE 1: The location or representation of the pointer may also change to reflect the current state of software operations.
NOTE 2: Users typically interact with user interface elements on the screen by moving the pointer to an object's location and manipulating that object.
NOTE 3: Examples of devices that are used to control pointers include mice, tablets, fingers, and 3D wands. The pointer can also be moved using the keyboard (e.g. MouseKeys).
NOTE 4: Although the pointer is sometimes called a “pointing cursor”, this document uses the word “cursor” only to refer to an indicator of keyboard focus.
4.29 Pointer Focus:
Current assignment of the input from the pointing device to a window.
NOTE: The window with pointer focus usually has some distinguishing characteristic, such as a highlighted border and/or title bar.
4.30 Pointing Device:
Hardware and associated software used to position the pointer.
NOTE 1: Pointing devices usually have buttons that are used to activate or manipulate user interface elements.
NOTE 2: Almost any hardware can be used to control the pointer with appropriate software.
EXAMPLES: Pointing devices include mice and trackballs, but they may also include head trackers, hand or finger trackers, touch screens, tablet pens, voice input, and many other hardware/software combinations that systems treat as a pointing device.
4.31 Screen Reader:
Assistive technology that allows users to operate software without needing to view the visual display.
NOTE 1: Output of screen readers is typically text-to-speech or dynamic Braille output on a refreshable Braille display.
NOTE 2: Screen readers rely on the availability of information from the platform and applications, such as the name or label of user interface elements.
4.32 Text Cursor:
Visual indication of the current insertion point for text entry.
NOTE: Contrast with “pointer” and “focus cursor.”
4.33 Usability:
Extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.
4.34 User Interface Element:
A user interface entity that accepts input, provides information, and/or groups other user interface elements.
NOTE 1: User interface elements may or may not be interactive.
NOTE 2: Both entities relevant to the task and entities of the user interface are regarded as user interface elements. Different user interface element types are text, graphics and controls. A user interface element may be a visual representation or an interaction mechanism for a task object (such as a letter, a sales order, electronic parts, or a wiring diagram) or a system object (such as a printer, hard disk, or network connection). It may be possible for the user to directly manipulate some of these user interface elements.
NOTE 3: User interface elements in a graphical user interface include such things as basic objects (such as window title bars, menu items, push buttons, image maps, and editable text fields) or containers (such as windows, grouping boxes, menu bars, menus, groups of mutually-exclusive option buttons, and compound images that are made up of several smaller images). User interface elements in an audio user interface include such things as menus, menu items, messages, and action prompts.
NOTE 4: Also referred to as “user interface object” as in HFES 200.3.
5 Rationale and Benefits of Implementing Accessibility
Accessibility is an important consideration in the design of products, systems, environments and facilities because it affects the range of people who are able to use them.
Accessibility can be improved by incorporating features and attributes known to benefit users with special requirements. To determine the achieved level of accessibility, it is necessary to measure the effectiveness, efficiency, and satisfaction of the widest range of users working with a product or interacting with an environment. Measurement of accessibility is particularly important in view of the complexity of the interactions with the user, the goals, the task characteristics and the other elements of the context of use. A product, system, environment or facility can have significantly different levels of accessibility when used in different contexts.
Planning for accessibility as an integral part of the design and development process involves the systematic identification of requirements for accessibility, including accessibility measurements and verification criteria within the context of use. These provide design targets that may be the basis for verification of the resulting design.
The approach adopted in this part of HFES 200 has the following benefits.
- The framework can be used to identify the aspects of accessibility and the components of the context of use to be taken into account when specifying, designing or evaluating the accessibility of a product.
- The performance and satisfaction of the users can be used to measure the extent to which a product, system, environment or facility is accessible in a specific context.
- Measures of the performance and satisfaction of the users can provide a basis for determining and comparing the accessibility of products having different technical characteristics, which are used in the same context.
- The accessibility planned for a product can be defined, documented and verified (e.g. as part of a quality plan).
6 Sources of Variation in User Characteristics
All user populations vary significantly in terms of their characteristics, capabilities and preferences. Any interactive system will include, within the user group for which it is designed, people with very different physical, sensory and cognitive abilities. These differences will have many different sources including innate characteristics, culture, experience and learning, as well as changes that occur throughout life. While the recommendations in this standard are based on the current understanding of the individual characteristics of people who have particular physical, sensory and cognitive impairments, their application addresses the diversity of abilities within any intended user population that may lead to limitations on activities.
Disabilities considered include not only those due to restrictions on mobility or physical performance, such as loss of a limb or tremor, but also those associated with sensory impairment, such as low vision or hearing loss, as well as cognitive factors, such as declining short term memory or dyslexia. Annex C provides an outline of some of the limitations typically encountered by individuals with various types of disability, but does not constitute an exhaustive account of all the issues that may arise. In addition, limitations on activities may be created by the environment, such as stress, poor lighting or noise, or by the need to deal with other tasks in parallel, such as taking care of another person, and these are also taken into account.
The extent to which the particular source of any disability creates limitations varies and some of the guidance provided will be specific to the degree of disability experienced. Thus visual impairments may range from declining ability to resolve small detail to having been blind from birth. Different provisions in the design of the interactive system may be needed to deal with different degrees of disability. For example the facility to enlarge the size of detail presented on a screen will not address the problems of those people who are blind. It is also important to recognize that people may experience multiple forms of disability. Guidance that is appropriate to addressing a specific type of disability may not work if somebody who has that disability also has some other type of disability. For example, auditory output of written text will not provide support for the deaf-blind. It is therefore important that different approaches to access be supported so that interfaces can be individualized to the user and their task.
7 How to Use This Standard
7.1 General
Users of this standard will need to consider all the recommendations that it contains. In many cases the application of a recommendation may depend on the particular context of use so that it will be necessary to determine whether a given piece of guidance is applicable.
In order to achieve accessibility it is necessary to provide support in different parts of the software system, which includes platform software (the operating system and associated layers, and toolkits) and other software (such as most applications) that runs on and makes use of services provided by platform software.
While much can be done to improve accessibility in the design of an application, it is not possible to provide all of the input and output support that users require in every circumstance at the application level alone. To the extent that any particular part of the software is dependent upon a level below it for its operational characteristics it will be necessary to ensure that the lower levels enable the implementation of recommended accessibility characteristics in any layers that depend upon them. Similarly, accessibility characteristics implemented by the platform may require cooperation from layers running on top of them in order to be fully effective. The majority of the requirements and recommendations in clauses 8, 9, 10 and 11 require that the issue be addressed at more than one level of the software system if the particular requirement or recommendation is to be satisfied.
These dependencies may occur in relation to different layers in the platform (e.g. window management on top of process management and screen drawing, which are on top of hardware drivers) and in relation to the applications that are mounted on the platform. Applications themselves may have layers that result in dependencies arising within different levels of the application.
Most of the guidance in this document applies to all software that implements or contributes to the software user interface, regardless of whether or not it is part of the platform. Some guidance is only applicable to portions of platform software (such as guidelines about low-level input, window management, or system-wide behaviors); for example, the platform is the general means by which accessibility features that involve control of hardware devices, in particular those involving input, are implemented and controlled. Similarly, other guidelines may only apply to software that displays user interface elements, generates sounds, or exhibits other specific behaviors. In these cases the applicable layers or type of software is indicated in the text of the requirement or recommendation, and may further be elaborated upon in notes accompanying the guidelines.
7.2 Conformance
If a product is claimed to have met the applicable recommendations in any part of HFES 200, the procedures used in establishing requirements for, developing, and/or evaluating the recommendations shall be specified. The level of specification of the procedure is a matter of negotiation between the involved parties.
Software used on, or intended to be used on, closed systems should be evaluated in conjunction with the intended hardware configuration and conform to all clauses except section 8.5.
Server software (used in client-server and mainframe environments) should be evaluated in conjunction with the client (including terminal) software that will be used with it.
In part 2 (only) of this standard the provisions are assigned to three levels in order to provide information on the relative priority of the provisions. This also provides harmonization with ISO 9241-171, which also assigns priority levels to provisions. ISO 9241-171 has two levels of provisions (SHALL and SHOULD). HFES 200.2 has three levels. Level 1 provisions in HFES 200.2 correspond to 9241-171 SHALL statements. Level 2 and 3 provisions of HFES 200.2 correspond to 9241-171 SHOULD statements. Three (rather than two) levels were provided in HFES 200.2 to provide additional prioritization information for the large number of provisions beyond Level 1. In addition, some provisions were designated as Level 3 provisions because they may be difficult or inappropriate to implement in some circumstances. It should be noted that provisions at all three levels are important to different individuals and that these provisions do not constitute all that can be done to make software more accessible to people with all types, degrees and combinations of disability.
8 General Guidelines
8.1 Names and labels for user interface elements
8.1.1 Provide a name for each user interface element (Level 1)
Software should associate an identifying name with every user interface element except where the name would be redundant.
NOTE 1: A name conveys the identity of the user interface element to the user. It complements the role attribute that tells an element’s function (such as that it acts as a push button) and the description attribute that summarizes the element’s visual appearance.
EXAMPLE 1: The application provides a label showing the name “File name” for a static text field that shows the name of the file being described in the fields below.
EXAMPLE 2: Dialog boxes or windows have meaningful names, so that a user who is hearing rather than seeing the screen gets appropriate contextual information.
NOTE 2: If some names are missing, assistive technology may be unable to sufficiently identify or operate the user interface elements for the user.
NOTE 3: Names would be redundant for user interface elements whose entire informational content is already conveyed by their role attribute (such as a horizontal rule), static text elements that serve to name other elements, and elements that only serve as an integral portion of a parent element (such as the rectangular border around a button).
EXAMPLE 1: The software does not need to provide a name for a static text field that says “Last name:” and serves to identify the text box that follows it, as that string would be exposed using the field’s value attribute.
EXAMPLE 2: The software does not need to provide a name for an image that forms part of a raised border around a non-standard button.
NOTE 4: In some cases the name will be displayed visibly, but in other cases it will only be provided programmatically for use by assistive technology as described in section 8.5.
EXAMPLE: A control is listed in the product documentation as “the Print button,” therefore its identifying name in the software is “Print” (regardless of whether the word “Print” appears on the visual representation of the button or not).
NOTE 5: Such elements may be containers that serve to group one or more sub-elements. In a typical graphical user interface, examples of UI elements include basic elements such as window title bars, menu items, push buttons, images, text labels and editable text fields, while examples of containers include windows, list boxes, grouping boxes, menu bars, menus, groups of mutually-exclusive option buttons, and compound images that are made up of several smaller images. In a typical audio user interface, examples of interactive UI elements include menus, menu items, messages, prompt tones, and pauses.
EXAMPLE: Compound user interface elements that consist of a collection of other user interface elements have a group name. A Web page image composed of a series of smaller image files provides a group name (“Construction Site”) in addition to the names of the individual component images (“building”, “bulldozer”, “dump truck”, “crane”, etc.).
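A non-normative sketch of this group-name example, in TypeScript using Web/DOM conventions; the image file names are hypothetical.

    // Non-normative sketch: a compound image with a group name plus a name
    // for each component image.
    function buildConstructionScene(): HTMLElement {
      const group = document.createElement('div');
      group.setAttribute('role', 'group');
      group.setAttribute('aria-label', 'Construction Site'); // group name
      for (const name of ['building', 'bulldozer', 'dump truck', 'crane']) {
        const img = document.createElement('img');
        img.src = `${name.replace(' ', '-')}.png`; // hypothetical file name
        img.alt = name; // name of the individual component image
        group.appendChild(img);
      }
      return group;
    }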
8.1.2 Provide meaningful names (Level 2)
Names of user interface elements should be composed of natural language words that are meaningful to the intended users.
NOTE 1: This means that each word in the name would occur in a standard dictionary or in electronic documentation for end-users included with the software.
NOTE 2: The names are most useful when they are the primary name by which the software, its documentation, and its users refer to the element and do not contain the type or status of the user interface element.
EXAMPLE: The name of a checkbox is “Gender” and not “Gender checkbox.”
NOTE 3: User interface elements that represent a real entity (such as a document, location, or person) may be provided with the name of that entity even if the name is too lengthy or cryptic to be read easily.
NOTE 4: Names may use terms that are specific to a particular task domain provided that they have established meanings for the intended users.
EXAMPLE 1: A control is listed in the product documentation as “the Print button,” therefore its identifying name in the software is “Print” (regardless of whether the word “Print” appears on the visual representation of the button or not).
EXAMPLE 2: Dialog boxes or windows have meaningful names, so that a user who is using speech output because they cannot see the screen gets appropriate contextual information.
8.1.3 Provide unique names within context (Level 3)
Each name of a user interface element specified by software developers should be unique within its context.
NOTE 1: Users will not be able to use the name to identify an element if several elements have the same name within the same context.
NOTE 2: A name is considered unique if no other user interface element with the same name and role attributes shares the same container or parent element (such as a window, group box, section, etc.).
NOTE 3: User interface elements that represent a real entity (such as a document, location, or person) may be provided with the name of that entity even if the name is not unique in its context.
EXAMPLE 1: A form displays areas containing fields with a customer's home and business details. Each area has an associated “Change” button. Rather than duplicate the name, the buttons are named “Change Home” and “Change Business”.
EXAMPLE 2: A form for a purchasing application has several rows of items, each containing a text field displaying the title of a book, followed by a “Buy” button that is used to purchase that book. Even though the face of each button looks identical, the form is implemented so that each provides a unique name for use by assistive technology, such as “Buy The Grapes of Wrath” and “Buy Pride and Prejudice”.
EXAMPLE 3: A user opens a second window of the same document using their word processing application. Both windows are of the same document and both are editable. The word processor adds a “:1” to the end of the document name to form the name of the first window. It names the second window with the same document name except that it appends a “:2” to the end of the second window name so the two windows have unique names.
EXAMPLE 4: When a script or object hosted within a Web browser attempts to set the window's title to a string that is already used by another of the browser's windows, the browser modifies this string to be unique.
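A non-normative TypeScript sketch of the window-naming scheme in Example 3; the registry and function names are hypothetical.

    // Non-normative sketch: deriving unique window names from a document
    // name by appending ":1", ":2", and so on.
    const openWindowCounts = new Map<string, number>();

    function uniqueWindowName(documentName: string): string {
      const n = (openWindowCounts.get(documentName) ?? 0) + 1;
      openWindowCounts.set(documentName, n);
      return `${documentName}:${n}`; // e.g. "report.txt:1", "report.txt:2"
    }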
8.1.4 Make names available to assistive technology (Level 1)
Except when running on closed systems, each name of a user interface element and its association should be made available by the software system to assistive technology in a documented and stable fashion.
NOTE 1: Section 8.5.4 (“Make user interface element information available to assistive technologies”) describes how to make the information available.
NOTE 2: In a platform that does not provide a standard service for the association of names and elements, application developers document how assistive technologies can access that information.
8.1.5 Display names (Level 2)
If a user interface element has a visual representation and is not part of the standard components of the user interface, software should present its name to users (either by default or at the user's request).
NOTE: Standard components of the user interface are components that are provided by the platform and look and behave the same across applications.
EXAMPLE 1: In an application the window scroll up and scroll down buttons do not have a label or pop-up text because these buttons are standard across applications on the platform. At the bottom of the scrollbar, however, are special “jump to next find” and “jump to last find” arrows that do have pop-up text that describes their function.
EXAMPLE 2: A print button has a picture of a printer with a textual name that pops up when the user pauses a pointer over the button and also when the user moves the keyboard focus to the button and presses a specific keyboard command.
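A non-normative TypeScript/DOM sketch of presenting a name both on pointer hover and at keyboard focus, as in Example 2; a product could equally use a platform tooltip widget, and the function name is hypothetical.

    // Non-normative sketch: show an element's name on pointer hover and on
    // keyboard focus, so keyboard-only users get the same information.
    function attachNameDisplay(control: HTMLElement, name: string): void {
      control.setAttribute('aria-label', name); // programmatic name
      control.title = name;                     // shown on pointer hover
      const tip = document.createElement('span');
      tip.textContent = name;
      tip.style.position = 'absolute';
      control.addEventListener('focus', () => {
        const r = control.getBoundingClientRect();
        tip.style.left = `${r.left}px`;
        tip.style.top = `${r.bottom}px`;
        document.body.appendChild(tip); // display the name at focus
      });
      control.addEventListener('blur', () => tip.remove());
    }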
8.1.6 Provide names and labels that are short (Level 2)
Each name or label of a user interface element specified by software developers should be short enough to be rendered succinctly.
NOTE 1: Developers are encouraged to put the most distinctive parts of the name first, so users can skip over the latter parts once they have read enough to identify the element.
NOTE 2: User interface elements that represent a real entity (such as a document, location, or person) may be provided with the name of that entity even if the name is too lengthy or cryptic to be read easily.
NOTE 3: Using brief labels also benefits users of auditory, visual and tactile output.
EXAMPLE 1: “Print” is used instead of “Print button” or “This button prints the current document.”
EXAMPLE 2: An icon representing a document is labeled with the file name or title of the document, even if that string is too lengthy or cryptic to be read easily, because that string was determined by the document author rather than by software developers.
8.1.7 Provide text label display option for icons (Level 3)
Software should allow users to choose between displaying: a) icon images with text labels, b) icon images only, or c) icon text labels only.
NOTE: It is useful for the user to be able to adjust the font size.
8.1.8 Properly position the labels of user interface elements on screen (Level 3)
The labels for user interface elements provided by software should be consistently positioned, relative to the elements that they are labeling, on the display (HFES 200.3 Section 8.3.4.).
If the platform software has conventions for positioning labels relative to the elements that they are labeling, these conventions should be followed.
NOTE: This helps assistive technology correctly associate the labels with their corresponding controls, and helps users of screen magnification software know where to look for a label or control.
8.2 User preference settings
8.2.1 Enable individualization of user preference settings (Level 2)
When the software enables the user to set personal preferences, these settings should be easily adjustable.
EXAMPLE 1: A software application allows users to configure and save settings for font size and style within a particular window.
NOTE 1: It is very important to use system-wide user preference settings provided by the platform, in addition to any preference settings for product-specific options.
EXAMPLE 2: A software application allows a user with cognitive disabilities to choose the number and size of icons displayed at any one time.
NOTE 2: Requiring users to hand-edit a configuration file is not an easy method for individualizing preference settings because it is too easy for the user to accidentally enter invalid values or otherwise corrupt the file.
EXAMPLE 3: A user chooses preference settings through a graphical user interface, rather than directly editing the configuration files.
NOTE 3: Business considerations related to consistency of operations, performance-based considerations, safety, privacy, and security concerns may all lead to some necessary restriction by system administrators of the user's capability to modify the behavior and appearance of user interface elements in certain contexts. Administrators need to show restraint in limiting user control. Not all options/preference settings are appropriate for such administrative control.
NOTE 4: Administrators within this business environment can make specific permission profiles for users who require more flexibility in their options/preference settings for usability and accessibility.
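A non-normative TypeScript sketch combining a system-wide setting supplied by the platform with a product-specific setting chosen through a preferences dialog (not by hand-editing a configuration file); the storage key and default values are hypothetical.

    // Non-normative sketch: honor a platform-level, system-wide preference
    // and add a validated product-specific preference on top of it.
    interface Prefs { fontSizePx: number; reduceMotion: boolean; }

    function loadPrefs(): Prefs {
      // System-wide setting provided by the platform (a CSS media feature).
      const reduceMotion =
        window.matchMedia('(prefers-reduced-motion: reduce)').matches;
      // Product-specific setting saved from the preferences dialog.
      const saved = localStorage.getItem('app.fontSizePx');
      return { fontSizePx: saved !== null ? Number(saved) : 16, reduceMotion };
    }

    // Validation prevents users from accidentally storing invalid values,
    // a risk of hand-edited configuration files (see NOTE 2 above).
    function saveFontSize(px: number): void {
      if (Number.isFinite(px) && px >= 8 && px <= 72) {
        localStorage.setItem('app.fontSizePx', String(px));
      }
    }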
8.2.2 Enable adjustment of attributes of common user interface elements (Level 2)
Software should enable users to adjust the attributes of common user interface elements, if applicable to the task.
NOTE 1: “Common attributes” for a visual interface could include, but are not limited to, font type, font size, and font color. For an auditory interface they could include, but are not limited to, aural cue type, rate, volume, pitch, position in 3D audio space, etc. For a tactile interface they could include, but are not limited to, haptic object size, texture, xy- or xyz-position, pressure sensitivity, solidity, etc.
NOTE 2: Platform software often supports these options for standard user interface elements it provides. To enhance user experience, applications can use the settings defined at the platform level.
EXAMPLE: Software retains user preferences for window size and location between sessions.
8.2.3 Enable individualization of the user interface look and feel (Level 3)
Software should provide a mechanism enabling users to individualize the interface look and feel including the modification or hiding of command buttons.
EXAMPLE 1: A user with a cognitive disability may, when using a given application, change the interface to simplify the application’s look and feel.
EXAMPLE 2: A word processor allows users to temporarily hide menu items and tool bar buttons that they do not find useful for a given situation.
8.2.4 Enable individualization of the cursor and pointer (Level 1)
If the hardware supports the service, software should enable users to individualize attributes of all keyboard focus cursors, text cursors, and pointers including but not limited to shape, size, stroke width, color, blink rate (if any) and pointer trails (if any).
NOTE 1: Platform software often supports these options for standard cursors and pointers it provides, and software using these cursors and pointers may automatically comply with this guideline.
NOTE 2: The ability to set the cursor to non-blinking is important for users with attention deficits, who may be easily distracted.
NOTE 3: The color aspect of this provision is not applicable if the presentation of the cursor or pointer is an inversion of the image and it has no color.
EXAMPLE 1: Users with low vision can change a text cursor from non-blinking to blinking, and adjust the size to be more readily visible given their visual capabilities.
EXAMPLE 2: Users with low vision and/or a color deficiency can change the thickness and color of the keyboard focus cursor so that they can more easily see the current input focus.
EXAMPLE 3: Users with low vision can make the pointer larger so that they can more readily locate it.
8.2.5 Provide user-preference profiles (Level 3)
Software should enable users to create, save, edit and recall profiles of preference settings, including input and output characteristics, without having to carry out any restart that would cause a change of state or data.
NOTE 1: For systems that provide access for multiple users, such as library systems, conversion back to a default profile may be advisable.
NOTE 2: It is often useful to be able to access the preference settings over a network. Doing this in a secure way would preserve privacy, especially for people who are worried about revealing the fact that they have a disability.
NOTE 3: It is advisable to minimize the need to restart the system or application in order for changes in user interface settings to become effective.
EXAMPLE 1: Platform software allows each user to save global settings for font size, sound volume, and pointer-control settings that apply everywhere on the system.
EXAMPLE 2: The profile for a public library system is modified for the needs of a current user but returns to default values when that user is finished.
EXAMPLE 3: A user is able to load a system default configuration quickly on a computer that is currently using an alternative configuration for a user who has a disability.
EXAMPLE 4: A user is completing an on-line process and has to make adjustments to the accessibility features to reduce errors. Restarting the operating system or the user agent would cause them to lose their work.
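A non-normative TypeScript sketch of saving, recalling, and applying a named profile of preference settings without a restart; the storage keys and setting names are hypothetical.

    // Non-normative sketch: named profiles of preference settings that can
    // be saved, recalled, and applied immediately (no restart required).
    type Profile = Record<string, string | number | boolean>;

    function saveProfile(name: string, settings: Profile): void {
      localStorage.setItem(`profile.${name}`, JSON.stringify(settings));
    }

    function recallProfile(name: string): Profile | undefined {
      const raw = localStorage.getItem(`profile.${name}`);
      return raw ? (JSON.parse(raw) as Profile) : undefined;
    }

    // Applying takes effect at once, so a public terminal can switch
    // between a user's profile and the default profile between sessions.
    function applyProfile(settings: Profile): void {
      const size = settings['fontSizePx'] ?? 16;
      document.documentElement.style.fontSize = `${size}px`;
    }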
8.2.6 Provide capability to use preference settings across locations (Level 3)
Software should permit users to transfer their preference settings easily onto a compatible system.
NOTE 1: Portability is important for users with disabilities because they may find a system difficult or impossible to use without the preferences set to meet their interaction needs. The overhead and effort required to create preference settings can be a significant hindrance to system usability if it must be repeated at every location.
NOTE 2: User preference profiles are sometimes made publicly available, e.g. for download from the Internet. Because people can be concerned about others knowing of their disability, it would be helpful if their use of these resources could be kept private.
NOTE 3: Some platform software may provide a general mechanism for transferring preference settings; in such cases software may not have to implement this feature itself, as long as it follows platform conventions for storing its user preference settings.
EXAMPLE 1: A user visiting a different building on the company network logs in and the system automatically locates and uses his or her personal preference settings from the network without having to edit configuration files.
EXAMPLE 2: A user loads a preference settings file from a USB drive onto a new computer.
EXAMPLE 3: A user’s preference settings are loaded from a smart card onto a new system.
8.2.7 Enable user control of timed responses (Level 1)
Unless limits placed on the timing of user responses are essential to maintaining the integrity of the task or activity or are based on real life time constraints (e.g. an auction), software should allow users to adjust each software-specified user response time parameter in one or more of the following ways:
- the user is allowed to deactivate the time-out; or
- the user is allowed to adjust the time-out over a wide range which is at least ten times the length of the default setting; or
- the user is warned before time expires, allowed to extend the time-out with a simple action (for example, “hit any key”) and given at least 20 seconds to respond.
EXAMPLE: A logon prompt requires the user to enter their password within 30 seconds. The software shows the remaining time on the screen and provides a control to stop the time decrementing.
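A non-normative TypeScript sketch of the third option above (warn, then extend on a simple action); the durations follow the example, and a real product would use a non-blocking dialog rather than window.confirm.

    // Non-normative sketch: warn before a time-out expires and let the
    // user extend it with a single action.
    function startTimedResponse(onExpire: () => void, ms = 30000): void {
      const warnAt = Math.max(ms - 20000, 0); // warn 20 s before expiry
      setTimeout(() => {
        // The confirmation itself imposes no additional time limit.
        if (window.confirm('Time is about to expire. Do you need more time?')) {
          startTimedResponse(onExpire, ms);  // restart the full interval
        } else {
          setTimeout(onExpire, ms - warnAt); // let the rest of the time run
        }
      }, warnAt);
    }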
8.3 Special considerations for accessibility adjustments
8.3.1 Make controls for accessibility features discoverable and operable (Level 1)
Software should enable any On/Off controls and adjustments for accessibility features to be discoverable and operable by those who need that feature.
NOTE: Features are discoverable if their settings and description can be found by browsing the user interface (including browsing Help available through the application).
EXAMPLE 1: Rather than needing to use a combination of keys, a user can turn the “StickyKeys” accessibility feature on and off by pressing the Shift key five times in succession.
EXAMPLE 2: Accessibility features are turned on by use of a single toggle key held down at system start-up.
EXAMPLE 3: Controls for individualizing low-vision options are shown in large type by default.
8.3.2 Safeguard against inadvertent activation or deactivation of accessibility features (Level 2)
Software should prevent inadvertent activation or deactivation of accessibility features.
EXAMPLE: The software system requests confirmation before activating or deactivating accessibility features.
8.3.3 Avoid interference with accessibility features (Level 1)
Software should not disable or interfere with the accessibility features of the platform.
EXAMPLE: Software which intercepts keyboard input does not defeat the operation of keyboard filters such as key latching and locking.
8.3.4 Inform user of accessibility feature on/off status (Level 2)
Software should allow the user to identify the current status of accessibility features.
EXAMPLE 1: A control panel shows the current state of all accessibility features.
EXAMPLE 2: A small icon on the screen indicates that an accessibility feature is switched on.
8.3.5 Inform user of accessibility feature activation (Level 2)
When an accessibility feature can be activated unintentionally, software should inform users and provide an opportunity to accept or cancel the activation.
NOTE 1: The alert for an individual accessibility feature could be disabled at user request but would be on by default.
NOTE 2: It is good practice to provide an alert whenever SlowKeys is activated via a keyboard shortcut method, since an accidental activation could lead an uninformed user to think the keyboard was broken. Regular users of SlowKeys, however, prefer to have a way to defeat the alert so that they do not have to dismiss the alert each time they turn their SlowKeys feature on.
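A non-normative TypeScript sketch of confirming an activation that a key gesture can trigger unintentionally; the feature flags and message are hypothetical.

    // Non-normative sketch: give the user the chance to accept or cancel
    // activation of a SlowKeys-like feature. The alert defaults to on but
    // can be disabled by experienced users (see NOTE 2 above).
    let activationAlertEnabled = true;
    let slowKeysOn = false;

    function onSlowKeysGesture(): void {
      slowKeysOn = activationAlertEnabled
        ? window.confirm('The SlowKeys keyboard filter is about to turn on. Keep it on?')
        : true;
    }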
8.3.6 Enable persistent display (Level 2)
When users can activate a menu, control, or other user-interface element to display additional information or controls, software should allow that information or control to persist while the user engages in other tasks, until the user chooses to dismiss it, if it is appropriate to the task.
NOTE: Persistent display of frequently used windows and controls may be helpful for users who have physical, language, learning or cognitive disabilities, and reduces the number of steps required to access them.
EXAMPLE 1: Users can keep a Help window available as they go through the tasks described.
EXAMPLE 2: The user can “tear off” one or more menus and continue to view and/or use them while navigating and using other menus.
EXAMPLE 3: The user can add a toolbar button that duplicates the function of a specific menu command. By doing so this menu command is persistently displayed.
8.4 General control and operation guidelines
8.4.1 Enable switching of input/output alternatives (Level 3)
Platform software should enable users to switch among the available input/output alternatives without requiring them to reconfigure or restart the system or applications, unless there would be no change of state or data.
NOTE: This capability aids users with different abilities who are working together on the same system.
EXAMPLE 1: A person who is blind uses their system only through the keyboard, using keyboard substitutes for mouse actions. A sighted user working on the same system can use the mouse and type in text. The system does not have to be restarted in between each session.
EXAMPLE 2: One user points with a mouse to an icon for a document on their screen and says “print” to print the document, while another clicks on the document and types Ctrl+P to print it. Still another uses the “Print” item in the “File” menu to print the document.
EXAMPLE 3: To implement a setting change, an application must restart, but it restores all data including the position of the keyboard focus cursor.
8.4.2 Optimize the number of steps required for any task (Level 3)
Software should be designed to optimize the number of steps that the user has to perform for any given task.
NOTE: It is important to find a balance between reducing the steps to improve efficiency and adding steps to allow for sufficient explanation of an infrequent task.
EXAMPLE 1: A user who wants to print one document can do so in only two steps. Once they select the Print icon on the toolbar, a dialogue is displayed (which can, if desired, be used to change various print settings) and the user simply selects the OK button.
EXAMPLE 2: A user with cerebral palsy types very slowly, and so finds it much more convenient when they can save a document by pressing a single key combination rather than having to navigate menus and dialog boxes.
8.4.3 Provide “Undo” and/or “Confirm” functionality (Level 2)
Software should provide a mechanism that enables users to undo at least the most recent user action and/or cancel the action during a confirmation step.
NOTE 1: Although this is a general ergonomic principle, “Undo” mechanisms are particularly important for users who have disabilities that significantly increase the likelihood of an unintentional action. These users can require significant time and effort to recover from such unintentional actions.
NOTE 2: A macro is considered to be one user action.
NOTE 3: Generally, the more consecutive actions the user can undo, the better.
NOTE 4: It is preferable if “undo” operations themselves can be undone.
NOTE 5: This may not be possible for operations that cause a fundamental transformation of logical or physical devices, or that involve a data exchange with third parties outside the software’s control.
NOTE 6: It is preferable that the default configuration provides a confirmation step for any actions that the user cannot undo with a single Undo command. Software may allow the user to disable the confirmation for specific actions.
EXAMPLE 1: A user with Parkinson’s disease may inadvertently input a sequence of keystrokes, which activate several dialogues that need to be undone. The use of several steps of the undo function may permit the user to go back to the original state.
EXAMPLE 2: A user is about to format a hard disk. As this is an operation that cannot be undone, the software shows a confirmation dialog before the formatting begins.
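A non-normative TypeScript sketch of a multi-level undo stack with a confirmation step for actions that cannot be undone; the command interface is hypothetical.

    // Non-normative sketch: multi-level undo plus confirmation for
    // non-undoable actions (see NOTE 6).
    interface Command { label: string; run(): void; undo?(): void; }

    const undoStack: Command[] = [];

    function perform(cmd: Command): void {
      if (!cmd.undo &&
          !window.confirm(`"${cmd.label}" cannot be undone. Continue?`)) {
        return; // user cancelled during the confirmation step
      }
      cmd.run();
      if (cmd.undo) undoStack.push(cmd); // retain as many levels as possible
    }

    function undoLast(): void {
      undoStack.pop()?.undo?.();
    }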
8.4.4 Provide alternatives when assistive technology is not operable (Level 1)
If a task requires user interaction when the state of the software prevents use of assistive technology or speech output (such as during system start-up), the software should provide the user with an alternative means, which does not require user interaction during that period, for completing the task.
NOTE:
System start-up and restart include operations prior to the stage where the user's accessibility aids and preference settings are available. They can be either accessible or non-interactive.EXAMPLE 1:
A computer is configured with a small “boot loader” that lets the user to choose between two or more operating systems present on the system. Because the boot loader's menu is run before a full operating system or any assistive technology is running, it provides a mechanism by which the user can, during a normal session and using their assistive technology, specify which operating system will be loaded the next time the system starts.EXAMPLE 2:
A computer does not request any password or other user input until after it has loaded access features.
EXAMPLE 3:
A public information kiosk restarts automatically and comes up accessible. There is no log on before access features work.
8.4.5 Enable software-controlled media extraction (Level 1)
If the hardware allows for it, software should enable the user to perform software-controlled media extraction.
NOTE 1:
Those media include, but are not limited to, floppy disks, CD-ROMs and DVDs.
NOTE 2:
In most cases the user can use this feature as provided by the platform software without explicit support from software applications.
EXAMPLE:
A user who cannot press physical buttons on the computer or handle a CD-ROM can nevertheless use their on-screen keyboard to instruct the operating system to eject the disc, so that it will not interfere with the computer's next restart.
8.4.6 Support “Copy” and “Paste” operations (Level 2)
Software should support “Copy” and “Paste” operations for all user interface elements that support text input.
NOTE 1:
“Copy” and “Paste” operations enable users with disabilities to avoid awkward, slow and error-prone manual re-entry of potentially large amounts of data.
NOTE 2:
Copy operations from password entry fields and similar secure objects can copy the text that is displayed on the screen rather than the actual text.
EXAMPLE:
As the user types into a password entry field, the field displays a dot to represent each character typed. The user can select this text and copy it to the clipboard, but the appropriate number of dot characters is copied rather than the typed password.
NOTE 3:
“Cut” operations can also be provided, as they enable faster user work.
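A minimal informative sketch of the secure-copy behavior described in NOTE 2 and the example above follows (the class and method names are illustrative, not part of this standard). The field's copy operation returns the masked text that is displayed, never the secret value:

    # Informative sketch only: a password field whose "Copy" operation
    # returns the masked text shown on screen rather than the secret value.
    class PasswordField:
        MASK = "\u2022"  # the dot shown for each typed character

        def __init__(self):
            self._secret = ""

        def type_character(self, ch):
            self._secret += ch

        def displayed_text(self):
            return self.MASK * len(self._secret)

        def copy_selection(self):
            # Copy what is displayed, never the actual password.
            return self.displayed_text()

    field = PasswordField()
    for ch in "secret":
        field.type_character(ch)
    print(field.copy_selection())  # six dots, not "secret"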
8.4.7 Support “Copy” operations in non-editable text (Level 3)
Software should support “Copy” operations for all user interface elements that display text.
NOTE:
The ability to copy non-editable text onto the clipboard can help users with disabilities avoid awkward, slow, or error-prone manual input of that text into other locations.
EXAMPLE 1:
A user who communicates with the user assistance team by email provides examples of problems by copying the text from an error dialog box and pasting it into the email rather than having to retype it.
EXAMPLE 2:
An operating system provides a function where the user can hold down the Ctrl and Alt keys and select any text that was drawn to the screen using the OS text drawing routines.
8.4.8 Enable selection of elements as an alternative to typing (Level 2)
Where the user can enter commands, file names, or similar choices from a limited set of options, software should provide at least one method for selecting or choosing that does not require the user to type the entire name.
NOTE 1:
This reduces cognitive load for all users, and reduces the amount of typing for users for whom spelling or typing is difficult, slow, or painful.
EXAMPLE 1:
An application prompts the user for a file name by presenting a dialog box in which the user can type in a file name or choose from a list of existing files.
EXAMPLE 2:
At a command line prompt, the user can type the first letter or letters of a file name and then press TAB to complete the name. Repeatedly pressing TAB would cycle through the names of additional files that match the string the user entered. This same mechanism can be used to enter command names as well as file names.
NOTE 2:
In many cases these features are supported automatically when software incorporates standard user interface elements provided by the platform software.
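The TAB-completion behavior of EXAMPLE 2 can be sketched as follows (informative only; the function names are illustrative). Matching names are cycled so that repeated completion requests step through all candidates:

    # Informative sketch only: prefix completion that cycles through
    # matching names, as in EXAMPLE 2 above. All names are illustrative.
    from itertools import cycle

    def make_completer(names):
        def complete(prefix):
            matches = [n for n in names if n.startswith(prefix)]
            return cycle(matches) if matches else None
        return complete

    complete = make_completer(["report.txt", "report_final.txt", "readme.md"])
    matches = complete("rep")   # as if the user pressed TAB repeatedly
    print(next(matches))        # report.txt
    print(next(matches))        # report_final.txt
    print(next(matches))        # report.txt (wraps around)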
8.4.9 Allow warning or error information to persist (Level 1)
Software should ensure that error or warning information persists or repeats in a suitable manner as long as the source of the error or warning remains active, or until the user dismisses it.
EXAMPLE:
A dialog box indicating that a file was not saved remains visible until the user presses a “Close” button.
NOTE:
See Part 3, provision 11.6 for discussion of error and warning information.
8.4.10 Present user notification using consistent techniques (Level 2)
Alerts, warnings, and other user notifications should be presented by software using consistent techniques that enable a user to locate them and identify the category of the information (e.g., alerts versus error messages).
EXAMPLE 1:
A beep is provided and messages are positioned in a consistent place, which allows users who have low vision to look for and find the error message more easily.
EXAMPLE 2:
Every error message occurs in a dialog box, while every informative (non-error) message appears in the bottom left of a window. The consistent position of error messages allows users who are viewing only part of the screen through magnification to predict where particular types of information are likely to be found.
NOTE:
A screen reader can be programmed to read a message automatically as long as it appears in a consistent manner in a particular screen location.
EXAMPLE 3:
Notification that a form field is mandatory or read-only is displayed on the status bar. On tabbing to the control the message is automatically picked up by a screen reader that has been programmed to monitor this area of the window.
8.4.11 Provide understandable user notifications (Level 2)
Alerts, warnings, and other user notifications provided by software should be short, simple, and written in clear language.
NOTE 1:
Short messages do not preclude the provision of additional details on request.
NOTE 2:
HFES 200.3 provides detailed recommendations on user guidance.
EXAMPLE 1:
Notifications are presented in the language of the user, avoiding internal system codes, abbreviations, and developer-oriented terminology.
EXAMPLE 2:
A message box appears, displaying the short, meaningful message “The network has become unavailable.” The user can choose the OK button to dismiss the message, or choose the “Details” button to see the more detailed message “Error #527: thread 0xA725 has failed to yield mutex during maximum timeout period” which may or may not be comprehensible to the user.
8.4.12 Facilitate navigation to the location of errors (Level 2)
When software detects that users have entered invalid data, it should notify them in a way that allows the users to identify and navigate easily to the location of the error.
NOTE:
If the keyboard input focus is moved unexpectedly when software detects an error and is not put back afterward, a screen reader user will be disoriented, and may find it difficult and time-consuming to find the location of the error in order to correct it.
EXAMPLE:
The user is notified of an error using an informational dialog box. When the dialog is dismissed, the keyboard input focus is placed at the location of the error, ready for the user to correct it.
8.5 Compatibility with assistive technology
8.5.1 General
NOTE:
The provisions in this section are intended to provide the information and programmatic access needed by assistive technologies to help users access and use software. These provisions would only apply to systems that allow installation of assistive technology or where assistive technology will be installed in conjunction with the software. They would not apply to closed systems. (See conformance section.)
8.5.2 Enable communication between software and assistive technology (Level 1)
Platform software should provide a set of services that enable assistive technologies to interact with other software sufficient to enable compliance with guidelines 8.5.5, 8.5.6, 8.5.7, 8.5.8, 8.5.9, and 8.5.10.
If accessibility services are provided by the platform on which they are run, software toolkits should make these services available to their client software.
NOTE 1:
Assistive technologies can use these accessibility services to access, identify, or manipulate the user interface elements of an application. Applications can use these services to provide information about their user interface elements and automation facilities to other software.
NOTE 2:
Assistive technology may be running on the same system or on a separate system from the software.
EXAMPLE 1:
A screen reader uses an accessibility service to query information about a non-standard user interface element that appears on screen.
EXAMPLE 2:
A screen magnifier uses an accessibility service to receive notifications of keyboard focus changes in applications so it can always display the user interface element which has the keyboard focus.
EXAMPLE 3:
Speech recognition software uses the accessibility services to first get information about the custom toolbar of an application and then activate one of the elements of that toolbar.
EXAMPLE 4:
A company developing a new operating system realizes the importance of having assistive technology products available at the time it is released, so they contact developers of assistive technology for similar platforms. Because the assistive technology companies are small and have limited resources, the platform developer provides them with financial and technical assistance while the platform is being developed. This allows the small companies to provide design feedback and develop assistive technology products that will be ready at the same time as the new platform.
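As an informative illustration of the kind of service this provision calls for, the sketch below models a tree of accessible elements and the focus query used in EXAMPLE 2. The types and functions are hypothetical stand-ins, not any platform's actual API:

    # Informative sketch only: the query pattern a platform accessibility
    # service typically supports. All names here are hypothetical; real
    # platforms expose equivalents (element roles, names, states, trees).
    from dataclasses import dataclass, field

    @dataclass
    class AccessibleElement:
        role: str                          # e.g. "button", "toolbar"
        name: str                          # e.g. "Save"
        states: set = field(default_factory=set)
        children: list = field(default_factory=list)

    def find_focused(element):
        # Depth-first search for the element carrying the keyboard focus.
        if "focused" in element.states:
            return element
        for child in element.children:
            hit = find_focused(child)
            if hit:
                return hit
        return None

    # A screen reader could then announce the name and role of the focus:
    root = AccessibleElement("window", "Find", children=[
        AccessibleElement("button", "Save", states={"focused"})])
    focus = find_focused(root)
    print(f"{focus.name} {focus.role}")    # "Save button"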
8.5.3 Use standard accessibility services (Level 1)
Software that provides user interface elements should use the accessibility services provided by the platform to cooperate with assistive technologies. If it is not possible to comply with clauses 8.5.5, 8.5.6, 8.5.7, 8.5.8, 8.5.9, and 8.5.10 using these means, the software should use other services that are supported, publicly documented, and implemented by assistive technology.
NOTE 1:
In many cases the standard user interface elements provided by platform software already make use of the accessibility services, so software only has to take care of using the accessibility services when it uses non-standard user interface elements.
NOTE 2:
Assistive technology may be running on the same system or on a separate system from the software.
EXAMPLE 1:
An application having non-standard user interface elements uses the accessibility services of the operating system to provide information about the name, presentation description, role, state, etc. of those user interface elements.
EXAMPLE 2:
A word processing application uses the accessibility services to provide access to the text of the document being edited. For instance, it can report the keyboard focus cursor position, the character, word or sentence at the text cursor position, the content of the current selection, etc.
EXAMPLE 3:
An application uses the accessibility services to send notifications when its user interface changes, so that assistive technologies can update their internal representation of the screen state.
EXAMPLE 4:
A company is developing a productivity application for a platform that does not provide any standardized method for letting applications communicate with assistive technology. The company determines that there is no toolkit available that would supply this functionality. They contact developers of assistive technology programs for that platform, and in cooperation with them design, implement, and publish a communication mechanism that each product implements.
8.5.4 Make user interface element information available to assistive technologies (Level 1)
Software should provide assistive technology with information about individual user interface elements, using methods compatible with 8.5.3, except elements that only serve as an integral portion of a larger element, taking no input and conveying no information of their own.
NOTE 1:
User interface element information includes, but is not limited to: general states (such as existence, selection, keyboard focus, and position), attributes (such as size, color, role, and name), values (such as the text in a static or editable text field), states specific to particular classes of user interface elements (such as on/off, depressed/released), and relationships between user interface elements (such as when one user interface element contains, names, describes, or affects another). This applies to on-screen user interface elements and UI status values such as toggle keys.
NOTE 2:
User interface element information is typically available to users by inspection or interaction. Users with certain disabilities may not be able to see or otherwise detect this information without using assistive technologies.
NOTE 3:
In many cases these features are supported automatically when software incorporates standard user interface elements provided by the platform software.
NOTE 4:
See section 8.1, “Names and Labels for User Interface Elements”, for more information on the name property and the relationship between an element and its visual label.
NOTE 5:
See 8.5.7 for how an application uses the accessibility services to send notifications when its user interface changes, so that assistive technologies can update their internal representation of the screen state.
EXAMPLE 1:
A person with dyslexia can have the text on the screen read to them, and highlighted as it is read, because a screen reader utility can determine the text along with word, sentence, and paragraph boundaries.
EXAMPLE 2:
A blind user presses a keyboard command asking their screen reader to tell them where they are working. The screen reader uses accessibility services to ask the current application for the identity of the user interface element that has the keyboard focus, then queries that element's parent or container, and repeats this all the way to the main application window. It then generates artificial speech saying “Down option button, Direction group box, Find dialog box, Status Report dot text dash Notepad application.”
EXAMPLE 3:
A blind user can have the text on the screen voiced aloud to them by a screen reader utility, which uses a separate voice to indicate when the font, font size, or color changes. It also uses that voice to tell them when it reaches an embedded picture, and reads the picture's description if the author provided one.
EXAMPLE 4:
A blind user can also ask their screen reader to voice the word and character at the current insertion point, and the text that is currently selected.
EXAMPLE 5:
When displaying tabular data or data in columns, the application provides assistive technology with information about the data, including any row or column names.
EXAMPLE 6:
A user presses a keyboard command asking their macro utility to move the keyboard focus upwards on the screen. The utility asks the current application for the focus element and its location, and then checks other locations above that point until it finds a user interface element that can take the focus. It then programmatically sets the focus to that element.
EXAMPLE 7:
Speech recognition software uses the accessibility services to identify the application's toolbar and the controls on it, and adds the names of those controls to its active vocabulary list. When it hears the user say “Click Save” it activates the toolbar's Save button. (It was able to determine that the control's name was “Save” even though it visually appears as a picture of a floppy disk.)
EXAMPLE 8:
Developers build an application using standard controls provided by the operating system. Because those controls already include support for the platform's accessibility services, developers comply with this guideline by providing names and other attributes for those controls. They make sure their pre-release testing includes users who rely on assistive technology, who verify that the application works with their products.
EXAMPLE 9:
An application having non-standard user interface elements uses the accessibility services of the operating system to provide information about the name, description, role, state, etc. of those user interface elements.
EXAMPLE 10:
Software provides assistive technology with information about a scroll bar, including its type (Scroll Bar), name (Vertical), value (47%), size and location. The application also provides information about the individual components of the scroll bar that can be independently manipulated, including the Up button, Down button, Page Up button, Page Down button, and Position Indicator. This allows users to click, drag, and otherwise manipulate those components using speech recognition programs.
EXAMPLE 11:
Software does not bother to provide assistive technology with information about individual lines that are used to visually represent a control. Assistive technology can determine the control's boundaries by querying its size and location attributes, so it does not need to rely on the position of individual drawing elements.
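The property set described in NOTE 1 can be illustrated with an informative sketch of the information an application might expose for the scroll bar of EXAMPLE 10 (the keys and values are illustrative, not a real platform schema):

    # Informative sketch only: the kind of property set an application
    # might expose for a scroll bar and its manipulable parts.
    # Keys and values are illustrative, not a real platform API.
    scroll_bar_info = {
        "role": "scroll bar",
        "name": "Vertical",
        "value": "47%",
        "size": (16, 600),        # width, height in pixels
        "location": (1008, 0),    # x, y in screen coordinates
        "children": [
            {"role": "push button", "name": "Up"},
            {"role": "push button", "name": "Down"},
            {"role": "push button", "name": "Page Up"},
            {"role": "push button", "name": "Page Down"},
            {"role": "indicator", "name": "Position"},
        ],
    }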
8.5.5 Allow assistive technology to change keyboard focus and selection (Level 1)
Software should allow assistive technology to modify keyboard focus and selection attributes of user interface elements, using methods as specified in 8.5.3.
NOTE:
In many cases these features are supported automatically when software incorporates standard user interface elements provided by the platform software.
EXAMPLE:
Speech recognition software listens for the user to speak the name of a user interface element in the current application window. Once it hears a matching name, it wants to give keyboard focus and selection to that user interface element. The application and operating system allow the speech recognition software to do this directly, because it may not be clear to the speech recognition software how to move the keyboard focus to the user interface element using simulated keystrokes or mouse movements.
8.5.6 Provide user interface element descriptions (Level 1)
Where tasks require access to the visual or audible content of user interface elements beyond what the role and name attributes provide, software should provide descriptions of those elements. These descriptions shall be meaningful to the user and available to assistive technology through a standard programmatic interface (as described in 8.5.3), whether those descriptions are presented or not.
NOTE 1:
In contrast with the label attribute that names a user interface element (as described in section 8.1), and the role attribute that identifies its function, the description should convey the visual appearance of the element, and is only needed when the label and role attributes are insufficient to allow the user to fully interact with the element.
NOTE 2:
Visual user interface elements that are purely decorative and contain no information need not be described. However, elements that at first appear decorative may in fact have an information function, such as acting as a separator, an icon, a visual label, etc. In such cases the element should be provided with role and/or label attributes as described in 8.5.4.
NOTE 3:
Users who have low vision or are blind may use software that can present text descriptions to users who cannot view visually displayed user interface elements. Descriptions exist independently from names and labels that identify a user interface element. Descriptions describe the visual appearance or audible content in enough detail to allow the user to understand all important information being conveyed by the user interface element’s presentation.
NOTE 4:
Descriptions also assist communication between people who use the visual display and people who use assistive technology.
EXAMPLE 1:
Alice instructs Bob to click on the picture of a pencil. Bob is blind, but his screen reader utility program tells him that the button named 'Compose' has the description 'A picture of a pencil', so Bob instructs his screen reader to activate that button.
EXAMPLE 2:
A map image has a name “Map of Europe”, with a description “A map depicts Western Europe, with a jagged line across France and Germany indicating where the glacial advance stopped in the last Ice Age.”
EXAMPLE 3:
A graphic encyclopedia’s animation (dynamic object) provides a stored textual description: “A lava flow pours from the volcano, covering the town below it within seconds”.
8.5.7 Make event notification available to assistive technologies (Level 1)
Software should provide assistive technology with notification of events relevant to user interactions, using methods described in 8.5.3.
NOTE 1:
Events relevant to user interaction include, but are not limited to, changes in user interface element status (such as creation of new user interface elements, changes in selection, changes in keyboard focus and changes in position), changes in attributes (such as size, color and name), and changes of relationships between user interface elements (such as when one user interface element contains, names, describes or affects another). Just as important are input events, such as key presses and mouse button presses, and output events, such as writing text to the screen or playing audio information. This also applies to user interface status values (such as the states of toggle keys).
NOTE 2:
In many cases these features are supported automatically when software incorporates standard user interface elements provided by the platform software.
EXAMPLE 1:
When a user selects an item in a list box, assistive software is notified that a selection event has occurred in the list box.
EXAMPLE 2:
When a user changes the position of an icon, assistive software is notified that the icon has changed position.
EXAMPLE 3:
When a user causes a push-button to gain keyboard focus, assistive software is notified that keyboard focus has changed to that button.
EXAMPLE 4:
When a user changes the position of a pointer or cursor, assistive software is notified that the position has been changed.
EXAMPLE 5:
When audio is playing, notification is sent to assistive technology that generates speech, so that speech output will not conflict with the audio.
EXAMPLE 6:
When the Caps Lock becomes active, either in response to the user pressing a key or through a programmatic action, a screen reader can be notified and inform the user who cannot see the status light on the keyboard.
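An informative sketch of the notification pattern underlying these examples follows (the names are illustrative; real platforms provide equivalent event services). Assistive technologies subscribe once and are then told about focus and state changes as they happen:

    # Informative sketch only: an observer mechanism through which
    # software can notify assistive technology of focus and state
    # changes. All names are illustrative.
    class EventHub:
        def __init__(self):
            self._listeners = []

        def subscribe(self, listener):
            self._listeners.append(listener)   # e.g., a screen reader

        def notify(self, event_type, source, detail=None):
            for listener in self._listeners:
                listener(event_type, source, detail)

    hub = EventHub()
    hub.subscribe(lambda kind, src, d: print(f"{kind}: {src} {d or ''}"))
    hub.notify("focus-changed", "OK button")        # cf. EXAMPLE 3
    hub.notify("state-changed", "Caps Lock", "on")  # cf. EXAMPLE 6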
8.5.8 Allow assistive technology to access resources (Level 2)
If mechanism(s) exist, software should provide assistive technology with access to shared system resources on the system where the technology is installed or directly connected.
NOTE:
Such resources include, but are not limited to, processor time, space on the display, control of and input from the pointing device and keyboard, and system-wide accelerator keys. This is important so that the user is not prevented from effectively using assistive technology in conjunction with an application or tool.
EXAMPLE 1:
Speech-recognition software running in the background receives enough processor time to keep up with the user’s speech because the foreground application does not try to use all available processor time. (When software does try to use all available processor time, it lets accessibility aids override or supersede that behavior.)
EXAMPLE 2:
A screen magnifier can display a window that is always visible on the screen, because applications do not insist on obscuring all other windows. (When software does obscure other windows, including docked toolbars, it lets accessibility aids override or supersede that behavior.)
EXAMPLE 3:
The user can move the pointer over the window of an on-screen keyboard utility, because the active application does not restrict the pointer to its own window.
EXAMPLE 4:
The user can use a screen magnifier and a voice recognition utility at the same time because they both are given access to the shared keyboard resources (and the user has configured them so they do not rely on the same key combinations).
EXAMPLE 5:
A keyboard macro utility can monitor the user’s keystrokes because applications avoid using low-level functions that read input directly from the keyboard and would bypass the layers that the macro package relies on.
EXAMPLE 6:
The user is able to view instructions in one window while carrying them out in another, because neither window insists on taking up the entire screen.
8.5.9 Use system-standard input/output (Level 1)
Software should use standard input and output methods provided by the platform, or if this is not possible make equivalent information available through methods described in 8.5.3.
NOTE:
These capabilities are also useful in enabling automated testing applications, pervasive macro/scripting facilities, and other software that works on behalf of the user.
EXAMPLE 1:
Software moves a keyboard focus cursor using system routines. This allows assistive software to read the current cursor position.
EXAMPLE 2:
Software bypasses the system routines for graphic drawings for better performance. The software provides an option that detects the state of an “assistive technology flag”. When the flag is set, the software uses the system routines for graphics.
8.5.10 Enable appropriate presentation of tables (Level 1)
When presenting information in the form of tables, or multiple rows or columns, information about layout, row and column headings, and explicit (presented) relationships among the data presented should also be communicated to assistive technology using methods discussed in provision 8.5.3.
EXAMPLE:
When displaying tabular data or data in columns, the application provides assistive technology with information about the data, including any row or column names.
8.5.11 Accept the installation of keyboard and/or pointing device emulators (Level 1)
Platform software should accept the installation of keyboard and/or pointing device emulators that work in parallel with standard input devices.
NOTE:
It is important that the pointing device alternatives work in parallel with the regular pointing device (mouse, trackball, touchscreen, etc.). A user with low motor function might move the mouse pointer to the general vicinity of a target, and then fine-tune the position using the alternate pointing device. There may also be multiple users of the machine.
EXAMPLE 1:
The operating system accepts a button-based mouse emulator that can be used at the same time as the standard mouse.
EXAMPLE 2:
The operating system accepts a mouse-based on-screen keyboard emulator that can be used at the same time as, or independently of, the physical keyboard.
8.5.12 Allow assistive technology to monitor output operations (Level 1)
Platform software should provide a mechanism that allows assistive technology to receive notification about standard output operations, and to identify the source and the original data associated with each operation.
EXAMPLE 1: An operating system provides services by which applications draw text to the screen, but the operating system internally converts the text to an image before passing it to the display driver. The operating system therefore provides services by which a screen reader can get notified about drawing operations, and examine both the original text and the location at which it will be displayed, before the text is converted to an image.
EXAMPLE 2:
A graphic toolkit provides services by which applications draw to and otherwise manipulate bitmap images in memory, and later copy those images to the screen. The toolkit provides services by which a screen reader can get notified about those drawing operations, so that it can keep track of the shapes, text, and pictures visible in the image.
EXAMPLE 3:
Assistive technology monitors separate output to the left and right audio channels so that a training application can inform the user of important spatial information.
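The notification service described in this provision can be sketched as a hook on the text-drawing path (informative only; all names are illustrative). A monitor such as a screen reader sees the original string and its location before rasterization:

    # Informative sketch only: a text-drawing routine that notifies
    # monitors (e.g., a screen reader) with the original string and
    # location before the text is rendered to pixels.
    monitors = []

    def draw_text(text, x, y):
        for monitor in monitors:
            monitor(text, x, y)        # original text, pre-rasterization
        render_pixels(text, x, y)      # stand-in for the real rasterizer

    def render_pixels(text, x, y):
        pass  # placeholder: the actual drawing would happen here

    monitors.append(lambda t, x, y: print(f"drawing {t!r} at ({x}, {y})"))
    draw_text("File saved", 10, 20)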
8.5.13 Support combinations of assistive technologies (Level 3)
Software should enable multiple assistive technologies to operate at the same time.
NOTE 1:
Compatibility between AT is the responsibility of the AT.
NOTE 2:
This provision includes cases where multiple software assistive technologies are connected in series or in parallel and cases where software can control the operations of various devices including hardware assistive technology.
EXAMPLE:
An operating system allows users to install multiple assistive technologies that can inject or filter keyboard input.
8.6 Closed systems
8.6.1 Read content on closed systems (Level 1)
Software that is on or intended for installation on closed systems should allow the user to move keyboard focus, using a keyboard or keypad, to any visually presented information and have that content read aloud.
8.6.2 Announce changes on closed systems (Level 1)
Software that is on or intended for installation on closed systems should allow the user to have any change in keyboard focus, status, or content audibly announced.
8.6.3 Operable through tactilely discernable controls (Level 1)
Software that is on or intended for installation on closed systems should provide at least one mode where all functionality can be achieved through devices that do not require vision.
EXAMPLE:
Touch screen software is designed so that all functionality can also be achieved through a keypad with keys that are easy to feel.
NOTE:
If software is designed to operate via keyboard it would satisfy this requirement, unless it is known in advance that the keyboard requires vision to operate, such as an on-screen keyboard, or the software is specifically for a device with a flat membrane keyboard.
8.6.4 Pass through of system functions (Level 1)
Software that is on or intended for installation on closed systems should pass through or implement the platform’s accessibility features.
NOTE 1: Accessibility features in the platform software that do not apply to the platform hardware (e.g. a keyboard feature in a kiosk that doesn’t use a keyboard) are not considered ‘accessibility features of the platform’.
NOTE 2: The phrase “implement the platform’s accessibility features” means providing accommodation similar to the platform accessibility features; where software implements such features itself, it is not expected to also pass the platform’s features through.
9 Inputs
9.1 Alternative input options
9.1.1 Provide keyboard input from all standard input mechanisms (Level 2)
Platform software should provide a method for generating keyboard input from each standard input mechanism provided by the platform.
EXAMPLE 1:
A platform that supports mouse input includes a mouse-operated on-screen keyboard utility that can be used to control any application that is designed to take keyboard input.
EXAMPLE 2:
A platform provides a built-in speech recognition feature and provides the facility to type any key or key combination on the standard keyboard using speech recognition.
9.1.2 Provide parallel keyboard control of pointer functions (MouseKeys) (Level 1)
Platform software should provide a keyboard alternative to standard pointing devices that enables keyboard (or keyboard equivalent) control of pointer movement and pointing device button functions in parallel with the standard pointing device.
NOTE 1:
This is commonly called MouseKeys and is available on most major platforms (see Annex B).
NOTE 2:
This allows users who have restricted limb/hand movement or coordination to more easily control pointing functions.
NOTE 3:
It is important that the keyboard alternative works in parallel with the regular pointing device (mouse, trackball, touchscreen, etc.). A user with low motor function might move the mouse pointer to the general vicinity of a target, and then fine-tune the position using the keyboard control.
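An informative sketch of the core of such a keyboard alternative follows (all names are illustrative). Keypad presses are translated into relative movements of the same pointer state that the physical pointing device updates, which is what allows the two to be used in parallel:

    # Informative sketch only: a MouseKeys-style feature that converts
    # numeric keypad presses into relative pointer movement. The pointer
    # state is shared with the regular pointing device, so both can be
    # used in parallel. All names are illustrative.
    DIRECTIONS = {
        "KP_8": (0, -1), "KP_2": (0, 1),   # up, down
        "KP_4": (-1, 0), "KP_6": (1, 0),   # left, right
    }

    pointer = {"x": 100, "y": 100}          # shared pointer position

    def on_keypad(key, step=4):
        dx, dy = DIRECTIONS.get(key, (0, 0))
        pointer["x"] += dx * step
        pointer["y"] += dy * step

    on_keypad("KP_6")                       # fine-tune rightwards
    print(pointer)                          # {'x': 104, 'y': 100}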
9.1.3 Provide pointer control of keyboard functions (Level 2)
Platform software should provide a pointing device-based alternative to the keyboard that includes pointing device control of latching and locking of key presses.
NOTE:
This allows users who cannot use the keyboard and can only use a pointing device to type.
EXAMPLE 1:
A person who cannot use the keyboard can operate the device completely with a head operated mouse.
EXAMPLE 2:
An operating system includes an on-screen keyboard emulator that allows the user to perform the equivalent of pressing, latching, and locking all keyboard keys using only a pointing device.
9.1.4 Provide speech recognition services (Level 2)
If the hardware has the capability to support speech recognition, platform software should provide or enable the use of programming services for speech recognition.
NOTE 1:
This does not imply that a speech recognition engine should always be installed.
NOTE 2:
This is relevant for users with visual, physical, and cognitive disabilities.
EXAMPLE:
A virtual machine allows software that it hosts to access speech recognition services provided by the operating system.
9.1.5 Provide system-wide spell checking tools (Level 3)
Platform software should provide system-wide support for spelling assistance by indicating probable errors and providing suggestions when they are known. Except where the task involves the testing of the user's ability to spell, application software should support the system spelling assistance, or where it is not provided by the platform software, application software should provide this functionality for its own content.
NOTE 1:
The ability to automatically check spelling is not possible for every language.
NOTE 2:
Spelling is a problem for many users including people with text disabilities such as dyslexia.
EXAMPLE 1:
A user's input in a textbox is checked for spelling using the operating system's spelling checker service.
EXAMPLE 2:
A user of a text editor that does not provide a spell checker feature uses the operating system's spelling checker service to check their spelling.
9.2 Keyboard focus
9.2.1 Provide keyboard focus and text cursor (Level 1)
Software should provide a keyboard focus cursor that visually indicates which user interface element currently has the keyboard focus, as well as a text cursor to indicate the focus location within a text element.
NOTE:
The availability of this information to Assistive Technology is covered in section 8.5 (Compatibility with Assistive Technology).
EXAMPLE 1:
A box or highlighted area appears around the checkbox that will be activated if the user hits the space bar.
EXAMPLE 2:
A text cursor (a flashing I-bar) appears in the data entry field at the location where any typed characters will be inserted, and also at the end of a text selection highlight to show which end of the highlight will move with the next shift-arrow keystroke.
9.2.2 Provide highly visible keyboard focus and text cursors (Level 1)
Software should provide at least one mode where keyboard focus cursors and text cursors are visually locatable by people with unimpaired vision at 2.5 meters when software is displayed on a 38 cm (15 inch) diagonal screen at 1024 x 768 pixel resolution, without moving the cursor.
EXAMPLE 1:
The software provides an option of having a thick rectangle of contrasting color that moves to and outlines the control or field that has keyboard focus.
EXAMPLE 2:
The software provides an option of having bright, yellow triangles extend from the top and bottom of the text cursor.
9.2.3 Restore state when regaining keyboard focus (Level 2)
When a window regains focus, software should restore the keyboard focus, selection, and active modes to the values they had before the window lost the focus, except when the user explicitly requests otherwise.
NOTE 1:
This is important because, if keyboard focus is not retained when a window regains focus via keyboard navigation, a keyboard user must press many keystrokes to return to their previous location and selection.
NOTE 2:
Some user actions may return keyboard focus to a window and then move the keyboard focus to a specific element within that window automatically or change the state of the document.
EXAMPLE 1:
Keyboard focus is currently on the third button in a window until focus is switched to another window. When the focus returns to the original window, keyboard focus is returned to the third button in that window.
EXAMPLE 2:
The user is editing the contents of a spreadsheet cell. Keyboard focus is switched to another window. When the user returns to the original window, the application is still in editing mode, the same text is selected, and the text cursor is at the same end of the selected text as it was before the window lost focus.
EXAMPLE 3:
On some platforms, the user may position the pointer over a window that does not have the keyboard focus, and click on a control in that window, thereby moving the keyboard focus to that window and then to the control that the user clicked.
See also Part 3, provisions 8.8.5, 8.8.7, and 10.9.
9.3 Keyboard input
9.3.1 General
Although the guidelines in this sub-clause refer to “keyboard input”, the source of such keyboard input may be a variety of software and hardware alternative input devices.
In this sub-clause, the term “keyboard” should be interpreted as referencing a logical device rather than a physical keyboard.
9.3.2 Enable full use via keyboard (Level 1)
Unless the task requires time-dependent analog input, software should provide users with the option to carry out all tasks using only non-time dependent keyboard (or keyboard equivalent) input.
NOTE 1:
Meeting this requirement has a particular benefit to a large number of people with different disabilities and enhances usability for people without disabilities as well.
EXAMPLE:
A watercolor painting application where the darkness is dependent on the time the pointer spends at any location is exempt because the task requires time-dependent analog input.
NOTE 2:
This includes, but is not limited to, editing text and other document components, and navigation to and full operation of all controls as well as not having the keyboard focus become trapped on any interface element.
NOTE 3:
Platform-based on-screen keyboards, speech input, and handwriting are all examples of keyboard equivalents since their output appears to applications as keystroke input.
EXAMPLE:
Users move keyboard focus among and between windows displayed from different software using voice commands that generate only keyboard input.
NOTE 4:
Use of MouseKeys would not satisfy this guideline because it is not a keyboard equivalent to the application; it is a mouse equivalent (i.e. it looks like a mouse to the application).
NOTE 5:
All input functionality needs to be keyboard operable, but not necessarily all user interface elements. If there are multiple ways to perform a task, only one of them needs to be keyboard operable, though it is best if all possible forms are keyboard operable.
NOTE 6:
This does not preclude and should not discourage the support of other input methods (such as a mouse) in addition to keyboard operation.
NOTE 7:
This includes accessibility features built into the application.
EXAMPLE:
Features for people who are hard of hearing are operable from the keyboard because people who have hearing disabilities may also have physical disabilities that prevent them from using a mouse.
NOTE 8:
This recommendation also includes keyboard navigation between groups of controls, as well as inside those groups (described in more detail in requirement 9.3.17).
EXAMPLE 1:
A user uses the tab key to navigate to and from a list, and uses the up/down arrow keys to navigate up and down within the list.
EXAMPLE 2:
All user interface elements or equivalent functions accessible via the pointer are accessible via keyboard input. Users make menu choices, activate buttons, select items, and perform other pointer-activated tasks via keyboard input.
EXAMPLE 3:
A computer-aided design (CAD) program is usually used with a mouse, but also provides the facility to specify points by x, y, and z coordinates, and to select drawing elements from a hierarchical list of sub-assemblies and parts or by using arrow keys to navigate through elements displayed on the screen. Users navigating via the keyboard have no problem identifying the position of the keyboard focus.
EXAMPLE 4:
An application in a PDA is usually operated with a stylus, but all functionality can also be controlled from any keyboard that plugs into the PDA.
EXAMPLE 5:
An educational physics simulator for the parabolic movement of a launched object uses the mouse to simulate the angle (direction) and strength (speed) of the object launching. This is a highly intuitive input method, but inconvenient for people unable to use the mouse. An alternative input method could be a form in which the user types in the values of angle and strength.
9.3.3 Enable sequential entry of multiple (chorded) keystrokes (StickyKeys) (Level 1)
Software should enable users to lock or latch modifier keys (e.g. Shift, Ctrl, Alt, Option, Command) so that multiple key combinations and key-plus-mouse button combinations can be entered sequentially rather than by simultaneously pressing multiple keys.
NOTE 1:
This is commonly called StickyKeys and is available on most major platforms (see Annex B).
NOTE 2:
Most operating systems provide this function for all standard modifier keys. Other software is generally responsible for implementing this feature only for non-modifier keys that it treats as modifier keys.
NOTE 3:
This allows users who have physical impairments a means to enter combination key commands (e.g., Ctrl+C, Ctrl+Alt+Del) by pressing one key at a time.
EXAMPLE:
A graphics program allows the user to modify mouse clicks by holding down the Del key. Because Del is not treated as a modifier key by the operating system, the graphics program either provides key latching and locking, or provides an alternative method that does not require simultaneous operations.
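The latching and locking behavior described above can be sketched as a small state machine (informative only; names are illustrative). One press of a modifier latches it for the next key; a second press locks it until it is pressed again:

    # Informative sketch only: a StickyKeys-style state machine.
    # All names are illustrative.
    MODIFIERS = {"Shift", "Ctrl", "Alt"}

    latched, locked = set(), set()

    def press(key):
        if key in MODIFIERS:
            if key in locked:
                locked.discard(key)    # third press releases the lock
            elif key in latched:
                latched.discard(key)
                locked.add(key)        # second press locks
            else:
                latched.add(key)       # first press latches
            return None
        combo = sorted(latched | locked) + [key]
        latched.clear()                # latched (not locked) modifiers expire
        return "+".join(combo)

    press("Ctrl")
    print(press("c"))                  # "Ctrl+c", entered sequentially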
9.3.4 Provide adjustment of delay before key acceptance (SlowKeys) (Level 1)
Software should enable users to adjust the delay during which a key is held down before a key-press is accepted, across a range of times that includes a value of two (2) seconds.
NOTE 1:
This is commonly called SlowKeys and is available on most major platforms (see Annex B).
NOTE 2:
In most cases this feature is supported automatically when software uses the standard keyboard input services provided by the platform software (and does not override this feature of the services).
NOTE 3:
A common range for a key acceptance delay feature is 0.5 seconds to 5 seconds.
NOTE 4:
This feature allows users who have limited coordination, and who may have trouble striking an intended key, to input the intended key-press by holding the key down for a longer period of time than unintended key-presses. This delay of acceptance means that short key-presses caused by bumping keys unintentionally are ignored.
NOTE 5:
BounceKeys (9.3.5) would have no effect if SlowKeys is active.
9.3.5 Provide adjustment of same-key double-strike acceptance (BounceKeys) (Level 1)
Software should enable users to adjust the delay after a keystroke, during which an additional key-press will be ignored if it is identical to the previous keystroke, across a range of times that includes a value of one-half (1/2) second.
NOTE 1:
This is commonly called BounceKeys and is available on most major platforms (see Annex B).
NOTE 2:
In most cases this feature is supported automatically when software uses the standard keyboard input services provided by the platform software (and does not override this feature of the services).
NOTE 3:
This feature allows users, who may have tremors or other motor conditions that cause them to unintentionally strike the same key more than once, to prevent a system from accepting inadvertent key-presses.
NOTE 4:
A typical range for a double strike acceptance delay feature is 0.2 seconds to 1 second.
NOTE 5:
“BounceKeys” would have no effect if SlowKeys is turned on.
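The two acceptance filters of 9.3.4 and 9.3.5 can be sketched as follows (informative only; names and time values are illustrative). Because brief inadvertent re-strikes are already rejected by the SlowKeys hold requirement, running SlowKeys leaves nothing for BounceKeys to reject, which is the point made in the notes above:

    # Informative sketch only: SlowKeys- and BounceKeys-style acceptance
    # filters over (key, press_time, release_time) events. Times are in
    # seconds; all names are illustrative.
    def slow_keys(events, hold_delay=2.0):
        # Accept a key only if it was held at least hold_delay seconds.
        return [(k, t0, t1) for (k, t0, t1) in events if t1 - t0 >= hold_delay]

    def bounce_keys(events, bounce_delay=0.5):
        accepted, last = [], {}
        for key, t0, t1 in events:
            # Ignore a repeat of the same key within bounce_delay seconds.
            if key in last and t0 - last[key] < bounce_delay:
                continue
            accepted.append((key, t0, t1))
            last[key] = t1
        return accepted

    events = [("a", 0.0, 2.1), ("a", 2.2, 4.3), ("b", 5.0, 5.1)]
    print(slow_keys(events))    # long holds pass; the brief "b" is ignored
    print(bounce_keys(events))  # the immediate repeat of "a" is ignored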
9.3.6 Provide adjustment of key repeat rate (Level 2)
Software should enable users to adjust the rate of key repeat down to 1 repeat per 2 seconds.
NOTE 1:
This feature allows users with slow reaction time to better control the number of repeated characters that will be produced by holding down a key for some time.
NOTE 2:
In most cases this feature is supported automatically when software uses the standard keyboard input services provided by the platform software (and does not override this feature of the services).
9.3.7 Provide adjustment of key-repeat onset (Level 2)
Software should enable users to adjust the time between the initial key press acceptance and key repeat onset across a range of times including a value of two (2) seconds.
NOTE 1:
This prevents users whose reaction time may be slow from producing unwanted repeated characters by holding down a key long enough to unintentionally initiate key repeat.
NOTE 2:
In most cases this feature is supported automatically when software uses the standard keyboard input services provided by the platform software (and does not override this feature of the services).
9.3.8 Allow users to turn key repeat off (Level 1)
Software that provides key repeat should enable users to turn off the key repeat feature.
NOTE 1:
This feature prevents users with very slow reaction time from producing unwanted repeated characters while holding down a key.
NOTE 2:
In most cases this feature is supported automatically when software uses the standard keyboard input services provided by the platform software (and does not override this feature of the services).
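The three adjustments of 9.3.6, 9.3.7, and 9.3.8 interact as sketched below (informative only; names and default values are illustrative). Holding a key for less than the onset delay produces no repeats, and repeat can be disabled outright:

    # Informative sketch only: key-repeat behavior governed by the three
    # user-adjustable settings in 9.3.6 to 9.3.8. Names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class RepeatSettings:
        enabled: bool = True       # 9.3.8: repeat can be turned off entirely
        onset_delay: float = 2.0   # 9.3.7: seconds before repeat begins
        interval: float = 2.0      # 9.3.6: seconds between repeats (1 per 2 s)

    def repeats(hold_time, s=RepeatSettings()):
        # Number of repeated characters produced by holding a key hold_time seconds.
        if not s.enabled or hold_time < s.onset_delay:
            return 0
        return int((hold_time - s.onset_delay) / s.interval) + 1

    print(repeats(1.5))   # 0: key released before repeat onset
    print(repeats(6.0))   # 3: one at onset, then one every 2 seconds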
9.3.9 Provide notification about toggle-key status (ToggleKeys) (Level 2)
Software should provide information to the user in both visual and auditory form concerning changes to the status of keys that toggle or cycle between states.
NOTE 1:
This is commonly called ToggleKeys and is available on most major platforms (see Annex B).
NOTE 2:
This allows users who are unable to see keyboard status lights to determine the current state of a binary-state keyboard toggle control such as “Caps Lock” or “Num Lock.”
NOTE 3:
Applications do not need to duplicate the functionality of the ToggleKeys feature provided by major operating systems. Applications that are developed for a specific platform need to provide feedback for all keys that toggle or cycle that are used by the application and not already handled by a ToggleKeys feature provided by the platform.
EXAMPLE 1:
A locked state is indicated by a high-frequency beep (or a two-tone mid-high sequence) and an unlocked state by a low-frequency beep (or a two-tone mid-low sequence).
EXAMPLE 2:
Firmware in a notebook computer generates three different tones as the user presses the Fn and F4 keys to cycle between three different projection states (LCD, CRT, and LCD+CRT).
EXAMPLE 3:
An application uses the INSERT key to toggle between inserting typed characters and having them replace existing text. This mode is specific to the application and therefore not handled by the operating system’s ToggleKeys feature, so the application toggles an indicator in its status bar and optionally generates a tone to indicate whether insertion is on or off.
NOTE 4:
Section 8.5.7 (Make event notification available to assistive technologies) requires software to notify assistive technology of these status changes.
9.3.10 Provide accelerator keys (Level 3)
Software should provide accelerator keys for frequently used features.
NOTE 1:
In many cases, not every feature can or needs to be mapped to an accelerator key. The choice of what features to map to accelerator keys may be made by determining which features would constitute a core set of frequent and useful functions.
NOTE 2:
Accelerator keys are especially important for users who type slowly, interact only through a keyboard, or use keyboard emulators such as speech-recognition systems. Users who have disabilities benefit because they can reduce time-consuming steps that would otherwise be required to activate accelerated features.
NOTE 3:
The keys that are available to be used as accelerators somewhat depend on the conventions of the platform and the language of the user interface.
EXAMPLE:
Users can press “Ctrl+C” to copy, “Ctrl+V” to paste, or “Ctrl+P” to print.
9.3.11 Provide implicit or explicit designators (Level 2)
Software should provide implicit or explicit designators, displayed by default, for all user interface elements that take input and have visible textual labels, within the limits of the characters that can be typed. The implicit and explicit designators should be unique within their context, or the user should be able to choose between them without carrying out any unintended action.
NOTE 1:
This does not preclude providing an option to turn off the implicit and/or explicit designators.
NOTE 2:
Implicit and explicit designators are restricted to the set of characters that can be displayed and typed, and therefore it may not be possible to provide designators when there are a large number of labeled elements. In such cases it is recommended that designators be provided for the most commonly-used elements.
NOTE 3:
Including a very large number of controls in a single form often results in a loss of usability, in addition to making it impossible to provide unique designators.
EXAMPLE 1:
In the portion of a menu shown in Figure 5, the implicit designators are the underlined letters: “T”, “S”, “E”, “W”, “g”, “m”, “L” and “D”.
Figure 5 Example of implicit designators in a menu
EXAMPLE 2:
In the pushbuttons given in Figure 6, the implicit designators are “D” and “P”.
Figure 6 Examples of implicit designators in pushbuttons
EXAMPLE 3:
On a screen used for initiating a print job, the control name is displayed as “Print”, and the underlined “P” indicates that “P” is the implicit designator for the “Print” command.
9.3.12 Reserve accessibility accelerator key assignments (Level 1)
The accelerator key assignments in Table 2 should be reserved for the purposes shown in the second column of Table 2.
Table 2: Reserved accelerator key assignments

Accelerator key | Used for
Five consecutive clicks of the Shift key | On/Off for StickyKeys
Right Shift key held down 8 seconds | On/Off for SlowKeys and RepeatKeys
The key combination designated by the platform being used | On/Off for MouseKeys
NOTE:
To accommodate other accessibility options, platform software may reserve additional accelerator keys.
9.3.13 Enable remapping of keyboard functions (Level 3)
Platform software should allow users to re-assign the mappings of all keys unless restricted by hardware.
Software running on such systems should support these remappings.
NOTE 1:
Remapping is not the same thing as reassigning a function to a different key in an application. (See 9.3.19, “Allow users to customize accelerator keys”.) Remapping globally changes which logical key is associated with each physical key on the keyboard.
NOTE 2:
In most cases this is met automatically when software uses the standard keyboard input services provided by the platform software (and does not override this feature of the services).
EXAMPLE 1:
A user who has a left arm and no right arm switches frequently used letters from the right to the left side of the keyboard.
EXAMPLE 2:
In order to correctly support keyboard remapping at the operating system level, an application uses the platform functions to read “virtual keys” rather than “scan codes” that are associated with physical keys.
9.3.14 Separate keyboard navigation and activation (Level 1)
Software should allow users to move the keyboard focus without triggering any effects other than the presentation of information (e.g. scrolling or pop-ups that do not change the focus or selection). An explicit keystroke or similar user action should be provided to trigger any other user-initiated effect.
NOTE 1:
This does not preclude the provision of additional navigation techniques that do cause effects such as changing selection.
NOTE 2:
This is particularly important for users who cannot see the entire screen at one time, and would have to explore the user interface by navigating through all available user interface elements. In some cases they would not be aware of any side effects caused by such navigation.
NOTE 3:
Software would fail this provision if a user cannot exit a data entry field without entering or changing data because moving the keyboard focus to the field caused an effect (triggering a mandatory data entry mode). (Also see HFES 200.3, 8.6.)
EXAMPLE 1:
A user presses the Tab key to move from a button to a set of checkboxes. When the first checkbox acquires keyboard focus, it does not become activated. Activation requires a separate step, such as pressing the spacebar.
EXAMPLE 2:
In addition to using the mouse to make selections from a list, the user can also use the arrow keys to move through items in a list, selecting them by hitting the space bar. In lists that allow multiple selections, the user can hold down the control key in combination with the arrow keys to move through the list, hitting the space bar for each item that the user wants to select.
9.3.15 Follow platform keyboard conventions (Level 2)
Software should follow keyboard access conventions established by the platform software on which it is run.
NOTE 1:
This improves the usability of new applications, but it is especially relevant for people who can only use the keyboard or have cognitive impairments.
EXAMPLE 1:
An application follows the system conventions that Alt is used to indicate the use of implicit designators when held down and to activate the application main menu when pressed and released.
EXAMPLE 2:
An application avoids reassigning the key combination used by the operating system to activate the MouseKeys feature.
EXAMPLE 3:
An application uses the Esc key to cancel its custom dialog and message boxes, because that follows the convention established by the operating system.
NOTE 2:
The platform conventions normally include the assignment of implicit designators, modifier keys and accelerator keys.
NOTE 3:
This does not preclude the provision of additional keyboard shortcuts and techniques in addition to those that are platform conventions.
NOTE 4:
Keyboard conventions may be established by the operating system or by a separate graphical user interface layer.
9.3.16 Facilitate list and menu navigation (Level 2)
Software should provide keyboard mechanisms to facilitate navigation within menus and lists.
NOTE 1:
Wrapping with auditory and visual indication of rollover is one strategy. Home and End keys are another. Often both are provided.
NOTE 2:
When navigational order is circular, an alert signal is provided.
EXAMPLE 1:
The user presses the Home key to move to the first item in a list, the End key to move to the last item in the list, and the PgUp and PgDn keys to move forward and backward by the number of items currently visible.
EXAMPLE 2:
The user types one or more characters to move to the next item that starts with those characters.
9.3.17 Facilitate navigation of controls by grouping (Level 3)
Where there are large numbers of navigable controls, controls should be grouped for ease of navigation.
9.3.18 Arrange controls in task-appropriate navigation order (Level 3)
Controls should be arranged so that when the user navigates with the keyboard they are visited in appropriate order for the user’s task.
NOTE:
For users who are visually impaired or blind, the order and grouping in which keyboard navigation occurs may be the only order in which they can use controls.
EXAMPLE:
As a user presses the Tab key, the keyboard focus cursor moves to a task-appropriate group of radio buttons, followed by the next group of radio buttons, and so on, in a task and conceptually appropriate order. Within each group of radio buttons, the user moves among related buttons by pressing the arrow keys.
9.3.19 Allow users to customize accelerator keys (Level 3)
Software should enable users to customize the assignment of accelerator keys to actions.
EXAMPLE 1:
An application allows the user to create macros that carry out one or more actions, and assign these macros to accelerator keys.
EXAMPLE 2:
A user frequently presses Ctrl+P when they mean to press Ctrl+O, so they disable Ctrl+P as a shortcut for the Print function.
9.4 Pointing devices
9.4.1 General
The term “pointing device” in this clause refers to any physical or logical pointing device. Such devices include mice, trackballs, touchscreens and touchpads, as well as specialized input devices such as head trackers, “sip & puff” systems, and many other hardware/software combinations that systems treat as pointing devices. Some devices, such as touchscreens and touchpads, may use a finger tap or gesture in place of physical buttons, and these should be interpreted as equivalent to pointing-device button events and covered by provisions addressing such types of input.
9.4.2 Provide direct control of pointer position from external devices (Level 1)
Platform software should provide a service to enable software, including pointing device drivers, to directly position the pointer. In addition, all pointing device drivers should support direct positioning of the pointer.
EXAMPLE:
An eye-gaze mouse alternative plugged into USB and using the standard mouse driver can set the absolute position of the mouse pointer on the screen.
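An informative sketch of the service shape this provision describes follows (all names are illustrative, not a real driver interface). A driver for an absolute device such as an eye-gaze system passes raw coordinates, which the service clamps to the screen:

    # Informative sketch only: a platform service for absolute pointer
    # positioning, usable by drivers for eye-gaze and similar devices.
    # All names are illustrative.
    class PointerService:
        def __init__(self, width, height):
            self.width, self.height = width, height
            self.x, self.y = 0, 0

        def set_position(self, x, y):
            # Clamp to the screen so drivers can pass raw gaze coordinates.
            self.x = max(0, min(self.width - 1, x))
            self.y = max(0, min(self.height - 1, y))

    svc = PointerService(1024, 768)
    svc.set_position(2000, 400)    # eye-gaze sample beyond the screen edge
    print(svc.x, svc.y)            # 1023 400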
9.4.3 Provide easily-selectable pointing device targets (Level 2)
Target size should be optimized to maintain adequate target selectability, grouping and separation from adjacent user interface elements.
NOTE: This makes usage easier for all pointing device users, but it is especially important for enabling users with disabilities to select user interface elements effectively with a mouse or head-operated pointing device.
9.4.4 Enable reassignment of pointing device button functions (Level 1)
Platform software should enable users to reassign the functions for each pointing device button.
NOTE: It is desirable for applications to respect global OS settings for button reassignments rather than making such assignments on a per-application basis.
EXAMPLE 1: A user with partial paralysis in the right arm wishes to remap the mouse buttons in order to use the mouse with their left arm. Instead of the buttons being interpreted as buttons 1, 2 and 3 from left to right, they can be remapped as buttons 3, 2 and 1 from left to right.
EXAMPLE 2: A user with a trackball that has four buttons can choose which positions to use for each function based on their ability to reach them.
9.4.5 Provide alternative input methods for complex pointing device operations (Level 2)
Software should enable all user-initiated actions that can be accomplished with multi-clicks (i.e. double or triple clicks), simultaneous pointing device operations (e.g. hold and drag), or spatial or temporal gestures (e.g. a scribbling motion or holding buttons down for designated periods of time) to also be accomplishable with an alternative pointing device method that does not require multi-clicks, simultaneous operations or gestures.
NOTE: The number of buttons available on standard devices may limit the ability to use pointing device buttons as ‘multi-click’ or ‘button-hold’ buttons. Thus other methods are usually needed to accomplish multi-click or simultaneous mouse button actions.
EXAMPLE 1: The user can use the right-click menu to achieve the same function as a double click.
EXAMPLE 2: Instead of holding the pointing device button down to keep a popup open, the user can click to open it and click again to close it.
EXAMPLE 3: A user with a cognitive disability can single-click to perform a multi-click operation.
9.4.6 Enable pointing device button-hold functionality (Level 1)
Software should provide a method such that users are not required to hold down a pointing device button for more than the system single-click time in order to directly manipulate a user interface element, activate a control, or maintain a view of a menu.
NOTE 1: In many systems, this facility may have to be built into driver software.
NOTE 2: If the MouseKeys feature is implemented fully in parallel with the standard pointing device, then this functionality would be provided, since the keyboard could be used to hold down the pointing device buttons. If the platform provides this functionality and other software does not override it, then the provision would be met.
EXAMPLE 1: Users have the option to view a menu by pressing and releasing a mouse button rather than pressing and holding it.
EXAMPLE 2: Users have the option to “lock” single-clicks so that they are treated as continuous button presses, allowing them to select across text without holding down a mouse button.
EXAMPLE 3: Using the MouseKeys feature (which allows the user to press a key on the number pad to lock the mouse button down), users can drag and drop user interface elements without continuously pressing down on a mouse button.
EXAMPLE 4: An application allows the user to make a rubbing gesture on a graphics tablet to erase numbers on screen. The application provides the ability to have the graphics cursor puck buttons toggle on and off so the user does not have to keep the button held down manually while moving the cursor puck.
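The following informative sketch illustrates the click-to-toggle alternative described in EXAMPLES 2 and 4, using Python with tkinter; the canvas item and bindings are illustrative assumptions. One click “picks up” the item and a second click “drops” it, so no sustained button press is required.

    # Informative sketch: click to toggle instead of press-and-hold.
    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=300, height=200)
    canvas.pack()
    box = canvas.create_rectangle(20, 20, 60, 60, fill="steelblue")
    carrying = False

    def on_click(event):
        global carrying
        carrying = not carrying   # first click picks up, second click drops

    def on_motion(event):
        if carrying:              # the item follows the pointer between clicks
            canvas.coords(box, event.x - 20, event.y - 20,
                          event.x + 20, event.y + 20)

    canvas.bind("<Button-1>", on_click)
    canvas.bind("<Motion>", on_motion)
    root.mainloop()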
9.4.7 Provide adjustment of delay of pointing-device button-press acceptance (Level 2)
Software should enable users to adjust the delay during which a pointing device button is held down before a button-press is accepted, across a range of times including a value of one (1) second.
NOTE 1: A typical range for an adjustment of the delay of pointing-device button-press acceptance is 0.1 to 1.0 seconds.
NOTE 2: In most cases this feature is supported automatically when software uses the standard pointing device input services provided by the platform software (and does not override this feature of the services).
EXAMPLE: A user who has tremors sets a duration long enough to prevent tremor-induced unintentional presses, made as they move the mouse around, from being accepted as intentional presses.
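A minimal informative sketch of this acceptance delay, in Python with tkinter; the one-second threshold and widget names are illustrative assumptions. A press counts only if the button is held for at least the user-set delay.

    # Informative sketch of adjustable button-press acceptance delay.
    import time
    import tkinter as tk

    acceptance_delay = 1.0   # user-adjustable; a typical range is 0.1 to 1.0 s
    press_started = 0.0

    root = tk.Tk()
    button = tk.Button(root, text="Activate")
    button.pack(padx=40, pady=40)

    def on_press(event):
        global press_started
        press_started = time.monotonic()

    def on_release(event):
        held = time.monotonic() - press_started
        if held >= acceptance_delay:
            print(f"Press accepted (held {held:.2f} s)")
        else:
            print(f"Press ignored as unintentional (held {held:.2f} s)")

    button.bind("<ButtonPress-1>", on_press)
    button.bind("<ButtonRelease-1>", on_release)
    root.mainloop()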
9.4.8 Provide adjustment of minimum drag distance (Level 2)
Software should enable users to adjust the minimum pointer movement while the pointing-device button is held down that will be registered as a drag event.
NOTE: In most cases this feature is supported automatically when software uses the standard pointing device input services provided by the platform software (and does not override this feature of the services).
EXAMPLE: A user who has tremors is able to select an item using a mouse without accidentally dragging that item to a new location.
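An informative sketch of a minimum drag distance, in Python with tkinter; the pixel threshold is an illustrative assumption. Pointer movement with the button held down is treated as a drag only once it exceeds the user-set distance.

    # Informative sketch of an adjustable minimum drag distance.
    import math
    import tkinter as tk

    min_drag_px = 12          # user-adjustable threshold in pixels
    press_point = None
    dragging = False

    root = tk.Tk()
    canvas = tk.Canvas(root, width=300, height=200)
    canvas.pack()

    def on_press(event):
        global press_point, dragging
        press_point, dragging = (event.x, event.y), False

    def on_motion(event):
        global dragging
        if press_point and not dragging:
            if math.dist(press_point, (event.x, event.y)) >= min_drag_px:
                dragging = True        # only now does a drag operation begin
                print("Drag started")

    def on_release(event):
        global press_point
        if not dragging:
            print("Treated as a click, not a drag")
        press_point = None

    canvas.bind("<ButtonPress-1>", on_press)
    canvas.bind("<B1-Motion>", on_motion)
    canvas.bind("<ButtonRelease-1>", on_release)
    root.mainloop()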
9.4.9 Provide adjustment of multiple-click parameters (Level 1)
Software should enable users to adjust the interval required between clicks and the distance allowed between the positions of the pointer at each click, for the operation to be accepted as a double- or triple-click.
NOTE: In most cases this feature is supported automatically when software uses the standard pointing device input services provided by the platform software (and does not override this feature of the services).
EXAMPLE 1: Users with slow movements may take a second or more between the clicks of a double-click intended to open a document. Because they can adjust the time interval allowed between the two clicks, they can successfully double-click.
EXAMPLE 2: Users with tremor often inadvertently move the mouse cursor between the first and second click of a double-click. Because they can adjust the distance allowed between the two clicks of a double-click, they can choose a distance that allows them to successfully double-click even with their inadvertent movement between clicks.
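An informative sketch of both adjustable parameters (time interval and distance), in Python with tkinter; the threshold values are illustrative assumptions chosen to suit the users in the examples above.

    # Informative sketch of adjustable multiple-click parameters.
    import math
    import time
    import tkinter as tk

    max_interval = 1.5        # seconds allowed between clicks, user-adjustable
    max_distance = 25         # pixels allowed between clicks, user-adjustable
    last_click = None         # (timestamp, x, y) of the previous click

    root = tk.Tk()
    canvas = tk.Canvas(root, width=300, height=200)
    canvas.pack()

    def on_click(event):
        global last_click
        now = time.monotonic()
        if last_click:
            dt = now - last_click[0]
            dist = math.dist(last_click[1:], (event.x, event.y))
            if dt <= max_interval and dist <= max_distance:
                print("Double-click accepted")
                last_click = None
                return
        last_click = (now, event.x, event.y)

    canvas.bind("<Button-1>", on_click)
    root.mainloop()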
9.4.10 Provide adjustment of pointer speed (Level 1)
Software should enable users to adjust the speed or ratio at which the pointer moves in response to a movement of the pointing device.
NOTE: In most cases this feature is supported automatically when software uses the standard pointing device input services provided by the platform software (and does not override this feature of the services).
EXAMPLE: Users may change the speed of the pointer movement by setting an absolute speed or a ratio between movements of the pointing device and the pointer, so that the pointer movement is changed from a 1:1 mapping between the movement of the pointing device and the pointer to a 3:1 mapping.
9.4.11 Provide adjustment of pointer acceleration (Level 1)
If software provides pointing device acceleration, it should provide adjustment of the pointer movement acceleration, including a setting of zero.
NOTE: A zero acceleration setting allows assistive technology to move the pointer instantaneously with predictable results.
9.4.12 Provide adjustment of pointer movement direction (Level 3)
Software should enable users to adjust the direction in which the pointer moves in response to a movement of the pointing device.
NOTE 1: The pointer movement options include, but are not limited to, being the same as, the opposite of, or perpendicular to the pointing movement direction.
NOTE 2: This is useful for people with movement limitations.
NOTE 3: In most cases this feature is supported automatically when software uses the standard pointing device input services provided by the platform software (and does not override this feature of the services).
9.4.13 Provide a means for finding the pointer (Level 1)
Platform software should provide a mechanism to enable users to locate the pointer unless the pointer always has high contrast with the background, is always visible, and is always solid and larger than text.
EXAMPLE: A user with low vision loses track of the mouse pointer. When the Ctrl key is pressed, animated concentric circles are presented around the location of the mouse pointer.
9.4.14 Provide alternatives to simultaneous pointer operations (Level 1)
Software should provide a non-chorded alternative for any chorded key or button presses, whether chorded presses are on the pointing device alone or are on the pointing device in combination with a keyboard key-press.
NOTE 1: The intent here is to replace or supplement concurrent actions with sequential input alternatives, because multiple simultaneous actions may be difficult or impossible for users with motor impairments.
NOTE 2: Most operating systems provide this function for all standard pointer buttons. Other software is generally responsible for implementing this feature only for pointer buttons used in combinations that are not covered by operating system features.
EXAMPLE 1: If a task can be performed by pressing mouse button 1 and mouse button 2 simultaneously, it can also be performed using one mouse button to display a menu providing the same function.
EXAMPLE 2: If a file can be copied by pressing a keyboard modifier key while holding down a mouse button and dragging, then it is also possible to perform this task by selecting a menu operation called “Copy”.
10 Outputs
10.1 General output guidelines
10.1.1 Avoid seizure-inducing flash rates (Level 1)
Software should avoid flashing that may induce seizures in individuals with photosensitive seizure disorders.
NOTE 1: Standards in this area are currently undergoing revision and adaptation to apply to new displays.
NOTE 2: Fewer than three flashes in any one-second period meets all current standards.
10.1.2 Enable user control of time-sensitive presentation of information (Level 1)
Whenever moving, blinking, scrolling, or auto-updating information other than simple progress indicators is presented, software should enable the user to pause or stop the presentation.
NOTE 1: A simple progress indicator has no movement other than to indicate current completion status.
EXAMPLE 1: A progress indicator consists of a status bar that shows completion, along with an elf who is moving boxes. Clicking on the status indicator causes the elf to freeze, but the status bar continues to reflect status.
NOTE 2: Individuals with low vision or reading problems need time to study information in order to comprehend it.
NOTE 3: Varying the speed of presentation is also useful.
EXAMPLE 2: A Braille display is constantly refreshing to keep up with text output from software. The user pauses the presentation so they can read the Braille before it is refreshed.
EXAMPLE 3: A user presses the mouse button down on scrolling text, which pauses the moving text for as long as they hold the mouse button down, allowing the user to read the text.
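An informative sketch of EXAMPLE 3, in Python with tkinter; the scrolling speed, text and bindings are illustrative assumptions. Holding the mouse button pauses the movement; releasing it resumes.

    # Informative sketch of user-pausable scrolling text.
    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=400, height=60)
    canvas.pack()
    text = canvas.create_text(400, 30, text="Scrolling announcement ...",
                              anchor="w")
    paused = False

    def scroll():
        if not paused:
            canvas.move(text, -2, 0)
            if canvas.bbox(text)[2] < 0:        # wrapped off the left edge
                canvas.coords(text, 400, 30)
        root.after(50, scroll)                  # reschedule regardless of pause

    def set_paused(value):
        global paused
        paused = value

    canvas.bind("<ButtonPress-1>", lambda e: set_paused(True))
    canvas.bind("<ButtonRelease-1>", lambda e: set_paused(False))
    scroll()
    root.mainloop()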
10.1.3 Provide accessible alternatives to task relevant audio and video (Level 1)
When task relevant information is presented by audio or video, software should provide equivalent content in accessible alternative formats.
EXAMPLE 1: A video includes captions for the audio track.
EXAMPLE 2: The system provides an auditory description of the important information of the visual track of a multimedia presentation. (This is called audio description.)
10.2 Visual output (displays)
10.2.1 Enable users to adjust graphic attributes (Level 3)
To increase legibility of graphics, software should enable users to change attributes used to present the content without changing its meaning.
NOTE: There are numerous cases where changing the view will necessarily change the meaning. The intent is that users have the capability to change views as much as possible without changing the meaning.
EXAMPLE 1: A user who has low vision wishes to view a line graph of the stock market averages over the past five years. In order to see the graph, the user changes the thickness and color of the line.
EXAMPLE 2: The user can change attributes, such as line, border, bullet size and shadow thickness, for improved viewing of charts, graphs and diagrams, but such changes would not affect the meaning.
EXAMPLE 3: The length of a temperature gauge is not changed unless the scale is lengthened proportionally.
EXAMPLE 4: A user changes the size of icons, making it easier to tell them apart.
10.2.2 Provide a visual information mode usable by users with low visual acuity (Level 3)
Software should provide at least one mode for visual information usable by users with corrected visual acuity between 20/70 and 20/200 without relying on audio.
NOTE: There are several ways to conform to this recommendation. One possibility is that the software magnifies what is shown on the screen. Another is to enable the user to change the size of fonts and icons.
EXAMPLE 1: An operating system provides a ‘large print’ setting that enlarges the fonts, lines and controls by a factor of 2 to make them easier to see. It also provides a magnifier to further enlarge portions of the screen.
EXAMPLE 2: An application provides font sizing and word wrap to allow documents to be enlarged up to a 72-point font.
10.2.3 Use text characters as text, not as drawing elements (Level 3)
In graphical user interfaces, text characters should be used as text only, not to draw lines, boxes or other graphical symbols.
NOTE 1: Characters used in this way can confuse users of screen readers.
NOTE 2: In a character-based display or region, graphic characters may be used.
NOTE 3: This does not refer to the use of characters within an image. This only refers to the use of electronic text characters to create graphics (e.g. “ASCII art”).
EXAMPLE: A box drawn with the letter “X” around an area of text is read by screen-reader software as “X X X X X X” on the first line, followed by “X”, the content, and “X”. Text used for graphics in this way is usually confusing or uninterpretable when read sequentially by users with assistive software.
10.2.4 Provide keyboard access to information displayed outside the physical screen (Level 1)
If the virtual screen (e.g. desktop) is made larger than the visible screen so that some information is off screen, the platform software should provide a mechanism for accessing that information from the keyboard.
NOTE 1: It is often preferable to re-format a page to fit on a single display if possible.
NOTE 2: A viewing area extending beyond the physical boundaries of the computer display is usually called a virtual screen.
EXAMPLE: A moving view-port allows the user to pan to see the virtual screen area not displayed on the physical screen.
10.3 Text/Fonts
10.3.1 Do not convey information by visual font attribute alone (Level 2)
Software should not use visual font attributes alone as the only way to convey information or indicate an action.
EXAMPLE 1: Mandatory fields on a text entry form are indicated by bold text labels. An asterisk is added to the end of the mandatory field label so that this information is also available to blind users via speech output, and to screen magnification users who cannot easily detect emboldening.
EXAMPLE 2: Menu items that are not active are indicated by ‘grey’ or ‘dimmed’ text. This status is also conveyed programmatically.
10.3.2 Enable user to set minimum font size (Level 3)
Software should enable users to set a minimum font size with which information would be presented on the display.
NOTE 1: If the platform software already provides this facility, the application may utilize it.
NOTE 2: This would apply regardless of the font size specified in a displayed document.
NOTE 3: The range of allowable sizes need not be unlimited. However, to be useful, it would include large font sizes.
EXAMPLE 1: A word processor contains a “draft mode” which shows all document text in a single, user-selectable font, color, and font size, overriding any formatting information specified in the document itself. When the user encounters small text that they have difficulty reading, they can switch into this mode and will still be viewing the same section of the document, but at a size they have already selected as meeting their needs.
EXAMPLE 2: A user has difficulty reading small text on the screen, so they set a “minimum font size” preference value in the operating system’s control panel. Their Web browser respects this setting and automatically enlarges any text that would otherwise be smaller than this size.
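An informative sketch of enforcing a user-set minimum font size, in Python with tkinter; the preference value and font family are illustrative assumptions. Any requested size below the minimum is raised to it before text is displayed.

    # Informative sketch: never render text below the user's minimum size.
    import tkinter as tk
    import tkinter.font as tkfont

    user_minimum_pt = 14     # user preference, e.g. from a control panel

    def effective_size(requested_pt):
        """Raise any requested size to the user's minimum font size."""
        return max(requested_pt, user_minimum_pt)

    root = tk.Tk()
    for requested in (8, 12, 18):
        size = effective_size(requested)
        font = tkfont.Font(family="Helvetica", size=size)
        tk.Label(root, text=f"Requested {requested} pt, shown at {size} pt",
                 font=font).pack(anchor="w")
    root.mainloop()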
10.3.3 Adjust the scale and layout of user interface elements as font-size changes (Level 2)
User-interface elements should be scaled or have their layout adjusted by software as needed to account for changes in embedded or associated text size.
NOTE 1: This also applies to text associated with icons.
NOTE 2: In many cases these features are supported automatically when software incorporates standard user interface elements provided by the platform software.
NOTE 3: The range of allowable sizes need not be unlimited. However, to be useful, it would include large font sizes.
EXAMPLE 1: As fonts grow, button and menu sizes grow to accommodate them. If they become large enough, the window increases in size to prevent buttons from clipping (overwriting) each other. If the window would otherwise become too large to fit on the visible portion of the display, scroll bars are added.
EXAMPLE 2: A user increases the operating system’s global setting for the number of screen pixels per logical inch. An application then displays a window that was designed to contain an image below three lines of 10-point text. However, because of the global setting change, the 10-point text is now drawn using a larger number of physical pixels, and so is taller than in the default configuration. The application takes care to measure the height of the text when deciding where to draw the image, rather than assuming the text will be a predictable number of pixels tall.
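An informative sketch of the measurement approach in EXAMPLE 2, in Python with tkinter; the font, line content and layout offsets are illustrative assumptions. The vertical position of the image placeholder is derived from measured font metrics rather than a fixed pixel count.

    # Informative sketch: derive layout from measured text metrics.
    import tkinter as tk
    import tkinter.font as tkfont

    root = tk.Tk()
    canvas = tk.Canvas(root, width=300, height=200)
    canvas.pack()

    font = tkfont.Font(family="Helvetica", size=10)
    line_height = font.metrics("linespace")   # actual pixels per line, not a guess

    lines = ["First line", "Second line", "Third line"]
    for i, line in enumerate(lines):
        canvas.create_text(10, 10 + i * line_height, text=line,
                           anchor="nw", font=font)

    # A gray rectangle stands in for the image, placed below the measured
    # height of the three lines so it never overlaps enlarged text.
    top = 10 + len(lines) * line_height + 8
    canvas.create_rectangle(10, top, 90, top + 80, fill="gray75")
    root.mainloop()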
10.4 Color
10.4.1 Do not convey information by color output alone (Level 1)
Software should not use color alone as the only way to convey information or indicate an action.
NOTE: See HFES 200.5 clause 6.4.1.
EXAMPLE 1: Red is used to alert an operator that the system is inoperative or to indicate an emergency situation. In these cases, the use of color is supplemented by text indicating “warning” or “emergency.”
EXAMPLE 2: If an indicator changes color to show an error condition, then the user can also get text or audio information that indicates the error condition.
EXAMPLE 3: Negative numbers are coded in red and are also enclosed in parentheses.
10.4.2 Provide color schemes designed for people with disabilities (Level 3)
Software that includes color schemes should provide color schemes designed for use by people who have disabilities.
NOTE: People who have visual disabilities, dyslexia, photophobia, and sensitivity to screen flicker have color preferences that affect their use of light-emitting displays.
EXAMPLE: High-contrast monochrome schemes are provided, including one using a light foreground on a dark background and another using a dark foreground on a light background. The software system also includes schemes that avoid the use of colors that may confuse users who have common forms of color blindness, cataracts, macular degeneration and other visual impairments.
10.4.3 Provide individualization of color schemes (Level 2)
Software that uses color schemes should allow users to create, save and individualize color schemes, including background and foreground color combinations.
NOTE 1: The ability to share schemes is also useful.
NOTE 2: See subsection 8.2, “User Preference Settings,” for provisions dealing with the individualization and persistence of these settings.
EXAMPLE 1: A user adjusts the color scheme provided for those with red-green color blindness to optimize discriminability for their particular requirements.
EXAMPLE 2: A person with low visual acuity uses the operating system’s control panel to request that window captions and menus be drawn in yellow text on a black background.
EXAMPLE 3: The user can choose the color scheme that is used to draw different types of user interface elements (such as windows, menus, alerts, keyboard focus cursors, and default window background and text), indicators for general states (such as keyboard focus and selection), and codings for task-specific states (such as on-line, off-line, or error).
10.4.4 Allow users to individualize color coding (Level 2)
Except in cases where warnings or alerts have been standardized for mission-critical systems (e.g. red = network failure), software should allow users to individualize colors used to indicate selection, process, and the types, states, and status of user interface elements.
EXAMPLE 1: If a user chooses red as the color to represent links, an embedded application should not override that setting and use another color.
EXAMPLE 2: A user who cannot discriminate between red and green can set the printer-status colors to be dark blue for OK and yellow to indicate printer problems. In addition, the system provides an auditory warning if there is a problem with the printer.
10.4.5 Provide contrast between foreground and background (Level 2)
Default combinations of foreground and background colors (hue and luminance) of the software should be chosen to provide contrast regardless of color perception abilities.
NOTE: Measures such as those proposed by the W3C for its Web Content Accessibility Guidelines 2.0 have been developed that provide contrast regardless of the colors used.
EXAMPLE: The default colors used for a window's background and the text in that window are selected for contrast differences so that they are distinguishable, on the basis of light/dark differences, by users who cannot discriminate between different hues.
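The NOTE above refers to measures such as the contrast ratio proposed by the W3C for WCAG 2.0, which compares the relative luminance of two colors independent of hue. The following informative Python sketch computes that ratio from sRGB values; the sample colors are illustrative.

    # Informative sketch of the WCAG 2.0 contrast-ratio computation.
    def _channel(c8):
        """Linearize one sRGB channel given in the range 0-255."""
        c = c8 / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def relative_luminance(rgb):
        r, g, b = (_channel(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(rgb1, rgb2):
        lighter, darker = sorted(
            (relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True)
        return (lighter + 0.05) / (darker + 0.05)

    # Black text on a white background yields 21:1, the maximum possible.
    print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))
    # Mid-gray on white is far lower and may be hard to distinguish.
    print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 1))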
10.5 Window appearance and behavior
10.5.1 Provide unique and meaningful window titles (Level 2)
Every window should have a meaningful title not shared with any other window currently displayed by the same software, even if several windows display multiple views of the same user interface element.
EXAMPLE 1: A user opens a second window of the same document using their word processing application. Both windows are of the same document and both are editable. The word processor adds a “:1” to the end of the document name to form the name of the first window. It names the second window with the same document name except that it appends a “:2” to the end of the second window name, so the two windows have meaningful yet unique names.
EXAMPLE 2: When a script or object hosted within a Web browser attempts to set the window's title to a string that is already used by another of the browser's windows, the browser modifies this string to be unique.
10.5.2 Provide window titles that are unique within the windowing system (Level 2)
Platform software that manages windows should ensure that all windows have titles not shared with any other window currently on the system.
NOTE: This recommendation is specific to platforms because on many platforms an application cannot identify the windows belonging to other applications, as that would cause a security problem.
EXAMPLE: When software creates a new window or changes the title of an existing window, the operating system checks the names of all other windows on the system and, if there is a conflict, appends a unique number to the title.
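An informative sketch of the uniqueness check in the example above, in Python; the in-memory title registry is an illustrative stand-in for a real windowing-system service. A requested title that collides with an existing one receives a numeric suffix.

    # Informative sketch: make window titles unique with a numeric suffix.
    open_titles = set()

    def unique_title(requested):
        if requested not in open_titles:
            title = requested
        else:
            n = 2
            while f"{requested}:{n}" in open_titles:
                n += 1
            title = f"{requested}:{n}"
        open_titles.add(title)
        return title

    print(unique_title("Report.doc"))   # Report.doc
    print(unique_title("Report.doc"))   # Report.doc:2
    print(unique_title("Report.doc"))   # Report.doc:3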
10.5.3 Enable non-pointer navigation to windows (Level 1)
Software should enable users to use the keyboard or other non-pointer input mechanisms to move keyboard focus to any window currently running that is allowed to accept keyboard focus.
NOTE: The intent here is to allow users who cannot use a pointing device to navigate among windows with a keyboard in a manner that is as efficient as possible compared to what other users might do with a pointing device.
EXAMPLE 1: By browsing a displayed list of currently running windows, the user uses a keyboard to select a window that receives keyboard focus.
EXAMPLE 2: By giving a voice command that generates a keyboard-command sequence, the user is able to move keyboard focus to any one of several windows.
10.5.4 Enable “always on top” windows (Level 1)
Platform software that manages windows should enable windows to be set to always remain displayed on top of other windows.
EXAMPLE 1: The user has a movable on-screen keyboard that is on top of all other windows so that it is visible at all times, but when the user clicks their mouse on the on-screen keyboard, another window keeps the keyboard focus and the keyboard input goes to that window.
EXAMPLE 2: A user selects a screen-magnification window that is the top-level window through which all other windows are viewed and which remains always on top.
NOTE 1: If a function or a window is required continuously for users to perform a task, it is important for the window to be able to be set to always remain visible regardless of its position relative to other windows.
NOTE 2: It is often desirable for a window to remain “always on top” without ever taking keyboard focus from other windows, as is discussed in provision 10.5.10 (Enable windows to avoid taking keyboard focus).
10.5.5 Provide user control of multiple “always on top” windows (Level 1)
Platform software that manages windows should provide the user with the option to choose which “always on top” window is always on top.
NOTE 1: User control is important to prevent a conflict among multiple windows that are specified as “always on top.”
NOTE 2: Users might wish to have multiple windows on top of everything else; for example, a calendar and clock. In this case, it is desirable to provide a facility for users to choose a priority order for multiple “always-on-top” windows.
EXAMPLE: Two users each run an on-screen keyboard and a full-screen screen-magnification window. One chooses to run the on-screen keyboard on top of the magnifier and thus at a fixed location on the physical screen, while the other user runs the magnifier on top so that it enlarges the on-screen keyboard.
10.5.6 Enable user choice of effect of pointer and keyboard focus on window stacking order (Level 3)
Software should allow users to choose to have the window that receives pointer or keyboard focus either automatically placed on top of all other windows (with the exception of an “always on top” window, see above) or not have its stacking position changed.
EXAMPLE: A user with motor limitations or a repetitive-motion injury chooses to move the pointer among windows to automatically bring them to the top rather than to click on them, because it is faster and easier.
NOTE: Platform software usually handles this functionality as long as applications do not interfere with its normal window handling.
10.5.7 Enable window positioning (Level 1)
Software should provide a method for users to reposition all windows, including dialog boxes.
Platform software that manages windows should provide the user with an option to override any attempts by other software to prevent windows from being repositioned.
NOTE: This helps, and can be required by, users working with several applications and/or windows, including assistive technology.
EXAMPLE: A user with an on-screen keyboard changes the position of a pop-up dialog so that it fits alongside their keyboard.
10.5.8 Enable window resizing (Level 2)
Software should provide a method for users to resize all windows, including dialog boxes.
Platform software that manages windows should provide the user with an option to override any attempts by other software to prevent windows from being resized.
EXAMPLE: A user with low vision uses a larger font size that causes text to run off the bottom of the dialog box. They enlarge the dialog box so they can see all of the text.
NOTE: This recommendation is not a requirement because several widely-used operating systems do not yet provide dialog boxes that can be resized.
10.5.9 Support minimize, maximize, restore and close windows (Level 2)
If overlapping windows are supported, software should give the user the option to minimize, maximize, restore and close software windows.
NOTE: This helps the users to better use several applications and/or windows at the same time.
EXAMPLE: A user who has limited short-term memory clicks on a window’s “Maximize” button in order to see as much of the content as possible.
10.5.10 Enable windows to avoid taking focus (Level 1)
Platform software that manages windows should enable windows to avoid taking the keyboard focus. The keyboard focus should not be assigned to a window that is designated not to accept the keyboard focus.
NOTE: When a window is designated to not accept the keyboard focus, any action that would normally be used to reassign the keyboard focus to that window would not reassign the focus.
EXAMPLE 1: An on-screen keyboard program displays a window containing buttons, and it sets this window to remain “always on top” and to avoid taking the keyboard focus. When the user clicks the mouse on a button in this window, the on-screen keyboard sends key events to the application window where the user was working and which still has the focus.
EXAMPLE 2: A user starts a screen-magnification window that is the top-level window through which all other windows are viewed and which remains always on top. When the user clicks anywhere on the screen, the keyboard focus remains unchanged and the screen magnifier passes the mouse input to the appropriate underlying window.
10.6 Audio output
10.6.1 Use tone pattern rather than single tone to convey information (Level 2)
When conveying information audibly, software should use temporal or frequency-based tone patterns rather than using a single absolute pitch or volume.
EXAMPLE: In a teleconference service, a high-to-low tone pair, rather than just a low tone, indicates a person signing off.
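An informative sketch of generating such a frequency-based tone pattern with the Python standard library; the frequencies, durations and output file name are illustrative assumptions. It writes a high-to-low tone pair (both within the 500 Hz to 3 000 Hz range recommended in 10.6.3) to a WAV file.

    # Informative sketch: a high-to-low tone pair rather than a single pitch.
    import math
    import struct
    import wave

    RATE = 44100

    def tone(freq_hz, seconds, amplitude=0.5):
        """Generate one sine tone as a list of float samples in [-1, 1]."""
        n = int(RATE * seconds)
        return [amplitude * math.sin(2 * math.pi * freq_hz * i / RATE)
                for i in range(n)]

    # High tone followed by low tone (e.g. a signing-off indication).
    samples = tone(1500, 0.2) + tone(750, 0.2)

    with wave.open("signoff.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)                     # 16-bit samples
        f.setframerate(RATE)
        f.writeframes(b"".join(
            struct.pack("<h", int(s * 32767)) for s in samples))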
10.6.2 Enable control of audio volume (Level 1)
Software should enable users to control the volume of audio output.
NOTE: If software generates audio output, it is preferable for it to provide its own control that adjusts the volume of its own audio output relative to other software and any system-wide volume setting.
EXAMPLE: The user has a multimedia player application and a phone with an alert tone. The user adjusts the first application's volume control to a low setting and the second's volume control to a high setting. The second now sounds louder than the first. When they reduce the system-wide volume setting in the operating system's global preferences, the second application remains louder than the first, even though both are reduced in volume.
10.6.3 Use a mid-frequency range for non-speech audio (Level 3)
The fundamental frequency of task-relevant non-speech audio used by software should occur in a range between 500 Hz and 3 000 Hz or be easily adjustable by the user into that range.
NOTE: Sounds in this range are most likely to be detectable by people who are hard of hearing.
10.6.4 Enable adjustment of audio output (Level 3)
Software should enable users to adjust the attributes of task-relevant audio output such as frequency, speed, and sound content.
NOTE: The range of adjustment will be constrained by the sounds that a system can produce.
EXAMPLE 1: A user can replace the sounds associated with various events and notifications, allowing him or her to choose sounds that he or she is able to distinguish.
EXAMPLE 2: A user may alter the speed of speech from a synthesizer to enhance understanding.
10.6.5 Control of background and other sound tracks (Level 3)
If the background and other sound layers are separate audio tracks/channels, software should provide a mechanism to enable users to control the volume of and/or turn on/off each audio track.
NOTE: Background sounds (e.g., sound effects, music) can mask speech audio or make speech audio more difficult to distinguish by those who are hard of hearing.
EXAMPLE: A person who is hard of hearing turns down the background sound so they can understand the dialogue.
10.6.6 Use specified frequency components for audio warnings and alerts (Level 3)
Alerts and other auditory warnings provided by software should include at least two strong mid- to low-frequency components, with recommended ranges of 300 Hz to 750 Hz for one component, and 500 Hz to 3 000 Hz for the other.
10.6.7 Allow users to choose visual alternative for audio output (ShowSounds) (Level 1)
If the hardware supports both audio and visual output, platform software should enable users to choose to have task relevant audio output (including alerts) presented in visual form, auditory form or both together, and software running on such systems should support those options.
NOTE: This is commonly called ShowSounds and is available on most major platforms (see Annex B).
EXAMPLE 1: By default, a beep is provided when an error message has been displayed or a footer message has been updated. For users who have chosen to receive visual feedback, a flashing border on a dialog box is provided in conjunction with a warning tone.
EXAMPLE 2: Explanatory text is provided in a dialog box when a distinctive audio (alert or other) is played.
EXAMPLE 3: Software that provides voice output provides closed captions as text that can be displayed on systems providing “closed caption” support or displayed by Braille devices through assistive software.
10.6.8 Synchronize audio equivalents for visual events (Level 1)
Software should synchronize audible equivalents with the visual events they are associated with.
NOTE 1: This allows a user who cannot see the screen to follow the event sequences.
NOTE 2: Audio is sometimes presented slightly early or immediately afterwards to avoid conflicting with other audio events or real-time delays.
EXAMPLE: A movie has audio descriptions of important visual information. The descriptions are timed to occur during gaps in the movie dialog.
10.6.9 Provide speech output services (Level 1)
If the hardware has the capability to support speech synthesis, platform software should provide programming services for speech output.
NOTE 1: This does not imply that a text-to-speech engine should always be installed.
NOTE 2: This is relevant for users who are blind or have other reading disabilities and who depend on speech-based assistive technologies.
EXAMPLES: Examples of such services are SAPI (Speech API) in Microsoft Windows, Java Speech in the Java platform, Mac OS X TTS (Text to Speech) on the Apple Macintosh, and GNOME Speech on the GNOME desktop (Linux).
10.7 Text equivalents of audio (captions)
10.7.1 Display any captions provided (Level 1)
Software presenting audio information should provide the facility to display associated captions.
NOTE: It is important for captions to be displayed in a way that provides sufficient contrast with their background. (See 10.4.5.)
EXAMPLE: A media player allows users to display the captions in an “Interactive Tour”, which allows hard of hearing and deaf users to use the tour.
10.7.2 Enable system-wide control of captioning (Level 2)
Platform software should provide a system-wide setting to allow users to indicate that they want available captions to be shown by all software.
NOTE: A global setting to enable or disable captions is commonly called ShowSounds and is available on several major platforms (see Annex B).
10.7.3 Support system settings for captioning (Level 1)
Software that presents captions should use system-wide caption preference settings by default. If the system-wide preference settings change during playback, the new settings should be used.
EXAMPLE: A media player checks the system ShowSounds setting when it launches, and displays captions if that value is set to True. The media player allows the user to temporarily override this setting but will re-synchronize to the system setting if the system setting changes while it is running.
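An informative sketch of the behavior in the example above, in Python with tkinter. The function read_system_show_sounds is a hypothetical stand-in for a platform query of the ShowSounds flag, and periodic polling is just one simple way to detect a change made while the player is running.

    # Informative sketch: honor a system-wide caption setting, allow a
    # temporary override, and re-synchronize when the system setting changes.
    import tkinter as tk

    def read_system_show_sounds():
        """Hypothetical stand-in for a platform query of the ShowSounds flag."""
        return True

    class CaptionedPlayer(tk.Frame):
        def __init__(self, master):
            super().__init__(master)
            self.caption = tk.Label(self, text="[Captions would appear here]")
            self.user_override = None            # temporary per-session override
            self.last_system = read_system_show_sounds()
            tk.Button(self, text="Toggle captions (temporary override)",
                      command=self.toggle).pack()
            self.apply(self.last_system)
            self.pack()
            self.after(2000, self.poll)          # periodically re-check setting

        def apply(self, show_captions):
            if show_captions:
                self.caption.pack()
            else:
                self.caption.pack_forget()

        def toggle(self):
            current = (self.user_override if self.user_override is not None
                       else self.last_system)
            self.user_override = not current
            self.apply(self.user_override)

        def poll(self):
            current = read_system_show_sounds()
            if current != self.last_system:      # system setting changed while
                self.last_system = current       # running: drop the override
                self.user_override = None        # and re-synchronize
            self.apply(self.user_override if self.user_override is not None
                       else current)
            self.after(2000, self.poll)

    root = tk.Tk()
    CaptionedPlayer(root)
    root.mainloop()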
10.7.4 Position captions to not obscure content (Level 2)
Software that presents captions should position captions to minimize interference with visual content.
EXAMPLE: A media player opens a separate attached window to display captions so they do not cover up the video being played.
10.8 Media
10.8.1 Enable users to stop, start and pause media (Level 1)
Software should enable users to stop, start and pause the presentation.
10.8.2 Enable users to replay, rewind, pause and fast-forward or jump forward (Level 2)
Software should enable users to replay, rewind, pause and fast-forward or jump forward within the presentation, where appropriate to the task.
NOTE 1: “Replay” functions help users to avoid missing some information in the content.
NOTE 2: Meeting this provision is not always possible, especially in “real-time” presentations.
10.8.3 Allow user to control presentation of multiple media streams (Level 2)
Software should enable users to select which media streams are presented, where it is appropriate for the task.
EXAMPLE 1: A user who is able to see but not hear decides to view a captioned video with the audio turned off because they cannot determine the volume and do not want to disturb others.
EXAMPLE 2: A user makes a selection to turn off background sound in a video presentation where the voiceover is in a separate media stream from the background sound.
10.8.4 Update equivalent alternatives for media when the media changes (Level 1)
Software should enable equivalent alternatives (e.g., captions, or auditory descriptions of the visual track of a multimedia presentation) to be updated when the content of a media presentation changes.
EXAMPLE: The audio portion of an “Interactive Tour” video is corrected; the accompanying captions and descriptive audio are corrected at the same time.
10.9 Tactile Output
10.9.1 Do not convey information by tactile output alone (Level 2)
Software should not use tactile output alone as the only way to convey information or indicate an action.
NOTE: In contrast to visual and acoustic output, only a few sets of symbols are standardized for tactile output (e.g. Braille code in several versions).
EXAMPLE 1: Bursts of tactile vibrations are verbally described as representing a ringing bell.
EXAMPLE 2: The vibration pattern of a pointing device with tactile feedback is explained independently of the functionality of the pointed-to object.
EXAMPLE 3: The adjusted maximum level of pressure output of a force feedback system is presented as an alphanumerical value via a visual display.
10.9.2 Use familiar tactile patterns (Level 2)
Software should use well-known tactile patterns (familiar in daily life) for presenting tactile messages.
NOTE: A person without special knowledge of tactile coding (e.g. Braille code, Morse code) will usually be familiar with the tactile patterns of daily life.
EXAMPLE: Bursts of tactile vibrations are designed to have a pattern similar to a ringing bell.
10.9.3 Enable tactile output to be adjusted (Level 2)
Software should allow users to adjust tactile output parameters to prevent discomfort, pain or injury.
EXAMPLE: A user with reduced haptic perception can individually adjust an upper limit for the tactile output of a force feedback system.
11 Online Documentation, Help, and Support Services
11.1 Documentation and Help
11.1.1 Provide understandable documentation and Help (Level 2)
Product documentation and Help for software should be written using clear and simple language, to the extent this can be done using the vocabulary of the task.
NOTE 1: The use of technical terms is permitted where they are required to clearly explain the functionality or product.
NOTE 2: For recommendations on the design of on-line Help and user guidance, see HFES 200.3, Section 11.
EXAMPLE: The documentation of a CAD (Computer Aided Design) system can use terminology from the field of technical drawing.
11.1.2 Provide user documentation and Help in accessible electronic form (Level 1)
All user documentation and Help should be delivered in electronic form that meets applicable documentation accessibility standards. This documentation should be provided with the product, or upon request on a timely basis and without extra cost.
NOTE: The category of “users” includes administrators. For software development software, users would include software developers.
11.1.3 Provide text alternatives in electronic documentation and Help (Level 1)
Information presented in pictures and graphics by software should also be provided as descriptive text suitable for screen reading, printing, or Braille conversion so that it can be read by an alternative method.
NOTE: Using both text and graphics simultaneously (in the default presentation) to communicate information is often helpful to readers who use one to reinforce the other, and for people who differ in terms of their preferred style of information processing (e.g. visual vs. verbal).
EXAMPLE: A user can print the text portion of the on-line Help and read text descriptions of any embedded graphics.
11.1.4 Write instructions and Help without unnecessary device references (Level 2)
Instructions and Help for software should be written so that they refer to the users' actions and resulting output without reference to a specific device. References to devices, e.g. the mouse or the keyboard, should only be made when they are integral to and necessary for understanding of the advice being given.
NOTE: For contexts where operation of a specific device such as a mouse is required, a generic description may not be possible. However, such specific descriptions need only occur in Help about using that device, not in all contexts.
EXAMPLE 1: The task description in Help does not require a user to recognize the color of a user interface element to use it, so the text does not state “click on the green icon”. Instead the name is reported.
EXAMPLE 2: An application provides a description of how to perform tasks using as many different input/output modalities as are available (e.g. mouse, keyboard, voice, etc.).
11.1.5 Provide documentation and Help on accessibility features (Level 1)
Help or documentation for software should provide general information on the availability of accessibility features and information about the purpose of and how to use each feature.
NOTE: It is important for users to be able to easily discover the accessibility features of the software.
EXAMPLE 1: On-line Help provides a section describing features of interest for people who have disabilities.
EXAMPLE 2: On-line Help explains keyboard-only use of the software.
EXAMPLE 3: On-line Help describes how to adjust font size.
EXAMPLE 4: A product has multiple color schemes, and documentation and on-line Help describe which color schemes are appropriate for people with color vision deficiencies.
11.2 Support services
11.2.1 Provide accessible support services (Level 1)
Technical support and client support services for software should accommodate the communication needs of users with disabilities.
EXAMPLE 1: In countries where relay services are not provided free of charge, a company contracts with relay service(s) to assist the technical support process by providing real-time translation between the company’s support staff and deaf customers who use text or video telephones to allow them to communicate in text or sign language. A similar service provides re-voicing for people whose speech is difficult to understand. The company also trains its technical support staff on how to optimize conversation through relay services.
EXAMPLE 2: On-line Help or documentation provided on the software company's Web site is designed to comply with published guidelines for making Web content accessible.
EXAMPLE 3: Application Helpdesk operators are trained on the accessibility features so that they are able to guide a user through the process of carrying out a task or rectifying a fault entirely through the keyboard interface, without requiring any mouse click operations.
EXAMPLE 4: A company provides a dedicated telephone line for its customers who use telecommunications devices for the deaf (e.g. TTY/TDD), and trains support staff on its use and etiquette so that users can communicate directly (rather than through relay) with customer support personnel.
EXAMPLE 5: An IVR (interactive voice response) system provides software support services that are accessible to TTY users and complies with user interface design guidance in Part 4 of this standard.
11.2.2 Provide accessible training materials (Level 2)
If training is provided as part of the product, the training materials should meet applicable accessibility standards.
Appendix A (Informative): Issues Regarding Activity Limitations
A.1 General
This Annex provides some additional information about sources of limitation on typical activities involving the use of software systems, and their implications for designing for accessibility.
While these sources of limitation are frequently described in relation to underlying body functions as used in the World Health Organization International Classification of Functioning, Disability and Health (ICF)[51], the same limitations may arise from other sources, such as the particular context in which individuals find themselves at any time.
For the purposes of this Annex, three main areas of the ICF classification of Body Functions are considered most relevant to interaction with software systems: sensory functions, including seeing and hearing; neuromusculoskeletal and movement-related functions; and mental functions, including attention, memory and language. In addition, reference is made to the implications for accessibility of combined sources of limitation due to body function.
A.2 Sensory Functions
A.2.1 Vision
For many interactive software systems a major part of the interaction between the user and the technology relies on the use of visually presented material.
A.2.1.1 Individuals who are unable to see
For any user who is unable to see, this means that other senses will have to be used and appropriate provision made to enable access to equivalent content and resources via those senses. In addition, individuals may have normal vision but be unable to view a screen due to context or task-related issues. For example, while driving a car a motorist is unable to view the screen of their GPS system.
Typical non-visual forms of interface used in interactive software are auditory or tactile. Whether used as a substitute for visual interfaces or in their own right, the primary issues are how to:
- obtain information provided by sound or tactile display whether or not connected with a visual presentation,
- navigate in an auditory or tactile environment, and/or achieve equivalent navigation to that among elements presented visually,
- identify user interface elements, and
- control focus, navigation, and other functions via the keyboard, joystick, voice or other control actuator.
Some individuals who cannot see will use specialized assistive technologies. For example, individuals who have learned Braille can take advantage of screen-reading software and hardware that produce Braille output. Those who become blind later in life are less likely to learn such specialized skills, although they may learn some new auditory skills and thus rely on additional auditory methods to obtain information.
“Screen readers”, i.e. assistive software that can provide spoken information for windows, controls, menus, images, text and other information typically displayed visually on a screen, help people who have to rely mainly on speech to convey information. Others use tactile displays such as dynamic Braille or Moon displays.
Interactions based on spatial relationships and the use of visual metaphors present users who cannot see with the greatest difficulties in terms of the provision of equivalent information in another modality. It is important therefore that all information (not just text) that is provided visually be available to assistive technologies for alternate display.
In addition, the use of speech output to substitute for visually presented material may cause problems, due to the potential difficulty of attending to other auditory outputs that occur while the user is listening. Braille and other tactile displays can assist here.
A.2.1.2 Individuals with low vision
Persons with low vision often use technologies more commonly associated with those who are unable to see (e.g., screen readers). However, for individuals with low vision it is important to find ways to facilitate the use of their remaining vision whenever possible. Sight, even limited sight, is a very powerful capability, and these users should not have to fall back and access things as if they were blind. Combinations of visual and auditory techniques are often most effective. (Tactile output can sometimes be used, but is less common except for individuals with very low vision.)
It is a universal experience that vision changes throughout life and, once adulthood is reached, tends to become less effective over time. In addition, a variety of factors such as low acuity, color-perception deficits, impaired contrast sensitivity, loss of depth perception, and restricted field of view may affect the ability of individuals to see and discriminate visually presented material. Environmental factors such as glare from sunlight, or light sources with poor color rendering, may have similar consequences. An individual who has reduced acuity may find that ordinary text is often difficult to read, even with the best possible correction.
The main approach in terms of increasing accessibility (other than by removing externally generated sources of interference with vision) is to provide means by which the visually presented material can be changed to increase its visibility and discriminability. Individuals interacting with systems in low vision conditions may experience particular difficulties in detecting size coding. They may experience difficulties with font discrimination and with locating or tracking user interface elements such as pointers, cursors, drop targets, hot spots and direct manipulation handles.
Support consists of the provision of means for increasing the size, contrast and overall visibility of visually displayed material, as well as allowing choice of colors and color combinations. What is required in any case depends upon an individual’s specific visual needs and thus depends upon the capacity for individualization. Common assistive technologies include the use of oversized monitors, large fonts, high contrast, and hardware or software magnification to enlarge portions of the display.
Additionally, non-visual or low-vision conditions may cause difficulties when very small displays, such as those on printers, copiers and ticket machines, are required to be read.
A.2.2 Hearing
A.2.2.1 Individuals who are unable to hear
Individuals may be unable to hear sound and thus be unable to detect or discriminate information presented in auditory form. The inability to hear sound below 90 dB is generally taken as the criterion for an individual being unable to hear. Disabling environments may occur when individuals cannot hear signals generated by the system, for example because of a very high ambient noise level or the use of hearing protection. These situations must be regarded as creating limitations on the ability of individuals to use the system. In these circumstances the preferred solution is to eliminate the source of the problem. However, where this is impractical, the approach will be to implement the same software-based solutions that are appropriate for individuals who cannot hear in standard environments.
When interacting with software systems, users who cannot hear will encounter problems if important information is presented only in audio form. It is therefore important to enable the presentation of auditory information in other modalities. For example, verbal information can be provided by common symbols, text format or the “ShowSounds” feature (which notifies software to present audio information in visual form). These techniques will also be of benefit to individuals in contexts where sound is masked by background noise (e.g. a machine shop floor) or where sound is turned off or cannot be used (e.g. a library).
Some individuals with a general inability to detect auditory information may also experience limitations on voice and speech functions. This may have implications for their ability to produce speech recognizable by voice-input systems and should be considered when such technology is being implemented. In addition, if their experience of a national language is as a second language (sign language often being the primary language for people who become deaf at an early age or who are born deaf) this will have implications for the form of language used in the presentation of visual alternatives as a consequence of the learning aspects of mental function.
Some individuals who are deaf interact with software, such as interactive voice response systems, via a telecommunication device for the deaf (TDD), a text telephone (TTY), or a relay service in which a human operator types spoken text (e.g., IVR prompts) and relays it to the user. It is important that designers ensure that applications like this are accessible to these users and do not impose unnecessary response time requirements that render transactions impossible.
A.2.2.2 Individuals with a reduced ability to hear
Individuals may experience difficulties in hearing and discriminating auditory information both as a result of individual capabilities and as a result of external sources of interference. Issues that may arise include:
- the inability to detect sound;
- the inability to discriminate frequency changes, differential decreases in sensitivity across the frequency range, and selected frequency ranges where they have low sensitivity;
- difficulty in localizing sounds;
- difficulty in discerning sounds against background noise; and
- inappropriate responses due to mishearing or not hearing auditory information.
As with individuals who are unable to hear, the main implications for accessibility involve the provision of equivalent versions of auditory material via another modality, for example the use of the “ShowSounds” feature. In addition, individuals with a reduced ability to hear may adjust auditory material through the ability to individualize the characteristics of auditory presentations (e.g. increasing volume or selectively changing the frequency spectrum used).
Individuals with a reduced ability to hear may or may not use hearing aids, but to the extent that this form of assistive technology can take advantage of selective auditory inputs from the software system, the availability of such input will increase accessibility. It is very common for individuals with a reduced ability to hear to use whatever hearing they have and thus combine modalities (e.g., use captions as well as audio).
Finally, as with individuals who are unable to hear, some users with a reduced ability to hear may experience limitations on voice and speech functions. This may have implications for their ability to produce speech recognizable by voice-input systems and should be considered when such technology is being implemented.
A.2.3 Tactile
Some individuals, due to disease, accident or aging, have reduced tactile sensation. This can interfere with any tactile output mode. The ability to have information available in a variety of forms is important to address this. It is also important not to make assumptions about alternate modalities that may be available, since they may not work for some users. For example, diabetes can cause loss of vision and loss of sensation in the fingers.
A.3 Neuromusculoskeletal and movement related functions
A.3.1 General
Interaction with software systems is highly dependent on the means of input/output used by individuals. While general mobility of the whole body may not be a critical factor, motor activity of the limbs and hands, whether affected by the mobility and stability of joint and bone functions, the power, tone and endurance functions of the muscles, or the voluntary or involuntary control of movement, is critical to successful interaction. The design of software must take account of the range and variety of characteristics that may be present within the user population.
A.3.2 Individuals with limitations in motor activity
There are many factors that may influence motor activity. The causes of limitations on activity may be impairments that are long term and/or progressive, or temporary, as well as contextually determined, for example the need to carry out another task while interacting with the software. Particular issues are that individuals may have poor co-ordination abilities, weakness, difficulty reaching, involuntary movement (e.g., tremor, athetosis), and problems in moving a body part. Pain and soft tissue injuries can also cause limitations in a person’s physical abilities, causing them to have to use alternate means of input. Individuals, as a simple consequence of the aging process, experience a slowing of reaction time and speed of motor actions. Designers need to ensure that applications take this into account and allow sufficient time for user actions in software that requires timed responses.
Individuals with limitations on motor activity may or may not use assistive technologies. There is a wide variety of hardware and software that may be employed by those who do and therefore it is not possible to describe the full range in detail here. Examples include eye-tracking devices, on-screen keyboards, speech recognition, and alternative pointing devices.
The extreme variation in the needs and capabilities of individuals who have motor activity limitations makes the provision of the capacity for individualization of input parameters critical to achieving accessibility. In particular it is necessary to be able to customize input parameters in terms of spatial allocations of functionality and the timing that underpins interaction.
A.3.3 Physical Size and Reach
Some individuals are of small stature. This may be because they are children or because it is their adult size. Small stature can cause problems with reach, hand size, normal seated position, etc. In either case, most access issues are hardware- or workstation-related in nature. However, the ability to use alternate input devices may also be very important to this group.
A.3.4 Speech Disabilities
Speech is also a form of motor activity, and some individuals have motor or cognitive impairments that make their speech difficult to understand, or they may not be able to speak at all. Individuals may experience a speech disability for any one of a number of reasons. They may have an impairment or injury to the physical mechanisms of speech (respiration, vocal tract, physical manipulators). They may have problems in the neuromuscular systems that control speech. They may have cognitive impairments of the speech or language centers. User interfaces that include speech input need to provide other input options that can substitute for speech, or be able to utilize the speech of these individuals, some of whom may be using augmentative communication devices to produce speech.
A.4 Mental functions
A.4.1 General
Variation in psychological functioning probably represents greater diversity than in any other function in human beings. Of particular concern to software accessibility is the area of cognitive functioning that relates to the handling of information. Receiving information, processing it and making appropriate responses forms the major element in the use of interactive software systems. These human cognitive capabilities are very diverse, highly adaptable and subject to change throughout life.
The issues commonly encountered by users who have cognitive disabilities involve difficulties receiving information, processing it and communicating what they know. Users with these impairments may have trouble learning, making generalizations and associations and expressing themselves through spoken, signed, or written language. Attention-deficit hyperactivity disorders make it difficult for a person to sit calmly and give full attention to a task.
Users who have dyslexia commonly have difficulty both reading text that is presented in written form and producing written text.
While there are issues that are well understood, and for which it is possible to provide specific guidance, it has to be recognized that understanding of cognitive function is incomplete and that individuals vary to such an extent that no general guidance is appropriate in many situations. Much of the guidance in ergonomics standards relating to the design of dialogue interfaces, in particular ISO 9241-10 and -12 to -17, is based on the currently established understanding of human cognitive function.
A.4.2 Limitations on attention
For many information handling tasks it may be necessary to focus attention, or to sustain attention over a period of time. Strategies that help identify the required focus of attention, such as the formatting and presentation of information, will be beneficial. For people with limits on their capacity to sustain attention, it is important to provide the capacity to adjust the characteristics of non-task-relevant displayed information, to avoid potential distraction.
A.4.3 Limitations on memory
Information handling tasks are very sensitive to limitations on both short-term and long-term memory. In particular, people experience problems recalling information from long-term memory and will have difficulties holding newly presented information in short-term memory if there is too much of it or if it has to be retained for too long. Thus the design of interactive software should enable recognition rather than demanding recall wherever possible, and should use information consistently and in line with user expectations. Demands on short-term memory should be minimized.
A.4.4 Limitations on the mental functions of language
Limitations in the ability to understand and produce written and/or spoken language have implications for the ways language is used in software systems. These limitations may result from a wide variety of causes including conditions at birth, illness, accident, medical procedures to address other conditions, and aging. They may be physical, cognitive or psychological in nature. Individuals with deafness who rely on visual communication may sometimes also have reduced language skills in a printed language where the printed language is different from their primary visual language (as is usually true with sign languages). Regardless of cause, these limitations result in a reduced ability to handle written and/or spoken language. Standards providing guidance on clear expression and presentation of language will be important to achieve ease of reading for the largest possible number of readers. In addition, specific options to provide additional support, such as the option to have an auditory version of the text available in parallel, will be beneficial. For individuals with limitations on writing ability, alternate inputs as well as the use of symbols and speech output can be of help. Support within software for alternate input and output compatibility is therefore important.
A.5 Individuals with other disabilities
A.5.1 Allergy
Some individuals experience allergic reactions to chemicals or other substances that are so severe that they prevent them from breathing those substances or being in contact with them for any extended period of time. This can severely limit the environments in which they can live or work, or the materials from which the devices they use can be made. This is largely a hardware issue that does not affect the design of software directly. However, the ability for software to accept alternate inputs and to allow variations in display can help some of these users.
A.5.2 Other Functional Limitations
Some individuals may have disabilities that affect their sense of touch, their haptic sense, the functioning of their vestibular system (sense of balance), and their sense of smell or taste. Although these abilities are infrequently required for the use of software user interfaces, systems designed to utilize these senses need to take into account the implications of disabilities related to these senses and provide alternative input and output channels to enable users affected by these disabilities to use the software.
A.6 Multiple body function effects
Individuals may experience limitations on function in more than one of the areas of body function at the same time and this creates greater complexity in terms of achieving accessibility in interactive software systems. This may particularly occur with increasing age when changes in sensory, motor and mental function may occur in parallel. Experience of the effects of age on body function is universal and as such is a matter of concern for older users. For this reason the greater the possibility of integrating design solutions that address the full range of user capabilities within the software system, rather than requiring add-on assistive technologies, the greater the positive outcome in terms of achieving accessibility and removal of potential sources of stigma.
As noted with respect to motor functions, slowing also occurs with respect to cognitive functions as people age. When the user population contains people who experience cognitive slowing, whether age-related or due to some other condition, developers of software need to ensure that users have sufficient time to complete activities that may have time constraints imposed on their execution.
Combinations of such limitations may mean that some of the guidance offered may not address the needs of individuals who are exposed to such combined effects. For example auditory output of visually presented text may not be of any help to somebody who is not only experiencing low vision but also has loss of hearing. The complex interactions that arise from these different sources increase the demand to ensure that solutions can be individualized to meet specific needs.
Appendix B (Informative): StickyKeys, SlowKeys, BounceKeys, FilterKeys, MouseKeys, RepeatKeys, ToggleKeys, SoundSentry, ShowSounds and SerialKeys
Introduction
This set of access features was developed by the Trace R&D Center at the University of Wisconsin-Madison to make computer systems usable by people with a wide range of physical and sensory impairments. Implementations of these access features are available for at least eight operating systems and environments including: Apple IIe, IIgs, and MacOS (by Trace R&D Center and Apple); IBM PC-DOS and Microsoft DOS (AccessDOS by Trace Center for IBM); X Windows (Access X by the X-Consortium); Microsoft Windows 2.0 and 3.1 (Access Pack by Trace R&D Center); Windows 95, 98, NT, ME, XP and VISTA (by Microsoft); and Linux (currently under implementation).
Source code and prototype implementations for these access features have been recently released under an open source license that allows incorporation into commercial products.
Permission to Use Terms
The terms StickyKeys™, SlowKeys™, BounceKeys™, FilterKeys™, MouseKeys™, RepeatKeys™, ToggleKeys™, SoundSentry™, ShowSounds™ and SerialKeys™ are all trademarks of the University of Wisconsin. However, use of the terms is permitted freely without royalty or license to describe user interface features that have the functionality and behavior described below. The ™ and credit statement are appreciated but not required.
Description of Access Features
Common activation behaviors
There are two methods for turning many (but not all) of these features on and off. One method is through the control panel. A second method is to turn them on and off directly using keyboard shortcuts. The keyboard shortcuts are provided for two reasons. One – these features may be needed on kiosks and other closed systems where the control panels are not available. Two – for some users and operating systems, it may be difficult to open the control panels unless the feature is turned on. Where both methods are defined they are described below. When turning the features on from the keyboard, a low-high slide sound should be emitted; a high-low slide sound should be used for Off.
StickyKeys
The StickyKeys feature is designed for people who cannot use both hands, or who use a dowel or stick to type. StickyKeys allows individuals to press key combinations (e.g. Ctrl+Alt+Del) sequentially rather than having to hold them all down together. StickyKeys works with those keys defined as “modifier” keys, such as the Shift, Alt and Ctrl keys. Usually the StickyKeys status is shown on-screen at the user’s option.
Turn feature On
Two methods, both essential:
- from the control panel and
- from the keyboard by pressing the Shift key 5 times with no intervening key presses or mouse clicks. A confirmation dialog is recommended for keyboard activation of this feature, though the user should be able to turn this dialog off. Audible and visual indicators are recommended when the feature is turned on or off. (NOTE: the keyboard activation feature may be enabled/disabled in the control panel.)
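The five-press shortcut lends itself to a simple counter that resets on any intervening input. The following is a minimal illustrative sketch in Python (not part of this standard; the class name and event encoding are hypothetical):

```python
class ShiftFiveDetector:
    """Detects five consecutive Shift presses with no intervening input."""

    def __init__(self, threshold=5):
        self.threshold = threshold
        self.count = 0

    def on_event(self, event):
        # `event` is "shift" for a Shift key press; any other key press or
        # mouse click resets the count, per the activation rule above.
        if event == "shift":
            self.count += 1
            if self.count >= self.threshold:
                self.count = 0
                return True  # toggle StickyKeys (after optional confirmation)
        else:
            self.count = 0
        return False
```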
Turn feature Off
- Using the same two methods as Turn On, plus (at user’s option) turn off anytime two keys are pressed simultaneously. Audible and visual indicators are recommended when the feature is turned On or Off.
Operation
- Pressing and releasing any ‘modifier’ key once causes a low-high tone and causes that modifier key to “latch” so that the next (single) non-modifier key pressed (or the next pointing device button action) is modified by the latched ‘modifier’ key(s).
- Pressing any modifier key twice sequentially causes a high tone and ‘locks’ that modifier key down. All subsequent non-modifier keys pressed, pointing device actions, and any software actions that are altered by modifier key state are modified by the locked modifier key(s).
- Pressing a ‘locked’ modifier key once unlocks and releases it and causes a low tone.
NOTE: Multiple modifier keys can be latched and/or locked independently.
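The latch/lock behavior above amounts to a three-state machine per modifier key. A minimal illustrative sketch in Python (not normative; names are hypothetical and tone emission is reduced to comments):

```python
from enum import Enum

class ModifierState(Enum):
    OFF = 0      # modifier inactive
    LATCHED = 1  # modifies the next non-modifier key only (low-high tone)
    LOCKED = 2   # modifies all keys until unlocked (high tone)

class StickyModifier:
    """Tracks one modifier key (e.g. Shift, Ctrl, Alt) under StickyKeys."""

    def __init__(self, name):
        self.name = name
        self.state = ModifierState.OFF

    def on_modifier_press(self):
        # OFF -> LATCHED, LATCHED -> LOCKED, LOCKED -> OFF (low tone),
        # following the three operation rules above.
        transitions = {
            ModifierState.OFF: ModifierState.LATCHED,
            ModifierState.LATCHED: ModifierState.LOCKED,
            ModifierState.LOCKED: ModifierState.OFF,
        }
        self.state = transitions[self.state]

    def on_non_modifier_key(self):
        # Report whether this modifier applies to the keystroke; a latched
        # modifier releases after modifying one key, a locked one stays down.
        applies = self.state is not ModifierState.OFF
        if self.state is ModifierState.LATCHED:
            self.state = ModifierState.OFF
        return applies
```

Because each modifier key carries its own instance, multiple modifiers can be latched and/or locked independently, as the NOTE above requires.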
Adjustments
- On/Off - StickyKeys (default is Off),
- On/Off - Keyboard Shortcut - (5 shift key activations) (default is On),
- On/Off - Keyboard Activation Confirmation Dialog (default is On),
- On/Off - Auto-turnoff if two keys held down (default is On),
- On/Off - On-screen StickyKeys status indicators (default is On),
- On/Off - Audible indication of StickyKeys activation and use (optional) (default is On).
NOTE:
The system should provide the StickyKeys feature for all standard modifier keys. Some modifier keys, such as the Fn key found on many notebook computers, may need to have StickyKeys implemented in the keyboard firmware.
SlowKeys
The SlowKeys feature is designed for users who have extra, uncontrolled movements that cause them to strike surrounding keys unintentionally when typing. SlowKeys causes the keyboard to ignore all keys that are bumped or pressed briefly. Keystrokes are accepted only if keys are held down for a user specifiable period of time.
Turn feature On
Two methods, both essential:
- from the control panel and
- from the keyboard by holding the right shift key down for 8 seconds (if keyboard activation of this feature has been enabled in the control panel). A confirmation dialog is recommended for keyboard activation of this feature, though the user should be able to turn this dialog off. Audible and visual indicators are recommended when the feature is turned on or off.
Turn feature Off
- Using the same two methods as Turn On, plus reboot. (The feature is always off at boot time because, when active, it can make the keyboard appear to be broken. Rebooting must therefore turn the feature off.)
Operation
- Pressing the right shift for 8 seconds causes the feature to Turn On. A double beep is emitted at 5 seconds to cause any inadvertent holding of the shift key to be stopped. This “right-shift-key” start must be enabled from the control panel.
- Once SlowKeys is turned On, the keyboard does not accept any keystrokes unless keys are held down for the SlowKeys acceptance time. When a key is first pressed a high tone is emitted. After the preset “acceptance” time has elapsed a second low tone is emitted and the key stroke is accepted (key down is sent).
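The acceptance-time rule can be sketched as a filter that timestamps each key-down and only delivers the keystroke once the key has been held for the full acceptance time. A minimal illustrative sketch (not normative; class and method names are hypothetical):

```python
import time

class SlowKeysFilter:
    """Delivers a keystroke only if its key is held for `acceptance_s` seconds."""

    def __init__(self, acceptance_s=0.75):
        self.acceptance_s = acceptance_s
        self.pressed_at = {}  # key -> time of key-down

    def on_key_down(self, key, now=None):
        # A high tone would be emitted here, per the operation rule above.
        self.pressed_at[key] = time.monotonic() if now is None else now

    def on_key_up(self, key, now=None):
        # Returns True if the keystroke should be accepted. (A real
        # implementation delivers the key-down, with the second low tone,
        # as soon as the acceptance time elapses while the key is held.)
        now = time.monotonic() if now is None else now
        t0 = self.pressed_at.pop(key, None)
        return t0 is not None and (now - t0) >= self.acceptance_s
```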
Adjustments
- On/Off - SlowKeys (default is Off),
- On/Off - Keyboard activation & deactivation of SlowKeys (default is On),
- On/Off - Keyboard activation confirmation dialog (default is On),
- On/Off - Audible indication of SlowKeys activation and use (optional) (default is On).
- Delay-before-acceptance setting for SlowKeys (a minimum range of 0.5 to 2 seconds, default is 0.75 seconds).
NOTE:
It is acceptable to make SlowKeys and BounceKeys mutually exclusive. Both of these features can be active at the same time; however, SlowKeys will mask BounceKeys, so having both active will result in SlowKeys operation (it is impossible to produce two quick presses of the same key while SlowKeys is active).
BounceKeys
The BounceKeys feature is designed for users with tremor that causes them to inadvertently strike a key extra times when pressing or releasing the key. BounceKeys only accepts a single keystroke at a time from a key. Once a key is released it will not accept another stroke of the same key until a (user settable) period of time has passed. BounceKeys has no effect on how quickly a person can type a different key.
Turn feature On
Two methods, both essential:
- from the control panel and
- from the keyboard by holding the right shift key down for 8 seconds (if keyboard activation of this feature has been enabled in the control panel). A confirmation dialog is recommended for keyboard activation of this feature, though the user should be able to turn this dialog off. Audible and visual indicators are recommended when the feature is turned on or off.
Turn feature Off
- Using the same two methods as Turn On, plus reboot. (The feature is off at boot time, or at least is off at boot time if the debounce delay is longer than 0.35 seconds.)
Operation
- Once turned On the user types as usual at full speed. Any rattling of keys will be ignored. To type two of the same letter in a row, the user simply waits briefly between key-presses (usually half a second or so).
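The debounce rule only affects repeated presses of the same key. A minimal illustrative sketch (not normative; names are hypothetical):

```python
import time

class BounceKeysFilter:
    """Ignores repeated presses of the same key within the debounce delay."""

    def __init__(self, debounce_s=0.5):
        self.debounce_s = debounce_s
        self.last_release = {}  # key -> time of last release

    def on_key_down(self, key, now=None):
        # Accept unless this same key was released less than debounce_s ago.
        # Presses of *different* keys are never delayed, per the description.
        now = time.monotonic() if now is None else now
        last = self.last_release.get(key)
        return last is None or (now - last) >= self.debounce_s

    def on_key_up(self, key, now=None):
        self.last_release[key] = time.monotonic() if now is None else now
```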
Adjustments
- On/Off - BounceKeys (default is Off),
- On/Off - keyboard activation & deactivation of BounceKeys (default is Off),
- On/Off - keyboard activation confirmation dialog (default is On),
- On/Off - audible indication of BounceKeys activation and use (optional) (default is On).
- BounceKeys debounce delay before accepting the same key again (a minimum range of 0.2 to 1 second) (default is 0.5 seconds).
NOTE:
It is acceptable to make SlowKeys and BounceKeys mutually exclusive. Both of these features can be active at the same time; however, SlowKeys will mask BounceKeys, so having both active will result in SlowKeys operation (it is impossible to produce two quick presses of the same key while SlowKeys is active).
FilterKeys
The name “FilterKeys” is sometimes used for the BounceKeys and SlowKeys features packaged together. It is acceptable to make these two features mutually exclusive; however, they can both be active at the same time.
MouseKeys
The MouseKeys feature is designed for users who cannot use a mouse accurately (or at all) because of physical limitations. MouseKeys allows the individual to use the keys on the numeric keypad to control the mouse cursor on screen and to operate the mouse buttons.
Turn feature On
Two methods, both essential:
- from the control panel and
- from the keyboard by pressing the key combination specified by the OS (if keyboard activation of this feature has been enabled in the control panel). (New OS implementations should consider using LeftShift-LeftAlt-NumLock key combination.)
Turn feature Off
- Using the same two methods as Turn On.
Operation
- Once MouseKeys is turned On the NumLock key can be used to switch the keypad back and forth between MouseKeys operation and one of the other two standard modes of keypad operation (number pad or key navigation). It is recommended that there be an option for showing MouseKeys status on screen.
When in MouseKeys mode the keypad keys operate in the following fashion (a sketch of the movement mapping follows these lists):
Controlling pointer movement (For computers with a number pad):
- 1 – Move down and to the left
- 2 – Move down
- 3 – Move down and to the right
- 4 – Move to the left
- 6 – Move to the right
- 7 – Move up and to the left
- 8 – Move up
- 9 – Move up and to the right
NOTE 1: With all of these movement keys, a press and release moves the pointer by one pixel.
NOTE 2: Pressing and holding a key down causes the pointer to move by one pixel and then, after a pause of 0.5 seconds, it begins to move and accelerate.
NOTE 3: If the Ctrl key is held down, a press and release causes the pointer to jump by multiple pixels (e.g. 20 pixels). (Optional)
NOTE 4: If the Shift key is held down, the pointer moves by one pixel only, no matter how long the movement key is held down. (Optional)
Clicking and dragging (Recommended):
- 5 – Click the selected mouse button
- + – Double-click the selected mouse button
- . – Lock down the selected mouse button
- 0 – Release all locked mouse buttons
Selecting mouse buttons (Recommended):
- / – Select the left mouse button to be controlled with MouseKeys
- * – Select the center mouse button to be controlled with MouseKeys (on systems with no center button, selects both left and right mouse buttons)
- - – Select the right mouse button to be controlled with MouseKeys
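The movement mapping above can be expressed as a table of direction vectors, with the Ctrl and Shift behaviors from NOTEs 3 and 4 applied as step multipliers. A minimal illustrative sketch (not normative; the 20-pixel jump value is taken from the example in NOTE 3):

```python
# Direction vectors (dx, dy) for the MouseKeys movement keys,
# with positive y pointing down as in screen coordinates.
MOVE_KEYS = {
    "1": (-1, 1),  "2": (0, 1),  "3": (1, 1),
    "4": (-1, 0),                "6": (1, 0),
    "7": (-1, -1), "8": (0, -1), "9": (1, -1),
}

def pointer_step(key, ctrl=False, shift=False, jump=20):
    """Pointer displacement for one press-and-release of a movement key.

    Ctrl makes the pointer jump by multiple pixels (optional, e.g. 20);
    Shift pins movement to a single pixel regardless of hold time.
    """
    dx, dy = MOVE_KEYS[key]
    step = 1 if shift else (jump if ctrl else 1)
    return (dx * step, dy * step)
```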
Adjustments
- On/Off - MouseKeys (default is Off),
- On/Off - keyboard activation & deactivation of MouseKeys (default is On),
- setting for the top pointer speed,
- setting for the rate of acceleration (starting at very slow – 1 second for noticeable speed increase),
- On/Off - “MouseKeys when Num Lock On” (allows the user to choose which other keypad mode they want to use with MouseKeys) (default is On),
- On/Off - “Show MouseKeys Status on Screen” (optional) (default is On).
RepeatKeys
The RepeatKeys feature is designed to allow use of computers by people who cannot move quickly enough when pressing keys to keep them from auto-repeating. Facilities to adjust repeat onset and repeat rate, and to turn auto-repeat off, are usually included as part of most keyboard control panels. If these functions are not included, RepeatKeys provides them. RepeatKeys also ensures that the repeat delay and repeat interval can be set long enough for users who do not have quick responses (if the maximum value for either of the regular key repeat settings is not long enough).
Operation
- These settings affect the auto-repeat function when keys are held down.
Adjustments
- Key repeat On/Off,
- setting for repeat onset delay (maximum value of at least 2 seconds),
- setting for repeat interval (maximum value of at least 2 seconds).
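Under these settings, auto-repeat fires first after the onset delay and then at each repeat interval while the key stays down. A small illustrative calculation (not normative; the function name is hypothetical):

```python
def repeat_times(hold_s, onset_delay_s=0.5, interval_s=0.5):
    """Times (seconds after key-down) at which auto-repeat fires while held."""
    times, t = [], onset_delay_s
    while t <= hold_s:
        times.append(t)
        t += interval_s
    return times

# With both settings raised to their 2-second maxima, a key held for
# 5 seconds repeats only at t = 2 s and t = 4 s:
print(repeat_times(5, onset_delay_s=2, interval_s=2))  # [2, 4]
```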
ToggleKeys
The ToggleKeys feature is designed for users who cannot see the visual keyboard status indicators for locking (toggle) keys such as CapsLock, ScrollLock, NumLock, etc. ToggleKeys provides an auditory signal, such as a high beep, to alert the user that a toggle key such as the CapsLock has been locked, and a separate signal, such as a low beep, to alert the user that a toggle key has been unlocked.
Turn feature On/Off
- From the control panel
Operation
- Pressing any toggle key causes a tone to be sounded: a high tone indicating the key is now locked, and a low tone indicating the key is now unlocked.
Adjustments
- On/Off - ToggleKeys (default is Off)
SoundSentry
The SoundSentry feature is designed for individuals who cannot hear system sounds (due to hearing impairment, a noisy environment, or an environment where sound is not allowed such as a library or classroom). SoundSentry provides a visual signal (e.g. screen flash, caption bar flash, etc.) to visually indicate when the computer is generating a sound. SoundSentry works by monitoring the system sound hardware and providing a user selectable indication whenever sound activity is detected. Note that this feature usually cannot discriminate between different sounds, identify the sources of sounds, or provide a useful alternative for speech output or information encoded in sounds. Applications should support the ShowSounds feature (described below) to provide the user with a useful alternative to information conveyed using sound. SoundSentry is just a system-level fallback for applications that do not support ShowSounds.
Turn feature On/Off
- From the control panel.
Operation
- When SoundSentry is On all sounds cause the user-selected indicator to be activated.
Adjustments
- On/Off - SoundSentry (default is Off),
- setting for the type of visual feedback (common ones are flash of on-screen icon, flash of full screen, flash of foreground window frame, flash of desktop).
ShowSounds
The ShowSounds feature is designed for users who cannot clearly hear speech or cannot distinguish between sounds from a computer due to hearing impairment, a noisy environment, or an environment where sound is not allowed such as a library or classroom. ShowSounds is a user configurable system flag that is readable by application software and is intended to inform ShowSounds-aware applications that all information conveyed audibly should also be conveyed visually (e.g. captions should be shown for recorded or synthesized speech, and a message or icon should be displayed when a sound is used to indicate that new mail has arrived).
NOTE:
Captions should not be provided for speech output where the speech is reading information that is already visually presented on the screen (e.g. screen readers, etc.).
Turn ShowSounds system flag On/Off
- From the control panel.
Adjustments
- True/False for ShowSounds flag (default is False).
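Because ShowSounds is only a flag, applications must check it themselves before relying on sound alone. A minimal illustrative sketch (not normative; get_showsounds_flag is a hypothetical stand-in for the platform-specific system query):

```python
def get_showsounds_flag():
    # Hypothetical stand-in for the platform-specific system query.
    return True

def notify_new_mail(play_sound, show_message):
    """Announce new mail audibly, and also visually when ShowSounds is set."""
    play_sound()
    if get_showsounds_flag():
        show_message("New mail has arrived.")
```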
Time Out (For All Access Features)
The Time Out feature allows the access features to automatically turn off after an adjustable time when no keyboard or mouse activity occurs. Time Out is intended to be used on public or shared computers, such as those in libraries, bookstores, etc., where a user might leave the computer with an access feature turned On, thus potentially confusing the next user or leading people to think the computer was broken.
Turn feature On/Off
- From the control panel
Adjustments
- On/Off - Time Out feature (default is Off),
- Setting for period of time of inactivity before access features are disabled (maximum of at least 30 minutes, default is 10 minutes).
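The Time Out behavior reduces to an inactivity timer that disables the access features when it expires. A minimal illustrative sketch (not normative; names are hypothetical, and a real implementation would hook system-wide input events):

```python
import time

class AccessFeatureTimeout:
    """Turns access features off after a period with no keyboard/mouse activity."""

    def __init__(self, features, timeout_s=10 * 60):
        self.features = features  # objects exposing an `enabled` attribute
        self.timeout_s = timeout_s
        self.last_activity = time.monotonic()

    def on_input_activity(self):
        self.last_activity = time.monotonic()

    def poll(self):
        # Called periodically; disables all features once the timeout elapses.
        if time.monotonic() - self.last_activity >= self.timeout_s:
            for feature in self.features:
                feature.enabled = False
```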
SerialKeys
The SerialKeys feature allows users to connect an external hardware device to the computer’s serial port and send ASCII characters to the serial port which instruct SerialKeys to simulate standard keyboard and mouse events. To applications, keyboard and mouse events simulated by SerialKeys should be indistinguishable from events generated by the physical keyboard and mouse. For more information on SerialKeys including technical specifications for ASCII/Unicode strings supported by SerialKeys, see http://trace.wisc.edu and search for “GIDEI”.
NOTE:
SerialKeys was designed for users who are unable to use traditional keyboards and mice and must use a special communication aid or computer access aid to act as their keyboard and mouse. This functionality, however, is now met by USB. Consequently SerialKeys is being retired. It is therefore not further specified or discussed in this document except for the brief description above.
Appendix C (Informative): Bibliography
Bergman, E., Johnson, E. (1995). Towards Accessible Human-Computer Interaction, Advances in HCI, Volume 5, Ablex Publishing Corporation
Blattner, M.M., Glinert, E.P., Jorge, J.A. and Ormsby, G.R. (1992). Metawidgets: Towards a theory of multimodal interface design. Proceedings: COMPASAC 92, IEEE Press, pp. 115-120
Brown, C. (1989). Computer Access in Higher Education for Students with Disabilities, 2nd Edition. George Lithograph Company, San Francisco
Brown, C. (1992). Assistive Technology Computers and Persons with Disabilities, Communications of the ACM, 35(5), pp. 36-45
Carter, J., and Fourney, D. (2004). Using a Universal Access Reference Model to identify further guidance that belongs in ISO 16071. Universal Access in the Information Society, 3 (1), p. 17–29. http://www.springerlink.com/link.asp?id=wdqpdu5pj0kb4q6b
Casali, S.P. and Williges, R.C. (1990). Data Bases of Accommodative Aids for Computer Users with Disabilities, Human Factors, 32(4), pp. 407-422
Chisholm, W., Vanderheiden, G. and Jacobs, I. (eds) (1999). Web Content Accessibility Guidelines 1.0. W3C, Cambridge, MA, USA. http://www.w3.org/TR/WAI-WEBCONTENT/
Church, G. and Glenna, S. (1992). The Handbook of Assistive Technology, Singular Publishing Group, Inc., San Diego
Connell, B. R., Jones, M., Mace, R., Mueller, J., Mullick, A., Ostroff, E., Sanford, J., Steinfeld, E., Story, M., Vanderheiden, G. (1997) The Principles Of Universal Design, NC State University, The Center for Universal Design
Edwards, W.K., Mynatt, E.D., Rodriguez, T. (1993). The Mercator Project: A Nonvisual Interface to the X Window System. The X Resource. O’Reilly and Associates, Inc.
Edwards, A., Edwards, A. and Mynatt, E. (1993). Enabling Technology for Users with Special Needs, InterCHI ‘93 Tutorial, 1993
Elkind, J. (1990). The Incidence of Disabilities in the United States, Human Factors, 32(4), pp. 397-405
Emerson, M., Jameson, D., Pike, G., Schwerdtfeger, R. and Thatcher, J. (1992). Screen Reader/PM. IBM Thomas J. Watson Research Center, Yorktown Heights, NY
Glinert, E.P. and York, B.W. (1992). Computers and People with Disabilities, Communications of the ACM, 35(5), pp. 32-35
Griffith, D. (1990). Computer Access for Persons who are Blind or Visually Impaired: Human Factors Issues. Human Factors, 32(4), 1990, pp. 467-475
Gulliksen, J., and Harker, S. (2004). Software Accessibility of Human-computer Interfaces – ISO Technical Specification 16071. In the special issue on guidelines, standards, methods and processes for software accessibility of the Springer journal Universal Access in the Information Society, Vol. 3, No. 1, pp. 6-16. Edited by Jan Gulliksen, Susan Harker and Gregg Vanderheiden
IBM Technical Report (1988). Computer-Based Technology for Individuals with Physical Disabilities: A Strategy for Alternate Access System Developers
Kaplan, D., DeWitt, J., Steyaert, M. (1992). Telecommunications and Persons with Disabilities: Laying the Foundation, World Institute on Disability
Kuhme, T. (1993). A User-Centered Approach to Adaptive Interfaces. Proceedings of the 1993 International Workshop on Intelligent User Interfaces. Orlando, FL., New York: ACM Press, pp. 243-246
Lazar, Joseph J. (1993). Adaptive Technologies for Learning and Work Environments. American Library Association, Chicago and London
Macintosh Human Interface Guidelines. (1992). Addison-Wesley
Managing Information Resources for Accessibility, U.S. General Services Administration Information Resources Management Service. (1991). Clearinghouse on Computer Accommodation
McCormick, John A. (1994). Computers and the Americans with Disabilities Act: A Manager’s Guide. Windcrest
McMillan, W.W. (1992). Computing for Users with Special Needs and Models of Computer Human Interaction. Conference on Human Factors in Computing Systems, CHI ‘92, pp. 143-148. Addison Wesley
Microsoft Corporation. The Windows Interface Guidelines for Software Design. (1995). Microsoft Press.
Microsoft Corporation. (2001). Microsoft Active Accessibility Version 2.0. http://msdn.microsoft.com/library/default.asp?url=/library/en-us/msaa/msaastart_9w2t.asp
Microsoft Corporation, Lowney, G. C. (1993-1997), The Microsoft® Windows® Guidelines for Accessible Software Design
Mynatt, E. (1994). Auditory Presentation of Graphical User Interfaces, in Kramer, G. (ed.), Auditory Display: Sonification, Audification and Auditory Interfaces, Santa Fe. Addison Wesley: Reading MA
Newell, A.F. and Cairns, A. (1993). Designing for Extraordinary Users. Ergonomics in Design
Nielsen, J. (1993). Usability Engineering. Academic Press, Inc., San Diego
ANSI T1.232:1993, Operations, Administration, Maintenance, and Provisioning (OAM&P) — G Interface Specification for Use with the Telecommunications Management Network (TMN)
Ozcan, O. “Feel-in-Touch: Imagination through Vibration” (2004). Leonardo. MIT Press, Volume 37, No 4, pp 325-330.
Perritt Jr., H.H. (1991). Americans with Disabilities Act Handbook, 2nd Edition. John Wiley and Sons, Inc., New York
Resource Guide for Accessible Design of Consumer Electronics (1996). EIA/EIF
Sauter, S.L., Schleifer, L.M. and Knutson, S.J. (1991). Work Posture, Workstation Design, and Musculoskeletal Discomfort in a VDT Data Entry Task. Human Factors, 33(2), pp. 407-422
Schmandt, C. (1993). Voice Communications with Computers. Conversational Systems. Van Nostrand Reinhold, New York, 1993
Sun Microsystems, Inc. (1999). Java Accessibility: Overview of the Java Accessibility Features, Version 1.3. http://java.sun.com/products/jfc/jaccess-1.3/doc/guide.html
Stephanidis, C., Akoumianakis, D., Vernardakis, N., Emiliani, P., Vanderheiden, G., Ekberg, J., Ziegler, J., Faehnrich, K., Galetsas, A., Haataja, S., Iakovidis, I., Kemppainen, E., Jenkins, P., Korn, P., Maybury, M., Murphy, H., & Ueda, H. (2000). Part VII: Support Measures: Industrial policy issues; and Part VIII: Looking to the Future: Toward an information society for all: An international R&D agenda. In C. Stephanidis (Ed.), User Interfaces for All: Concepts, Methods, and Tools (pp. 589-608). Mahwah, NJ: Lawrence Erlbaum Associates, Inc
Thatcher, J., Burks, M., Swierenga, S., Waddell, C., Regan, B., Bohman, P., Henry, S., and Urban, M. (2002). Constructing Accessible Web Sites. Glasshaus, Birmingham, U.K.
Thorén, C. (ed.) (1998). Nordic Guidelines for Computer Accessibility. Second Edition. Vällingby, Sweden: Nordic Committee on Disability.
Vanderheiden, G.C. (1983). Curbcuts and Computers: Providing Access to Computers and Information Systems for Disabled Individuals. Keynote Speech at the Indiana Governor’s Conference on the Handicapped.
Vanderheiden, G.C. (1988). Considerations in the design of Computers and Operating Systems to increase their accessibility to People with Disabilities, Version 4.2, Trace Research & Development Center
Vanderheiden, G.C. (1990). Thirty-Something Million: Should they be Exceptions? Human Factors, 32(4), pp. 383-396
Vanderheiden, G.C. (1991). Accessible Design of Consumer Products: Working Draft 1.6. Trace Research and Development Center, Madison, Wisconsin
Vanderheiden, G.C. (1992). Making Software more Accessible for People with Disabilities: Release 1.2. Trace Research and Development Center, Madison, Wisconsin
Vanderheiden, G.C. (1992). A Standard Approach for Full Visual Annotation of Auditorily Presented Information for Users, Including Those Who are Deaf: Show sounds. Trace Research & Development Center
Vanderheiden, G.C. (1994). Application Software Design Guidelines: Increasing the Accessibility of Application Software to People with Disabilities and Older Users, Version 1.1. Trace Research & Development Center
WAI Accessibility Guidelines: User Agent Accessibility Guidelines 1.0. (2000). http://www.w3c.org/wai
WAI Accessibility Guidelines: Web Content Accessibility Guidelines 1.0 (1999). http://www.w3.org/TR/WCAG10/
WAI Accessibility Guidelines: Web Content Accessibility Guidelines 2.0 (Working draft) (2006). http://www.w3.org/TR/WCAG20/
Walker, W.D., Novak, M.E., Tumblin, H.R., Vanderheiden, G.C. (1993). Making the X Window System Accessible to People with Disabilities. Proceedings: 7th Annual X Technical Conference. O’Reilly & Associates
World Health Organization, (2002) Towards a common language for functioning, disability and health: ICF, WHO/EIP/CAS/01.3, World Health Organization, Geneva, http://www3.who.int/icf/
ISO 13406-2:2001, Ergonomic requirements for work with visual displays based on flat panels — Part 2: Ergonomic requirements for flat panel displays.