Tricia wrote a blog post a while ago about how accessibility is really important for us here at Our Machinery and how we want to make sure that The Machinery can be used by as many people as possible.
So now is a good time to take stock of what accessibility features we have and what we plan to do in the future.
Note that in this post I’m focusing on the accessibility of the editor, not on games built in the editor. That is a slightly different (but related) topic.
Immediate Mode GUIs and Accessibility
The Machinery uses an Immediate Mode GUI or IMGUI. What this means is that instead of creating UI controls at startup and writing code that keeps them synchronized with the application state, we just “draw” all the controls every frame, using the current state.
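To make the "draw all the controls every frame" idea concrete, here is a minimal sketch of an immediate-mode button in C. All of the names here (ui_input_t, ui_button, etc.) are illustrative, not The Machinery's actual API, and a real implementation would fire on mouse release rather than press:

```c
#include <stdbool.h>

// Hypothetical per-frame input state, not The Machinery's actual API.
typedef struct { float mouse_x, mouse_y; bool mouse_down; } ui_input_t;
typedef struct { float x, y, w, h; } rect_t;

static bool inside(rect_t r, float x, float y)
{
    return x >= r.x && x < r.x + r.w && y >= r.y && y < r.y + r.h;
}

// An immediate-mode button: called every frame, it draws itself
// (drawing omitted here) and immediately reports whether it was
// activated this frame -- no button object is created or destroyed.
static bool ui_button(const ui_input_t *in, const char *label, rect_t r)
{
    (void)label; // a real version would submit draw calls for the label
    return in->mouse_down && inside(r, in->mouse_x, in->mouse_y);
}
```

Each frame the application simply asks `if (ui_button(&input, "Play", rect)) { ... }` with the current state, and the UI is always in sync with it.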
The IMGUI toolkit we use is a custom toolkit that we ship as part of The Machinery. We do everything ourselves, down to the drawing of the pixels that make up the buttons and checkboxes. We have a number of good reasons for doing things this way instead of using an off-the-shelf UI toolkit, such as WPF, Qt, or Electron:
We get a consistent cross-platform look and behavior.
It’s easy to make whatever custom controls we need.
We have full control of the UI’s performance and can make use of GPU acceleration — ensuring that the editor is just as fast as all the other parts of the engine.
We don’t have to deal with any mysterious black boxes. We can follow a UI call all the way down to the rendered pixels to figure out why it behaves the way it does.
We don’t have to worry about synchronizing the UI state and the application state; they are always the same.
We can iterate faster over the UI, which lets us do more in less time.
Compared to other UI solutions we’ve worked with in the past, we vastly prefer this approach. But using a custom IMGUI toolkit brings its own special challenges when it comes to accessibility. Since we can’t rely on the built-in accessibility features that come with OS or browser-based UI toolkits, we have to build everything ourselves.
The flip side of that coin is that we have full control of the whole application stack and can more easily experiment with novel solutions for accessibility.
Sometimes people have the misconception that IMGUIs can’t be accessible. The argument is that if we don’t have a tree of pre-created UI objects, then there is nothing for a screen reader to query or interact with.
I think this comes from a misunderstanding of what “immediate mode” means. At least to me, “immediate mode” doesn’t mean that we don’t store or retain any information about the UI and the controls that live there. Rather, it’s an API interaction style where instead of explicitly creating and destroying objects and modifying their state, you just list the objects you want to have and the state they should be in for each “tick” of the application.
Behind the scenes, an IMGUI might very well keep a full object tree around (as React does with its Virtual DOM approach). Or, it might keep a smaller list of objects, just for accessibility purposes. This is in fact what we do, see the Screen Readers section below.
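As a sketch of that last idea, here is roughly what retaining a small control list purely for accessibility could look like: the otherwise immediate-mode UI clears and rebuilds the list every frame, and a screen reader queries whatever was registered during the last frame. All names are illustrative, not The Machinery's internals:

```c
#include <string.h>

// A small retained list of controls, kept only for accessibility.
enum { MAX_CONTROLS = 1024 };

typedef struct { const char *role; const char *text; } acc_control_t;

static acc_control_t acc_controls[MAX_CONTROLS];
static int acc_count;

// Called at the top of each UI frame: throw away last frame's list.
static void acc_frame_begin(void)
{
    acc_count = 0;
}

// Called by every control as it is drawn.
static void acc_register(const char *role, const char *text)
{
    if (acc_count < MAX_CONTROLS)
        acc_controls[acc_count++] = (acc_control_t){ role, text };
}

// A screen reader can look up a control by role and title.
static const acc_control_t *acc_find(const char *role, const char *text)
{
    for (int i = 0; i < acc_count; ++i)
        if (!strcmp(acc_controls[i].role, role) && !strcmp(acc_controls[i].text, text))
            return &acc_controls[i];
    return 0;
}
```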
If you’re still not convinced about the suitability of IMGUIs for use in “real applications”, another data point to consider is that Google Docs recently announced that they are switching to canvas-based instead of DOM-based rendering, which is essentially an IMGUI-based approach:
Google Workspace Updates: Google Docs will now use canvas based rendering: this may impact some Chrome extensions
Google says that “compatibility for assistive technologies such as screen readers, braille devices and screen magnification features, will not be impacted”.
UI Zoom and High-Contrast Color Schemes
Our two main features for users with low vision are UI Zoom and high-contrast color schemes. Using the Window > Zoom menu, the editor UI can be zoomed to an arbitrary scale. You can zoom in for better readability, or zoom out to fit more content on the screen:
The implementation of this in The Machinery is almost trivial. We already have a scale factor in the UI to handle DPI settings for the display. All that was needed to implement the Zoom feature was to expose this scale factor to the user.
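In sketch form, the idea is just to fold the user's zoom into the existing DPI multiplier before converting logical coordinates to pixels. The names here are illustrative, not the engine's actual code:

```c
typedef struct { float x, y, w, h; } rect_t;

// Convert a logical-space rect to pixels. The user zoom simply
// multiplies the DPI scale the UI already applies everywhere.
static rect_t to_pixels(rect_t logical, float dpi_scale, float user_zoom)
{
    const float s = dpi_scale * user_zoom;
    return (rect_t){ logical.x * s, logical.y * s, logical.w * s, logical.h * s };
}
```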
In addition to this, The Machinery also works with the standard Magnifier tool in Windows, if you prefer to use that.
The color scheme used by The Machinery is fully customizable (in both the free and the paid versions). In addition to the default Dark and Light color themes, we’ve recently added support for high-contrast versions:
If you don’t like our default colors, you can easily edit them using the Window > Theme > Edit Themes… menu option. Let us know if you have any suggestions for changes to the high-contrast themes that would improve legibility further.
Localization
Localization is not usually thought of as an accessibility feature, but we think it falls under the same umbrella. Providing localized versions of the UI makes the editor accessible to people who are not English speakers.
I’ve already covered how localization is implemented in The Machinery, in the blog post Localization in The Machinery’s UI, so in this post, I just wanted to mention a few newly added features.
First, we have added a new rotating language option that makes the engine flip back and forth between English and a target language. This lets you quickly compare the translated text with the English original and see how the translation works in context. It also makes it easy to spot any text that isn’t being translated.
In a similar vein, we’ve added a hotkey that temporarily switches the UI back to English if it has been set to a different language. Simply hold down F9 and the UI will switch back to English.
This serves two purposes. First, just like the rotating language mode, it can be used to debug and/or verify the translations. Second, let’s be honest — translations are not always perfect. Even if a user generally prefers to work in their native language, sometimes the translation can be confusing and it can help to quickly peek at the original English. This is especially true for software that contains a lot of technical jargon that can become hard to understand when translated.
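A sketch of the “hold F9 to reveal English” idea: the string lookup consults a reveal flag before translating. The table and function names here are hypothetical, not the actual localization API:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

// Hypothetical translation table (English -> Swedish).
typedef struct { const char *english; const char *translated; } loc_entry_t;

static const loc_entry_t table[] = {
    { "Play", "Spela" },
    { "Stop", "Stoppa" },
};

// Look up a UI string. While the reveal key (F9) is held,
// skip translation entirely and show the English original.
static const char *localize(const char *english, bool reveal_english)
{
    if (reveal_english)
        return english;
    for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); ++i)
        if (!strcmp(table[i].english, english))
            return table[i].translated;
    return english; // untranslated strings fall back to English
}
```

Because every string goes through one lookup function, both the F9 reveal and the rotating language mode can be implemented in a single place.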
We still have a fair amount of work to do on the localization system:
We currently don’t have support for right-to-left text such as Arabic and Urdu. We also don’t support the shaping of complex scripts such as Tamil and, again, Arabic. For more on these topics, see Text Rendering Hates You.
The only language other than English that we currently support is Swedish. This doesn’t make much sense, because Swedish is not really a world language. But we needed some language other than English in order to properly test the localization system and I happen to know Swedish, so there we go.
If you are interested in adding localization support for other languages, you can do so in a plugin. We will add more “official” languages based on user needs.
One thing I would like to try but haven’t gotten around to yet is to add support for localization through Google Translate or a similar translation service. Machine translations can’t compete with human ones, but for a user that doesn’t know any English, it’s better than nothing. And in combination with the F9 hotkey to “reveal” the original text of any particularly egregious translation, it could be workable.
Screen Readers
We’ve recently started working on adding support for screen readers. Since we want to support a wide range of different screen readers across multiple platforms, we’ve decided to abstract the support into our own accessibility API. To add support for a specific screen reader, you query UI information from the accessibility API and feed it into the screen reader’s API with some glue code.
The basis for the accessibility API is the tm_ui_api->register_control() function that informs the API about a control in the UI:

tm_ui_api->register_control(ui, TM_UI_ROLE__BUTTON, "Play", rect);

This specifies that a control with the BUTTON role and the text “Play” exists at the specified rect in the UI. If you use our built-in controls, such as tm_ui_api->button(), this function will be called automatically for you. If you create your own controls out of raw draw calls, you are responsible for making this call.
We save all the registered controls in an array and a screen reader can later query this array using automation_controls() to get a list of all controls, or find_control() to find a particular control based on its role and title.

The role of the control tells the screen reader how to interact with it. The Machinery comes with some predefined roles, such as BUTTON, STATIC_TEXT, etc., but if you have a custom control where none of these categories fit, you can define your own role.
To interact with a control, a screen reader calls functions such as text_input() to provide virtual input to the UI. For example, you can click on a control by moving the mouse there and then setting the button DOWN followed by UP. You can drag an item by moving the mouse there, pressing the button DOWN, moving to a new location, and then releasing it with UP.
Right now the state of this API is pretty rudimentary and there is a lot of information that is not available, such as:
- State of a control (enabled, disabled).
- Value of a control (checked, unchecked, tristate, etc).
- Selection, focus, caret position.
- Hierarchical relationships: controls and sub-controls.
We also haven’t fed the information into an actual screen reader API yet. We need to do that to make sure it works.
Going forward, we will probably extend the accessibility API with some sort of key-value table for controls so that we can provide this extra information. But first, we need to learn more about what kind of information would be valuable to our end-users and how they interact with the API.
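For illustration, such a key-value extension might look something like this. The shape is entirely speculative; the real API may end up looking quite different:

```c
#include <string.h>

// A small key-value table attached to each registered control,
// carrying state like enabled/disabled, checked/unchecked, focus.
typedef struct { const char *key; const char *value; } acc_property_t;

typedef struct {
    const char *role;
    const char *text;
    acc_property_t props[8];
    int num_props;
} acc_control_t;

// A screen reader could query a control's state by key.
static const char *acc_get_property(const acc_control_t *c, const char *key)
{
    for (int i = 0; i < c->num_props; ++i)
        if (!strcmp(c->props[i].key, key))
            return c->props[i].value;
    return 0; // property not provided by this control
}
```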
If you are using game editors together with assistive technology, we’d be really interested in getting your feedback so that we can make sure The Machinery works as well as possible.
Here’s a screenshot of The Machinery with debug visualization for accessibility turned on. All the controls that have been registered with the accessibility API are drawn with a red outline:
Accessibility for Everybody
Making software accessible can benefit more people than you might initially think. For example, I often watch TV with captions on. English is my second language and my hearing is not perfect. Having captions makes the dialogue a lot easier to understand, especially with some British accents (mentioning no names). I also often watch videos with the sound off, because sound can be intrusive. And sometimes I go straight to a transcript because I can scan it quicker than a video and search for words I’m interested in.
I think learning how to use VoiceOver would be useful too, so I can operate my phone while I need to look at other things, but I haven’t gotten around to it yet.
The accessibility features mentioned in this article are also generally useful:
Zoom is useful for users with varying screen sizes and resolutions.
Color themes make it easier to use the editor under various light conditions. And of course, it’s also nice to be able to pick your own colors.
The accessibility API acts as the foundation for UI automation. We use this in our integration and regression tests to automatically drive the UI like a real user would, making sure that clicking on buttons or selecting menu items has the expected result. In the future, we might use it to implement a macro system.
What are we missing?
We’re committed to making The Machinery the most accessible game engine out there and we’re always trying to learn more about this space. If there is something we are missing or that we should tackle differently, let us know!