The nature of software development means that the interface we get is generally an amalgamation of feedback from test users. Whilst we can customize these interfaces to an extent, what we get is largely what we're stuck with.
Now, research from Aalto University, Finland, and the Kochi University of Technology, Japan, proposes AI as a means of giving us truly personalized user interfaces. It's an approach the researchers believe will especially benefit older or disabled users.
“The majority of available user interfaces are targeted at average users. This ‘one size fits all’ thinking does not consider individual differences in abilities — the aging and disabled users have a lot of problems with daily technology use, and often these are very specific to their abilities and the circumstances,” the authors say.
The challenge is twofold. Firstly, you need a realistic model of the user; secondly, you need to be able to tailor the interface to that user's needs. Both have been significant hurdles, not only for the industry but also for disabled users.
For instance, the hand tremors common in Parkinson's patients make it difficult to point accurately, which can render touch-based interfaces all but impossible to use.
The researchers have developed a new predictive model of interaction that they believe can predict how an individual's abilities affect text entry on a touch screen. The model draws on psychological research into both finger and eye movements to predict measures such as text entry speed, proofreading behavior, and typing errors.
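A well-known building block for models of finger movement in HCI is Fitts' law, which predicts the time to hit a target from its distance and width. Whether this particular model uses it is not stated here, so treat the following as a generic sketch; the coefficients are illustrative, not fitted values:

```python
import math

def fitts_movement_time(distance: float, width: float,
                        a: float = 0.2, b: float = 0.1) -> float:
    """Fitts' law (Shannon formulation): predicted time in seconds to
    hit a target of a given width at a given distance (same units).
    The coefficients a and b are per-user parameters fitted from
    data; the defaults here are purely illustrative."""
    return a + b * math.log2(distance / width + 1)

# A narrower target (smaller key) takes longer to hit accurately:
t_large_key = fitts_movement_time(distance=80, width=8)
t_small_key = fitts_movement_time(distance=80, width=4)
```

A per-user model like this captures why the same keyboard can be easy for one person and slow or error-prone for another: fitting `a` and `b` to an individual encodes their motor abilities in two numbers.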
The model was put through its paces with users suffering from tremors, and it was able to show how difficult typing is for them on a standard QWERTY keyboard. Indeed, roughly half of the keys pressed were typos.
“After this prediction, we connected the text entry model to an optimizer, which iterates through thousands of different user interface designs. No real user could, of course, try out all these designs. For this reason, it is important that we could automatize the evaluation with our computational model,” the team explains.
Optimizing the Interface
The next step was to optimize the text entry interface based on the model's predictions. When the optimized design was tested on the same user group, the volunteers produced virtually no errors. Whilst it is just a prototype and not yet ready for market, the researchers are nonetheless optimistic that designers can build on their approach to create better interfaces in the future. What's more, the team hope the approach will prove valuable not just for text entry but for a wide range of interactive tasks.
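The loop the team describes — score thousands of candidate designs with the simulated user instead of real trials, and keep the best — can be sketched as a simple random-search optimizer. Everything below is illustrative: the error model is a hypothetical stand-in for the researchers' predictive model, and only a single design parameter (key width) is optimized.

```python
import random

def simulated_error_rate(key_width: float, tremor: float) -> float:
    """Hypothetical stand-in for the predictive user model: wider keys
    make it less likely a tremor pushes a tap off-target."""
    return min(1.0, tremor / key_width)

def optimize_key_width(tremor: float, candidates: int = 1000,
                       seed: int = 0) -> float:
    """Random-search sketch: evaluate many candidate key widths with
    the simulated user and return the best one found. A real optimizer
    would search over full layouts, not a single width."""
    rng = random.Random(seed)
    best_width, best_cost = None, float("inf")
    for _ in range(candidates):
        width = rng.uniform(3.0, 15.0)  # candidate key width in mm
        # Penalize oversized keys: fewer fit on screen, so more taps
        # are needed per word (an illustrative trade-off term).
        cost = simulated_error_rate(width, tremor) + 0.01 * width
        if cost < best_cost:
            best_width, best_cost = width, cost
    return best_width
```

The point of the simulation step is in the inner loop: each candidate is scored by the model in microseconds, so thousands of designs can be compared, where a study with real users could test only a handful.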
“We started with text entry, which is an everyday task. We chose to simulate and optimize for essential tremor because it makes text entry very difficult. Now that we have confirmed the validity and usefulness of the model, it can be extended to other use cases and disabilities. For example, we have models for simulating how being a novice or an expert with an interface impacts user’s performance. We can also model how memory impairments affect learning and everyday use of interfaces. The important point is that no matter the ability or disability, there must be a psychologically valid theory behind modeling it. This makes the predictions of the model believable, and the optimization is targeted correctly,” they conclude.