
Binary Electronic Mobility Systems (BEMS) for people with visual disabilities are expanding due to the increasing computing power of smartphones. These aids combine wayfinding and orientation technologies based on Global and Visual Positioning Systems (GPS and VPS) that can be operated through Voice User Interfaces (VUIs). Such VUIs can read visual information out loud, or can serve as more advanced, conversational interfaces.

A downside of BEMS is that they are difficult to use. Using BEMS is cognitively complex because people often need to switch between different apps to obtain the type of information they need. Interacting with BEMS is also time-consuming: translating menu-based information into speech takes time, and the systems are unforgiving when users make mistakes.

We propose that conceptualizing BEMS as ‘embodied intelligent agents’ is a productive way forward in their design.

Many data-intensive products are made accessible via agent-based technologies, and we see similar benefits for BEMS. For instance, agent-based software can search for and select the type of mobility data that is most relevant for a person in a particular situation. We expect that using agents to interact with BEMS will be more intuitive and efficient than menu-based systems. Finally, creating a particular embodiment for these agents allows BEMS to be carried and used more easily.


Central to our vision is that we design BEMS to empower visually impaired people to explore more of the world around them by learning to use, trust and train their own capabilities and resources, such as orienting themselves based on visceral, auditory signals and being socially resilient by asking other people for help. Our guiding principle is to provide minimal information and require minimal input by taking advantage of a person’s situated experience (context), skills and resourcefulness. In this project you will collaborate closely with both the clients and mobility trainers of Bartiméus and connect to the “Meye Indoor Way” project to explore how agent technology might enhance this system.


A preliminary agent architecture is provided as a starting point for research. The following main functions are envisioned:

  • Training: a person can train BEMS to learn new routes by creating an instance of a starting location and an end location in real time, annotated with a keyword that gives that location meaning.
  • Guiding: BEMS can guide a person towards particular places by means of directional haptic and/or audio cues while traveling.
  • Exploring: BEMS can suggest places that have been stored as potential destinations, or that have been generated elsewhere (e.g., via social media or by other BEMS users).
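To make the three envisioned functions concrete, here is a minimal Python sketch of what such an agent interface could look like. All names here (MobilityAgent, train, guide, explore, Place, Route) are illustrative assumptions, not part of the preliminary architecture itself:

```python
from dataclasses import dataclass

@dataclass
class Place:
    keyword: str  # meaningful semantic annotation given by the person
    lat: float
    lon: float

@dataclass
class Route:
    start: Place
    end: Place

class MobilityAgent:
    """Hypothetical BEMS agent exposing the three envisioned functions."""

    def __init__(self):
        self.places: dict[str, Place] = {}
        self.routes: list[Route] = []

    def train(self, keyword, start, end):
        """Training: store a start/end pair annotated with a keyword."""
        destination = Place(keyword, *end)
        self.places[keyword] = destination
        route = Route(Place("start", *start), destination)
        self.routes.append(route)
        return route

    def guide(self, keyword):
        """Guiding: return a coarse directional cue toward a stored place.
        In a real system this would drive haptic and/or audio feedback."""
        target = self.places[keyword]
        return f"head towards '{target.keyword}' at ({target.lat}, {target.lon})"

    def explore(self):
        """Exploring: suggest stored places as potential destinations."""
        return list(self.places)
```

The sketch deliberately keeps state in plain data classes so that trained routes could later be shared between users, as the Exploring function suggests.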

To perform these functions, the following data and capabilities are suggested for exploration:

  • Location and time: tracking a person’s locomotion and travel.
  • Place: a place is a stored location based on GPS/VPS with a meaningful semantic annotation (keyword) relating to the position and moment at which the ‘place’ was created (e.g., as in BlindSquare).
  • Route: a route entails a starting point (wherever you happen to be at that moment) and an end point (i.e., a ‘place’). BEMS can analyze these stored starting points of routes and aggregate them into extended routes.
  • Route familiarity: the number of times particular routes have been travelled, from which BEMS can assess the ‘familiarity’ of routes. This might inform BEMS’s guidance preferences.
  • Place relevance: BEMS can assess which places stored in the system are most relevant for a person given that person’s specific location and time. This might be of interest in highlighting certain places over others.
  • Movement anomalies: BEMS tracks a person’s movement patterns and, based on identified anomalies (moments of doubt or insecurity), might provide support proactively.
  • Map data: BEMS can connect to (Google) Maps (voice guidance software, traffic information) and work with the available data (to help in annotating).
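Two of the items above, route familiarity and place relevance, can be sketched as simple computations over stored data. The following Python sketch is illustrative only; the function names and the distance measure are assumptions, not part of the suggested data model:

```python
import math
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Place:
    keyword: str
    lat: float
    lon: float

def route_familiarity(travel_log):
    """Route familiarity: count how often each (start, end) pair
    appears in a log of travelled routes."""
    return Counter(travel_log)

def place_relevance(places, here_lat, here_lon):
    """Place relevance: rank stored places by proximity to the person's
    current position, using a simple equirectangular approximation
    as a stand-in for a real relevance model (which could also
    weigh time of day, familiarity, etc.)."""
    def distance(p):
        dx = (p.lon - here_lon) * math.cos(math.radians(here_lat))
        dy = p.lat - here_lat
        return math.hypot(dx, dy)
    return sorted(places, key=distance)
```

A fuller relevance model would combine such a distance term with route familiarity and the time-of-day pattern of past visits, so that, for example, a frequently visited bakery ranks higher in the morning.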

Further inquiries

Dr. ing. Marco Rozendaal
