Accessibility on iOS: Working with VoiceOver support

Accessibility on iOS gives visually impaired users the ability to use an awesome touch screen platform to its full extent. It’s your job as an iOS developer to ensure your app works wonderfully with it.

When considering adding features or optimizing your app for Accessibility, don’t assume we’re just talking about users who are completely blind. Even though all iOS devices are full touch screen devices, there are lots of visually impaired users out there, ranging from users who are blind to those with low vision or colour blindness.

Apps such as Mail, numerous Twitter clients and more include the option of displaying on-screen text at a larger font size. Matt Rix’s Trainyard includes an option for players who are colour blind. These are just a few examples of improving apps for Accessibility.

Also consider interaction. While scrolling normally requires just a one-finger flick, with VoiceOver enabled scrolling is done with a three finger swipe, which moves the UIScrollView (and any class that inherits from UIScrollView) a page at a time. So while pull-to-refresh is awesome, for users with VoiceOver enabled it might not be as apparent or as easy to use.

In this post, I will focus a bit on optimizing your app for VoiceOver. Before continuing with this post, check the video below where a blind user shows off VoiceOver and demonstrates how to use it.

I also recommend enabling VoiceOver on your own device and playing around with it for a while, just to get familiar with it. It’s one thing to know about it and use it with the Accessibility Inspector turned on in the Simulator; it’s another to experience it for yourself!

While most of UIKit works fairly well out of the box with VoiceOver, there are plenty of things you can do to improve how your app works with it. If you’re doing any custom UI (especially drawing), VoiceOver will most likely not work, or it may speak the internal names of your objects. So if you call a custom UIButton “createNewTaskButton,” that’s what it will say. Instead, it should probably say “Create New Task”.
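As a minimal sketch (the button name is hypothetical, taken from the example above), setting an explicit accessibility label is often all it takes:

```swift
import UIKit

// Hypothetical custom button from the example above.
let createNewTaskButton = UIButton(type: .custom)

// Without an explicit label, VoiceOver may fall back to something
// unhelpful for a fully custom control. Tell it what to say instead.
createNewTaskButton.isAccessibilityElement = true
createNewTaskButton.accessibilityLabel = "Create New Task"
createNewTaskButton.accessibilityTraits = .button
```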

Consider your own app(s) and what use visually impaired users may have for them. I made a transit app, so it was only natural to optimize it for visually impaired users and add custom, improved VoiceOver support. With VoiceOver enabled, users can tap the time until the next transit vehicle arrives and the device speaks out in plain English: “Next TTC vehicle arrives in 5 minutes and 32 seconds. Tap again to hear updated time.” If I had not customized it, it would have just said “5:32.”

Underneath are subsequent vehicle arrival times. As you can see from the Simulator screenshot, without VoiceOver optimization the device would just say “20 min bullet 25 min bullet 26 min.” With VoiceOver optimization, it says “Subsequent TTC vehicles arrive in 15 minutes (pause) 20 minutes (pause) and 25 minutes. Tap again to hear updated times.” Notice the (pause) and look at the screenshot. VoiceOver speaks pretty fast, so if you would like a bit of a pause, just insert a period and it will pause briefly between the words. This is a great way to split up important information.

Also notice how it says “and” before the last arrival time. However many times there are, it is only spoken before the last item. Pay attention to detail in these cases. If someone were to tell you the information for what you touched on the screen, how would you best understand it? There’s a huge difference between spoken language and UI. Keep this in mind!
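As a rough sketch of how such a spoken sentence could be assembled (the helper function and the minute values are made up for illustration), a plain period is what produces the pause, and “and” only appears before the final time:

```swift
// Hypothetical helper: turn a list of arrival times (in minutes) into a
// natural spoken sentence. Periods make VoiceOver pause briefly, and
// "and" is only inserted before the last time.
func spokenSubsequentArrivals(minutes: [Int]) -> String {
    let times = minutes.map { "\($0) minutes" }
    guard let last = times.last else { return "No upcoming TTC vehicles." }

    let joined: String
    if times.count == 1 {
        joined = last
    } else {
        joined = times.dropLast().joined(separator: ". ") + ". and \(last)"
    }
    return "Subsequent TTC vehicles arrive in \(joined). Tap again to hear updated times."
}

// spokenSubsequentArrivals(minutes: [15, 20, 25]) ->
// "Subsequent TTC vehicles arrive in 15 minutes. 20 minutes. and 25 minutes.
//  Tap again to hear updated times."
```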

For the buttons with which users can change a route, the label looks like this: “506 • Carlton.” In this case, the literal VoiceOver sentence would be “506 bullet Carlton.” So I went ahead and customized it to “506 Carlton,” so it wouldn’t speak the “bullet” part. For the custom UIButton where the user can add or remove a route/stop selection to or from their Favourites, I added “This is a favourite. Double tap to remove from your favourites,” and if the selection is not a favourite it says “Not a favourite. Double tap to add to your favourites.”
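A sketch of both customizations (the button and state names are hypothetical); here the “double tap” instruction goes into accessibilityHint, though the whole sentence could just as well live in the label as described above:

```swift
import UIKit

// Route button: visually titled "506 • Carlton", but spoken without the bullet.
let routeButton = UIButton(type: .custom)
routeButton.setTitle("506 • Carlton", for: .normal)
routeButton.accessibilityLabel = "506 Carlton"

// Hypothetical favourites toggle button.
let favouriteButton = UIButton(type: .custom)

func updateFavouriteAccessibility(isFavourite: Bool) {
    if isFavourite {
        favouriteButton.accessibilityLabel = "This is a favourite"
        favouriteButton.accessibilityHint = "Double tap to remove from your favourites"
    } else {
        favouriteButton.accessibilityLabel = "Not a favourite"
        favouriteButton.accessibilityHint = "Double tap to add to your favourites"
    }
}
```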

If you’re doing any custom drawing, for example as I have done in the example on the right, you will need to pay extra attention to VoiceOver optimizations.

For flat, drawn views, VoiceOver cannot access any of the text or other information on screen. In this example, I draw each cell, so there are no UILabels or UIImageViews for VoiceOver to recognize and speak.

I therefore set the accessibilityLabel property on each cell which then provides VoiceOver with information about what to say. As you can see from the example, according to Accessibility Inspector, VoiceOver will say the route number and name, and because the item is a current selection, I also made sure VoiceOver lets the user know about this by saying “Current selection” after the route number and name.

For non-VoiceOver users, the checkmark is a great visual marker that the item is a current selection. But for VoiceOver users, this would not have been the case. Especially since it’s a custom drawn view, VoiceOver has no clue about the checkmark. In your Accessibility optimization, don’t only consider what VoiceOver will say, but also consider the UI and what VoiceOver can see, or should see, related to the item the user is touching. The user may not know about a certain UI object or visual marker without VoiceOver telling them.
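For a custom-drawn cell like this, a sketch along these lines (the cell class and its properties are hypothetical) gives VoiceOver something to say, including the checkmark state it cannot otherwise see:

```swift
import UIKit

// Hypothetical custom-drawn cell: everything is rendered in draw(_:),
// so there are no UILabels or UIImageViews for VoiceOver to pick up.
final class RouteCell: UITableViewCell {
    var routeNumber = ""
    var routeName = ""
    var isCurrentSelection = false   // drawn as a checkmark

    func updateAccessibility() {
        isAccessibilityElement = true
        var label = "\(routeNumber) \(routeName)"
        if isCurrentSelection {
            // The checkmark is purely visual; spell it out for VoiceOver.
            label += ". Current selection"
        }
        accessibilityLabel = label
    }
}
```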

Accessibility Inspector in the iOS Simulator is a great tool to help optimize your app for full VoiceOver support. Tapping the X button disables the Inspector and hides most of its UI. This lets you see the app without VoiceOver enabled so you can quickly get to the content you’re currently debugging, and since VoiceOver scrolling requires three fingers, temporarily disabling the Inspector also lets you scroll normally.

While Accessibility Inspector shows what VoiceOver will say, you should still test on an actual device with VoiceOver enabled. As mentioned, it’s one thing how something looks and “sounds” when it’s written; it’s another how it sounds when VoiceOver speaks it.

So these were a few of the considerations I made in optimizing one of my apps for full VoiceOver support. For more on the subject, I urge you to read Apple’s Accessibility Programming Guide for iOS, which gives a more detailed overview of the possibilities as well as more in-depth examples and usage guides.

Furthermore, Matt Gemmell has written a great post on it as well with more details on Accessibility properties, among other things, and it’s totally worth checking out.

Next TTC follow-up + TACOW Meet-up tomorrow

It’s been a long day and I think I’ll *just* make the deadline for my #idevblogaday post. This morning, the Toronto Transit Commission (TTC) released the much anticipated bus data for their Next Vehicle Arrival Service (NVAS), which previously only included streetcar arrival time predictions. Read more about the app in my previous post, Next TTC: Behind the Scenes.

In anticipation of the release of the bus data, I had done some updates to my app, including hosting some of the data on my own server for speedier transfer, etc. However, with a jump from 300KB to nearly 4MB of data for the initial config XML file, I have some work ahead of me to improve the app and its handling of the data. Furthermore, some bus routes have slightly different configurations in terms of their route format (some routes have a/b/c/etc. directions/destinations for the same route), so I have to change the parsing a bit to properly display this to users on those routes. I also have to change the way I store data and access it on the fly, so lots of work ahead of me this week! I need more time in the day!

But apart from the slowness of the app (until I finish the update), it’s been a pretty good day. I was able to update the app to include the new data this morning, which meant I got some good press today.

I was mentioned in the National Post, CP24 and CityTV, and finally interviewed for a great article in the Globe and Mail which was posted tonight (hopefully making it into tomorrow’s paper!). I’m really happy about these mentions. As far as I can tell from my app’s ranking (it went from ~#30 in Navigation to #2 in that category, and made it into the Top 100 overall in downloads – both for the Canadian App Store of course, which is quite different from the US Store), sales have been pretty good today. But most exciting is all the awesome feedback I have been getting from users who are ecstatic about finally being able to use the app for their bus commute (and for the night buses and streetcars not previously included).

All in all, today has made me pretty excited about my app again. You always end up getting a bit tired of a project after staring at the code for a long time, or using every aspect of your app over and over, or painstakingly tweaking every little pixel of the UI if you’re me. I have a bunch of ideas on the table that I want to get to after fixing what needs to be fixed. And I can honestly say it’s become a bit more interesting now that there’s finally a larger user base to cater to – and more copies to be sold.

Finally, if you’re in Toronto and free tomorrow night, I am presenting at tomorrow’s TACOW (Toronto Area Cocoa and WebObjects developers group) meet-up. I’m doing a presentation on Core Location: the how-to-use-it basics plus some insights, tips and experiences from using it myself. Should be fun! The details for the meet-up are on the website, and we’ll be heading out to a pub afterwards for some drinks.


Next TTC: Behind the scenes

Last week, I released Next TTC 2.0, a major update to my real-time arrival predictions utility app for Toronto’s streetcars (buses to be added by Toronto Transit Commission (TTC) soon).

Today, I’d like to discuss how I went about dealing with the data that is provided through the API and which UI/UX choices I made based on the data and how it would be used.

Bit of background

Data for Toronto’s public streetcar system is provided through the nextbus.com API. Basically, vehicles are outfitted with GPS. Based on their location and proximity to a certain stop, the API provides arrival time predictions. For example, if a streetcar is 1.2 km away from stop XYZ, then based on the average speed and the time it takes the streetcar to reach XYZ, the API will tell you that the streetcar is arriving in 4 minutes and 43 seconds. This can change, however, if the streetcar is backed up in traffic, suddenly able to travel at faster speeds, or makes fewer stops en route to stop XYZ.

The data

TTC provides up to 5 arrival time predictions for the requested stop and/or route.

By requesting data for a stop only, you’re provided with all routes and their arrival times for that particular stop. Depending on the stop, this may be one or several routes at once. You’re also able to request a stop and a specific route, in which case the API returns arrival time predictions for just that route.

Furthermore, for the TTC, some streetcars make short-turns. For example, route 501 westbound has two end destinations. The streetcars are on the same track for most of the way, but every few streetcars stop at an earlier station. This means the API also returns predictions for each short-turn of a route.

The data may appear as such:

  • Route 0
    • Destination 0
      • Prediction 0
      • Prediction 1
      • Prediction 2
      • Prediction 3
      • Prediction 4
  • Route 1
    • Destination 0
      • Prediction 0
      • Prediction 1
      • Prediction 2
      • Prediction 3
    • Destination 1
      • Prediction 0
      • Prediction 1
  • Route 2
    • Destination 0

In the above example we have requested data for a stop. This stop is serviced by 3 streetcar routes. Route 0 is currently in service and we are returned 5 arrival time predictions. Route 1 is also in service and has two destinations (or short-turns), and we are returned results for both. Route 2 is currently not in service (some streetcar routes only run during rush hour), so we are not returned any arrival time predictions.
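To make the shape of this concrete, here’s a sketch of how the nested data might be modelled (the type and property names are mine, not the API’s):

```swift
// Hypothetical model of the nested prediction data described above.
struct Prediction {
    let seconds: Int              // time until the vehicle arrives
}

struct Destination {
    let title: String             // e.g. a short-turn end point
    let predictions: [Prediction] // up to 5 per destination
}

struct Route {
    let tag: String               // e.g. "501"
    let title: String
    let destinations: [Destination]

    // A route with no predictions at all is treated as not in service.
    var isInService: Bool {
        destinations.contains { !$0.predictions.isEmpty }
    }
}
```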

So as you can see, we are potentially dealing with a great deal of data.

Scope

Version 1 of the API provided less data and detail, so for Next TTC 2.0 I wanted to upgrade to the new API as well, while still retaining the basic user experience of the app.

The basic scope of the app is to launch and instantly show arrival time predictions for the stop nearest your current location. In essence this is pretty easy with Core Location. Once you have the user’s location, loop through all stops to find the nearest one and request the data for that stop.
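As a sketch (the Stop type and stop list are placeholders), the nearest-stop lookup is just a distance comparison using Core Location:

```swift
import CoreLocation

// Hypothetical stop record loaded from the route configuration data.
struct Stop {
    let tag: String
    let coordinate: CLLocationCoordinate2D
}

// Find the stop closest to the user's current location.
func nearestStop(to userLocation: CLLocation, in stops: [Stop]) -> Stop? {
    stops.min { a, b in
        let locationA = CLLocation(latitude: a.coordinate.latitude, longitude: a.coordinate.longitude)
        let locationB = CLLocation(latitude: b.coordinate.latitude, longitude: b.coordinate.longitude)
        return userLocation.distance(from: locationA) < userLocation.distance(from: locationB)
    }
}
```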

But how do you retain a simple user experience as required by the scope when you’re dealing with potentially lots of varied data?

UI/UX decisions

In Next TTC 1.x, if a stop had more than one route, I prompted the user to select the desired route – or told them that Route X and Route Y service this stop but only Route Y is currently running. While it was the only app that allowed the user to view other routes at the same stop, the prompt was intrusive and quite annoying.

Alert prompt in Next TTC 1.x to select route for stop at user's current location

With version 2 of the API returning multiple destinations (short-turns) for one route there was more data to present.

My goal was to present all the data possible without any interruptions, while also making it extremely easy and quick to switch between the desired data. So in 2.x I added the trusty navigation bar (I also went from not having the status bar visible to making it visible, so the user could see the current time), which allowed me to add a UISegmentedControl to quickly switch between routes for the stop at the user’s current location.

Next TTC 2.0 provides an easy way to switch between routes servicing the same stop in the navigation bar, as well as switching between a route's short-turns

With a checkmark, I also visualize whether a route is running. While the returned data is a bit scrambled, I chose to sort the segments in ascending order. Furthermore, the selection defaults to the first route in the segmented control that is in service, so the user isn’t initially presented with an empty result for a route they most likely wouldn’t care about, since they want to know when the next vehicle is arriving at their current location.
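In code, that default selection boils down to picking the first in-service route after sorting; a sketch reusing the hypothetical Route model from earlier:

```swift
import UIKit

// Sort routes ascending, build the segments, and default the selection
// to the first route that is actually in service.
func configure(_ control: UISegmentedControl, with routes: [Route]) {
    let sorted = routes.sorted { $0.tag.localizedStandardCompare($1.tag) == .orderedAscending }
    control.removeAllSegments()
    for (index, route) in sorted.enumerated() {
        let title = route.isInService ? "✓ \(route.tag)" : route.tag
        control.insertSegment(withTitle: title, at: index, animated: false)
    }
    control.selectedSegmentIndex = sorted.firstIndex { $0.isInService } ?? 0
}
```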

In the screenshot, you can also see another row of segments: All, Long Branch and Humber. These are the short-turns. As we saw earlier, the API would have returned two destinations. I present these two selections above the route button, along with a combined set of predictions for both short-turns sorted in ascending order. In this case, 501 Queen is a long route, going from the east of Toronto to the west. Both destination short-turns are quite far out of downtown, so if you’re getting off at a stop well before them, you wouldn’t care which of the upcoming streetcars you take, because you know you’re getting off before that. At the same time, if you’re going to Long Branch station, selecting Long Branch will provide you with upcoming arrival time predictions for that destination, which could save you lots of time waiting or getting on the wrong streetcar (Long Branch is further out than Humber. Unfortunately the API doesn’t sort the destination short-turns by proximity, so it’s not possible to present them in a sorted order at this time).

Example of a stop serviced by up to 5 different streetcar routes

While the API documentation recommends not showing seconds, I felt it was appropriate to show them, because “1 minute” may be 1 minute and 20 seconds or less than a minute. If you’re running to catch a streetcar, 30 seconds of precision matters a lot. Luckily, lots of users love this feature compared with other apps that serve the same basic use-case. I only show seconds for the upcoming streetcar, while subsequent cars are displayed in minutes. The app auto-updates the predictions based on countdown calculations, so the user is always shown the most accurate prediction at any given time. It feels kind of awesome standing at a stop, looking up from your iPhone running Next TTC, and seeing the streetcar arrive exactly as predicted in the app :)
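A sketch of the formatting described here, assuming the predictions are held as seconds remaining; a one-second timer can tick them down locally between refreshes:

```swift
import Foundation

// Next vehicle: minutes and seconds, e.g. "5:32".
func formatNextArrival(secondsRemaining: Int) -> String {
    String(format: "%d:%02d", secondsRemaining / 60, secondsRemaining % 60)
}

// Subsequent vehicles: whole minutes only, e.g. "20 min".
func formatSubsequentArrival(secondsRemaining: Int) -> String {
    "\(secondsRemaining / 60) min"
}

// A repeating one-second Timer can decrement the stored predictions and
// refresh the labels, so the countdown stays accurate between API requests.
// e.g. Timer.scheduledTimer(withTimeInterval: 1, repeats: true) { _ in /* tick */ }
```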

Core Location

Anyone who’s ever had to work with Core Location probably agrees with me when I say it’s a bitch. While the API is great, it was a challenge to find a balance between getting very accurate location results and still being able to locate the user when location accuracy is very low (no GPS, for example). With Next TTC, I wanted to locate the user quite precisely and quite quickly to find the nearest stop, and not end up with a stop 500 meters away or more.

All devices cache the user’s location, and when you initially ask for the user’s location, that cached fix is exactly what you get back. It might be an extremely accurate location, but it may also be very old and inaccurate. If you’ve ever opened the Maps app and seen the blue dot move across a large area before locating you, or seen the circle around it cover a very large area, that’s exactly what I’m talking about.
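A basic sanity check on a delivered location looks at how old the fix is and how large its accuracy radius is; a minimal sketch with arbitrary thresholds:

```swift
import CoreLocation

// Decide whether a delivered location is good enough to use, or whether
// it's just a stale or inaccurate cached fix. Thresholds are examples only.
func isUsable(_ location: CLLocation) -> Bool {
    let age = -location.timestamp.timeIntervalSinceNow    // seconds since the fix
    let accurateEnough = location.horizontalAccuracy >= 0  // negative means invalid
        && location.horizontalAccuracy <= 100               // metres
    return accurateEnough && age <= 30
}
```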

In dealing with this challenge, I initially did a whole bunch of stuff in code to try to figure out whether the location was usable, whether to continue trying to find a new, better location, and so on. It turned out to be quite difficult, so in Next TTC 2.0 I went back to basics, but made it possible for the user to quickly re-locate themselves and request new arrival time predictions for their updated location. In Next TTC 1.x I had a GPS arrow button, but it turns out that even though Apple uses this icon, most users don’t understand it. Since I had worked so much on getting a good location initially (which sometimes took several seconds; way too long), I didn’t really make it possible to re-locate oneself, other than picking another stop manually and then hitting the GPS arrow button again. Poor UX, Rune!

Main screen in Next TTC 1.x using user's current location (GPS arrow)

So as discussed, in Next TTC 2.0 I got rid of lots of code in my Core Location class and implemented a much easier way to get an updated (and more accurate, if available) location when the initial one wasn’t satisfactory: I implemented the pull-to-refresh paradigm, which works wonderfully for my app.

Example of my custom "pull-to-re-locate" implementation of Pull-to-refresh seen in lots of apps

Clutter or not?

Dragging the bottom view down reveals a button to add a stop to favourites. When this button is hidden, a small light lets the user quickly see whether the stop is a favourite (light on) or not (no light), without adding yet another button that takes up screen real estate and wouldn’t get used 90% of the time, if that!

Extra hidden menu bar reveals "Add favourite" button

The great thing about this extra menu bar is that I can add features/buttons in the future without cluttering up the UI more, leaving the user to focus on the most important thing to them: “When’s the next streetcar arriving at this stop?”

I did add a button in the centre of the toolbar, to open/close the hidden menu. Based on beta tester feedback, it wasn’t obvious enough that dragging/pulling down on the picker view would reveal the extra menu bar.

Next TTC 2.0 has the same grey noise background as Next TTC 1.x, as well as light-coloured text. When I released the app back in February it was winter and the sun wasn’t shining very much. As the months passed and spring arrived, I started receiving feedback from users that the UI was too dark and very hard to discern in bright sunlight. Sure enough, I tested it and the only thing I saw on my iPhone screen was my own face reflected in the glass.

Based on this feedback, I added a “Dark on Light UI” setting in the app, which changes the background to a noisy off-white with dark text, making it much easier to read in direct summer sunlight.

While not as pretty as the default setting, the "Dark on Light UI" setting makes it possible for Next TTC users to easily read the content on the screen in bright sunlight

Wrapping up

I hope you’ve enjoyed this bit of insight into the choices that went into the user experience and design of Next TTC 2.0. Here are a few points to wrap up what I think you should take away from it:

  • While you may have a UI in mind early on, UI and UX design is closely related to the data you need to present. While considering your UI, really dig through your data and decide early on what’s important and what parts of the data you want to present to the user, or what takes precedence.
  • Create a scope early on. This will help you on your path to completing the app and provide clear guidelines for designing and coding your app, since you’ll have the data and the user in mind.
  • Revise, revise, revise. Don’t settle on a particular design just because it looks great. If it doesn’t work well in terms of usability or if it doesn’t allow you to present the data or the user to interact with the data in an appropriate manner, scrap it and figure out a better way of making this possible.
  • Don’t be afraid to make big changes. Next TTC 2.0 was a huge update for me. I completely rewrote the core of the app and redesigned 80% of it. I consider it a brand new app (which also made the decision to ship it as a free update painful – thousands of current users get this awesome update for free, having only paid a dollar for the previous version).
  • Listen to tester and user feedback and make appropriate changes to your app. Something might appear important to you, or easy to use from a UX point of view, but if several users tell you it’s not, listen to them.