
Accessibility Three Ways: iOS Implementation Case Studies

Sommer examines accessibility and how to implement it from three angles. First up, some examples of accessible mobile technologies, including some “good” and some hilariously and sadly “bad” case studies. She discusses new accessibility APIs in iOS 8 and 9, and how you can harness them to offer cutting-edge #a11y in your app, as well as how to deal with the key issue of backwards compatibility. Finally, she covers how accessibility code in Swift is a fantastic candidate for FRP (via RAC 4), with examples of why this pairing works particularly well.


My name is Sommer Panage. I am currently a freelance mobile developer. I have worked at Twitter heading up their Mobile Accessibility Team; before that, I was at Apple working to make Macs more accessible: my passion is clearly accessibility.

This talk will start on the outside: 1) what it means to develop for accessibility; then 2) we will go down a level and take a look at some of the new APIs that Apple has introduced as of iOS 8 and iOS 9. Lastly, 3) we will explore how to combine functional reactive programming with accessibility and how they play nicely together.

Talk 1: Accessibility & You (00:49)

According to Apple, accessibility means “using technology to overcome challenges.” Apple already provides a vast amount of accessibility support built into its mobile devices.

  • Visual challenges (e.g. full or partial blindness, or color blindness): iOS devices come with numerous tools. VoiceOver is a screen reader, which allows the user to hear what is on the screen (instead of having to look at it). Users can also dictate text via Siri, or use handwriting recognition that is built into the iOS devices (rather than typing).
  • Partial vision: zoom and font adjustments make the screen easier to read and interact with (devices also offer inverted colors and grayscale). One of my favorite things that Apple integrates with is braille devices; with the device in their pocket, the user can feel everything displayed on the screen. For those with hearing impairments, iOS supports video captioning via AV Foundation, in addition to mono audio (iOS devices also support numerous types of hearing aids).
  • Motor challenges: AssistiveTouch. AssistiveTouch helps users with multi-touch gestures; two or three finger gestures become much easier.
  • Bigger motor challenges: Switch Control (also supported by iOS). This allows users to navigate the entire screen using a single button, or a single switch. That switch can be operated not just by pressing a button, but also via a mouth device, eye blinks, or head turns.
  • Cognitive challenges (autism or children with learning disabilities): Guided Access.


As you can see, there are many different technologies built in. As a developer, it is up to you to integrate with those technologies as best you can. Check out this video of Tommy Edison: it is fascinating to hear his perspective on seeing movies and using technology as a blind person.

Vision: VoiceOver (05:48)

Often, by supporting VoiceOver you get other things for free. Supporting VoiceOver means using the UIAccessibility APIs. You have probably heard of accessibilityLabel (putting labels on things that are on the screen: images, buttons, and buttons made of images). We need to be sure images are given a proper accessibility label. Along those lines, you want to avoid images that are text: if text is rendered as graphics rather than actual text on the screen, VoiceOver is not going to read it. Avoid those images, or be sure you are giving them proper accessibility labels.
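As an illustration, here is a minimal sketch of labeling an image-only button (the shareButton name, image name, and label text are hypothetical):

```swift
import UIKit

// Hypothetical image-only button: without a label, VoiceOver has nothing
// useful to read (or falls back to the asset name).
let shareButton = UIButton(type: .custom)
shareButton.setImage(UIImage(named: "share-icon"), for: .normal)

// Describe what the button does, not what it looks like.
shareButton.accessibilityLabel = "Share"
```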

Let’s take a look at an example: a Tweet, by Jordan Kay. VoiceOver would navigate his picture, each label, and each of the little buttons at the bottom of the Tweet separately. A blind user would have to swipe eight times in order to understand the content of this Tweet. Instead, I have turned this into one element. VoiceOver speaks once for this Tweet: it says the whole Tweet, Jordan’s name, all the information that I feel is pertinent to conveying the Tweet, and it gives a hint at the end (summarizing that this Tweet has actions attached to the experience itself). If the user wants to get to those actions quickly, they do a quick, standard accessibility gesture (a two-fingered double-tap), and they can get to those actions. Those actions will only be associated with this Tweet - there is no risk that they have accidentally touched a different Favorite button attached to a different Tweet. We do much better synthesizing the experience to make it one Tweet, rather than a whole bunch of jumbled information that is associated with Jordan.
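A rough sketch of that grouping, assuming a hypothetical TweetCell and Tweet model (not the actual Twitter code):

```swift
import UIKit

// Hypothetical model for the Tweet being displayed.
struct Tweet {
    let authorName: String
    let username: String
    let text: String
}

final class TweetCell: UITableViewCell {
    func configureAccessibility(with tweet: Tweet) {
        // Treat the whole cell as one element instead of eight separate ones.
        isAccessibilityElement = true
        accessibilityLabel = "\(tweet.authorName), \(tweet.username): \(tweet.text)"
        // One hint replaces navigating each attached button separately.
        accessibilityHint = "Actions are available for this Tweet."
    }
}
```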

Vision: color & fonts (09:12)

Apple has a fantastic Dynamic Type system. When the user goes into their settings and blows the font up as big as they can, your app will still read clearly to them. Tap targets should be no smaller than 44 by 44 points.
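A minimal sketch of opting a label into Dynamic Type so it scales with the user’s preferred text size:

```swift
import UIKit

let bodyLabel = UILabel()
// Use a text style so the font follows the user's size setting.
bodyLabel.font = UIFont.preferredFont(forTextStyle: .body)
// Update live when the setting changes (iOS 10+; on earlier versions,
// respond to UIContentSizeCategory.didChangeNotification instead).
bodyLabel.adjustsFontForContentSizeCategory = true
// Allow wrapping so large sizes are not truncated.
bodyLabel.numberOfLines = 0
```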

Contrast ratio is also incredibly important. It is popular to have light gray text on a white background, but it is not great for users with visual challenges. Use a contrast checker: punch in your RGB values and you will know your contrast ratio, and whether your text and background are suitable.
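For reference, here is a sketch of the calculation those checkers perform, based on the WCAG 2.0 definitions of relative luminance and contrast ratio (the formulas are WCAG’s, not something specific to this talk):

```swift
import Foundation

// Relative luminance of an sRGB color with 0...255 components (WCAG 2.0).
func relativeLuminance(red: Double, green: Double, blue: Double) -> Double {
    func channel(_ c: Double) -> Double {
        let s = c / 255.0
        return s <= 0.03928 ? s / 12.92 : pow((s + 0.055) / 1.055, 2.4)
    }
    return 0.2126 * channel(red) + 0.7152 * channel(green) + 0.0722 * channel(blue)
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05).
func contrastRatio(_ a: Double, _ b: Double) -> Double {
    let lighter = max(a, b), darker = min(a, b)
    return (lighter + 0.05) / (darker + 0.05)
}

// Light gray (#AAAAAA) on white comes out around 2.3:1,
// well below the 4.5:1 WCAG recommends for body text.
let gray = relativeLuminance(red: 170, green: 170, blue: 170)
let white = relativeLuminance(red: 255, green: 255, blue: 255)
print(contrastRatio(gray, white))
```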

Keep color schemes simple, and never indicate meaningful content with color alone. Always convey information via multiple media.

Other challenges: captioning & sound (11:19)

Captioning and sound are key for our users with hearing challenges. All audio and video content should provide optional captioning. It is easy via AV Foundation (the challenge is getting the content). Also, never signal anything with sound alone, and avoid background music and sounds; give users the option to turn the sound off.
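As a sketch of the AV Foundation side (assuming a playerItem you already have, and an asset that actually ships caption tracks), selecting a legible track might look like this:

```swift
import AVFoundation

func enableCaptions(for playerItem: AVPlayerItem) {
    let asset = playerItem.asset
    // The "legible" group contains subtitle and caption tracks, if any.
    guard let legibleGroup = asset.mediaSelectionGroup(
        forMediaCharacteristic: .legible) else { return }

    // Prefer options marked as accessibility captions, when available.
    let accessibilityOptions = AVMediaSelectionGroup.mediaSelectionOptions(
        from: legibleGroup.options,
        withMediaCharacteristics: [.transcribesSpokenDialogForAccessibility])

    if let captions = accessibilityOptions.first ?? legibleGroup.options.first {
        playerItem.select(captions, in: legibleGroup)
    }
}
```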

Other challenges: touch & cognition (12:16)

For touch, avoid complex and hard-to-discover gestures. Again, you want to keep the touch targets big. Importantly, you want to test your interface with Switch Control: make sure the Switch system, which lets users navigate with that one big button, can access everything important on your screen and get to every button and tap target. Again, avoid screen clutter and keep navigation simple.

Examples: Airbnb & Twitterrific 5 (13:17)

Airbnb: They have not done any string replacement; buttons are not labeled. Moreover, there is no content grouping and no dynamic text support.

Twitterrific 5: Grouping makes sense; it reads things that were not even on the screen (so the user did not have to navigate to find them). They did a really good job of keeping the user on task at all times.

Talk 2 - What’s New in iOS Accessibility? (17:05)

The new accessibility documentation since iOS 8 is fantastic.

Simplifying the complexity (17:13)

Up until iOS 7, accessibility was basic: you attached labels and hints to things. Now interfaces are very dynamic and gestural. There is more support for custom behaviors and for accessibility tools beyond VoiceOver (e.g. the Switch system).

Accessibility actions (17:51)

Before iOS 8, when an object had a gesture or custom tap associated with it, I had to make another gesture recognizer for accessibility, or I had to rely on the accessibility system detecting it.

Now, with iOS 8 and later, we have accessibilityCustomActions. I can create action objects that call into my code (the same code called by my gesture or by my hard-to-find item), and I can assign them as actions. VoiceOver will pick up on the actions automatically (as long as I have assigned them to the view), and will read them out to the user.

Example: the Reminders app. “I need to buy a spatula”; if I select “buy a spatula”, I can swipe and get more actions. I can create UIAccessibilityCustomAction objects. The API uses the old target/selector pattern: we assign our target and selector, and the selector is the code I would already have called if the user had tapped the button. I need to wrap it in a callback that returns true, to let the VoiceOver system know that I have handled the action (or returns false if I did not). I set up the actions, give each a name (that name will be spoken by VoiceOver), and I assign them to the element. I would swipe, it would tell me “I need to buy a spatula”, and then VoiceOver would say “select a custom action, then double-tap to activate”. When the user has heard that hint a few times, VoiceOver stops saying it. Instead: “Double-tap to edit details”, actions available, and the user knows they can swipe to get there. It is a great API for integrating those swipes and custom hidden actions into your app.
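A minimal sketch of that setup (the ReminderCell, label text, and action name are hypothetical):

```swift
import UIKit

final class ReminderCell: UITableViewCell {
    func configureAccessibility() {
        isAccessibilityElement = true
        accessibilityLabel = "Buy a spatula"

        // Each custom action has a spoken name and a target/selector.
        let editAction = UIAccessibilityCustomAction(
            name: "Edit details",
            target: self,
            selector: #selector(activateEditDetails)
        )
        accessibilityCustomActions = [editAction]
    }

    // Call the same code a sighted user's tap would trigger,
    // then return true so VoiceOver knows the action was handled.
    @objc private func activateEditDetails() -> Bool {
        // ...present the edit-details screen...
        return true
    }
}
```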

Accessibility containers (20:05)

Prior to iOS 8, accessibility containers were implemented via three callbacks. If you had a dynamic interface, those callbacks could get out of sync, causing crashes. Since iOS 8 you can simply set accessibilityElements as an array: I make an accessibility element for each thing on the screen and assign the array to accessibilityElements (no callbacks, no extra work, more efficient).

Example: the Health app. There is a custom graph on the screen. The graph is not a UIView (it is drawn inside of one, but it is not one itself). How do we provide that information?

The container graph view itself is not an accessibility element. Then I created a container accessibility element that summarizes the graph: it has a label, and I give it a frame. Then I loop through each data point. For each data point I grab its value and its date, and I set an accessibility frame for the data point so that VoiceOver knows where it sits on the screen. Finally, I concatenate the summary element with the data-point elements, and I assign the result to accessibilityElements.
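A sketch of that structure, assuming a hypothetical GraphView and DataPoint type (not the actual Health app code):

```swift
import UIKit

final class GraphView: UIView {
    struct DataPoint {
        let value: Double
        let date: String
        let frame: CGRect  // where the point is drawn, in the view's coordinates
    }
    var dataPoints: [DataPoint] = []

    func rebuildAccessibilityElements() {
        // The view is a container, not an element itself.
        isAccessibilityElement = false

        // One element summarizing the whole graph.
        let summary = UIAccessibilityElement(accessibilityContainer: self)
        summary.accessibilityLabel = "Data over the last week"
        summary.accessibilityFrame = UIAccessibility.convertToScreenCoordinates(bounds, in: self)

        // One element per data point, each with its own on-screen frame.
        let pointElements = dataPoints.map { point -> UIAccessibilityElement in
            let element = UIAccessibilityElement(accessibilityContainer: self)
            element.accessibilityLabel = "\(point.value) on \(point.date)"
            element.accessibilityFrame = UIAccessibility.convertToScreenCoordinates(point.frame, in: self)
            return element
        }

        // No container callbacks: just assign the concatenated array.
        accessibilityElements = [summary] + pointElements
    }
}
```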

Switch control… control (22:30)

Prior to iOS 8, Switch Control simply ran: it was great, but you could not do anything about it as a developer; you had no power. After iOS 8, you get more control. You can decide how the Switch navigates the screen. Also, Switches pick up on your custom actions.

Switch Control demo. Let’s see a demo of this: we will demonstrate two different Switch Control navigation styles, separate and combined. The buttons change my background color (I have added a hidden action on the purple button that resets my background color to white). We will do yellow and hit the button twice (that was combined). Finally, we will find my hidden action on the purple button: reset background, and reset.
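A minimal sketch of the two APIs the demo relies on, assuming a hypothetical view of color buttons:

```swift
import UIKit

final class ColorButtonsView: UIView {
    func configureForSwitchControl(purpleButton: UIButton) {
        // Ask Switch Control to scan this group's items one by one ("separate")
        // rather than treating the whole group as a single stop ("combined").
        accessibilityNavigationStyle = .separate

        // Hidden action: Switch Control (and VoiceOver) will offer it in its menu.
        purpleButton.accessibilityCustomActions = [
            UIAccessibilityCustomAction(name: "Reset background",
                                        target: self,
                                        selector: #selector(resetBackground))
        ]
    }

    @objc private func resetBackground() -> Bool {
        backgroundColor = .white
        return true
    }
}
```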

More customization based on device state (24:06)

More customization based on device state means you can check for grayscale, large text, bold text, or inverted colors, and change your UI accordingly. If a user is using the default settings, you can give them everything you had planned, but you can scale it back if the user is using something different (e.g. different colors and states). There is a whole host of APIs that give you notifications when these things change, as well as let you check the state itself.
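A minimal sketch of checking those settings and reacting when they change (the ThemeManager class and the palette choices are hypothetical):

```swift
import UIKit

final class ThemeManager {
    init() {
        // Observe a couple of the display-setting notifications (others exist,
        // e.g. for bold text and reduced transparency).
        let center = NotificationCenter.default
        center.addObserver(self,
                           selector: #selector(updateForAccessibilitySettings),
                           name: UIAccessibility.grayscaleStatusDidChangeNotification,
                           object: nil)
        center.addObserver(self,
                           selector: #selector(updateForAccessibilitySettings),
                           name: UIAccessibility.invertColorsStatusDidChangeNotification,
                           object: nil)
    }

    @objc func updateForAccessibilitySettings() {
        if UIAccessibility.isGrayscaleEnabled || UIAccessibility.isInvertColorsEnabled {
            // Fall back to a simpler, high-contrast palette.
        } else if UIAccessibility.isBoldTextEnabled {
            // Use heavier font weights.
        } else {
            // Use the full design as planned.
        }
    }
}
```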

(More on recommendations for backwards compatibility are on the slides above.)

Talk 3 - Accessibility + FRP (25:03)

Functional reactive programming (25:26)

FRP is an explicit way to model changes in values over time. It is a way to have a stream of information drive effects in your app.

Why FRP? (26:22)

The common libraries are RxSwift and ReactiveCocoa. Sometimes you have static information (the Tweet text), but other information (e.g. the number of Favorites and Retweets) is dynamic. Let’s imagine the dynamic information is continually being updated by my server. That stream is coming in, that information is changing: I could call an update method. Instead, I can use FRP: because I am modeling this as a stream, the data comes into one spot. From that one spot, I can tell everything to update.
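A rough sketch of that idea in ReactiveSwift syntax (the successor to the RAC 4 used in the talk; method names differ slightly in RAC 4 itself, and favoriteCountLabel is a hypothetical label):

```swift
import ReactiveSwift
import UIKit

// Hypothetical label showing a Tweet's favorite count.
let favoriteCountLabel = UILabel()

// All updates flow through one property...
let favoriteCount = MutableProperty(0)

// ...and everything that cares observes that one stream.
favoriteCount.producer.startWithValues { count in
    favoriteCountLabel.text = "\(count)"
    favoriteCountLabel.accessibilityLabel = "\(count) favorites"
}

// When the server pushes a new value, every observer updates automatically.
favoriteCount.value = 42
```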

Example: accessibility announcer (27:37)

Another problem that I had with accessibility was the announcement system. Let’s say I pull to refresh; accessibility makes an announcement saying “Refreshing content”. The problem with the iOS accessibility announcer (which many of you may have experienced) is that if you initiate too many announcements close together, they override each other, and many of them get dropped. The user will not hear all of your announcements (which can be very frustrating). What is the solution? An announcement queue. I put the announcements in the queue, and they come out one at a time. If they are in there too long, I probably need to drop some (the user will not need to hear them anymore), so I need a timeout. If they fail, I need a retry policy. It is simple; it is a data flow. Announcements go in, they go into a stream, they come out, and one of three things happens: they get announced, they do not get announced (and they retry until they do), or they hit a timeout and fail.
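The UIKit pieces underneath such a queue are just these two (a minimal sketch, not the actual AccessibilityAnnouncer code): posting an announcement, and observing whether it actually finished so you can retry or move on.

```swift
import UIKit

// Post an announcement; if VoiceOver is busy, it may be interrupted or dropped.
UIAccessibility.post(notification: .announcement, argument: "Refreshing content")

// Observe whether the last announcement finished, so a queue can retry or drop it.
NotificationCenter.default.addObserver(
    forName: UIAccessibility.announcementDidFinishNotification,
    object: nil,
    queue: .main
) { note in
    let text = note.userInfo?[UIAccessibility.announcementStringValueUserInfoKey] as? String
    let success = note.userInfo?[UIAccessibility.announcementWasSuccessfulUserInfoKey] as? Bool
    print("Announcement \(text ?? "?") finished, success: \(success ?? false)")
}
```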

This is AccessibilityAnnouncer. I want to show how clean and simple it is. The entire AccessibilityAnnouncer, with comments and copyright, is 90 lines. It is three functions; that is where the announcements go into the pipeline. I have created signal producers (processes for each option that the announcements go through). They go through the announcer; the announcer just tries to announce. Then they are passed off to the notifier, and the notifier checks whether the announcement was successful. If it was, we are done; if it was not, the notifier passes that information on. That information goes to my retry-until-timeout loop. That loop’s job is to retry; if we hit the timeout, it drops the announcement. It is those same three possibilities, now linked up in a chain: we send the announcements in, and they come out the other end, either being announced or being dropped.

Q&A (31:12)

Q: When you were showing the Tweet summary (where you were summarizing what was in a Tweet), one of the choices you made was that there was the profile picture over at the left, and you chose not to put any description text saying there was a profile photo. I have heard arguments made both ways as to whether you should explain everything that is on the screen, or just a summary of those things, and I am curious about that decision. Sommer: It is a controversy for everyone. Even if you go to Apple and ask their engineers, they will give you two different answers. For me the question is, does describing the picture augment the experience or not? If the image had not been his profile picture, if it had been a separate image, I would say yes, you should say there is another image here. But because his profile picture is another way of conveying that it is Jordan, and we already have heard his username and his full name, there is not much more we can provide by saying there is a picture. My choice when I wrote that code was that we probably do not need to say “profile picture” every time. Another question you can ask yourself is, is it redundant? If we are reading Tweets, and most users are going to read 20-30 Tweets really fast, they probably do not want to hear “profile picture” every single time a Tweet is touched. I am trying to pare it down to the interesting information; however, if they go into the Tweet detail view, then I will make the profile picture a separate element, and they can go in and inspect that picture from that angle. So the two-part answer is: is it redundant, and does it provide additional information that would be useful?

Q: Speech to text: it would be useful to do shortcuts where you are speaking at the interface and it interprets. With the Apple Watch, you can actually raise your wrist and it goes directly into the speech-to-text keyboard. Has anyone tried tricks like that as well? Sommer: Google has; with Apple, I believe it is always button-triggered. As far as iPad and iPhone go, there is no physical trigger, except for the Hey Siri functionality. I know that Google was working (I think it was API 20) on a quick accessible dictation system where you could say “Tap this, tap that”. I lost track of where that got to, but it is a really cool idea. I think that as technology improves it is going to go that way. From what I have heard from users, they want the stability of knowing they are always going to get the right thing, and right now that cannot be provided with pure speech to text. But it is cool, and I hope we get there.



About the content

This content has been published here with the express permission of the author.

Sommer Panage

Sommer Panage is currently a mobile software developer at Chorus and a circus artist. She worked previously as the lead for Mobile Accessibility on iOS and Android at Twitter. Before moving into this role, she worked on various iOS projects such as DMs and Anti-spam. Prior to Twitter, Sommer worked on the iOS team at Apple. She earned her BA in Psychology and MS in Computer Science at Stanford University.

