Eight steps to a more inclusive event

[Image: multiple devices on a desk, all running different types of teleconferencing apps]

Making events more accessible makes them easier to consume and a better experience for all your participants.

Part one of a two-part article. Part two focuses on how to make PowerPoint presentations more accessible.

A significant percentage of attendees at inclusive events rely on captions and sign language interpreters to participate equally. There are many things event coordinators can do to make the process and outcomes as close to equal as possible for these participants. These accessibility steps are a curb cut: making the event more inclusive makes it a better experience for all attendees, not just those using assistive technology.

Step 1: Choose the correct type of captioning

I can’t shout it any louder for the people in the back. All events *must* be captioned!

There are three types of captioning:

  1. Automatic captioning. Automatic captioning is free but offers the lowest quality. Most of the time, there is no punctuation, speaker identification, or indication of sounds other than speech. Automatic captions handle cross-talk atrociously: caption users see a single line with co-mingled fragments of what each speaker is saying.
  2. Respeaking. Despite involving a human, respeaking still heavily relies on automatic captioning. Respeakers listen to the audio and repeat everything they hear. They can add clarity by using an unaccented, correctly paced voice and sometimes add speaker identification, punctuation, and sound effects. However, respeaking can add a significant lag between when the words were initially spoken and when the captions appear.
  3. Live captioners. Live captioning is 100% human-based. Live captioners type what they hear on stenography systems, similar to those used by court reporters, to keep up with speech rates.

A few other data points to consider:

  1. Event planners should use the highest quality captioning they can afford.
  2. Events with registration forms should ask participants whether they will be relying on captions, so organizers understand the impact of their captioning choices.
  3. Captions are not (I REPEAT NOT!!!) a substitute for ASL. If someone asks for ASL, every effort should be made to provide them with that form of communication.

Step 2: Send all presentation materials to your live captioners/interpreters at least 24 hours before the event

When you send your presentation materials in advance of the event, the captioner/interpreter will have access to product names, technical terms, and acronyms that may be difficult to process on the fly if they are hearing them for the first time.

Step 3: Send participants’ names and name signs (if any) to live captioners/interpreters in advance

Names, especially long names (like mine), can take forever to caption and sign. This causes the audio that comes after the name to lag, which degrades the experience for the person relying on the captions/sign language. You don’t even need to meet with the captioners/interpreters; just record your name signs on a video as I did above, upload it to Vimeo with whatever permissions you think are appropriate, and send them the link.

Step 4: Ask your speakers to practice speaking more slowly

According to the National Center for Voice and Speech, the average conversational rate for English speakers in the United States is about 150 words per minute. Many languages are spoken even faster. For example, Japanese is one of the fastest-spoken languages in the world, with an average rate of 340 words per minute. When people are accustomed to speaking their native language quickly, that frequently shows up in their English speaking rate. And yet I only type 100 words per minute, while being one of the fastest typists I know. That is why it is critical to make these minor presenter modifications so captioners and interpreters can keep up.
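To put rough, illustrative numbers on that gap: speech at 150 words per minute against typing at 100 words per minute means a captioner falls about 50 words further behind every minute, so a ten-minute talk can leave roughly 500 words, more than three minutes of speech, still waiting to reach the caption user.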

On top of an individual’s default speech rate, many people get nervous when speaking in front of groups, and their speech rate increases.

  • The faster the speaking rate, the more likely it is for words to run together.
  • The more words are run together, the less likely it is for automatic captioning engines to caption correctly.
  • The faster the speaking rate, the less likely it is that either captioners or sign language interpreters will be able to keep up, even if the words are not run together.

Step 5: Ask speakers to build pauses into presentations

Think of presentation pauses like white space on a slide or in a document. White space gives your audience the time and space to absorb one set of material before you throw yet more new material at them.

  • Pauses in presentations at the end of thoughts, slides, or sections allow captioners and interpreters to catch up.
  • Presentation pauses also allow participants to synthesize what they’ve heard into critical takeaways.
  • If your presentation style allows for questions during the presentation rather than addressing all questions at the end, pauses also provide a natural breakpoint where people can ask questions without feeling like they are interrupting.

Step 6: Caption any videos that will be played, before the event

At only $1 per minute, captioning pre-recorded videos will always be the biggest bang for your inclusion buck. And if that is too expensive, you can do it yourself for free using Descript.
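As a rough benchmark at that $1-per-minute rate, captioning a three-minute demo video costs about $3 and a twenty-minute keynote video about $20.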

Video speech rate is almost always faster than presentation speech rate, making it hard for sign language interpreters and captioners to keep up. In addition, videos frequently have background music, making it harder for the captioners/interpreters to hear the speech they are supposed to be relaying.

  • Help your interpreters by giving them access to the video before the presentation.
  • Help your captioners by pre-captioning the video, giving them a short break during the presentation.
  • Help your users by pre-captioning the video: captions that are reviewed in advance can reach 100% accuracy, which real-time or automatic captioning cannot.

Step 7: Formalize the approach to land acknowledgments and visual descriptions

I wrote an entire article on land acknowledgments and visual descriptions a couple of weeks ago. Remember, everything you say needs to be captioned or signed. Your captioners and interpreters won’t know how to sign or spell Mi’kmaq (which is pronounced meeg-maw) unless you let them know in advance that it is a word you will be using.

Step 8: Practice spotlighting / pinning

Captioners are always behind the scenes, but sign language interpreters need to be spotlighted simultaneously with the person they are signing for. Likewise, if someone is voicing for a d/Deaf presenter, the person voicing needs to be spotlighted concurrently with the d/Deaf presenter.