GLAD vs CNN closed-captions lawsuit: finding a win-win for broadcasters and deaf people

On Saturday a Californian court refused to dismiss a suit by the Greater Los Angeles Agency on Deafness (GLAD) against CNN for its refusal to add closed captioning to news video clips on its website (for more details see: CNN sued over lack of closed captioning on website).

I’m in a privileged position to comment on this case: I was the manager behind delivering workflows to caption over 90% of BBC iPlayer’s programmes, and I’ve worked with the BBC’s News site to investigate what they’d need to do to add captions (or subtitles, as they are often called in the UK) to their news video clips.

Lack of captions is the number one accessibility concern of most Deaf and hard of hearing web users – without captions online video is pretty much useless for them. And the amount of video on the web is growing exponentially – Gartner reckon that over 25% of the content that workers see in a day will be dominated by pictures, video or audio.

So the user-needs behind this lawsuit are substantial, and the issue is only going to become more and more important (and contentious) over time.

In that context, how do we understand CNN’s position on online captions, and what impact might the lawsuit have on broadcasters globally?

Context: is a culture of accessibility litigation emerging?

This lawsuit comes after last week’s lawsuit in the UK by RNIB against bmi-baby, on which I gave my expert thoughts regarding how BS 8878 can help prevent other organisations getting sued.

On both sides of the Atlantic, disabled groups are mobilising to use available legislation to challenge organisations that refuse to make websites which meet their accessibility needs.

So organisations that own websites would do well to understand how to balance the needs of their disabled users with their own needs to protect their brand values, unique selling points and profitability.

Online captioning 101: what’s needed for captioned online video

To deliver captioned video via the Internet an organisation will require four things:

  1. A media player that can play captioned video;
  2. A way of creating that captioned video;
  3. A way of letting users know which of their videos are captioned and which are not; and
  4. A caption production workflow – a process that enables the organisation to ingest video, get captions created as quickly as it needs them, and publish the captions with the video, reliably and efficiently.

The easier requirements

Choosing a media player that allows you to display captions on online video is comparatively easy. Nomensa recently released the source code of their accessible media player to the public, and other players are also available.
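
To give a flavour of how little the player side now demands, here is a minimal sketch in TypeScript (the clip ID and caption file name are made up for illustration) of attaching a caption track to an HTML5 video using the standard track element that most modern players build on:

```typescript
// Minimal sketch: attach a closed-caption track to an HTML5 video element.
// '#news-clip' and '/captions/news-clip.en.vtt' are hypothetical names.
function attachCaptions(video: HTMLVideoElement, captionUrl: string): void {
  const track = document.createElement('track');
  track.kind = 'captions';   // tells the player this track carries captions
  track.src = captionUrl;    // the caption file is delivered separately from the video
  track.srclang = 'en';
  track.label = 'English captions';
  track.default = true;      // show captions unless the viewer turns them off
  video.appendChild(track);
}

const video = document.querySelector<HTMLVideoElement>('#news-clip');
if (video) {
  attachCaptions(video, '/captions/news-clip.en.vtt');
}
```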

Getting captions created for online video is slightly more challenging. The BBC Online Subtitling Editorial Guidelines advise on how to create captions that can be easily seen and read. EBU-STL and timed-text standards exist for encoding online captions (and AMI’s Robert Pearson is presenting at CSUN 2012 on proposed standards for descriptive video). And commercial tools are available to create closed-captions for pre-recorded and live video. The main challenge posed to organisations by this requirement is cost, as the creation of quality captions still requires human effort, and so doesn’t scale easily or cheaply.
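
To make that cost discussion concrete, this is roughly what a caption file contains. The sketch below (TypeScript, with invented cue text) turns a list of timed cues into WebVTT, the timed-text format that HTML5’s track element expects:

```typescript
// Rough sketch of caption encoding: serialise timed cues into a WebVTT file.
// The cue contents below are invented purely for illustration.
interface CaptionCue {
  start: number; // seconds from the start of the clip
  end: number;
  text: string;
}

// Format seconds as a WebVTT timestamp, e.g. 75.5 -> "00:01:15.500"
function toTimestamp(seconds: number): string {
  const pad = (n: number) => String(n).padStart(2, '0');
  const h = Math.floor(seconds / 3600);
  const m = Math.floor((seconds % 3600) / 60);
  const s = (seconds % 60).toFixed(3).padStart(6, '0');
  return `${pad(h)}:${pad(m)}:${s}`;
}

function toWebVTT(cues: CaptionCue[]): string {
  const body = cues
    .map(c => `${toTimestamp(c.start)} --> ${toTimestamp(c.end)}\n${c.text}`)
    .join('\n\n');
  return `WEBVTT\n\n${body}\n`;
}

console.log(toWebVTT([
  { start: 0, end: 2.5, text: 'Good evening, and welcome to the news.' },
  { start: 2.5, end: 6.0, text: 'Our top story tonight...' },
]));
```

The hard, human part – getting the cue text and timings right in the first place – is exactly the cost that doesn’t scale.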

The requirement to include visual cues letting users know which videos are captioned and which are not often gets overlooked, but it is essential if deaf and hard of hearing people are to benefit from sites that include some captioned video. However, it isn’t rocket science for sites to get this right.
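
As a rough illustration of how simple that cue can be, here is a TypeScript sketch (the badge class name is hypothetical) that flags any clip on a page whose player already carries a caption or subtitle track:

```typescript
// Rough sketch: add a visible 'CC' badge next to any video that has a caption track,
// so users can tell which clips are captioned before pressing play.
// The 'cc-badge' class name is hypothetical.
function markCaptionedClips(): void {
  document.querySelectorAll<HTMLVideoElement>('video').forEach(video => {
    const hasCaptions = Array.from(video.textTracks)
      .some(track => track.kind === 'captions' || track.kind === 'subtitles');
    if (hasCaptions) {
      const badge = document.createElement('span');
      badge.className = 'cc-badge';
      badge.textContent = 'CC';
      badge.setAttribute('aria-label', 'Captions available');
      video.insertAdjacentElement('afterend', badge);
    }
  });
}
```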

The crucial importance of the caption production workflow and its speed

Creating an efficient and reliable video publishing workflow which includes captions is not easy.

And it’s this requirement that gives some insight into why CNN’s lawyers are defending their position on the lawsuit by talking about “free-speech rights” and “violation of their editorial practices”.

To understand the heat behind this language, you need to understand the essential difference between long-form video and news video clips (which, thankfully, the FCC do – their IP video captioning rules under the 21st Century Communications and Video Accessibility Act, the CVAA, already make this distinction).

Long-form video publishing is not particularly time sensitive, even for TV catch-up services like BBC iPlayer. If a programme is online an hour or so after broadcast, that’s generally acceptable.

But for most news operations, the most expensive commodity is time. If they wait to get their video captioned before publishing it online, and another broadcaster gets their video online more quickly because they don’t wait for captions to be created, they will have lost one of their unique selling points – being first to report the news.

After all, news is only really news when it’s new – every second counts.

So caption production workflows need to be able to produce captions as quickly as the video encoders can get the video ready for online publication. This requires super-fast, high-quality, automated, speaker-independent caption generation. And this is still a long way from being available.

This is why CNN are arguing that it would be unfair for them to have to caption their clips unless the same rules are applied to all of their competitors.

I’d agree with them – regulation would need to apply to all broadcasters or none, to avoid giving one broadcaster an unfair advantage over the others. And, bearing in mind news is a global commodity these days, I’m sure CNN would say that this regulation shouldn’t just be for all US broadcasters but for all global broadcasters, or else the BBC or others could steal their thunder too.

Even then their scoop could be trumped by a ‘citizen journalist’ video blog that doesn’t care about disability law or waiting for captions to be generated.

So, welcome to complete stalemate – we need one global law for everyone, or none for anyone.

This is why none of the broadcasters, including BBC News, have rolled out captions for news clips in any meaningful way yet.

Even if the CNN case ‘sets the precedent for the whole industry’ as Laurence Paradis of Disability Rights Advocates thinks, it’s unlikely that this precedent will give deaf people what they want.

So can the stalemate be broken?

Well, yes.

But people who want captions might just have to concede the war to win the battle.

They are unlikely to ever get a law requiring broadcasters to create captions before they publish a news clip.

But they are on much more reasonable ground asking broadcasters to subtitle news clips within, say, 24 hours of them being published.

Yes, I know that isn’t equal treatment for deaf and non-deaf people. But it is something that I believe can be argued for in court, as it’s expensive but achievable.

It’s costly because it requires broadcasters’ video publishing workflows to be reviewed to include the production and publishing of captions. On top of the cost of creating the captions for each clip, the cost of updating workflows in large broadcasters, which are delivering video to both broadcast TV and online, is not trivial by any means.

But it is doable, because the enabler here is closed-captions. For those not in the know, open-captions are ‘burnt into’ the video and can’t be turned off by the viewer, whereas closed-captions can be turned on and off because they are delivered separately from the video and synchronised with it by the media player.

It’s this separate delivery that allows closed-captions to be delivered using completely separate workflows from those used to publish the video.

So this allows clips to be published just as immediately as they are now, without having to wait for captions, which can be added minutes or hours later.
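
To sketch what that decoupling could look like in practice – all function names and behaviour here are hypothetical placeholders, not any broadcaster’s actual publishing API – the clip goes live immediately, and the caption file is attached whenever it arrives:

```typescript
// Hypothetical sketch of a decoupled closed-caption workflow.
// None of these functions represent a real broadcaster's API.

interface Clip {
  id: string;
  videoUrl: string;
  captionUrl?: string; // absent until captions are ready
}

// Placeholder: in reality this would push the clip page/feed entry live.
async function publishClip(clip: Clip): Promise<void> {
  console.log(`Published ${clip.id} at ${clip.videoUrl}`);
}

// Placeholder: in reality this would send the audio to a captioning team or service
// and resolve, minutes or hours later, with the URL of the finished caption file.
async function requestCaptions(clipId: string): Promise<string> {
  return `/captions/${clipId}.vtt`;
}

// Placeholder: update the already-published clip so the player picks up the new track.
async function updateClip(clipId: string, changes: Partial<Clip>): Promise<void> {
  console.log(`Updated ${clipId} with`, changes);
}

async function publishNewsClip(clip: Clip): Promise<void> {
  // 1. Publish straight away – the scoop isn't held back waiting for captions.
  await publishClip(clip);

  // 2. Kick off caption production in a separate workflow; attach the result when ready.
  requestCaptions(clip.id)
    .then(captionUrl => updateClip(clip.id, { captionUrl }))
    .catch(err => console.error(`Caption production failed for ${clip.id}:`, err));
}

publishNewsClip({ id: 'clip-123', videoUrl: '/video/clip-123.mp4' });
```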

In this case, unlike for most accessibility issues, broadcasters can reasonably easily retrofit closed-captions by adding new caption workflows to their existing video publishing workflows, without having to do much alteration to those existing precious workflows.

How broadcasters and deaf people can achieve a win-win

I believe this – publishing the captions after the publication of the video – is the most likely outcome of the lawsuit, unless the GLAD complainants insist on trying to challenge the stalemate. If they do, I think CNN will, and should, win.

If level-headedness prevails, and the two sides can come to some accommodation, is there a way for them both to come out of the suit having won?

I think there is, because there are benefits to enriching news clips with captions that go beyond helping deaf and hard of hearing people.

Many people without hearing difficulties also use captions, especially the many office workers whom web analytics and contextual research have identified browsing news sites at their desks in their lunch hours. Given the prevalence of open-plan offices, many of those who don’t have headphones with them will either turn on captions or avoid online video.

The other benefit is findability and SEO. Clips enriched with captions (and/or transcripts based on these captions) are clips whose content can be indexed by search engines in the same way as text on a webpage (CNET reported a 30% increase in traffic after providing transcripts for videos).
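
As a sketch of how cheaply that benefit can be captured once a caption file exists – assuming you already have the timed cues from the captioning step – the cues can simply be flattened into a plain-text transcript and placed on the clip’s page where search engines can index it:

```typescript
// Rough sketch: flatten timed caption cues into a plain-text transcript for the clip's page,
// making the spoken content indexable like any other on-page text.
interface CaptionCue {
  start: number;
  end: number;
  text: string;
}

function cuesToTranscript(cues: CaptionCue[]): string {
  return cues.map(cue => cue.text.trim()).join(' ');
}

// Invented example for illustration only.
console.log(cuesToTranscript([
  { start: 0, end: 2.5, text: 'Good evening, and welcome to the news.' },
  { start: 2.5, end: 6.0, text: 'Our top story tonight...' },
]));
```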

How should CNN and other broadcasters react to this case?

My recommendation is that all broadcasters place a higher priority on investigating how to put in place the right workflows to enrich their news clips with captions.

They may be required to do so by legislation at some point.

In the meantime, being first to enrich their video with captions may give them a new unique selling point: that their clips achieve higher Google rankings and so reach a wider audience.

I’d be happy to help any broadcasters or online video providers investigate how captions could be added to their business-as-usual video production processes, based on our existing experience at Hassell Inclusion. Please contact us if we can be of any help.

Want more?

If this blog has been useful, you might like to sign up for the Hassell Inclusion newsletter to get more insights like this in your email every other week.

Update June 2012

In a related case, last week a U.S. federal judge allowed a lawsuit seeking to require Netflix to include closed captioning on all its Watch Instantly content to move forward, denying Netflix’s request to dismiss the case and giving a clear ruling on the application of the ADA to the web. Find out more on how the case has already encouraged Netflix to include more closed captioning, and what it might mean for video-on-demand services in the UK.

Comments

Luke McGrath says

Great article, covering what seems to be a growing issue in litigation. The key point, as you say, is managing expectations. News providers need their edge to attract visitors – they need to be as quick as possible. CNN should work out how quickly they can provide QUALITY captions and offer that as a solution to GLAD. Even though they will win the case, they can come out with a better service, an increased demographic, and a moral advantage over their competitors.

If only it was that easy to convince CEOs.

James says

We looked at these guys to help do the transcription element of captioning http://samasource.org
The minimum content per month was too small though so we’ve gone to one of their providers directly. We’ve not got any results to report on yet though.

Jonathan Hassell says

Thanks for this, James.

Organisations like samasource are an interesting development in potentially lowering the cost of producing captions.

It will be interesting to see whether they might be an appropriate partner to integrate into ongoing captioning workflows, or whether they are better used for one-off caption production.

I’m sure time will tell.

Dawn says

This article raises some excellent points.

In a similar way, on my blog I argued that the live subtitling method of respeaking is not suitable for music programmes, but that often there is little choice or no alternative because the programme is a live event or because it is recorded too close to the transmission date for pre-prepared subtitles.

I work in TV broadcast and so get to see the fast turnarounds and the workflow of receiving a TV programme and getting it to TX. Adding a subtitling workflow prior to TX isn’t always viable. As a consumer of subtitles I hate that I am writing that sentence, trust me, but I realise it is the reality. So instead I argued: is it too much to ask that these programmes’ subtitles are corrected/modified for repeat viewings and/or their internet catch-up service? That is still a better service than is currently received, and a step in the right direction.

If anyone wants to read the post I am referring to it is here: http://iheartsubtitles.wordpress.com/2010/09/27/its-all-gone-pete-tong-wrong/

Jonathan Hassell says

Hi Dawn,

Many thanks for adding your voice and experience to my blog. It’s good to have another voice from the industry who can put over the complexity of subtitling workflows.

And your experience certainly chimes with mine – adding subtitles prior to TX isn’t always viable.

I agree that the correction of subtitles on repeat viewings would be really useful to improve quality. I remember having a number of conversations with the BBC subtitle provider, RedBee, about this issue. So I’m sure it’s already happening on some repeated programmes on iPlayer. Have you noticed any improvements since your blog in 2010?

Dawn says

For music programmes specifically, not really, but I’ll be honest, I made an assumption that it wouldn’t be any better based on my previous experience[s]. I should re-check. I do know one BBC viewer I follow on Twitter was hugely disappointed to find that some of the Glastonbury music festival repeat broadcasts on TV were still not subtitled in sync.

On other programmes on the BBC I am grateful that the TV repeat of Have I Got News For You a few days later in the schedule often reliably has subtitles in sync, rather than live subtitling with its slight delay. Going back to news channels – BBC News does have some pre-recorded content. I constantly struggle with the show Click, for example – the subtitles are too delayed and it’s a fast-edited programme with lots of voice-over (I can’t lip-read that; I can at least do that with the newsreaders!). And sadly it’s no better on iPlayer as of writing this comment. I do not mean to single out the BBC here at all, by the way – I’m a fan of their programming, so I notice more!

CCAC (Collaborative for Communication Access via Captioning) says

Nice article. Thanks.

We invite you to read about the CCAC today – we’re all volunteers, and we’re discussing your blog as I type :-).

Also please: if you are saying some BBC news online is captioned now, please point us to a good example.

Lauren & Is

Jonathan Hassell says

Hi Lauren and Is,

Thanks for the link to your organisation’s site. There’s lots of good information on there and it sounds like you’re doing great work (although it might be easier to find information on your site if you included a bit more structure on your homepage).

To clarify, BBC News programmes via BBC iPlayer are subtitled, although unfortunately you can’t access these outside the UK.

However, clips on the BBC News site are not yet subtitled.

Larry Goldberg says

Helpful discussion. Vendors now getting into this field with turnkey solutions; any experience with RAMP or 3Play Media?

– Larry

Jonathan Hassell says

Thanks, Larry.

I’ve not had any experience with RAMP or 3Play Media myself yet, but others reading this blog may have…

Martin Sloan says

Jonathan – thanks for posting this. A good insight into some of the technical issues.

I know that you haven’t really talked about the law here, but thought it might be worth adding some comments that perhaps reinforce the comments you make.

Under UK law (and leaving aside specific broadcasting regulations), this probably strays into the territory of the objective justification test.

Whilst there is little case law to date, this strikes me as an area where the website operator might be entitled to rely upon the test to say that delayed captioning within a reasonable period strikes a balance with ensuring that content is made available to the public as soon as possible.

If the effect of the law was that *no one* could view the clip until the clip had been captioned, then the law would have a negative impact on all users of the website. And that’s not what the law is intending to do.

Jonathan Hassell says

Many thanks for giving a UK legal perspective, Martin.

I’d steered clear of this because of the complexities of the law in this case, which already encompass Californian law and US Federal law, and could include national laws applying to broadcasters in other countries if the case considers those non-US broadcasters as being competitors to CNN (which they surely are).

It’s good to hear that my ‘middle way’ chimes with the intention of UK law.

Robert Pearson says

Great post Jonathan and thank you for the mention. We look forward to continuing the discussion on media accessibility at CSUN later this month.

To give some context for others: we are a unique broadcaster, offering 100% captioned and described content on air in Canada. However, we have yet to broadcast any of that content through our website – specifically for the reasons touched upon within this article. There are challenges in finding the correct medium/player to accomplish that while being fully accessible, and of course in broadcasting live or near-to-live content in a fully accessible format. However, we have made good strides in that area with events such as the Royal Wedding and a Canadian reality show.

You speak of the idea of the global guideline and that is certainly relevant. It is something that we’ll be discussing further in the description context.

In reading through your post, I was thinking of the YouTube solution. One can upload a video and have it automatically captioned, albeit not perfectly. However, is it the type of solution that looks to break the stalemate by starting somewhere?

Jonathan Hassell says

Thanks for your comments, Robert. And I’m looking forward to your presentation at CSUN.

Regarding the YouTube solution… I think it’s a step in the right direction. But YouTube have over-sold its capabilities immensely (although the version of their tool which allows you to create captions from a transcript of the video is really, really useful). As it stands, their automated solution is better than nothing. But it would really benefit from a mechanism for hard of hearing people to click on videos they most want captioned, which then sends a request to an army of motivated volunteers to fix the problems in the automated captions.

I’m sure someone’s already doing something like that but for the moment I can’t remember who…
