How does the BBC use AI?

As a publicly funded national media organisation, the British Broadcasting Corporation (BBC) has a greater responsibility than most to ensure that its use of artificial intelligence is safe and that it adheres to ethical, moral and accurate reporting standards. This post was originally published in the Bright Sites newsletter and is republished with permission.

Laura Ellis is Head of Technology Forecasting at the BBC. She has worked in news teams across radio, TV and the web, and built the BBC’s first end-to-end digital newsroom. In her current role, she focuses on ensuring the BBC is best placed to take advantage of new technologies.

How does the BBC use AI?

We’ve been using AI for a number of tasks for years – translation, transcription and object recognition through computer vision, for example – the kind of things you’d expect from a broadcaster. Take Springwatch, part of our ‘Watches’ series on nature: an AI model trained on the type of wildlife it would expect to see spots birds and animals in the footage, saving producers from spending hours sifting through it to figure out where the duck comes into the frame. It’s really clever because it can also distinguish between species, and even between individual animals.
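
The workflow described above – scanning footage with a classifier and logging when each animal first appears – can be sketched in a few lines. This is a hypothetical illustration, not the BBC's actual system: `classify_frame` is a stand-in for a real computer-vision model, and the frame format is invented for the example.

```python
def classify_frame(frame):
    """Hypothetical classifier: returns the species seen in a frame, or None.
    A real model would run inference on the frame's pixels."""
    return frame.get("species")

def first_appearances(frames, fps=25):
    """Return {species: timestamp_in_seconds} for each species' first sighting,
    so a producer can jump straight to the relevant moment."""
    seen = {}
    for index, frame in enumerate(frames):
        species = classify_frame(frame)
        if species and species not in seen:
            seen[species] = index / fps
    return seen

# Simulated footage: 50 empty frames, then a duck enters at frame 50 (2.0 s at 25 fps).
footage = [{"species": None}] * 50 + [{"species": "duck"}] * 10
print(first_appearances(footage))  # {'duck': 2.0}
```

The timestamps, rather than the raw classifications, are the useful output here: they turn hours of footage into a short index a human can review.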

And then when generative AI came along, we naturally started to explore how it could change the landscape.

How has generative AI changed the landscape?

The first thing we realised was that there were a whole host of additional risks. I think we need to solve three main problems before we can move forward with generative AI.

First of all, it’s become much more democratic and suddenly anyone can use it. You can talk to machines in your own language. That means there’s a risk of people putting company data into a system and if you don’t have the right security in place, that data can go anywhere. You don’t know where it’s going to end up, so you have to be very careful.

The second concern was that we don’t know how to make these models work without ‘hallucinating’, or making up things that aren’t true. They’re designed to be confident and to give you an answer. There’s no reasoning in there that says, ‘Oh, I don’t know that. I’ll say I don’t know.’ Things can just appear that we know can be harmful, damaging and unhelpful, so that’s a real problem.

And then there is the third issue, which affects the entire media industry and beyond: to what extent do we accept – whether legally or ethically – that these models have been trained on huge amounts of data, much of it copyrighted material?

How difficult is it to enforce an AI policy in an organisation as large as the BBC?

We’re used to having editorial policies at the BBC that everyone has to follow, so we’ve always known that there are certain things that we’ve agreed editorially that we will and won’t do.

Now we’ve entered a new era where we’re looking at policies that have a lot more technology in them or reference technology a lot more. So in some ways it’s been quite interesting to supplement and update those. I think it’s also really important to supplement those with human training. For example, I did a course this morning as part of our Co-Pilot series.

I also think it’s really important that people can ask questions like, “Wait a minute, how does this work? How can I access it? Are you sure we made the right decision here?”

We also share with the audience what we’ve done. It’s very important to tell our audience that we used AI in creating this particular piece of content and to convey that information in a way that’s not intrusive but educational and useful. That’s a really big challenge.

Do you collaborate with other media organisations on AI?

There used to be a lot of rivalry between organisations. You probably didn’t share much about what you were doing, but generative AI has come on so fast and so strongly that we wanted to share what we were learning, and I think that’s been a really positive result. There are some people doing some really impressive things.

It is similar to the way we have worked together to combat disinformation. That problem, too, is too big to be treated as a competitive differentiator.

What advice would you give to publishers who are just entering the world of AI?

I think the first thing we need to do is listen. It’s important to have conversations with colleagues and keep the human element in journalism. The people who use this need to be able to ask questions. They need to share their concerns. We have something we call the Blue Room where we have a lot of sessions on this topic. So people come in and tell us what they think. It’s a great feedback loop.

If generative AI can help, then it should be able to improve and extend, rather than take away. It should open up new possibilities for us. For me, the miracle of journalism is when a human looks at a situation and tells another human or group of humans what they discovered and how they responded.

That’s something very, very valuable, and if we lose that, we’ve basically lost the industry. We have to keep that in mind throughout all of this.

Is there a world where journalism, publishing and AI live together harmoniously?

There should be. One way AI could add value is its ability to change modes – converting text to speech and speech to text.

You may not have the spare capacity to do that yourself, and it adds value to content we’ve already paid for and already accumulated. So I think letting AI do these kinds of tasks adds value – something new in the offering we’re giving our audience, and that feels great, doesn’t it?

From this perspective, there are many positive aspects. The more difficult part is the jobs that are perceived as boring. They could be repetitive – translation, transcription. What do we lose if we leave all of this to AI?

We produce a lot of material that we would never translate or transcribe because there are not enough human resources to do it. But we should not lose the beauty and subtlety of human translation for things that can be particularly sensitive, beautiful or valuable.

And we should make sure that we don’t erase a very, very subtle and sophisticated human ability, which is understanding another language. And when it comes to audiences, people say, “Oh, I know we can get a story, write it once, and then have an AI write five versions” – for an audience that has English as a second language, or for an audience that doesn’t really like words and prefers bullet points and images, or whatever. You can certainly do that, and it could be useful, but I would ask two questions. First, do you then lose touch with the audience you’re no longer writing for?

If we look at our wonderful journalists at the BBC, they do things like Newsround and Newsbeat. They are writing for a particular audience. They need to know what the language is, what the idioms are and how that audience will respond to stories being told to them in a different way. Language changes and becomes outdated very quickly.

Second, if you don’t understand that audience, is that a problem for you as a company in the long term? I don’t think we should use AI as a replacement for communicating with people we could otherwise communicate better with.

What insights from your journalistic background do you often draw on in connection with AI?

If you’ve worked in a newsroom like this, you have an absolute passion for facts, truth and accuracy. I think we as a society are losing touch with facts and truth. So when I go back to journalism every now and then, I’m reminded that facts are not only important and not only the absolute currency of journalism, but also a basic human right. Journalism shouldn’t have to fight AI over that; AI should be won over to make journalism better.

——————————————————————-

Republished with kind permission of Bright Sites, developer of FLOW, a software-as-a-service CMS for digital publishing that combines a data-driven approach with machine learning, AI, e-commerce, subscriptions and translation. FLOW provides multi-site newsroom workflows, multilingual content creation and AI-driven personalisation for a range of global and local publishers. Clients include the Irish Independent, The Independent and the London Evening Standard.
