At Cofacts in Taiwan, volunteer editors and a time-saving chatbot race to combat falsehoods on LINE.


Volunteer editors gather at a recent Cofacts meetup in Taiwan.


There are already numerous fact-checking initiatives across Asia that comb through news articles and public social media posts, but instant messaging poses a unique challenge in the fight against misinformation.

It’s difficult to deal with “fake news” circulating in private chat groups when most of us cannot see it.

And it’s a problem that Taiwan — where LINE is hugely popular — is currently grappling with. Messages on the chat app are convenient to forward, but not always easy to rebut.

This is especially so in family chat groups, where young Taiwanese often find it difficult or awkward to confront older relatives who forward false or heavily skewed messages.

Enter Cofacts, an open, collaborative platform by Taiwanese civic tech community g0v (pronounced “gov-zero”).

The basic system is simple. Users who receive dubious messages on LINE can forward them to the Cofacts chatbot. The messages are then added to a database, where they’re fact-checked by volunteer editors before the response is sent back to the LINE user.

If editors disagree in their fact-checks, multiple replies can be sent to the user, who can then evaluate the information independently.
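The article doesn’t reproduce Cofacts’ source code, but the flow is simple enough to sketch. Below is a minimal illustration in Python, assuming the official line-bot-sdk; the in-memory RUMOUR_DB, the canned reply text, and the handler name are hypothetical stand-ins for Cofacts’ real database and logic, and the web server that feeds LINE’s webhook events into the handler is omitted.

```python
# A minimal sketch of a Cofacts-style flow, not Cofacts' actual code.
# Assumes the official line-bot-sdk for Python; RUMOUR_DB is a
# hypothetical in-memory stand-in for the real database.
from linebot import LineBotApi, WebhookHandler
from linebot.models import MessageEvent, TextMessage, TextSendMessage

line_bot_api = LineBotApi("YOUR_CHANNEL_ACCESS_TOKEN")
handler = WebhookHandler("YOUR_CHANNEL_SECRET")

# Hypothetical store: forwarded message text -> editor-written replies.
RUMOUR_DB: dict[str, list[str]] = {}

@handler.add(MessageEvent, message=TextMessage)
def handle_forwarded_message(event):
    text = event.message.text
    replies = RUMOUR_DB.get(text, [])
    if replies:
        # Editors may disagree with one another; send every reply (LINE
        # allows up to five messages per reply token) so the user can
        # weigh them independently.
        line_bot_api.reply_message(
            event.reply_token,
            [TextSendMessage(text=r) for r in replies[:5]],
        )
    else:
        # A message we haven't seen before: file it for volunteer editors.
        RUMOUR_DB[text] = []
        line_bot_api.reply_message(
            event.reply_token,
            TextSendMessage(
                text="This message is new to us; "
                     "volunteer editors will look into it."
            ),
        )
```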

Cofacts’ data shows that the falsehoods debunked on the platform range from fake promotional messages and medical misinformation to false claims about government policies.

Time-saving chatbot

Johnson Liang, a creator of the Cofacts LINE bot, says he was convinced that a chatbot was the right approach after getting a peek at the admin panel of MyGoPen, another fact-checking account on LINE.

MyGoPen was being flooded with messages, but most of them were duplicates — identical or very similar messages from different users. The administrators were overwhelmed by the need to respond to each person individually.

MyGoPen’s struggles gave Liang the idea for a chatbot linked to a database. Once a message is sent to Cofacts, it is stored in the database; the next time someone forwards the same message, the chatbot retrieves the existing fact-checks and sends them right back.
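An exact-match lookup alone would miss the “very similar” duplicates MyGoPen saw, since forwarded messages often pick up small edits along the way. Here is a rough sketch of fuzzy matching using only Python’s standard-library difflib; the 0.9 threshold is an arbitrary illustration, and Cofacts’ actual matching strategy isn’t described in this article.

```python
# A rough sketch of fuzzy duplicate detection; the threshold is an
# arbitrary illustration, not Cofacts' real matching logic.
from difflib import SequenceMatcher

def normalise(text: str) -> str:
    # Collapse whitespace so lightly reformatted forwards still match.
    return "".join(text.split())

def find_similar(incoming: str, stored: list[str],
                 threshold: float = 0.9) -> str | None:
    """Return the stored message most similar to the incoming one,
    provided the best match clears the threshold."""
    best_match, best_score = None, 0.0
    for candidate in stored:
        score = SequenceMatcher(
            None, normalise(incoming), normalise(candidate)
        ).ratio()
        if score > best_score:
            best_match, best_score = candidate, score
    return best_match if best_score >= threshold else None
```

A linear scan like this wouldn’t scale to tens of thousands of stored messages; in practice a text search index would take its place, but the thresholding idea is the same.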

Over the past three months, Cofacts has received more than 46,000 messages — and the chatbot was able to answer 35,180 of them (roughly three in four) automatically.

As with the rest of g0v, openness and decentralisation are key to Cofacts’ operations. Its datasets, analytics and meeting notes are all open access, so anyone who can read Traditional Chinese can look at the team’s discussions and design decisions.

It’s a practice that extends even to the way they deal with media queries: instead of one-on-one interviews or emails, the Cofacts team requested that this interview be conducted via shared documents accessible to the entire team, allowing people with different roles and perspectives to contribute (or at least be aware of what others were saying).

This model was initially born of practical concerns. “Distinguishing users or adding mechanisms like screening the editors takes time,” says Liang. “It takes time to design the rules of background checks. It also takes time to implement them to the system. It takes more time to justify (or modify) the screening rules if there are people challenging these rules. Since we don’t want to waste time on these rules, we just make it available to all people—if anyone is not happy with any reply on the platform, just directly contribute.”

Gatekeepers of rumour

But Cofacts acknowledges that this system isn’t foolproof: while it’s built on openness and trust, there’s always the possibility of the platform being hijacked by rogue editors.

And Liang freely admits that the team might not figure out the best way forward until the platform reaches a critical mass, when the likelihood of being hacked or hijacked becomes more tangible.

Still, he keeps an open mind, preferring to think of tools that communities can use, rather than measures to pre-empt and block threats that may or may not yet exist.

“A more constructive perspective on the risk of the platform being flooded with rogue editors is to rephrase it as a question like ‘How can we improve the quality of editors’ replies?’” he says. “This opens up much more room for discussion and possible solutions.”

He cites other platforms, such as Wikipedia and Quora, as possible models to learn from.

Despite the support of its growing database, fighting hoaxes and misinformation is still an uphill battle for Cofacts.

The team receives about 250 new messages a week but has fewer than a dozen active editors combing through them, meaning that users don’t always receive quick responses to the messages they want fact-checked. The team also has no way of knowing what rumours are spreading, or how, among LINE users who never contact Cofacts.

But Liang points out the “bright side”: every user who does get an answer to their query can now become a rumour-quasher themselves.

“Although the rumours still spread after we reply, at least we stopped these many users (or chatrooms) from further forwarding such messages every day. It empowers [these] users to be gatekeepers on their own.”


Kirsten Han

Kirsten is a freelance journalist and curator of We, The Citizens, a newsletter on Singapore, politics, and social justice. Subscribe here.
