However information reaches you, whether from a physical newspaper or magazine, an online platform, or over the airwaves, there’s always the risk that what you hear or read may not be true. Fake news and misinformation are as old as human society itself, but the sheer volume and uncontrolled nature of the information we receive from the online, connected world leave us especially vulnerable to unwittingly consuming distorted or manipulated information.
Organizations and companies have many tactics to amplify content and get people to engage. But the fact is that grabbing attention and gaining followers or customers can work just as well if that content is either a heavily biased version of the truth or fully fledged fake news.
Fake news is created for various reasons, such as mischief, to push a particular ideology, or for financial rewards. It’s usually presented in an attention-grabbing way, acting as clickbait. This allows the creators of the misinformation to monetize it on platforms like YouTube.
Content an organization or group wants people to see can also be attached to popular news items that are already online. This is known as newsjacking and is perfectly legitimate, unless, of course, the newsjacking is the work of those spreading fake news.
Having round-the-clock access to rolling news, social media posts, blogs, and video content can be a double-edged sword. We’re better informed and updated about new products, world events, and trends than ever before. But we’re also more lied to, often without even being aware of it. This online flow of unfiltered information makes us easy targets for misinformation and out-and-out lies.
As consumers, we’re used to being swayed by what we see and hear online, for example through influencer marketing or celebrity endorsements. Opinions, whether linked to facts or not, are very powerful; a lot of fake news relies on having an emotional impact. When our interest and emotions are engaged, we often don’t stop to question if what we’ve heard or read is genuine.
Human deviousness, combined with new technology, means that words, images, and even videos can easily be faked. Artificial intelligence (AI) not only plays a part in producing fake news; it can also help spread it.
Legitimate organizations use AI to identify and target the most likely consumers of a message or point of view, with sophisticated algorithms pinpointing the demographic and individuals most receptive to it. Tools such as digital applause testing can even show how well a post is received.
Misinformation also appears in fake announcements from those posing as figures of authority, organizations, or government agencies. Technologies such as electronic signature software and AI can help prevent the spread of fake copies of electronic documents.
But of course, all of this technology is also available to those trying to spread a fake story or misinformation.
The effects of fake news
At a basic level, lies erode trust. This is as true for individuals as it is for organizations, and it’s the damage fake news can ultimately inflict. Public and consumer confidence in your organization is hard-won but easily and quickly lost. If there’s a loss of confidence and trust in a business or government, no matter how well-crafted the latest attention-grabbing press release might be, your message will miss its target.
Misinformation can be damaging and downright dangerous when it comes to issues such as medicines and health. But it can also have a less dramatic effect. It can slowly shift the public’s opinions and habits in a range of areas, such as how much we trust certain companies and products.
Fake news is a way of weaponizing lies and using propaganda and misinformation to persuade and convince, but also to damage. Whether it’s a political group seeking to undermine another faction, or unscrupulous business competitors eroding another company’s reputation, fake news is a dangerous tool.
Artificial intelligence has been instrumental in the growth of fake news and its reach to billions of people around the globe. The viral spreading of information, whether true or false, has long been recognized by businesses. It’s why social media features so prominently in successful ecommerce growth strategies.
What we encounter in the online world, whether true or not, shapes our opinions and views of our society and the wider world. This is especially the case with social media, which has the power to boost and spread fake news both easily and terrifyingly fast. Information and stories are:
- Shared with a tap of a screen
- Passed on to friends and family in WhatsApp groups
- Retweeted on Twitter
- Posted all over Facebook
But how can any of us be blamed for this when fake news is so convincing and ubiquitous, and we’re up against AI?
Detecting fake news
Ironically, the very technology that enables the spread of fake news could in fact be the answer to combating it. Technology can certainly help us monitor and keep track of which stories are circulating. Just as companies now use podcasts for media monitoring, the level of engagement with a particular topic online can be flagged automatically and analyzed.
It’s also possible for AI to detect whether a story has been written by a human being or a computer. This is invaluable, because part of what drives the fast spread of fake news is how speedily it can be created and posted. Computers are faster than human authors, but if those computers leave tell-tale clues, this helps AI hunt down suspicious posts.
But fake news is also a people problem, not just a technological one. It only takes a few shares for a story to leak out into the world, however untrue it is. The real answer would be for mass education of people who go online, encouraging critical thinking and discernment when faced with posts and stories that tend to push emotional buttons or touch a nerve.
Detecting fake news, and developing AI that can meet the challenge of misinformation online, also requires diverse teams: individuals from different backgrounds whose attention is on different expressions of the online world. Just like diversity in software engineering, it’s about casting the net wide to get the best and biggest catch.
Hunting down fake news is a matter of looking for patterns in the ways news is shared. This involves analyzing huge quantities of data on how posts are shared. There’s some evidence that fake news seems to get shared more often than it’s liked. This is a pattern AI can be trained to look out for.
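To make the idea concrete, here’s a minimal, hypothetical sketch in Python of the share-versus-like pattern described above. The data, field names, and threshold are all illustrative assumptions, not a real detection system.

```python
# Hypothetical sketch: flag posts that are shared far more often than they
# are liked, a pattern sometimes associated with fake news.
# The ratio threshold and the sample data below are invented for illustration.

def flag_suspicious(posts, ratio_threshold=3.0):
    """Return the ids of posts whose share-to-like ratio exceeds the threshold."""
    flagged = []
    for post in posts:
        likes = max(post["likes"], 1)  # avoid division by zero
        if post["shares"] / likes > ratio_threshold:
            flagged.append(post["id"])
    return flagged

posts = [
    {"id": "a1", "shares": 900, "likes": 120},  # ratio 7.5 -> flagged
    {"id": "b2", "shares": 40,  "likes": 200},  # ratio 0.2 -> normal
    {"id": "c3", "shares": 500, "likes": 100},  # ratio 5.0 -> flagged
]

print(flag_suspicious(posts))  # ['a1', 'c3']
```

A real system would of course learn such thresholds from large datasets rather than hard-coding them, but the underlying signal is the same: an unusual relationship between sharing and other forms of engagement.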
There’s also the question of content type: how sentences are formed and how phrases are used. Fake news items tend to share certain speech patterns. AI can sift through huge numbers of online posts and quickly pick up on these repeated formats. Using a machine learning pipeline, AI can also be trained by journalists who are familiar with the kind of semantics and share patterns that make an item of fake news stand out.
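As a toy illustration of the speech-pattern idea, the sketch below scores a post by how much its vocabulary overlaps with a tiny, made-up "fake news" corpus versus a made-up "real news" corpus. Real classifiers use far richer features and much larger labelled datasets; everything here is an assumption for demonstration.

```python
# Toy sketch of text-pattern detection: score posts by how often their words
# appear in a (hypothetical) labelled fake-news corpus versus a real-news one.
from collections import Counter

FAKE_CORPUS = [
    "you won't believe this shocking secret they don't want you to know",
    "shocking truth exposed share before it's deleted",
]
REAL_CORPUS = [
    "the ministry published its annual budget report on tuesday",
    "researchers released a peer reviewed study of the results",
]

def word_counts(corpus):
    """Count word occurrences across all texts in a corpus."""
    counts = Counter()
    for text in corpus:
        counts.update(text.split())
    return counts

FAKE_WORDS = word_counts(FAKE_CORPUS)
REAL_WORDS = word_counts(REAL_CORPUS)

def fake_score(text):
    """Crude score: fake-corpus word hits minus real-corpus word hits."""
    words = text.lower().split()
    return sum(FAKE_WORDS[w] for w in words) - sum(REAL_WORDS[w] for w in words)

print(fake_score("shocking secret exposed, share now"))  # positive -> suspicious
print(fake_score("the annual report was published"))     # negative -> looks normal
```

The principle scales: replace the word counts with learned weights over many features, and you have the skeleton of the kind of text classifier journalists could help train.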
The challenge of combating misinformation online, however, is not just identifying and stemming the flow of blatant fake news. It’s the issue of the proliferation of sensationalist clickbait pieces that express a point of view based on dubious evidence but don’t actually cross the line into telling lies. This type of misinformation can be much harder to teach AI to tackle, but here, again, there are patterns and repeated phrases and tropes that can be identified as red flags.
The way we receive information has changed radically in recent years, and, just as VoIP telephony has revolutionized how we communicate by telephone, social media and other online platforms have changed forever how we keep up with news and events.
The days of an entire population all watching the same terrestrial news channels that are answerable to standard authorities and guidance are long gone. This has to be understood in order to get to grips with the threats posed by fake news and misinformation online.
Much of the online world is pretty lawless and unchecked. This is a huge challenge for those tackling it, and it’s why technology such as AI can perhaps do some of the heavy lifting.
Can AI combat fake news?
Having identified fake content, taking it down isn’t always as straightforward as it might sound. Organizations can be accused of censorship and of trying to conceal information that one faction or another falsely considers to be true. It’s a tough call to strike a balance between freedom of speech and fighting fake news and misinformation.
When it comes down to opinions, such as ideas about how to start a business, there’s, of course, room for a range of information. But when it comes to hard data, often embedded within opinion pieces, this is where lines can be drawn.
While of course being open to manipulation and different interpretations, dates and numbers don’t usually lie. AI can check for incorrect numbers, names, and dates linked to particular topics, and can cross-reference with content containing similar false information coming from the same source. In this way, the origin of fake news posts can be tracked down and targeted.
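A minimal sketch of this cross-referencing idea: extract the figures a post cites and compare them against a verified record for the same topic. The topic names, figures, and matching rule below are all hypothetical, chosen only to show the mechanism.

```python
# Hypothetical sketch: extract numbers and dates from a post and compare them
# against a verified reference record for the same topic. All data is made up.
import re

VERIFIED = {"vaccine_trial": {"participants": "43000", "year": "2020"}}

def extract_figures(text):
    """Pull out all digit runs (years, counts, etc.) from a post."""
    return set(re.findall(r"\d+", text))

def contradicts(topic, text):
    """True if the post cites figures but none match the verified record."""
    figures = extract_figures(text)
    verified = set(VERIFIED[topic].values())
    return bool(figures) and figures.isdisjoint(verified)

print(contradicts("vaccine_trial", "Only 200 people were tested in 2019"))   # True
print(contradicts("vaccine_trial", "The 2020 trial enrolled 43000 people"))  # False
```

A production fact-checker would need entity linking and fuzzy matching rather than exact string comparison, but the core move, checking cited figures against a trusted source, is the one described above.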
AI algorithms can check for semantics and repeated speech patterns. They can also simultaneously check images and metadata. Some technology can even compare posts about similar topics with verified sources of information. In this way, veracity can be checked against authentic, properly sourced, factual content.
But AI has its limitations. For example, it doesn’t tend to have a sense of humor. So if fake news or misinformation is used in a humorous or tongue-in-cheek way, AI may lump this in with malicious misinformation.
But there’s no doubt that AI can be an enormous help in tackling fake news. The sheer volume of content that technology can process makes it invaluable in the fight against online fake news.
A fake-proof future
Addressing the rise of fake news and finding innovative ways to combat it with AI requires innovation and a willingness to find new approaches through trial and error. In this respect, it’s something like the world of software development, where an agile scrum manifesto enables teams to adapt and make quick changes.
Of course, the ideal would be a future where we’ve found ways to eliminate fake news altogether, but there are several factors that make stopping its spread completely a challenge.
The first obstacle is money. Fake news, as we’ve seen, can be monetized. Advertising revenue can pour in when a post goes viral and is freely distributed online. Profits can outweigh integrity, so there can be a reluctance in the business world to decisively tackle fake news and those spreading it.
The second is enforcement. Although news and information can’t be stopped by national borders, laws are another matter. Some countries have, or are introducing, legislation that addresses the proliferation of misinformation. But there are loopholes and territories where the rules are more flexible or not enforced.
Different countries and cultures also have different ideologies and sensibilities. This can mean that there are different attitudes to various topics. Getting a cross-border consensus and solution to stopping fake news from a legal point of view can therefore be very difficult.
Another difficulty is AI itself. For every example of a group developing AI to tackle fake news, there will be a shadow team experimenting with more sophisticated ways to use AI to create and disseminate fake news and avoid detection.
But the fact is that AI is a fantastic tool for fighting fake news. Developments will continue to improve its success rate. This will give us a better chance of cutting down the impact of online lies masquerading as news.
But alongside AI, the human element can’t be ignored. Education is the best way to promote critical thinking and fact-checking. It’s human nature to seek out information, and that’s largely now an online activity.
We look to the online world for everything from online teaching tips to a new home, job, or what to eat for dinner. Of course, it’s where we also look for news and events. We might cross-reference and read various reviews for an item we’re about to purchase, but we’re far less likely to do that when it comes to reading news. Usually, we read the first item we see and leave it at that.
This makes us vulnerable to fake news and misinformation. The only real defense against this is to arm ourselves with skepticism and the habit of checking sources. But in a busy world, where time to read and analyze properly is very limited, if AI can cut down and thin out the amount of news coming from dubious sources, it will mean we’re exposed to far less of it. And in this way, AI can be a powerful tool for the defense of the truth.