How Blackbird.AI and Wasim Khaled Are Changing Our Perspective on News

Wasim Khaled has been a technology entrepreneur his entire life. He has a gift for spotting opportunities, building businesses around them, and turning them into profits. However, Khaled always wanted to do something more meaningful. In the 2010s, he began noticing that people were increasingly being taken in by false information. Enter Blackbird.AI.

As Khaled describes it, “Blackbird.AI started from a place of purpose and has been mission driven since day one. We firmly believe that we have a responsibility to society and that the power to fight disinformation is vitally needed by governments, communities, and individuals to create an empowered and critical thinking society in a time of great turmoil and uncertainty.”

When this trend started, few people paid attention to disinformation; the topic was mostly relegated to academics and researchers. Today, the need for a trusted solution is paramount. “[A]s an AI-Driven Disinformation Analysis company, we sit at the nexus of cutting-edge artificial intelligence, threat intelligence, reputation management, crisis management, brand safety, fact-checking, business intelligence and national security,” says Khaled.

There is a popular term permeating the airwaves today: “fake news.” This heavily overused catchall phrase includes anything people either do not agree with or do not want to agree with. It describes content that has not been fact-checked or is not fully accurate, ranging from stories with shades of inaccuracy to entirely false reports.

“Disinformation,” on the other hand, refers to information created with the intent to deceive, while “misinformation” is the often-accidental spread of that false content. The threat is growing almost daily. Some artificial intelligence (AI) systems can now generate incredibly realistic yet false news articles or press releases. As this kind of technology becomes more widespread, disinformation actors can create highly convincing text and disperse it via websites, blogs, and social media. Who knows how vast this technology’s impact will be in five or ten years?

Wasim Khaled (Courtesy photo)

Today, there is a lot of worry surrounding “deepfakes,” i.e., altered or digitally created video that is nearly indistinguishable from authentic footage. But while synthetic video is something to watch for, the more immediate threats are text-based. Text is the most pervasive way disinformation is currently spread, and Khaled sees it as the greatest threat to authenticity.

As he puts it, “I am thinking about and preparing for a day when disinformation functions less like a creative endeavor and gets to full automation of text, image, video and social media propagation with a few clicks and simple instruction, which would effectively ‘flood the zone’ with magnitudes more noise than any human or current system is capable of handling.”

If this were to happen, the information ecosystem would become unfit for information exchange or consumption. It could create the kind of hyper-filter bubbles that would make today’s online polarization look mild by comparison. Effectively, it could produce a public inoculated against reality. Imagine it: a world where everything is a conspiracy and nothing can be trusted. According to Khaled, Blackbird.AI is working around the clock to build technologies that can act as defensive capabilities, protecting humankind from the destabilization of democracy and of global stability.

The rise of “Disinformation-as-a-Service” should be a major concern for corporations and democracies worldwide. These organizations operate like public relations firms but use tools and techniques similar to those of adversarial nation-states; in this case, they perform digital “takedowns” of companies, markets, politicians, and policies.

Craig Silverman, a disinformation expert at BuzzFeed News, says that clients are “purchasing an end-to-end online manipulation system, which can influence people on a massive scale — resulting in votes cast, products sold, and perceptions changed.” Most enterprise organizations still think disinformation attacks are primarily a political problem. Nothing could be further from the truth.

In fact, a recent study from the University of Baltimore found that disinformation costs companies $78 billion in annual losses in the United States alone. “Financial disinformation,” in the form of fake press releases and memos circulated online, costs companies at least $17 billion a year in stock losses. Meanwhile, consumer brands lose $235 million annually because they have unknowingly advertised next to fake news items. Companies spend over $9 billion a year to repair damage from disinformation attacks, including boycotts. Media outlets spend $3 billion a year trying to combat disinformation and increase safety.

The rise of viral conspiracy theories, and information diets dominated by poorly moderated social media platforms and message boards, often hinders corporate brands. How? By tying them to highly polarizing narratives and hurting companies when they least expect it. Increasingly, brands have become subject to information attacks and boycotts driven by online vigilantism and special-interest-group manipulation. How, then, can their audiences distinguish authentic from synthetic discourse?

When asked about COVID-19 and all the conspiracy theories and misinformation being spread about it, Khaled said, “COVID-19 has only amplified our desire to detect and thwart influence campaigns and conspiracies as we see the rampant disinformation and attacks that have moved from online to real-life societal breakdowns and countless deaths. At no time in the history of our company or the world has information integrity been more important. Also, since our social lives and travel have been so wholly impacted, we are all working more than we ever have. We were already a distributed team between the east coast, west coast, and our offices in Singapore. Now, we are going full speed 24/7, seven days a week, for the last seven months.”

What about analyst firms and agencies? Unfortunately, they are not truly prepared to deal with the massive outpouring of disinformation in today’s climate. Khaled acknowledges that these groups and businesses can be indirect competitors, but more often they partner with Blackbird.AI. Even if they all access the same data from the same sources, the insight derived from that data differs. Khaled explains,

“We do not consider ourselves ‘social media analysts,’ but rather an AI-first company that happens to look at social media as one of our data sources to surface unique properties of risk. The algorithms we are building are not limited to social media, but any text, media, and/or network-driven content. We fully plan on continuing to invest heavily in algorithmic technology that will evolve as disinformation threats mutate and grow in the coming years.”

Blackbird.AI’s reports are created around topics of public interest that need to be brought to light, but they are not the company’s core product offering. Blackbird.AI helps organizations greatly enhance their analytics and reporting, allowing clients to stay ahead of the story. Its proprietary AI algorithms highlight the issues clients need to focus on and the timeframe required to resolve each threat.

Khaled explains that looking beyond traditional proxies for harm (volume, engagement, and sentiment) helps customers understand the nuances of emerging threats with greater resolution than might seem possible. “This enables greatly enhanced critical decision-making to deploy countermeasures and thwart disinformation attacks before they do serious damage. Blackbird helps remove the guesswork and build efficiency in response. It helps customers have a clear understanding of the present, empowering them to strategize for the future.”