Brand N3xt is back. Thanks for sticking with me, and to those of you who subscribed during my downtime, thank you. I’m here for three reasons:
Educate people and brands about how emerging technologies can benefit businesses, marketers, people, communities, and the world.
Highlight use cases likely to have a positive impact while also driving scale and adoption.
Spotlight the innovators who are leading and building in emerging spaces.
If you are reading this and haven’t subscribed, please do. It’s a great place to keep up with the changes impacting us today and those that will impact us tomorrow. 🚀
You can also subscribe to my Podcast on Spotify, Apple Podcasts, or YouTube to hear interviews with the builders, innovators, and marketing pioneers shaping what’s new and what’s Brand Next.
I’ve been away for a while. My time was occupied with a move from Singapore to the USA, getting my kids into a new school system, changing my role at work, and more. All this took a few months, and while I pushed pause to get my family acclimated to a new life, the world didn’t stop. So much happened, and there is so much to catch up on. But the first thing I want to talk about is fraud. Why?
Because I got phished last week.
Well, I almost got phished. It was just a test sent by my company, but it got me thinking: if I can be fooled by a basic phishing email, how will people stay safe against AI-powered fraud? We're entering an era where fraudsters can leverage Generative AI to create personalized, interactive scams. The technology is progressing rapidly and poses a huge threat to trust online.
In the remainder of this article, I'll cover how Generative AI is already being used for fraud and propaganda, the limitations of current solutions, and why blockchain technology offers hope for restoring trust by verifying content provenance.
Let’s start with the facts. Fraud is widespread, and Generative AI creates new and more convincing ways for fraudsters to fool us all.
At this point, we’ve all seen ChatGPT, Bard, Midjourney, and many other technologies that use prompts to create content as if by magic. While these tools are a lot of fun to play with, and while they offer real benefits from a business and productivity perspective, they are not without risk. Generative AI tools built expressly for fraud and phishing are now available at prices ranging from $20 a minute to $250 for a full video, according to Mandiant (a Google-owned cybersecurity company).
Thankfully, usage of Generative AI for these nefarious purposes remains limited, but for how long?
Beyond phishing, a larger threat may be politically motivated propaganda and influence campaigns. Propaganda has been a strategy in politics for over two thousand years, and those who use it to influence and manipulate people are quick to adopt new technologies, as seen in the interference in the 2016 US presidential election and again in 2020.
The signs are already there that Generative AI will be weaponized for propaganda in the 2024 US presidential election, and it could significantly affect everything from opinions and beliefs to voter turnout.
This is a terrifying prospect, and it’s already happening. To date, the most famous instance of Generative AI being used for political purposes was the wave of AI-generated images depicting Donald Trump being arrested that circulated around the internet. Trump himself got in on the action when he posted an AI-generated image of himself on his Truth Social platform.
These are far from the only examples of generative content being used in reference to politics or government. An AI-generated image depicting the aftermath of an attack near the Pentagon was circulated by verified Twitter accounts on May 22nd. Many people believed the image was real, including multiple news publications that retweeted it, resulting in a brief dip in the US stock market.
More recently, a person going by the pseudonym Nea Paw and claiming to be a cybersecurity expert used Generative AI to build a disinformation and propaganda engine that creates fake news stories, fake historical events, fake journalists, and more, for a cost of just $400. Their stated motivation was to demonstrate the risk, so none of what they created has actually been posted on the internet, but the message is clear and the risk is real.
All this is just focused on static imagery. We are also now starting to see deepfake videos of politicians appearing online. One recent example is a deepfake video of Ron DeSantis dropping out of the Republican presidential primaries, which was posted on X (formerly known as Twitter) on September 1st. This never happened.
The original version of this video was posted on X at this link: https://twitter.com/ImStevenSavage/status/1697729565777379368
If we are going to have trusted elections, we need better ways to detect fraudulent and generative content, but how?
Working towards solutions
Generative AI being used by fraudsters isn’t unexpected. New technologies have long been used by fraudsters to advance their strategies of theft and influence. This is a sad reality, but it does mean that every big tech company likely predicted the risk and is already working on solutions of one type or another. A few examples include:
Google Deepmind just announced SynthID which adds an invisible watermark to images generated with Google’s Generative AI image generator Imagen. The watermarks remain attached to the image even if the image is manipulated after creation.
Microsoft has pledged to do the same and add watermarks to images created with their tools.
Watermarking definitely seems like a good first step, and one that offers tremendous value. It will also likely support compliance with the EU AI Act, approved by the European Parliament in June 2023, which requires Generative AI models to adhere to transparency and disclosure rules mandating that AI-generated images be flagged.
Intel is taking a different approach, going after real-time deepfake video detection. Instead of trying to find evidence of what’s fake in a video, their tool looks for what’s missing that should be there if the video were real. More specifically, it examines human faces for the micro-color changes that occur as blood circulates with each heartbeat. We might not be able to see these changes with our eyes, but the fluctuations are visible to Intel’s algorithms, allowing the system to flag videos with no detectable heartbeat as probable deepfakes.
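Conceptually, this is remote photoplethysmography (rPPG). As a rough illustration only, and emphatically not Intel’s actual implementation, a toy version of the idea might average skin color across frames of a face and look for a periodic signal in the human heart-rate band:

```python
import numpy as np

def has_heartbeat_signal(face_frames, fps=30.0,
                         min_bpm=42.0, max_bpm=180.0,
                         power_threshold=0.3):
    """Toy remote-photoplethysmography (rPPG) check.

    face_frames: numpy array of shape (num_frames, H, W, 3) holding RGB
    crops of the same face across a video. Real systems use far more
    robust signal processing; this only illustrates the core idea.
    """
    # Average the green channel over the face region in each frame.
    # Blood flow modulates skin color most visibly in green.
    signal = face_frames[:, :, :, 1].mean(axis=(1, 2))

    # Remove the DC component so the spectrum reflects fluctuations only.
    signal = signal - signal.mean()

    # Look for a dominant frequency in the human heart-rate band.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2

    band = (freqs >= min_bpm / 60.0) & (freqs <= max_bpm / 60.0)
    band_power = power[band].sum()
    total_power = power[1:].sum()  # skip the zero-frequency bin

    # A real face should concentrate noticeable power in the band;
    # a deepfake often shows no coherent pulse signal at all.
    return total_power > 0 and (band_power / total_power) > power_threshold
```

A video where this ratio stays near zero has no plausible pulse, which is the kind of "missing evidence" a heartbeat-based detector keys on.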
This technology is currently being integrated into the backend workflows of various content distributors, including social media platforms and news agencies, to help identify and stop the spread of deepfakes.
Embedding this technology at the source is critical, but it does not prevent people from creating fake news stories that then spread via social media. One example that spread this weekend, reporting instances of cannibalism at Burning Man, was designed to look like it came from NPR (National Public Radio).
So will these detection technologies be enough to solve the problem of generative fraud and propaganda? The truth seems to be that every advance in cybersecurity is countered by further advances from fraudsters. And with people spending a greater portion of their lives online, whether at home or at work, rates of fraud are increasing:
The UK had the highest number of cybercrime victims per million internet users at 4,783 in 2022, up 40% over 2020 figures.
1 in 2 North American internet users had their accounts breached in 2021.
Between May 2020 and May 2021, cybercrime in the Asia-Pacific region increased by 168%.
The FBI’s Internet Crime Complaint Center (IC3) received reports from 24,299 victims of cybercrime, amounting to more than $956 million lost.
The numbers are not improving, and with Generative AI becoming more accessible and more affordable, there is a high likelihood that they will get worse.
So who do you trust?
What can you trust when seeing is no longer believing?
Blockchain’s opportunity to deliver trust in a world where seeing is no longer believing
One idea would be to leverage blockchain technology. Because blockchain data is immutable, it creates a durable audit trail that helps verify provenance. In this way, blockchain can identify where and when content originated and give people the information they need to reestablish trust.
There are a number of ways this could take shape. One idea I had when thinking about the NPR example above would be for news agencies and publishers to enhance trust by using audience-owned smart tokens as content authenticity validators. Here is a short, very high-level description of how this could work (a rough code sketch follows the list):
Publishers tokenize subscriptions so that all their readers/viewers have a token.
The tokens could be free, and they probably should be, since you would want as many people as possible to have them, not just paying readers. This isn’t about monetization; it’s about trust.
As an aside, another benefit of giving away free tokens would be having a direct link to readers that could be leveraged for CRM purposes, but that is a whole different story.
All content is first published to a blockchain. When distributing content online, the publisher would include an immutable link to the blockchain address. This link would need to persist in the same way Google’s SynthID watermarks do, even when people modify the image.
When readers see the content, their smart token acts as a validation key, first checking the blockchain address and then checking the content against the original. Fraud and adaptations would be flagged to the reader/viewer.
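To make that flow concrete, here is a minimal, hypothetical sketch in Python. The blockchain is simulated with an in-memory dictionary, and every name here is illustrative; a real system would anchor records on an actual chain, with the smart token gating the verification call.

```python
import hashlib
import time

# Simulated blockchain: an append-only map from content address to an
# immutable record. A real system would use an actual chain and a
# smart contract; everything here is illustrative only.
CHAIN: dict[str, dict] = {}

def publish(publisher_id: str, content: bytes) -> str:
    """Steps 1-2: hash the content and anchor it 'on-chain'.

    Returns the blockchain address the publisher would embed in every
    distributed copy of the article or video.
    """
    content_hash = hashlib.sha256(content).hexdigest()
    address = f"0x{content_hash[:40]}"  # illustrative address format
    CHAIN[address] = {
        "publisher": publisher_id,
        "hash": content_hash,
        "timestamp": time.time(),  # when provenance was established
    }
    return address

def verify(address: str, content: bytes, holder_has_token: bool) -> str:
    """Step 3: a reader's smart token triggers verification.

    Looks up the on-chain record, then compares the content the reader
    is seeing against the originally published hash.
    """
    if not holder_has_token:
        return "no token: cannot verify"
    record = CHAIN.get(address)
    if record is None:
        return "FLAGGED: no provenance record found"
    if hashlib.sha256(content).hexdigest() != record["hash"]:
        return "FLAGGED: content differs from the published original"
    return f"verified: published by {record['publisher']}"

# Usage: a publisher anchors an article, then a reader verifies a copy.
article = b"An original news article, exactly as published."
addr = publish("npr.org", article)
print(verify(addr, article, holder_has_token=True))           # verified
print(verify(addr, b"a tampered copy", holder_has_token=True))  # flagged
```

Note that exact-hash matching is also what makes the sharing challenge described below so hard: any change to the bytes breaks verification, which is why a persistent, watermark-style identifier would be needed on top.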
As a three-step process it sounds simple, but the reality is that there are significant behavioral and technical challenges to making an example like this real. For one, most people don’t read their news on a publisher’s website; they get it on social media or on YouTube. So just as Intel’s technology is being integrated into the backend of social media platforms, this option would need the same level of integration.
Another challenge is that people often copy snippets of content, take screen grabs, crop pictures, or trim videos before sharing them. For that content to remain verifiable, screen capture tools would need to pick up and preserve the invisible blockchain ID embedded in it.
Those are just a few of the challenges, so suffice it to say it’s not simple.
That said, the premise of blockchain technology is that by using it you do not need to trust third parties. In short, trust is at the core of how blockchain has been developed, and how it continues to be developed through new technologies like ZK Proofs (here is a TLDR if you want to learn about ZK Proofs).
So where Generative AI is creating a world where you can no longer trust what you see, Blockchain is designed to help establish trust, even when you are not able to see everything.
It’s not clear how these two technologies will develop in the future. And yes, blockchain has seen its fair share of fraud as well. But today, as we look for solutions to the emerging challenge of generative fraud and propaganda, the two offer each other complementary benefits and the potential to reestablish trust through verifiable and immutable provenance.
If you enjoy the Newsletter and want to get more, please subscribe to make sure you never miss an issue.
If you already subscribe but want to do more to help grow our readership, you can share the article with your friends. One share from you makes a world of difference to me.
If you want more content from Brand N3xt, you can also subscribe to the Podcast or follow and connect with me on social channels through the links below:
Subscribe to the Podcast on Spotify, Apple Podcasts, or even on YouTube
Follow me on Twitter at @justinkpeyton
Follow me on LinkedIn
Nothing in this newsletter is intended as financial advice. This newsletter is for educational and entertainment purposes.