
PROOF.

2026

PRODUCT DESIGN + CODE
Functional Browser Extension, coded by connecting Claude Code to Figma

PROJECT OVERVIEW
 

For my Bachelor of Arts capstone project, we had to create an original product that solves an "unsolvable problem."

 

Proof. provides real-time fake news and deepfake detection, implemented directly on social media platforms. It instantly identifies whether media is real, altered, AI-generated, or unverifiable, helping social media users navigate their feeds with confidence.

PROBLEM
 

In 2023, 23% of political image posts contained some form of misinformation. A 2025 study found that 71% of social media images are now AI‑generated.

 

AI-generated deepfake media is a rapidly growing issue affecting the general public, voters, children, and organizations worldwide by enabling highly realistic misinformation and digital impersonation. As deepfakes snowball across online platforms, people struggle to distinguish real, human-made media from fabricated content, even though both have been reported to make the same emotional, persuasive impact on viewers.

  • Role
     

Product Designer

  • Company
     

(Personal project) University of North Texas College of Visual Arts & Design Integrative Capstone

  • Team
     

Destiny Ezebuogo - Marketing
Leticia Ferreira - Professor

  • Timeline
     

2 months

WHAT'S GOING ON?

Secondary research shows misinformation rising sharply across shared media.

38% of shared content contains some form of misinformation.

71% of social media images are now AI‑generated.

SOCIAL MEDIA USERS ARE ANGRY
 

I asked 35 social media users, ranging from Gen Alpha all the way to Baby Boomers, about their online habits. I avoided leading questions to keep the data clean, but even with a neutral approach, the feedback was incredibly passionate and high-energy.

86% FREQUENTLY QUESTION THE ACCURACY OF POSTS THEY SEE

When asked about how generated content and false news in media makes them feel, the most frequently used descriptors were “frustrated” (8 participants) and “unreliable” (10 participants), followed closely by “angry” (7 participants), reflecting a strong sense of distrust and emotional fatigue.

Some standout quotes:

"[It makes me feel] exceptionally uncomfortable. We are entering into a dangerous era where 'fake news' and manufactured truths are becoming commonplace. It's hard to trust any news outlet or page as it's possible even reliable sources have been misled."

- Gen Z Facebook interviewee (March 23rd, 2026)

"It honestly makes me feel a mix of frustration and concern. On one hand, it’s unsettling how realistic AI-generated content can be, it blurs the line between what’s real and what’s not. That makes it harder to trust what I’m seeing, especially on fast-moving platforms like Instagram or TikTok."

- Gen X Instagram interviewee (March 25th, 2026)

"I feel very concerned that people, especially of the older generation who aren’t familiar with the advancements of AI generated media, being manipulated into believing a certain thing, buying something, or being told false information."

- Gen Z Snapchat interviewee (March 25th, 2026)

Despite the strong opinions users have about online habits, almost no one feels like an expert at spotting fake info.

The 2% Club

Only 2% of users are 'very confident' in identifying generated content.

🏆

The Uncertain Majority: 

72.7% of participants feel 'neutral' about their detection skills.

🫠

Generational Divide:

50% of Baby Boomers feel 'not at all confident,' the highest of any group.

😰

How might we empower users to feel confident in what they see online by making content credibility clear at a glance, without interrupting their flow?

SOLUTION STORYBOARD
 

Someone holding a phone open to a post labeled "Convincing Fake News."
There's a big red X on the post flagging it as fake news, but the comments all say things like "I can't believe this happened!"
The person scrolling thinks, "Hmmm, it looks believable..."
They tap the red X on the post.
The red X opens an overlay that says, "There are no sources to back this claim up. Scroll safely!"
The person using the phone is now happy. He thinks, "Huh! I'm glad I didn't share that around!"

PROJECT ABSTRACT
 

Proof. is an AI-driven tool that fact-checks social media content in real time, assigning credibility scores based on supporting evidence. A small hotspot indicator placed at the bottom right corner of each post allows users to assess content instantly. The user can click on the indicator to reveal an overlay that provides deeper context and transparency for each evaluation.

By integrating into platforms like Meta, Proof. helps users stay informed without interrupting their browsing experience - building trust and awareness in an age of misinformation.

FEASIBILITY TESTING
 

Before designing the UI, I made sure to validate the concept by using Anthropic’s Claude as a stand-in developer. I tested whether it could generate working code for a Chrome extension that parses page content on Facebook and surfaces it in a custom UI.

Starting with a simple prompt to extract and display the first image post on the page, I confirmed that the AI could reliably read and navigate the structure of the webpage and output it into a simple UI, laying the foundation for integrating real-time analysis into the product.

"Can you create me a chrome-based browser extension that, when on the website Facebook.com, it will find the first image on the page and then display the image to the user in the extension."
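To make the feasibility step concrete, here is a minimal sketch of what a content script for that first prompt might look like. This is illustrative only: the selector heuristic and the message shape are my assumptions, not the code Claude actually generated, and Facebook's real DOM changes often.

```javascript
// content.js — illustrative sketch, not the actual generated extension code.
// Assumes a manifest.json whose "content_scripts" entry matches facebook.com.

// Heuristic: treat the first reasonably large <img> as the first post image,
// skipping avatars and icons. The 200px cutoff is an assumption.
function findFirstPostImage(images) {
  return (
    images.find((img) => img.src && img.width >= 200 && img.height >= 200) ||
    null
  );
}

// Only touch the page when actually running inside the extension.
if (typeof chrome !== "undefined" && chrome.runtime && chrome.runtime.id) {
  const imgs = Array.from(document.querySelectorAll("img"));
  const first = findFirstPostImage(imgs);
  if (first) {
    // Hand the image URL and alt text to the popup UI for display
    // (Facebook's alt text is where "may be an image of a cat" comes from).
    chrome.runtime.sendMessage({
      type: "first-image",
      src: first.src,
      alt: first.alt,
    });
  }
}
```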

A post of a cat on Facebook. The AI extension shows the same image with the caption "may be an image of a cat."

It was able to identify and grab media from Facebook, as well as recognize vaguely what was in the image. The next step would be to connect an AI "agent" to the extension in order to analyze the image content.
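That "agent" connection boils down to sending the scraped image to a vision-capable model. Here is a hedged sketch of how the extension's background script could do that with Anthropic's Messages API, which accepts images as base64 content blocks; the model name and prompt text are placeholders, not what the final extension uses.

```javascript
// Sketch of handing a scraped post image to a vision model for analysis.
// Model name and prompt are placeholders; the request shape follows
// Anthropic's Messages API (images as base64 content blocks).

function buildScanRequest(base64Data, mediaType) {
  return {
    model: "claude-sonnet-4-5", // placeholder: any vision-capable model
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content: [
          {
            type: "image",
            source: { type: "base64", media_type: mediaType, data: base64Data },
          },
          {
            type: "text",
            text: "Describe this post image and flag signs it may be AI-generated.",
          },
        ],
      },
    ],
  };
}

// In the background script, the request would go out over fetch.
async function scanImage(apiKey, base64Data, mediaType) {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": apiKey,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify(buildScanRequest(base64Data, mediaType)),
  });
  return res.json();
}
```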

Another post of a cat. The extension displays the image, with a button titled "scan and analyze image." There is a spot for an AI analysis.

I confirmed that the AI could be connected to the extension to pull, scan, and examine a post. Proof. is feasible!

LOW FIDELITY
 


Landing Page

Sign In/Sign Up

Connect to Socials

Onboarding

Dashboard & CTA

Social Media Overlay

BRANDING & POSITIONING
 

Users feel frustrated and betrayed by misinformation. This informed a branding direction centered on empowerment and trust, with messaging like “It’s your right to know what’s wrong,” “Regain confidence when scrolling,” and “Unlock the truth behind every headline.”

 

Tensions surrounding AI are high, but AI is the only way to fact-check posts rapidly. Because of this, the AI messaging and interaction need to stay minimal to keep users feeling comfortable.

This is the clearspace and symbol section of the branding book. Clearspace states that the logo needs a certain amount of space around it to stand out. The symbol is the first letter P of the Proofpoint logo.
The core color palette part of the brand book. It shows a purple, blue, and white gradient that makes up the main UI. The secondary colors are a bright red, green, yellow, and teal. Under this, it states: "in headings, we use DM Sans. For subheadings it's the same, but a smaller size and a neutral gray. For body text, it's the same typeface and smaller than the subheading."
The primary lockup for the branding. It has the main and secondary logos for proofpoint.
Proofpoint marketing material: "It's your right to know what's wrong."

COMPONENTS AND SYSTEMS
 

I started by building a design system and component library to create a full dev workspace for Claude Code. Starting with a design system to guide the code was incredibly useful.

A figma page with the design foundations including specific text lockups, color sliders, and iconography.
Main UI components in the Figma file including overlay cards, hotspot indicators, and buttons. All complete with different states/variants.

FINAL PROTOTYPES/DESIGN

SIGN IN/SIGN UP
 

The entry point to Proof., where users create an account and connect their social platforms before the overlay goes live.

 

Made in Figma.

Sign up page: "become unfoolable."
After you create an account, it takes you to a page where it says "connect your platforms."
"Connect your socials." a beta screen where you can select to connect Proofpoint to Facebook, Instagram, LinkedIn, and X.

ONBOARDING
 

This is where users learn how the overlay works, what each icon means, and how to scroll more confidently with Proof.

Made in Figma.

"What do the icons mean? small hotspots help you quickly identify misleading, factual, and original content."
A sample of what a pop up would look like on a post, with the % of content verified with sources and AI reasoning. "Built for safer browsing. Yes, we use AI to verify content quickly, but our goal is to empower people, not replace them."
"It's your right to know what's wrong. Because when it comes to social media, you should never have to question what's safe, what's real, and what to trust."

APP DASHBOARD
 

This is where users interact and touch base with the brand.

The user can access a dashboard with collected insights and profile management.

Made in Figma.

SOCIAL MEDIA OVERLAY
 

How the product would look and act overlaid on a social media feed, with real-time detection for confident and more educated scrolling.

 

Made in Figma.

Factual post pop-up "97% of this post is verified as factual..."
Misleading post pop-up "45% of this post contains some deceptive content..."
Fake news post pop-up "This post could not be verified"

CONNECTING CLAUDE CODE TO FIGMA & EXTENSION DEVELOPMENT
 

After creating the system and prototypes, I worked directly alongside Claude Code while continuing to develop the UX and UI of the main product, making sure it would look and work the way it was supposed to.

 

I had to ensure that my design system was clean and organized in order for it to be translated properly. This required some back and forth, and basic coding knowledge.

A pop-up that Claude made without a design system (hard to look at, with varying text sizes, little hierarchy, and an awkward layout) vs. with a design system, which looks exactly like my prototype.
An excerpt of code: "Scoring guide. Use these categories in strict order of priority. 0-32 type: "fake" the post makes specific verifiable claims that are directly contradicted by credible sources..." etc.
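The scoring guide maps a 0-100 credibility score onto the hotspot categories. A tiny sketch of that mapping is below; only the 0-32 "fake" band comes from the excerpt above, and the other two cutoffs are hypothetical stand-ins for illustration.

```javascript
// Maps a 0-100 credibility score to a hotspot category.
// Only the 0-32 "fake" band is from the actual scoring guide;
// the 33-66 and 67-100 bands are hypothetical illustrations.
function categorize(score) {
  if (score < 0 || score > 100) {
    throw new RangeError("score must be between 0 and 100");
  }
  if (score <= 32) return "fake";       // claims contradicted by credible sources
  if (score <= 66) return "misleading"; // hypothetical band
  return "factual";                     // hypothetical band
}
```

Keeping the bands in strict priority order, as the prompt instructs, means the model only has to emit one number and the extension derives the label deterministically.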

FINAL WORKING AI-DRIVEN EXTENSION
 

Built from my product concept and design system, this extension was brought to life using Claude Code, with targeted code direction and refinement. Powered by Claude Opus 4.6, it analyzes posted image content to surface real-time insights.

When a user opens a post on Facebook or Instagram, the extension automatically appears as a side panel that prompts them to scan the content.
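One way to trigger that side panel is a simple URL heuristic: open the panel whenever the current page looks like an individual post. The patterns below are my assumptions for illustration, not the extension's actual matching logic.

```javascript
// Heuristic trigger for the side panel: does this URL look like an
// individual Facebook or Instagram post? The path patterns here are
// assumptions for illustration, not the extension's real logic.
function isPostUrl(url) {
  const { hostname, pathname } = new URL(url);
  if (hostname.endsWith("facebook.com")) {
    return pathname.includes("/posts/") || pathname.startsWith("/photo");
  }
  if (hostname.endsWith("instagram.com")) {
    return pathname.startsWith("/p/") || pathname.startsWith("/reel/");
  }
  return false;
}
```

In practice a feed is a single-page app, so the extension would re-run this check on history changes rather than full page loads.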

 

Within seconds, it generates a detailed analysis complete with verified sources, a step-by-step breakdown of how the post was evaluated, and insights into whether the accompanying image might be AI-generated.

 

The current model achieves about 78% accuracy and actively improves with each use.

I combined generative AI with core design principles, systems thinking, and a product concept to develop a functional extension in just a couple of weeks, deepening my ability and confidence in designing with new and evolving technologies.

IMPACT

I conducted a user-testing survey in which nine participants each reviewed nine social media posts. They were asked to classify each one as factual, misleading, or fake.

 

After completing the task, they were given the Proof. prototype, which displays the correct classifications for the posts.

 

Users were asked to reflect on their responses, including how they performed and whether their perceptions changed after seeing the correct answers, as well as how they felt about the hotspot overlay for immediate identification. 

An average of only 3.2% of the posts were correctly identified before Proof.

"This is a very interesting project. [...] The misleading elements and an AI images got me. [...] This is a cool way to visualize misinformation, and inform people about how certain elements can be deceptive or outright false in social media posts."

Want to help? You can take the survey here!

© 2026 NATASHA STURDEVANT
