When the founders of Pixel & Prose, a mid-size digital marketing agency, adopted AI image generation tools in early 2024, they saw it as a competitive advantage. They could produce high-quality visual content for their 50+ brand clients at a fraction of the previous cost and turnaround time. What they did not anticipate was the operational crisis that would follow when social media platforms started flagging their clients' content as AI-generated. This is the story of how they went from scrambling to contain brand embarrassment to building a bulletproof metadata cleaning pipeline that has not produced a single flag in over six months.

The Agency Before AI: Content Production at Scale

Pixel & Prose is a digital marketing agency based in Austin, Texas, serving primarily small-to-medium businesses across industries including food and beverage, real estate, fitness, beauty, and professional services. Before adopting AI tools, their content production workflow relied on a combination of stock photography, client-provided photos, and occasional professional photoshoots.

The team consisted of:

  • 3 content strategists who plan campaigns and content calendars
  • 4 graphic designers who create visual assets
  • 2 social media managers who handle posting and community management
  • 1 creative director who oversees quality and brand consistency

They were producing approximately 600 to 800 images per month across all client accounts, covering Instagram posts, Facebook ads, Pinterest pins, LinkedIn content, and Twitter graphics.

The AI Adoption

In January 2024, the creative director introduced MidJourney and DALL-E into the production workflow. The results were transformative:

  • Production speed increased 300% for concept imagery and lifestyle visuals
  • Cost per image dropped from $15-25 (stock) to under $1 for AI-generated alternatives
  • Creative variety expanded dramatically since the team was no longer limited to what existed in stock libraries
  • Client satisfaction scores improved because campaigns featured more unique, on-brand imagery

By March 2024, approximately 40% of all visual content produced by the agency incorporated AI-generated elements. By June, that number had risen to 60%.

The Crisis: When Clients Start Getting Flagged

The first incident happened in July 2024. A real estate client forwarded an angry email to their account manager. Instagram had applied an "AI generated" label to a property lifestyle image, a beautifully composed living room scene that the agency had generated using MidJourney to supplement the actual property photos. The client's concern was immediate and specific: buyers might think the property listing photos themselves were fake.

Within the next three weeks, three more incidents occurred:

Incident 1: The Restaurant Client

A farm-to-table restaurant had their "hand-crafted" visual identity undermined when Facebook labeled several food photography posts as AI-generated. The restaurant's brand identity was built on authenticity and craftsmanship. Having their social media content flagged as artificial was, in the words of the owner, "the opposite of everything we stand for."

Incident 2: The Fitness Studio

A boutique fitness studio's Pinterest content was flagged and labeled across multiple pins. Their pin reach dropped significantly, and several potential clients mentioned in consultation calls that they had noticed the AI labels and questioned whether the studio's transformation photos were real.

Incident 3: The Beauty Brand

A clean beauty brand had Instagram Reels cover images flagged as AI-generated. In the beauty industry where consumers are increasingly skeptical of manipulated imagery, this was a serious brand trust issue.

The Internal Audit

After the fourth client complaint, the agency leadership called an emergency meeting. They conducted an audit of recent content across all 50+ client accounts and discovered the scope of the problem:

  • 12 client accounts had content that had been flagged or labeled as AI-generated across various platforms
  • Over 200 individual posts contained metadata that could trigger AI detection systems
  • Every AI-generated image in their asset library still contained its original generation metadata
  • No team member had been trained on metadata implications or cleaning procedures

The agency was sitting on a ticking time bomb. Any platform update that expanded AI detection could trigger flags across dozens of client accounts simultaneously.

Building the Solution: A Scalable Metadata Pipeline

The agency needed a solution that met several requirements:

  1. Scalable: Must handle 600-800 images per month without becoming a bottleneck
  2. Simple: Every team member, including non-technical content strategists, must be able to use it
  3. Reliable: Every cleaned image must pass every platform's detection; a single residual flag on a processed image counts as a pipeline failure
  4. Auditable: The agency needed to track which images had been cleaned for client reporting
  5. Non-destructive: Image quality could not be degraded during the cleaning process

After evaluating several approaches, including command-line tools, Photoshop scripts, and dedicated metadata editors, they chose AI Metadata Cleaner as the core of their pipeline because it met all five requirements without requiring any technical setup or training.

The New Content Production Pipeline

The agency restructured their entire content production workflow around a mandatory metadata cleaning step. Here is how the pipeline works for every piece of visual content:

Stage 1: Content Planning (Content Strategist)

The content strategist creates the monthly content calendar for each client, specifying which pieces will use AI-generated imagery, stock photography, or client-provided photos. Every AI-generated or AI-assisted image is flagged in the content management system.

Stage 2: Image Generation (Graphic Designer)

The designer generates images using MidJourney, DALL-E, or Stable Diffusion based on the creative brief. At this stage, the images retain their full original metadata. The designer saves the raw files to a designated "Unprocessed" folder in the agency's shared drive.

Stage 3: Post-Production (Graphic Designer)

The designer performs any necessary editing: color correction, compositing with brand elements, text overlays, cropping, and format optimization. Edited files are saved to a "Post-Production" folder.

Stage 4: Metadata Cleaning (Graphic Designer or Social Media Manager)

This is the critical new step. Before any image moves to the approved asset library, it must go through metadata cleaning:

  • Open AI Metadata Cleaner
  • Upload the batch of images from the Post-Production folder
  • Process and download the cleaned files
  • Save cleaned files to the "Approved Assets" folder

The batch processing capability is essential at this scale. Rather than cleaning images one at a time, the team processes entire campaign batches in a single session. A typical batch of 30-40 images takes just a few minutes.
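The agency does this cleaning through the tool's interface, but the underlying operation can be illustrated in code. The sketch below is a hypothetical, PNG-only equivalent (not the tool's actual implementation): it walks a PNG's chunk list and drops the ancillary text chunks (`tEXt`, `iTXt`, `zTXt`) where generators commonly embed prompts and generation parameters, leaving the pixel data untouched.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"
# Ancillary text chunks where generators often store prompts/parameters.
TEXT_CHUNKS = {b"tEXt", b"iTXt", b"zTXt"}

def strip_png_text_chunks(data: bytes) -> bytes:
    """Return a copy of a PNG with all text metadata chunks removed.

    Pixel data (IDAT) and structural chunks are copied verbatim, so the
    image itself is not altered in any way.
    """
    assert data[:8] == PNG_SIG, "not a PNG file"
    out = [PNG_SIG]
    pos = 8
    while pos < len(data):
        # Each chunk: 4-byte length, 4-byte type, data, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        chunk = data[pos:pos + 12 + length]
        if ctype not in TEXT_CHUNKS:
            out.append(chunk)  # kept chunks retain their original CRC
        pos += 12 + length
    return b"".join(out)
```

A real pipeline would also need to handle JPEG/WebP EXIF and XMP blocks, which is part of why the agency opted for a dedicated tool over scripts.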

Stage 5: Quality Check (Creative Director)

The creative director performs a weekly spot check, randomly selecting 10-15 images from the Approved Assets folder and running them through a metadata viewer to verify that all AI generation markers have been removed. This audit step ensures the pipeline is functioning correctly and catches any images that might have skipped the cleaning step.
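The weekly spot check could itself be scripted. The sketch below is an assumption about what such a check might look like, not the agency's actual tooling: it scans each file in a folder for byte signatures commonly left behind by generators (the marker list is illustrative, not exhaustive) and reports any hits.

```python
import pathlib

# Illustrative byte signatures left by common generators; a production
# checker would maintain a much longer, regularly updated list.
AI_MARKERS = [b"parameters\x00", b"Stable Diffusion", b"Midjourney", b"DALL-E", b"c2pa"]

def find_ai_markers(path: pathlib.Path) -> list:
    """Return the markers found in one file's raw bytes."""
    data = path.read_bytes()
    return [m.decode(errors="replace") for m in AI_MARKERS if m in data]

def spot_check(folder: pathlib.Path) -> dict:
    """Map each flagged image in a folder to the markers it still contains.

    An empty dict means the sampled assets are clean.
    """
    report = {}
    for p in sorted(folder.glob("*.png")):
        hits = find_ai_markers(p)
        if hits:
            report[p.name] = hits
    return report
```

Running this over a random sample of the Approved Assets folder and expecting an empty report mirrors the creative director's manual audit.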

Stage 6: Scheduling and Publishing (Social Media Manager)

Only images from the Approved Assets folder are used for social media scheduling. The social media managers pull assets exclusively from this folder, ensuring that no uncleaned images ever reach a client's social media account.

The Traffic Light System

To make the pipeline visually intuitive for every team member, Pixel & Prose implemented a simple color-coded folder system:

  • Red folder (Unprocessed): Raw AI-generated images with original metadata. Never use these for client work.
  • Yellow folder (Post-Production): Edited images that still need metadata cleaning. Almost ready but not approved.
  • Green folder (Approved Assets): Metadata-cleaned images ready for client use. The only folder social media managers should access.

This system means that even a new intern on their first day can understand which images are safe to use. If it is not in the green folder, it does not get posted.
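The "green folder only" rule is simple enough to enforce in software as well as in culture. As a sketch (the folder path and any scheduler integration are assumptions, not the agency's actual setup), a scheduling script could refuse any asset that does not live under the Approved Assets directory:

```python
from pathlib import Path

# Hypothetical folder layout mirroring the traffic light system.
APPROVED = Path("clients/acme/Approved Assets")

def safe_to_post(asset: Path, approved_root: Path = APPROVED) -> bool:
    """Return True only if the asset sits inside the green folder."""
    try:
        # relative_to raises ValueError when asset is outside approved_root.
        asset.resolve().relative_to(approved_root.resolve())
        return True
    except ValueError:
        return False
```

A check like this turns the folder convention into a hard gate: an uncleaned image in the red or yellow folder simply cannot be scheduled by mistake.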

Training the Team

One of the biggest challenges was not the technology itself but getting every team member to consistently follow the new process. The agency invested in training to make metadata awareness part of their culture:

The Metadata Education Session

The creative director held a one-hour training session that walked the whole team through the basics of image metadata, how platforms detect AI-generated content, and how the new pipeline prevents flags.

Standard Operating Procedure Document

The agency created a one-page SOP that was posted in every team member's workspace:

  1. Every AI-generated or AI-assisted image MUST be cleaned before entering the Approved Assets folder
  2. Use the batch processing feature for efficiency when handling multiple images
  3. Always save to the correct color-coded folder based on processing stage
  4. When in doubt, clean the metadata, even for images that might not be AI-generated
  5. Report any platform flags immediately to the creative director

The "When in Doubt, Clean" Rule

The agency adopted a policy that if there is any question about whether an image might contain AI metadata, it should be cleaned. This applies to:

  • Images received from freelancers or contractors
  • Stock photos that might have been AI-generated (increasingly common in stock libraries)
  • Client-provided images that look like they might be AI-enhanced
  • Any image with an unknown provenance

This conservative approach adds minimal time to the workflow but eliminates the risk of an uncleaned image slipping through. For context on the legal aspects of metadata cleaning, the team references our legal guide to removing AI metadata.

Results: Six Months of Zero Flags

The pipeline has now been operational for over six months, and the results speak for themselves.

Quantitative Results

  • Zero AI flags or labels across all 50+ client accounts since implementing the pipeline
  • Over 4,800 images processed through the metadata cleaning pipeline
  • Average processing time of 2 minutes per batch of 30-40 images
  • 100% compliance rate after the first month of training, verified through weekly spot checks
  • Client satisfaction scores increased 15% compared to the pre-crisis period

Client Retention

The agency did not lose a single client over the AI flagging incidents, thanks to their transparent communication and rapid implementation of the fix. In fact, they turned the crisis into a selling point. Their pitch to new clients now includes a section on their metadata hygiene practices, positioning it as a differentiator against agencies that do not take these precautions.

Efficiency Gains

Despite adding a new step to the production process, the overall workflow is actually more efficient than before the crisis:

  • The structured folder system reduced time spent searching for the right image version
  • Batch processing means metadata cleaning takes less total time than the old manual quality checks
  • The clear pipeline reduces errors and rework from using wrong image versions

Lessons for Other Agencies

The Pixel & Prose experience offers several takeaways for marketing agencies using AI-generated content:

1. Metadata Cleaning Is Not Optional

If your agency produces AI-generated content for clients, metadata cleaning must be a mandatory, non-negotiable step in your production pipeline. The risk of brand embarrassment from a single flag can damage client relationships that took years to build.

2. Build Systems, Not Habits

Individual habits fail under pressure. The traffic light folder system and mandatory pipeline ensure compliance even when the team is rushing to meet a deadline. You can learn more about building robust workflows on our how it works page.

3. Train Everyone, Not Just Designers

Content strategists, social media managers, account managers, and even interns need to understand metadata risks. The person who uploads the image to the scheduling tool is the last line of defense.

4. Audit Regularly

Weekly spot checks catch process failures before they reach clients. Assign this responsibility to someone with the authority to enforce compliance.

5. Communicate Proactively with Clients

Pixel & Prose found that clients appreciated transparency about their AI usage and metadata practices. Rather than hiding the fact that they use AI tools, they frame it as a technological advantage paired with rigorous quality controls.

6. Document Your Process for Onboarding

New team members need to understand the metadata pipeline from day one. A simple, visual SOP ensures consistency as the team grows.

Looking Forward

Pixel & Prose has continued to evolve their pipeline as platform detection systems become more sophisticated. They monitor updates to detection algorithms across all major platforms and adjust their practices accordingly. The agency has also begun exploring additional quality assurance measures, including automated metadata scanning tools that can verify cleaning before images leave the production pipeline.

For agencies considering AI adoption or already using AI tools, the message is clear: the creative benefits of AI image generation are substantial, but they come with operational responsibilities. Metadata management is not a technical afterthought. It is a core business process that protects your clients, your reputation, and your revenue.

To learn more about implementing a metadata cleaning workflow for your agency or creative team, visit AI Metadata Cleaner or explore our comparison of metadata cleaning approaches to find the right solution for your scale.