I scraped 400 of my own paintings, loaded them into a training pipeline, and watched a machine learning model learn to paint like me.
This is the moment most artists try to avoid. I did it intentionally. Because you can't build real protection against a threat until you understand the threat from the inside.
Here's what happened.
Why I Did This
Six months ago, we were building Art Vault and we kept hitting the same objection: "Does this actually work? Or are you just adding noise that doesn't matter?"
Fair question. We had the theory on our side (adversarial perturbations are well documented to degrade model training), but we didn't have proof. Not the visceral, empirical kind of proof that matters to artists.
So I decided to run the experiment myself. I'd train a model on my own unprotected work, document the results, then train an identical model on my protected work and compare the outputs.
This isn't about proving the math. It's about showing that the theory survives contact with reality.
Setting Up the Experiment
I gathered 400 of my paintings from the last five years. Landscapes mostly, my bread-and-butter work. Nothing rare or secret. Paintings that have been online, that could hypothetically end up in a training dataset.
I used a standard fine-tuning approach (a code sketch follows the list):
- Base model: Stable Diffusion (open-source, reproducible)
- Training framework: DreamBooth (a common fine-tuning technique)
- Hardware: RTX 4090 (high-end but not exceptional for model training)
- Iterations: 1,000 training steps per run, held constant so the two runs compare cleanly
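For the technically curious, the loop below is roughly what that setup amounts to. It's a minimal sketch using Hugging Face's diffusers library, not my actual script: the model ID, the `paintings/` path, the `sks artist` placeholder token, and the hyperparameters are all illustrative stand-ins.

```python
# A minimal sketch of DreamBooth-style fine-tuning with Hugging Face diffusers.
# Everything here is illustrative: model ID, data path, placeholder token, and
# hyperparameters are stand-ins, not my exact configuration.
import glob

import torch
import torch.nn.functional as F
from PIL import Image
from torch.utils.data import DataLoader
from torchvision import transforms
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"
device = "cuda"

tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").to(device)
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").to(device)
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").to(device)
scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

vae.requires_grad_(False)            # only the UNet is trained;
text_encoder.requires_grad_(False)   # the VAE and text encoder stay frozen
optimizer = torch.optim.AdamW(unet.parameters(), lr=5e-6)

preprocess = transforms.Compose([
    transforms.Resize(512),
    transforms.CenterCrop(512),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),  # scale pixels to [-1, 1]
])
dataset = [preprocess(Image.open(p).convert("RGB"))
           for p in glob.glob("paintings/*.jpg")]
loader = DataLoader(dataset, batch_size=1, shuffle=True)

# A rare placeholder token ties the style to a prompt you can query later.
ids = tokenizer("a painting in the style of sks artist",
                padding="max_length", max_length=tokenizer.model_max_length,
                return_tensors="pt").input_ids.to(device)
with torch.no_grad():
    text_emb = text_encoder(ids)[0]

step = 0
while step < 1000:
    for pixels in loader:
        # Encode the painting into the latent space the diffusion model uses.
        latents = vae.encode(pixels.to(device)).latent_dist.sample()
        latents = latents * vae.config.scaling_factor

        # Add noise at a random timestep; train the UNet to predict that noise.
        noise = torch.randn_like(latents)
        t = torch.randint(0, scheduler.config.num_train_timesteps,
                          (latents.shape[0],), device=device)
        noisy = scheduler.add_noise(latents, noise, t)
        pred = unet(noisy, t, encoder_hidden_states=text_emb).sample
        loss = F.mse_loss(pred, noise)  # standard epsilon-prediction objective

        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        step += 1
        if step % 100 == 0:  # checkpoint for the probe renders described below
            unet.save_pretrained(f"dreambooth-out/unet-step-{step}")
        if step >= 1000:
            break
```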
The process took about 2 hours per run. I generated outputs from the trained model every 100 steps (sketched below), documented everything, and kept a video log of the outputs improving or deteriorating.
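Rendering those probe images from a saved checkpoint is only a few lines. Again a sketch, reusing the hypothetical checkpoint layout from the training loop above:

```python
# Sketch: render a probe image from an intermediate checkpoint.
from diffusers import StableDiffusionPipeline, UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("dreambooth-out/unet-step-600")
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", unet=unet).to("cuda")
image = pipe("a painting in the style of sks artist, landscape with warm light, "
             "distant mountains, water in foreground").images[0]
image.save("probe-step-600.png")
```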
This is the same class of workflow serious AI companies run. I wasn't inventing anything; I was copying standard practice.
The Unprotected Run: What AI Companies Do
I trained the model on my 400 unprotected paintings. The results were eerie.
By step 100, the model had captured the general color palette—warmer earth tones, specific blues I use for skies. By step 300, it was generating landscape compositions that looked like mine. By step 600, it was reproducing specific elements. The way I paint water. The particular curve I use for distant hills. The texture of foreground vegetation.
By step 1,000, the model could generate variations of my work that I honestly couldn't immediately distinguish from actual paintings I'd made.
I prompted it: "Landscape with warm light, distant mountains, water in foreground."
It gave me something that looked like it came from my portfolio. The compositional logic was mine. The color relationships were mine. The brushwork simulation was mine.
I tried variations: "Sunset in the style of Daniel Eckert." It worked instantly. The model had learned a stable representation of my aesthetic—the thing that makes my work recognizable, the thing that took me 15 years to develop.
From 400 images.
In 2 hours.
This is the moment where you realize that your aesthetic—the thing you spent decades refining—can be reduced to statistical patterns and extracted in an afternoon.
The Protected Run: What Art Vault Does
Now I took the same 400 paintings, applied Art Vault's adversarial perturbations to each one, and trained an identical model on the protected versions.
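I won't publish Art Vault's actual method here. But the family of techniques it belongs to is public research (PhotoGuard and Glaze are the best-known examples), and the core idea fits in a short sketch: use projected gradient descent to steer each image's encoded representation toward a decoy while clamping every pixel change below an imperceptible budget. Nothing below is Art Vault's code; the decoy image, the budget, and the step count are all illustrative.

```python
# Illustrative only. This is NOT Art Vault's implementation; it's a minimal
# PGD-style sketch of the published idea (cf. PhotoGuard, Glaze): perturb the
# image within a small L-infinity budget so its VAE latent drifts toward a
# decoy, while the pixels stay visually unchanged.
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image
from diffusers import AutoencoderKL

device = "cuda"
vae = AutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="vae").to(device).eval()
vae.requires_grad_(False)  # we optimize the image, never the model

def load(path):
    img = Image.open(path).convert("RGB").resize((512, 512))
    x = torch.from_numpy(np.array(img)).float().permute(2, 0, 1) / 127.5 - 1.0
    return x.unsqueeze(0).to(device)

x = load("painting.png")      # the artwork to protect (hypothetical path)
decoy = load("decoy.png")     # unrelated image whose latent we steer toward
with torch.no_grad():
    z_decoy = vae.encode(decoy).latent_dist.mean

eps = 2 * 8 / 255             # L-inf budget: 8/255 in [0,1], doubled for [-1,1]
alpha = 2 * 1 / 255           # per-iteration step size
delta = torch.zeros_like(x, requires_grad=True)

for _ in range(40):           # PGD iterations
    z = vae.encode(x + delta).latent_dist.mean
    loss = F.mse_loss(z, z_decoy)           # distance to the decoy latent
    loss.backward()
    with torch.no_grad():
        delta -= alpha * delta.grad.sign()  # step toward the decoy
        delta.clamp_(-eps, eps)             # keep the change invisible
        delta.grad.zero_()

protected = (x + delta).detach().clamp(-1, 1)
```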
The perturbations were invisible to my eyes. I looked at the protected images on screen and saw my own work, unchanged, exactly as painted.
The model didn't see it that way.
Step 100: The model was struggling. The outputs were noisy, chaotic. The color relationships were inverted or wrong. The compositions were malformed.
Step 300: Still struggling. The model was trying to extract patterns but the perturbations were interfering at a fundamental level. Every time it latched onto a feature, the adversarial noise pushed it in the wrong direction.
Step 600: No improvement. The model was at a plateau. It had captured some basic statistics but nothing coherent. It wasn't generating "paintings in the style of Daniel Eckert." It was generating noise.
Step 1,000: The final output was incomprehensible. Not in an interesting, abstract way. In a broken way. The model had been fed contradictory information embedded in the pixel-level perturbations. It learned nothing useful.
When I prompted it with "Landscape in the style of Daniel Eckert," it gave me something that looked like it had been corrupted in transit. Colors wrong, composition fractured, no coherent aesthetic signal at all.
The adversarial protection didn't just slow training down. It made training pointless. The model couldn't extract usable aesthetic information from protected images.
What This Proves
A few things became immediately clear:
First: The threat is real and practical. I'm not an AI researcher. I'm a working artist with one GPU and open-source tools. If I can extract my own aesthetic in 2 hours, any company with better infrastructure can do it faster. And they have.
Second: Adversarial perturbations work. Not theoretically. Practically. I did this in my studio with standard tools. The math held up in reality.
Third: The protection is invisible to humans but catastrophic to models. My protected images looked identical to me. To a training algorithm, they were poison.
Fourth: The asymmetry matters. I still see my work, unchanged and beautiful. A company trying to scrape my aesthetic sees incomprehensible noise. That asymmetry, where human perception is unaffected but machine perception is devastated, is the entire point of adversarial defense (the sketch below shows one way to measure it).
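Continuing the perturbation sketch from earlier, here's one way to put numbers on both sides of that asymmetry. The dB figure in the comment is a common rule of thumb, not a measured result from my runs:

```python
# Continuing the perturbation sketch above: measure both sides of the asymmetry.
import torch

def psnr(a, b):
    mse = torch.mean((a - b) ** 2)
    return 10 * torch.log10(4.0 / mse)  # pixels span [-1, 1], so peak^2 = 4

with torch.no_grad():
    z_clean = vae.encode(x).latent_dist.mean
    z_prot = vae.encode(protected).latent_dist.mean

# High PSNR (roughly 35+ dB) means humans see no difference; a large latent
# shift means the model is training on something else entirely.
print(f"pixel PSNR  : {psnr(x, protected).item():.1f} dB")
print(f"latent shift: {torch.dist(z_clean, z_prot).item():.3f}")
```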
The Implications
Here's what gets scary when you've actually done this experiment:
Scale. I trained a model on 400 images. Large-scale training runs use millions or billions of images. The computational advantage stays with whoever can run the biggest models on the best hardware. But adversarial protection travels with every copy of every image: if 400 unprotected images are enough for a model to learn my aesthetic, protecting those same 400 images makes that learning impossible.
The ratio doesn't change as datasets grow, because the protection rides along with each image. It doesn't slow extraction; it stops it.
Speed. The bottleneck in AI training isn't the algorithm anymore; it's the data pipeline. If AI companies have to scrape, filter, and process billions of images to find usable ones, and every protected image yields nothing usable, then protection directly increases their costs.
This is economically devastating to extraction at scale.
Control. For the first time, artists have a technical mechanism that actually works to prevent their aesthetic from being absorbed into training data. Not through legal arguments. Not through begging companies to be ethical. Through mathematics.
The Honest Limitations
I want to be clear about what this experiment didn't prove:
It doesn't protect work that's already been scraped. If your unprotected images are already in a training dataset, protection going forward doesn't retroactively corrupt those models.
It doesn't stop human theft. Adversarial perturbations work against AI training pipelines, not against humans copying your style manually. That's a separate problem with a different solution (watermarking, legal action, community enforcement).
It doesn't protect against extremely small, hand-curated datasets. If someone deliberately selects ten high-quality reference images of your work and carefully fine-tunes on just those, adversarial noise might not prevent it. But that's not mass extraction; that's targeted study, and it's much harder to automate at scale.
It's not a complete solution. It's a solution to one specific threat: automated, large-scale extraction of your aesthetic by companies building models.
But that threat is real, it's happening now, and this works.
Why I'm Telling You This
Because I know what happens when you train a model on someone's work without permission. I know what the output looks like. I know how fast it happens. I know how little skill it requires.
And I know that it's preventable.
Most artists have never experienced that moment—where you watch your years of aesthetic development get compressed into statistical patterns and extracted in an afternoon. It's shocking. It changes how you think about your work.
But you shouldn't have to do this experiment yourself to understand the threat. You should be able to protect your work and move on.
That's why Art Vault exists. So you can apply protection without having to understand the mathematics of adversarial perturbation. So you can upload your work knowing that unethical extraction is economically irrational.
I did the experiment so you don't have to.
Your aesthetic is worth protecting. You built it. You earned it. You get to control how it's used.
Everything else is just mathematics.