I've come across a couple of interesting articles recently that examine the intersection of AI and photography.
In his weekly The Plain View column, Wired's Steven Levy looks at the controversy surrounding Princess Catherine's apparent manipulation of a family photo. As he points out, tools that let anyone do this are now ubiquitous. What will that mean for journalism going forward?
One might be tempted to say that the only proof now that something is real comes when you can see it for yourself. But consider Apple’s Vision Pro. When people don that headset, they view a mix of the real world intermingled with a digital layer. But that so-called “real” world isn’t directly visible to one’s eyes—instead a suite of cameras presents video images of what the eye would normally see. That videostream is prone to manipulation—in fact, recreating reality is the point of such devices. If you stroll out into the street wearing one of those, who knows—maybe the Royal Family Industrial Complex will hack your goggles to insert a convincing digital representation of Kate Middleton, shuffling through Wegman’s in her leggings.
All of this should have been apparent long ago. The trustworthiness of what we see no longer relies on images and videos themselves. Our belief in what we are presented with hinges on the credibility of who is presenting it. Maybe if the Windsors had a track record of straightforwardness, people would have accepted the image as a family photo, mildly rinsed by a Photoshop tweak.
But AI tools permit more than simply tweaking someone's smile in a family picture.
There's been a long tradition of colourizing old black-and-white photos. Originally that was done by an artist painting over the print. Now you can easily do it in a tool like Photoshop or by uploading your photo to a website. But what if the entire picture is a fake generated by an AI tool like Midjourney? How can you tell?
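There's no reliable programmatic answer. As a rough illustration of how little the file itself can tell you, here is a minimal Python sketch (assuming the Pillow library and a hypothetical file name, old_photo.jpg) that checks an image's EXIF metadata for provenance clues, such as a missing camera make/model or a Software tag naming a known generator. Metadata can be stripped or forged just as easily as pixels, which is rather the point: the file can't vouch for itself, only its source can.

```python
# A rough heuristic sketch, not a detector: look for provenance clues in EXIF
# metadata. Genuine camera photos usually carry Make/Model tags, while many
# AI-generated files carry none, or a "Software" tag naming the tool.
# Absence of red flags proves nothing, since metadata is trivial to strip or fake.
from PIL import Image, ExifTags

def exif_provenance_hints(path: str) -> list[str]:
    hints = []
    exif = Image.open(path).getexif()
    if not exif:
        hints.append("no EXIF metadata at all (stripped, or never taken by a camera)")
        return hints
    # Map numeric tag IDs to readable names like "Make", "Model", "Software".
    tags = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    if "Make" not in tags and "Model" not in tags:
        hints.append("no camera make/model recorded")
    software = str(tags.get("Software", ""))
    if any(name in software.lower() for name in ("midjourney", "stable diffusion", "dall")):
        hints.append(f"Software tag names a generator: {software}")
    return hints

if __name__ == "__main__":
    for hint in exif_provenance_hints("old_photo.jpg"):  # hypothetical file
        print("suspicious:", hint)
```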
In "AI is creating fake historical photos, and that's a problem", Marina Amaral, a digital photo colourist looks at a new phenomena – digital fake historical images. I was startled at the realism of the examples she shows; I would certainly be fooled by them.
Just to give you a concrete example of how pervasive this issue is becoming, a few days ago, I stumbled upon two Instagram pages that have started sharing these fake historical photos, passing them off as real to their thousands of followers, complete with fabricated captions and all that. The last time I checked, the most recent of these photos had over 5,000 likes. Now, granted, not every single one of those likes was necessarily from someone who was completely deceived. Some people might have just been scrolling through their feed and hit the heart button without really thinking about it. Others might have recognized the photo as fake but still appreciated it on an artistic level. But even if we're being conservative and assuming that only a fraction of those 5,000 people have truly believed in the authenticity of the image, that's still a significant number of individuals who have been exposed to a piece of misinformation masquerading as historical fact. And that's just one post on one platform. Multiply that by the countless other social media accounts, websites, and even publications that could potentially be spreading these images, and you start to see the scale of the problem. The potential impact of such posts cannot be overstated. The more these fake images circulate, the harder it becomes to separate fact from fiction. Each new post or share distorts the truth a little bit more, until we're left with a version of the past that bears little resemblance to reality.
This is a big problem and it's only going to get worse.
In my own case, I now ignore any news, travel, or historical photos posted on the internet unless I am sure the source is legitimate. That rules out a large part of what's posted on Facebook or X, for example.