by Sammah Masoodi
AI enhances media and creativity but erodes ethics, authenticity, and human touch, risking misinformation, manipulation, and dependence that may gradually diminish our collective humanity.
While scrolling through YouTube, I came across a captivating AI-generated thumbnail titled Kashmir Architecture Tales. Instantly, I clicked on it. The video was about Burzahama, and beyond the mesmerising voice-over by Iqra Aakhoon, what truly held my attention were the AI-generated visuals. I watched artificial intelligence breathe life into an old, long-forgotten history, from intricately carved wooden pillars to handmade vessels. At that moment, AI felt like a meeting point between the past and the present.
Media in the age of artificial intelligence is transforming rapidly, creating new pathways and easy access to information and data with minimal resources. At the same time, it narrows opportunities, raises questions of ethics and credibility, and strips the media of its human touch.
With the bombardment of information and an Olympic race between media houses over who breaks the news first, journalists often turn to a new favourite assistant, ChatGPT, which within seconds crafts a well-written story with no grammatical mistakes. And if a photograph is missing, one is simply generated.
In filmmaking, too, AI offers a helping hand, assisting in editing, colour correction, voice-overs, and even filling creative gaps. But no matter how advanced it becomes, AI can only magnify and refine what already exists. It can’t think, feel, or imagine the way humans do. AI is like a lamp; it shines only when we choose to light it.
Last year, while reading Kashmir Life magazine, I came across a cover story highlighting cases of infertility among women in Kashmir. What enraged me most was the AI-generated image of a pregnant woman, a misleading visual. Before generating any photo, one must feed it a prompt, and in this case, the prompt seemed to directly blame women for infertility. The image included thought bubbles showing the “disgrace” women endure, creating a false narrative and reinforcing the idea of victim-blaming.
Had a real human photograph been used, readers might have empathised with these women and sought solutions instead of indulging in blame or emotional exploitation.
Another looming shadow comes in the form of deepfake videos, deceptive creations that mimic real people and, in January 2025, even disrupted the stock market in the US. During the Pahalgam attack, deepfake videos, fake military transmissions, and AI-generated photos were spread within an hour. Such fabrications blur the line between reality and hyperreality.
When the dynamic between slave and master reverses, with AI beginning to master us, truth and illusion start to look like copies of each other. And amid it all, the lessons taught in journalistic ethics remain just that: “lessons,” unpractised and forgotten.
Although the core function of news is to provide information that is factual, accurate, and verified, the rapid spread of misinformation and disinformation, now just a fingertip away, has challenged this purpose. Journalists initially relied on manual fact-checking and later turned to AI-based fact-checkers. However, these tools have proven unreliable.
On May 8, 2025, during the airstrikes between India and Pakistan, Grok, the AI chatbot built by Elon Musk's xAI, misidentified an unrelated fire in Nepal as a military attack. Similarly, research by NewsGuard found that ten leading chatbots repeated falsehoods and spread Russian disinformation narratives, along with false claims about the Australian elections.
While AI can detect patterns, it cannot discern truth. It often confuses events and, in an attempt to respond quickly, ends up misleading. As a result, journalists have begun returning to human fact-checkers. The 2025 Digital News Report by the Reuters Institute supports this shift, showing that most people continue to trust news organisations and human verification over AI-generated checks.
Kashmiri social media creators and influencers have begun embracing AI to preserve heritage, reconstruct damaged sites, and create animations based on local folktales.
However, a single AI-generated video or photo can also be used to spread propaganda. In such content, stereotypical imagery and cultural misrepresentations can easily emerge, distorting folk characters and cultural symbols.
I believe that, just like washing machines, computers, and other innovations were once embraced, AI too must be accepted, but as a tool that assists human creativity, not replaces it. The conflict isn’t between humans and machines; in fact, there isn’t one. We built this tool, what I call the “Smart Dumb Machine”, to serve us, not to master us.
There is an urgent need to establish ethical frameworks, enforce laws, and define clear consequences for the misuse of AI. A transparent discourse is also necessary to determine when and how AI should be used in the media. For example, in Kashmir Architecture Tales, AI-generated visuals were used as tools to revisit and relive memories of the past that time has eroded.
As I’ve already said, it’s a “Smart Dumb Machine.” It doesn’t truly know anything. Ask it something as simple as “What is colour?” and it will tell you everything about colour except what it actually is. It blurs the line between right and wrong and, while doing so, risks creating joblessness or, worse, a sense of purposelessness.
The way AI is currently being used feels like an insult to art, creativity, expression, and individuality. The difference between AI and us has started to fade. There is an urgent need for AI to remain a tool, nothing more. Because if we fail to draw that line, AI won’t kill us, but it will kill our humanity.
(The author is a student currently pursuing a BA (Honours + Research) at Government Degree College, Baramulla. Ideas are personal.)