Thousands of New Zealanders are liking, commenting on and sharing "news" on social media they may not realise has been written by artificial intelligence and paired with fabricated imagery that is unlabelled and inaccurate, a 1News investigation has found.
Experts say the popularity and proliferation of these accounts blur the line between real reporting and fabricated content and may contribute to Kiwis' already low trust in news, while civil defence groups have issued public warnings about the pages.
1News has identified at least 10 Facebook pages that take existing New Zealand news stories, run them through artificial intelligence to rewrite them, and publish them on Facebook with synthetic images.
A review of one of these social media "news" pages, NZ News Hub, which has attracted thousands of likes, comments and shares, examined 209 posts made in January. The page's name is similar to that of national outlet Newshub, which closed in 2024.
Its bio reads, "NZ News Hub brings you the latest New Zealand news, breaking stories, politics, business, sport, and community updates", but the page does not appear to contain any original reporting.

Not one of the images was labelled as AI-generated, and some of the posts featured fabricated photos of real people.
In one case, a still photo of a minor killed in the Mount Maunganui landslide was manipulated to show her dancing. In another, an image of parents who had lost their teenage daughter to suicide was edited to make the couple appear affectionate.
Posts about natural disasters and emergency services were consistently dramatised beyond what actually occurred.

Authentic slips on East Coast highways were depicted by NZ News Hub as far more destructive, crushed houses and cars were added to the Mount Maunganui slip, and a grounded tourist boat in Akaroa was edited to appear packed with far more passengers than in reality.
Police were often shown in British or American uniforms and depicted with guns drawn, despite official releases giving no indication that officers were armed.

In some instances, the raw AI prompts were erroneously left in the posts, with "Here’s a news-style rewrite with a clear headline, emojis, and top hashtags" above one, and "If you want, I can also make this shorter, more dramatic, or social-media style" below another.
Google Image searches revealed that several pictures posted by the page contained a 'SynthID' digital watermark embedded in their pixels, indicating they were created using Google's AI image-generation tools.

NZ News Hub, created in late November last year, has more than 4700 followers. Individual posts regularly attract over 1000 likes and comments — many of them criticising the AI-generated images and blaming "the media" for fake news and use of the technology, though the page has no connection to any news organisation.
When a commenter called out the use of an AI photo, NZ News Hub's response was: "The news is true."
The page operators read but did not respond to detailed questions from 1News about their use of AI-generated imagery, including why an image of a deceased individual was created without family permission and why AI-generated content was not labelled.
For anyone scrolling past quickly, there's almost nothing to distinguish these posts from genuine news.
Officials raise red flag over AI-generated misinformation
Authorities have issued public warnings about fake social media pages mimicking news outlets and sharing fabricated or AI‑generated content.
Gisborne District Council and Tairāwhiti Civil Defence said last Thursday they were aware of fake pages "pretending to be news outlets and sharing AI-generated images and made-up content about local events and emergencies".
The agencies said some posts appeared credible because they used New Zealand phone numbers or addresses, mimicked branding and "breaking news" styles, or named real people and organisations without their permission.
"Accurate information matters, especially during an emergency response. Let’s keep our community safe and well-informed," the statement posted to Facebook read.
The National Emergency Management Agency (NEMA) issued a warning last month about AI‑generated imagery circulating online during severe weather across the country — particularly relating to the deadly Mount Maunganui landslide.
"It is important that the public has trust and confidence in reliable and accurate emergency information channels," the agency said.
"In an emergency, our primary channel to get information out to the public is the media."
NEMA worked closely with the media to ensure they provided verified, credible information to the public, it added.
"We encourage people to be vigilant, use trusted sources for their information, and find out if the source of information is credible before sharing it.
"We closely monitor what is being circulated during a response, but we would encourage New Zealanders to call out suspicious images when they see them or report them if there is a suitable way to do this."

Scraped information, fabricated visuals
Media researcher Merja Myllylahti said AI-generated "news" pages on social media risked blurring the line between legitimate journalism and fabricated content by repurposing official releases and pairing them with unlabelled AI visuals.
"They take obviously legitimate news from police notifications or press releases — the same information that appears on real news sites — but then they create AI images that are not real, and they are not labelled," she said.
Myllylahti, who recently published a report about how AI was used in the New Zealand media landscape, told 1News this practice differed sharply from how mainstream organisations operate.
"When I did my report and spoke to the news editors in all big news organisations — TVNZ, RNZ, the New Zealand Herald, and Stuff say that they don't create or generate videos or images with AI, and if they ever did, they would disclose it."

Victoria University senior lecturer in AI Andrew Lensen said the spread of AI-generated content masquerading as news was accelerating and becoming harder to detect.
"It's clearly an emerging problem, and it's getting worse and worse."
Lensen said many of the pages were based on real news stories, but inaccuracies were often introduced as content was automatically scraped, rewritten and republished by AI systems.
"Even though the underlying story might be true, details may not be accurate," he said.
Pages producing the material were "nearly always fully automated", he said, using scripted workflows that monitor legitimate news sources and feed the content into large language models, like ChatGPT, which then rewrite it according to a pre-set prompt.
Images or videos were then automatically generated to accompany the text — sometimes using existing images as a base — before being posted to social media.

Fake pages eroding trust in legitimate media
Myllylahti said the problem was that many audiences struggled to distinguish between professional news organisations and social media pages designed to imitate them.
The confusion risked damaging trust in legitimate outlets, particularly when fake pages adopted similar branding or names, she said.
"They may think, 'the media is just putting fake pictures out there', without realising this page is not connected to any newsroom," she said.
Both researchers warned that the growing volume of AI-generated material risked eroding trust even in reputable outlets, especially at a time when only 32% of New Zealanders trust the news, according to the most recent Trust in News survey from AUT.
"People might go, 'Well, it's happening on social media, so why would I trust what 1News or the Herald is doing?'" Lensen said.
As the technology evolves and AI-generated images become more convincing, visual cues will become unreliable, he added, leaving source verification as the primary defence against being tricked.
"Is it Radio New Zealand or 1News, or is it some slightly weirdly named page you can't find referenced anywhere else?" he said.
"You'll have to do your own fact-checking."
For now, Lensen said inconsistencies could still offer clues, such as incorrect uniforms, equipment that doesn’t match New Zealand standards, or distorted and nonsensical text embedded in images.
Myllylahti said the moment presented an opportunity for news organisations to build trust by being clear about how artificial intelligence was used to support journalistic work.
"Be really transparent, tell the audience if you use it for researching, or summarising large documents, or for transcribing text," she said.
"The more you tell the audience, the better it is in terms of trust."
Meta, which owns Facebook, did not provide a statement to 1News by deadline about whether the pages violate its policies or what enforcement action, if any, would be taken.