Ananya (ಅನನ್ಯ)
punarpuli.bsky.social
Science & tech journalist, translator. Interested in all things algorithms, oceans and urban life, and the people involved.
https://storiesbyananya.wordpress.com
ChatGPT, Perplexity, Ai2 ScholarQA and other AI tools cite retracted scientific papers without any warning. Read my report for @technologyreview.com

www.technologyreview.com/2025/09/23/1...
AI models are using material from retracted scientific papers
Some companies are working to remedy the issue.
www.technologyreview.com
October 8, 2025 at 10:52 AM
Reposted by Ananya (ಅನನ್ಯ)
Researchers argue over whether ‘novel’ AI-generated works use others’ ideas without credit.
By Ananya | @nature.com
www.nature.com/articles/d41...
What counts as plagiarism? AI-generated papers pose new risks
Researchers argue over whether ‘novel’ AI-generated works use others’ ideas without credit.
www.nature.com
August 20, 2025 at 10:33 AM
Researchers are developing AI tools to generate novel research ideas and papers. But there's concern that these tools might simply reuse existing ideas without credit. I dug into the debate for @nature.com
www.nature.com/articles/d41...
What counts as plagiarism? AI-generated papers pose new risks
Researchers argue over whether ‘novel’ AI-generated works use others’ ideas without credit.
www.nature.com
August 27, 2025 at 10:24 AM
For @nature.com, I write about the book Amphibious Anthropologies, which explores what it means to live in wet environments — where the boundaries between land and water are constantly redrawn — all over the world.
www.nature.com/articles/d41...
Why amphibious, wet environments hold the key to climate adaptation
As the planet braces for climate-change-induced fluctuations, the wisdom of communities living at the intersection of land and water could offer valuable lessons.
www.nature.com
August 27, 2025 at 10:24 AM
Indian researchers suspect they were passed over for major awards after criticizing government policies. New by me for Science Magazine.

science.org/content/arti...
Indian government accused of political meddling in science prizes
Researchers suspect they were passed over for major awards after criticizing government policies
science.org
October 11, 2024 at 11:37 AM
Today's tests cannot provide any meaningful assessment of AI's ability to reason or understand, and yet there are so many claims that AI systems have humanlike cognitive abilities. I report on the current state of evaluation practices for @sciencenews.bsky.social
www.sciencenews.org/article/ai-u...
AI's understanding and reasoning skills can't be assessed by current tests
Assessing whether large language models — including the one that powers ChatGPT — have humanlike cognitive abilities will require better tests.
www.sciencenews.org
July 16, 2024 at 10:27 AM
Researchers found that OpenAI's speech recognition model, Whisper, fabricated content in about 1.4% of the audio transcriptions tested; about 40% of those fabrications were harmful or concerning in some way. They're hard to spot without listening to the audio again.
My report for Science: www.science.org/content/arti...
AI transcription tools ‘hallucinate,’ too
Study finds surprisingly harmful fabrications in OpenAI’s speech-to-text algorithm
www.science.org
April 26, 2024 at 6:47 PM
LLM-based multilingual chatbots are becoming common. But these chatbots might not be the best at answering your healthcare queries, especially if you ask in Hindi, Mandarin or Spanish. My latest for @sciam.bsky.social explores the problems.
www.scientificamerican.com/article/chat...
Chatbots Struggle to Answer Medical Questions in Widely Spoken Languages
Two popular chatbots showed some difficulty in providing medical information when asked in Spanish, Hindi or Mandarin
www.scientificamerican.com
April 1, 2024 at 3:29 PM
For Nature, I spoke to researchers about tracing the sources of bias in image-generating AI, and whether they can be fixed.
www.nature.com/articles/d41...

With inputs from @abeba.bsky.social, @rajiinio.bsky.social, Pratyusha Ria Kalluri, Kathleen Fraser and Will Orr.
AI image generators often give racist and sexist results: can they be fixed?
Researchers are tracing sources of racial and gender bias in images generated by artificial intelligence, and making efforts to fix them.
www.nature.com
March 26, 2024 at 7:16 AM
For Al Jazeera, I report on how things keep getting worse for India's Urban Company workers and trainees.
www.aljazeera.com/economy/2024...
India’s Urban Company revolutionised gig work for women. Then it bled them
Multiple costs to qualify for work leads, forced product purchases and a high rate of blockings have stripped their earnings.
www.aljazeera.com
March 4, 2024 at 4:40 PM
Recently, many generative AI tools have been released for public use, and more and more companies are looking to integrate them into their workflows. What makes them popular, and what are the concerns? I talked to @melaniemitchell.bsky.social to find out. sciencenews.org/article/gene...
Generative AI grabbed headlines this year. Here’s why and what’s next
Prominent artificial intelligence researcher Melanie Mitchell explains why generative AI matters and looks ahead to the technology’s future.
sciencenews.org
December 13, 2023 at 2:01 PM
Free, accessible and well-stocked public libraries are hard to find in India. For @techreview.bsky.social, I write about an effort to digitize books in various Indian languages. technologyreview.com/2023/10/25/1...
The grassroots push to digitize India’s most precious documents
The Servants of Knowledge collection on the Internet Archive is an effort to make up for the lack of library resources in India.
technologyreview.com
October 28, 2023 at 10:26 AM
Algorithms are increasingly being used to make various decisions about our lives, decisions that are often invisible to us. Can we trust those decisions? For Scientific American, I write about how algorithms can go wrong.

www.scientificamerican.com/article/algo...
Algorithms Are Making Important Decisions. What Could Possibly Go Wrong?
Seemingly trivial differences in training data can skew the judgments of AI programs—and that’s not the only problem with automated decision-making
www.scientificamerican.com
September 8, 2023 at 5:59 PM
Reposted by Ananya (ಅನನ್ಯ)
Seemingly trivial differences in training data can skew the judgments of AI programs—and that’s not the only problem with automated decision-making, by @punarpuli.bsky.social
Algorithms Are Making Important Decisions. What Could Possibly Go Wrong?
Seemingly trivial differences in training data can skew the judgments of AI programs—and that’s not the only problem with automated decision-making
www.scientificamerican.com
September 7, 2023 at 1:43 PM