kaymacquarrie.bsky.social
That is an easy question: Yes! "Good" AI tools empower citizens to spot disinformation themselves. They can raise awareness, keep people alert, and, for example, make us think twice before publishing or retweeting (harmful) disinformation. #AICODEPROJECT
October 30, 2025 at 12:31 PM
Human judgment - at least in quality media - stands above AI. AI gives indications (and sometimes inspiration), but for the time being, and hopefully for a bit longer, the human makes the decision and takes the responsibility. #AICODEPROJECT
October 30, 2025 at 12:17 PM
In a world where we increasingly face synthetic (generated) media, trust is a big word. What is real, what is fake? Can articles be written by machines - with human supervision or without? Big questions… and it’s all a matter of providing trustworthy environments. #AICODEPROJECT
October 30, 2025 at 12:10 PM
Creating fair and transparent AI is a huge undertaking. How do you manage this in a world which is itself neither fair nor diverse? LLMs are predominantly built upon English content - just to name one "unfair" bottleneck. So, I think the bigger challenge is creating the fair and transparent AI tools in the first place. #AICODEPROJECT
October 30, 2025 at 11:59 AM
This is a good question. It’s a bit like a cat-and-mouse game. In the end it is important to stay alert - and to use and improve the latest trustworthy tools and technologies to stay ahead.
#AICODEPROJECT
October 30, 2025 at 11:48 AM