sympap.bsky.social
@sympap.bsky.social
Reposted
📰 Calling all journalists and fact-checkers!

Be part of a European pilot to improve AI tools that help detect and analyse disinformation.

Your feedback will guide the next generation of trustworthy, ethical AI for media.

Sign up now 👉 forms.gle/S6RvKp91u5hQ...

#AI4TRUST #HorizonEU
November 14, 2025 at 11:08 AM
Reposted
New handbook published by @ebu.ch - the title sums it up: "Handbook on the legal and ethical obligations to developers and deployers of AI-based fact-checking tools".
Available for download.
EBU handbook on legal and ethical obligations in AI development released
VERification Assisted by AI. R&D & innovation co-funded by the HorizonEU. Continuing WeVerify work. And much more!
www.veraai.eu
September 8, 2025 at 7:41 AM
Reposted
Join us on 24 June from 2-5 pm CET for our second online webinar in which we present outcomes of our work on #disinformation detection and content analysis. Focus: research. More, incl. registration link: www.veraai.eu/posts/two-ve...
CC @sympap.bsky.social @ivansrba.bsky.social
June 23, 2025 at 9:02 AM
Reposted
Alert: 📌 Registration is open to join the MediaEval 2025 challenge.

📂 Data & Submission Instructions
Available in the official GitHub repository:
👉 github.com/mever-team/m...

📅 Important Dates
Data release: June 20
Runs due: September 15
Paper submission: October 8
Workshop: October 25–26

Website 👇
MediaEval 2025
The MediaEval Multimedia Evaluation benchmark offers challenges in artificial intelligence for multimedia data. Participants address these challenges by creating algorithms for analyzing, exploring an...
multimediaeval.github.io
June 23, 2025 at 4:31 PM
Reposted
📢 ELLIOT is coming! A €25M #HorizonEurope project to develop open, trustworthy Multimodal Generalist Foundation Models, #MGFM, for real-world applications. Starting in July, it brings together 30 partners from 12 countries to shape Europe’s #AI future.

🔍 Follow for updates on #OpenScience & #FoundationModels.
June 12, 2025 at 7:35 AM
Reposted
Just over one week to go before our first workshop, in which we present outcomes of our work. On 17 & 24 June, 2-5 pm, we invite the #verification, #factchecking and #disinformation detection community to meet us on Zoom.
Registration required. Hope to see you then!
vera.ai presents its outcomes - and invites you all to attend two webinars!
VERification Assisted by AI. R&D & innovation co-funded by the HorizonEU. Continuing WeVerify work. And much more!
www.veraai.eu
June 10, 2025 at 12:44 PM
Reposted
We're inviting the #verification, #factchecking and #disinformation detection community to two workshops in which we showcase veraAI results: 17 June targets #journalists and #factcheckers; 24 June focuses on the R&D community. Both events run from 2-5 pm. Registration required. See you soon! 😃
vera.ai presents its outcomes - and invites you all to attend two webinars!
VERification Assisted by AI. R&D & innovation co-funded by the HorizonEU. Continuing WeVerify work. And much more!
www.veraai.eu
May 12, 2025 at 12:15 PM
Reposted
veraAI partner @disinfo.eu / Ana Romero-Vicente (@anicanaca.bsky.social) authored a report entitled "Visual assessment of Coordinated Inauthentic Behaviour in disinformation campaigns". Editor: @netosessa.bsky.social. We provide a summary and the full publication here. www.veraai.eu/posts/report...
Report: Visual assessment of Coordinated Inauthentic Behaviour in disinformation campaigns
VERification Assisted by AI. R&D & innovation co-funded by the HorizonEU. Continuing WeVerify work. And much more!
www.veraai.eu
January 30, 2025 at 11:25 AM
Reposted
Today, we are publishing the first-ever International AI Safety Report, backed by 30 countries and the OECD, UN, and EU.

It summarises the state of the science on AI capabilities and risks, and how to mitigate those risks. 🧵

Full Report: assets.publishing.service.gov.uk/media/679a0c...

1/21
January 29, 2025 at 1:50 PM
Reposted
Receiving #ICLR2025 decisions and then #CVPR2025 reviews shortly after
January 22, 2025 at 9:00 PM
📢 We will be co-organizing the 4th edition of the Multimedia AI against Disinformation (MAD'25) workshop on June 30 @ Chicago, USA.

👉 more information on topics of interest, dates and submissions: mad2025.aimultimedialab.ro

ℹ️ The workshop is supported by @vera-ai.bsky.social.
January 22, 2025 at 8:38 AM
Reposted
Hello Bluesky! We are MedDMO, a regional hub of @edmo-eu.bsky.social covering Greece, Cyprus, and Malta. We bring together research, fact-checking, and media organizations that conduct internationally recognised research and activities in the area of disinformation.
Let's connect!
December 17, 2024 at 11:36 AM
Reposted
🌍 Guessing where an image was taken is a hard and often ambiguous problem. Introducing diffusion-based geolocation: we predict global locations by refining random guesses into trajectories across the Earth's surface!

🗺️ Paper, code, and demo: nicolas-dufour.github.io/plonk
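For intuition, here is a hedged toy sketch of the idea described in the post: a random guess on the sphere is iteratively refined into a trajectory that settles on a predicted location. The denoiser, the schedule, and the Paris target below are illustrative assumptions, not the paper's actual model or code.

```python
# Toy sketch: refine a random location guess into a trajectory on the sphere.
# The "denoiser" is a stand-in that simply pulls toward a fixed coordinate;
# the real method conditions a learned denoiser on image features.
import numpy as np

def to_unit_vector(lat_deg, lon_deg):
    """Convert latitude/longitude in degrees to a 3D unit vector."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def to_lat_lon(v):
    """Convert a 3D unit vector back to latitude/longitude in degrees."""
    v = v / np.linalg.norm(v)
    return np.degrees(np.arcsin(v[2])), np.degrees(np.arctan2(v[1], v[0]))

def mock_denoiser(x, target):
    """Placeholder for a learned, image-conditioned denoiser."""
    return target  # a real model would predict this from the input image

rng = np.random.default_rng(0)
target = to_unit_vector(48.8566, 2.3522)        # pretend the photo was taken in Paris
x = rng.normal(size=3); x /= np.linalg.norm(x)  # start from a random point on the globe

for step in range(20):                          # reverse-diffusion-style refinement
    alpha = (step + 1) / 20                     # how much we trust the denoiser
    noise = rng.normal(scale=1.0 - alpha, size=3)
    x = (1 - alpha) * x + alpha * mock_denoiser(x, target) + 0.05 * noise
    x /= np.linalg.norm(x)                      # project back onto the sphere
    print(f"step {step:2d}: lat/lon ≈ {to_lat_lon(x)}")
```

Each iterate stays on the unit sphere, so the printed lat/lon values trace a trajectory across the globe that converges as the noise is annealed away.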
December 10, 2024 at 3:56 PM
The energy consumption of AI is becoming a priority as such systems are widely deployed and are responsible for a significant part of society's energy footprint. In our latest work, we proposed the concept of neural network "complementarity" to quantify the extent to which two NNs lead to complementary predictions...
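The post does not spell out the metric, so the snippet below is only an assumed illustration: it scores complementarity as the share of test samples that exactly one of the two networks classifies correctly, which is one plausible way to capture "complementary predictions". The actual definition in the paper may differ.

```python
# Illustrative complementarity score for two classifiers (not the paper's exact metric).
import numpy as np

def complementarity(preds_a, preds_b, labels):
    """Fraction of samples where exactly one of the two models is correct."""
    correct_a = preds_a == labels
    correct_b = preds_b == labels
    return float(np.mean(correct_a ^ correct_b))

labels  = np.array([0, 1, 1, 0, 2, 2, 1, 0])   # toy ground truth
preds_a = np.array([0, 1, 0, 0, 2, 1, 1, 1])   # model A: correct on 5/8
preds_b = np.array([1, 1, 1, 0, 1, 2, 0, 0])   # model B: correct on 5/8
print(complementarity(preds_a, preds_b, labels))  # 0.75: their errors rarely overlap
```

A high score under this toy definition suggests the two networks make different mistakes, so combining them (or routing inputs between them) could improve accuracy per unit of energy spent.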
November 29, 2024 at 10:43 AM
Reposted
Just realized Bluesky allows sharing valuable stuff because it doesn't punish links. 🤩

Let's start with "What are embeddings" by @vickiboykis.com

The book is a great summary of embeddings, from history to modern approaches.

The best part: it's free.

Link: vickiboykis.com/what_are_emb...
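As a minimal taste of the topic the book covers: the snippet below builds crude count-based vectors and compares them with cosine similarity. Real embeddings are learned (word2vec, transformer encoders); this toy version is only for intuition and is not taken from the book.

```python
# Toy illustration of the core idea: represent text as vectors and compare them geometrically.
import numpy as np

def embed(text, vocab):
    """Bag-of-words vector: one count per vocabulary word."""
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

vocab = ["cat", "dog", "pet", "stock", "market", "price"]
a = embed("the cat is a pet", vocab)
b = embed("my dog is a pet", vocab)
c = embed("the stock market price fell", vocab)
print(cosine(a, b))  # higher: both sentences are about pets
print(cosine(a, c))  # lower: different topics
```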
November 22, 2024 at 11:13 AM
Reposted
The Cosmos suite of neural tokenizers for images & videos is impressive.
Cosmos is trained on diverse high-res images & long videos, scales well for both discrete & continuous tokens, generalizes to multiple domains (robotics, driving, egocentric ...) & has excellent runtime.
github.com/NVIDIA/Cosmo...
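For readers new to the idea, the sketch below shows, in generic PyTorch, what a discrete visual tokenizer does (encode, quantize against a codebook, decode). It is not the Cosmos API; every class, method, and parameter name here is an illustrative assumption.

```python
# Generic discrete image tokenizer sketch: encode -> nearest-codebook quantization -> decode.
import torch
import torch.nn as nn

class ToyImageTokenizer(nn.Module):
    def __init__(self, codebook_size=512, dim=64):
        super().__init__()
        # Downsample 3-channel images by 8x into dim-channel feature maps.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 4, stride=2, padding=1),
        )
        self.codebook = nn.Embedding(codebook_size, dim)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(dim, 3, 4, stride=2, padding=1),
        )

    def tokenize(self, images):
        """Map images to a grid of discrete codebook indices."""
        z = self.encoder(images)                            # (B, D, H/8, W/8)
        flat = z.permute(0, 2, 3, 1).reshape(-1, z.shape[1])
        dists = torch.cdist(flat, self.codebook.weight)     # distance to each code
        return dists.argmin(dim=1).reshape(z.shape[0], z.shape[2], z.shape[3])

    def detokenize(self, tokens):
        """Reconstruct images from discrete token grids."""
        z = self.codebook(tokens).permute(0, 3, 1, 2)       # (B, D, H/8, W/8)
        return self.decoder(z)

tok = ToyImageTokenizer()
imgs = torch.rand(2, 3, 64, 64)
tokens = tok.tokenize(imgs)     # (2, 8, 8) grid of integer tokens
recon = tok.detokenize(tokens)  # (2, 3, 64, 64) reconstruction
print(tokens.shape, recon.shape)
```

A continuous tokenizer would skip the codebook lookup and pass the encoder's latent straight to the decoder; production tokenizers like Cosmos are trained with reconstruction and quantization losses rather than used untrained as here.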
November 20, 2024 at 10:58 PM