Infolab@SKKU
@infolab.bsky.social
InfoLab is a research group at SKKU pushing the boundaries of security and machine learning, especially in bioinformatics and biomedical discovery. For more information, please visit our website at infolab.skku.edu
Top AI pioneers now say human-level general intelligence isn’t a future vision, it’s happening now. Are we ready for an era where machines truly rival humans?

#general_intelligence #AGI #FT #AI_summit

youtu.be/0zXSrsKlm5A?...
The Minds of Modern AI: Jensen Huang, Geoffrey Hinton, Yann LeCun & the AI Vision of the Future
YouTube video by FT Live
youtu.be
November 8, 2025 at 2:02 AM
Study shows that humans and neural networks face the same trade-off when learning multiple tasks: learning more similar tasks boosts “transfer” but increases “interference”.
Understanding this helps optimise continual learning in both brains and machines.

Read more www.nature.com/articles/s41...
Humans and neural networks show similar patterns of transfer and interference during continual learning - Nature Human Behaviour
When learning new tasks, both humans and artificial neural networks face a trade-off between reusing prior knowledge to learn faster and avoiding the disruption of earlier learning. This study shows t...
www.nature.com
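For intuition, a toy sketch of the trade-off (not the study's actual setup; tasks, model, and numbers below are made up): a linear model trained on task A and then on a similar task B learns B faster from the A-trained weights (transfer) but loses accuracy on A afterwards (interference).

```python
# Toy illustration of transfer vs. interference in sequential learning.
import numpy as np

rng = np.random.default_rng(0)

def make_task(w_true, n=200, d=10, noise=0.1):
    X = rng.normal(size=(n, d))
    y = X @ w_true + noise * rng.normal(size=n)
    return X, y

def train(w, X, y, lr=0.05, steps=200):
    losses = []
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
        losses.append(np.mean((X @ w - y) ** 2))
    return w, losses

d = 10
w_a = rng.normal(size=d)
w_b = w_a + 0.3 * rng.normal(size=d)   # task B is similar to task A

Xa, ya = make_task(w_a)
Xb, yb = make_task(w_b)

w_after_a, _ = train(np.zeros(d), Xa, ya)
loss_a_before = np.mean((Xa @ w_after_a - ya) ** 2)

# Transfer: learning B from the A-trained weights vs. from scratch.
_, curve_warm = train(w_after_a, Xb, yb)
_, curve_cold = train(np.zeros(d), Xb, yb)
print("loss on B after 20 steps (warm vs cold):", curve_warm[19], curve_cold[19])

# Interference: how much did training on B hurt performance on A?
w_after_b, _ = train(w_after_a, Xb, yb)
loss_a_after = np.mean((Xa @ w_after_b - ya) ** 2)
print("loss on A before/after learning B:", loss_a_before, loss_a_after)
```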
November 2, 2025 at 4:53 AM
How will Large Language Models reshape science?
Nature Computational Science’s Focus issue dives into breakthroughs, challenges & the future of AI in research.

Read here www.nature.com/collections/...

#AI #LLM #Science #Research #FutureOfAI #Nature
The impact of large language models in science
Large language models (LLMs) are rapidly being implemented in a wide range of disciplines, with the promise of unlocking new possibilities for scientific ...
www.nature.com
September 28, 2025 at 8:20 AM
A framework to build AI scaling laws for cost-efficient LLM training, helping teams get the most out of limited compute budgets.

#LLM #AITech #AIEfficiency #ScalingLaws #ArtificialIntelligence

💡 Key insight: smarter scaling = more performance per dollar.
Read more 👇
news.mit.edu/2025/how-bui...
How to build AI scaling laws for efficient LLM training and budget maximization
MIT and MIT-IBM Watson AI Lab researchers have developed a universal guide for estimating how large language models (LLMs) will perform based on smaller models in the same family.
news.mit.edu
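A hedged sketch of the general idea (not the MIT recipe): fit a simple power-law scaling curve to losses measured on small models in a family, then extrapolate to a larger model before spending the compute. All data points below are made up for illustration.

```python
# Fit L(N) = E + A * N**(-alpha) to small-model losses and extrapolate.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(N, E, A, alpha):
    return E + A * N ** (-alpha)

# (model size in parameters, observed validation loss) for small models
N_small = np.array([1e7, 3e7, 1e8, 3e8])
loss_small = np.array([3.9, 3.5, 3.1, 2.8])

params, _ = curve_fit(scaling_law, N_small, loss_small,
                      p0=[2.0, 10.0, 0.1],
                      bounds=([0, 0, 0], [10, 100, 1]))
E, A, alpha = params
print(f"fit: E={E:.2f}, A={A:.2f}, alpha={alpha:.3f}")

# Predict the loss of a 3B-parameter model before training it.
print("predicted loss at 3e9 params:", scaling_law(3e9, *params))
```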
September 24, 2025 at 12:20 PM
Reposted by Infolab@SKKU
Differentially Private Federated Clustering with Random Rebalancing

Xiyuan Yang, Shengyuan Hu, Soyeon Kim, Tian Li

http://arxiv.org/abs/2508.06183
August 11, 2025 at 3:48 AM
Apple releases Embedding Atlas — an open-source tool to explore & compare large-scale embeddings with ease. Ideal for research: visualize, cluster, and search embeddings in your browser via WebGPU. #MachineLearning #AIResearch

GitHub
github.com/apple/embedd...
github.com
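A hedged sketch of the kind of input such a viewer works with (this is not Embedding Atlas's own API; the model name and column names are illustrative assumptions): compute text embeddings, project them to 2D, and save a table the tool can load.

```python
# Prepare a table of texts, embeddings, and a 2D projection for exploration.
import pandas as pd
import umap                                    # pip install umap-learn
from sentence_transformers import SentenceTransformer

texts = ["federated learning", "differential privacy", "malware detection",
         "protein structure prediction", "adversarial robustness",
         "continual learning"]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(texts)                      # shape: (n_texts, 384)

# Project to 2D so points can be plotted and clustered visually.
xy = umap.UMAP(n_components=2, n_neighbors=3).fit_transform(emb)

df = pd.DataFrame({"text": texts, "x": xy[:, 0], "y": xy[:, 1]})
df.to_parquet("embeddings_2d.parquet")         # hand this file to the viewer
print(df.head())
```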
August 10, 2025 at 2:40 AM
MLE-STAR, an agent system by Google that automates complex machine learning (ML) pipelines: a true leap in the automation of machine learning engineering.

github.com/google/adk-s...

#GoogleAI #MLEGEN #AItools #AutoML #AIAgent
adk-samples/python/agents/machine-learning-engineering at main · google/adk-samples
A collection of sample agents built with the Agent Development Kit (ADK) - google/adk-samples
github.com
August 4, 2025 at 2:42 PM
New large-scale study finds: LLMs fail to re-identify the same bug 78% of the time after harmless code edits. Most models don’t really understand code, they just pattern-match.
arxiv.org/abs/2504.04372

#AI #LLM #CodeIntelligence #AIResearch #Debugging #SoftwareEngineering
How Accurately Do Large Language Models Understand Code?
Large Language Models (LLMs) are increasingly used in post-development tasks such as code repair and testing. A key factor in these tasks' success is the model's deep understanding of code. However, t...
arxiv.org
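A hedged sketch of one kind of "harmless" edit used in such tests (not the paper's pipeline): a semantics-preserving variable rename. The same "where is the bug?" prompt can then be run on both versions and the answers compared.

```python
# Rename local identifiers without changing program behaviour.
import ast

buggy_src = """
def average(values):
    total = 0
    for v in values:
        total += v
    return total / (len(values) - 1)   # bug: off-by-one denominator
"""

class RenameVars(ast.NodeTransformer):
    """Map old identifier names to new ones, leaving everything else intact."""
    mapping = {"total": "acc", "values": "xs", "v": "item"}

    def visit_Name(self, node):
        node.id = self.mapping.get(node.id, node.id)
        return node

    def visit_arg(self, node):
        node.arg = self.mapping.get(node.arg, node.arg)
        return node

tree = ast.parse(buggy_src)
renamed_src = ast.unparse(RenameVars().visit(tree))
print(renamed_src)   # same bug, different surface form
```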
July 9, 2025 at 7:16 PM
New AI research warns: LLM reasoning traces may leak private user data!

Privacy in AI isn't just about output — it's in the process too.

📖 www.marktechpost.com/2025/06/25/n...

#AI #LLM #Privacy #DataSecurity #ChainOfThought #AIResearch #ResponsibleAI
New AI Research Reveals Privacy Risks in LLM Reasoning Traces
New study exposes how reasoning traces in large language models leak sensitive user data, posing significant contextual privacy risks
www.marktechpost.com
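One simple mitigation sketch, not from the paper and far from complete: scrub obvious identifiers from a reasoning trace before it is logged or shown. Real contextual-privacy protection needs much more than these two illustrative patterns.

```python
# Redact obvious PII patterns from a chain-of-thought trace before logging.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def scrub_trace(trace: str) -> str:
    for label, pattern in PATTERNS.items():
        trace = pattern.sub(f"[{label}]", trace)
    return trace

trace = ("The user said their email is jane.doe@example.com and their "
         "phone is +82 10-1234-5678, so they likely live in Seoul...")
print(scrub_trace(trace))
```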
July 7, 2025 at 4:29 PM
ETH Zurich & Stanford researchers just dropped MIRIAD: a massive 5.8M-pair dataset to boost LLM reasoning accuracy in medical AI!

📚 www.marktechpost.com/2025/06/25/e...

#AI #MedicalAI #LLM #HealthcareInnovation #MIRIAD #StanfordAI #ETHZurich
ETH and Stanford Researchers Introduce MIRIAD: A 5.8M Pair Dataset to Improve LLM Accuracy in Medical AI
ETH and Stanford launch MIRIAD, a 5.8M medical QA dataset designed to boost LLM accuracy and reduce hallucinations in healthcare
www.marktechpost.com
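A hedged sketch of how such a QA-pair corpus can ground an LLM's answer via retrieval-style prompting. The dataset id, split, and column names below are placeholders; check the MIRIAD release for the real layout.

```python
# Load a medical QA-pair dataset and prepend a pair as grounding context.
from datasets import load_dataset

ds = load_dataset("miriad/miriad-5.8M", split="train")   # hypothetical id
print(ds.column_names)

example = ds[0]   # column names 'question'/'answer' are assumptions
context = f"Q: {example['question']}\nA: {example['answer']}"
prompt = context + "\n\nUsing the passage above, answer the new question: ..."
```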
July 7, 2025 at 4:28 PM
Reposted by Infolab@SKKU
In a recent talk at the Simons Institute, Somesh Jha made a case for applying a security and cryptography mindset to evaluating the trustworthiness of machine learning systems, particularly in adversarial and privacy-sensitive contexts.

bit.ly/43Xe0AP
Safety of GenAI through the Lens of Security and Cryptography
In this deliberately provocative two-part talk from the recent workshop on Theoretical Aspects of Trustworthy AI, Somesh Jha (University of Wisconsin) makes a case for applying a security and cryptogr...
bit.ly
May 28, 2025 at 7:25 AM
Reposted by Infolab@SKKU
New research reveals a vulnerability in language models: subtly crafted user feedback can manipulate a model's knowledge, injecting false information and vulnerable code into its outputs, and posing risks for AI applications that rely on user-driven preference tuning. https://arxiv.org/abs/2507.02850
LLM Hypnosis: Exploiting User Feedback for Unauthorized Knowledge Injection to All Users
ArXiv link for LLM Hypnosis: Exploiting User Feedback for Unauthorized Knowledge Injection to All Users
arxiv.org
July 4, 2025 at 5:40 PM
Reposted by Infolab@SKKU
Researchers have developed ExpProof, a new protocol using Zero-Knowledge Proofs to create trustworthy explanations for confidential machine learning models, ensuring model integrity and correct explanations in adversarial settings without losing confidentiality. https://arxiv.org/abs/2502.03773
ExpProof : Operationalizing Explanations for Confidential Models with ZKPs
ArXiv link for ExpProof : Operationalizing Explanations for Confidential Models with ZKPs
arxiv.org
June 2, 2025 at 11:20 PM
Reposted by Infolab@SKKU
The most important aspect when facing data shift is the type of shift present in the data. I will give below a few examples of shifts and some existing methods to compensate for them.🧵1/6
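A hedged sketch of one classic fix for covariate shift (not necessarily what the rest of the thread covers): estimate importance weights w(x) = p_target(x) / p_source(x) with a domain classifier, then reweight the training loss. Data here is synthetic.

```python
# Importance weighting for covariate shift via a source-vs-target classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_source = rng.normal(loc=0.0, size=(500, 5))   # training distribution
X_target = rng.normal(loc=0.7, size=(500, 5))   # shifted deployment distribution

# Train a classifier to tell source from target samples.
X = np.vstack([X_source, X_target])
domain = np.concatenate([np.zeros(500), np.ones(500)])
clf = LogisticRegression(max_iter=1000).fit(X, domain)

# With equal sample sizes, p(target|x) / p(source|x) equals the density ratio.
p_target = clf.predict_proba(X_source)[:, 1]
weights = p_target / (1.0 - p_target)
print("importance weights:", weights[:5])
# Pass `weights` as sample_weight when fitting the downstream model.
```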
July 1, 2025 at 9:39 AM
Reposted by Infolab@SKKU
Did you know computer science plays a vital role in many kinds of research? From methane-detecting satellites & digital models of human hearts, to using AI to improve pandemic preparedness or track leopards & listen to whales. #ResearchAppreciationDay
July 4, 2025 at 9:36 AM
Reposted by Infolab@SKKU
Federated Learning: A Survey on Privacy-Preserving Collaborative Intelligence

Edward Collins, Michel Wang

http://arxiv.org/abs/2504.17703
April 25, 2025 at 3:48 AM
Reposted by Infolab@SKKU
🚀 #SYNTHEMA + flower.ai = a new era for secure federated learning in healthcare!

With privacy & security at the core, we’re leveraging Flower’s open-source framework to enhance #AI model training—without sharing raw data.

👉 Learn more: synthema.eu/2025/02/17/s...

#HealthcareTech
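For readers new to the idea, a minimal sketch of federated averaging in plain numpy (this is not Flower's actual API): each site trains locally and only model updates, never raw patient data, are sent to the server for aggregation.

```python
# FedAvg in miniature: local training at each site, averaging at the server.
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """One site's local training steps on its own private data."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three sites with private data drawn around the same true model.
w_true = rng.normal(size=4)
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 4))
    y = X @ w_true + 0.1 * rng.normal(size=100)
    sites.append((X, y))

global_w = np.zeros(4)
for _ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)       # FedAvg aggregation

print("distance to true weights:", np.linalg.norm(global_w - w_true))
```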
February 18, 2025 at 8:55 AM
Reposted by Infolab@SKKU
Rob Romijnders, Stefanos Laskaridis, Ali Shahin Shamsabadi, Hamed Haddadi
NoEsis: Differentially Private Knowledge Transfer in Modular LLM Adaptation
https://arxiv.org/abs/2504.18147
April 28, 2025 at 4:48 AM
Reposted by Infolab@SKKU
Abrar Fahim, Shamik Dey, Md. Nurul Absur, Md Kamrul Siam, Md. Tahmidul Huque, Jafreen Jafor Godhuli
Optimized Approaches to Malware Detection: A Study of Machine Learning and Deep Learning Techniques
https://arxiv.org/abs/2504.17930
April 28, 2025 at 5:52 AM
Reposted by Infolab@SKKU
Firuz Juraev, Mohammed Abuhamad, Eric Chan-Tin, George K. Thiruvathukal, Tamer Abuhmed
From Attack to Defense: Insights into Deep Learning Security Measures in Black-Box Settings
https://arxiv.org/abs/2405.01963
May 6, 2024 at 5:13 PM
Reposted by Infolab@SKKU
Firuz Juraev, Mohammed Abuhamad, Simon S. Woo, George K Thiruvathukal, Tamer Abuhmed
Impact of Architectural Modifications on Deep Learning Adversarial Robustness
https://arxiv.org/abs/2405.01934
May 6, 2024 at 6:15 PM
Reposted by Infolab@SKKU
Local Differential Privacy is Not Enough: A Sample Reconstruction Attack against Federated Learning with Local Differential Privacy
Zhichao You, Xuewen Dong, Shujun Li, Ximeng Liu, Siqi Ma, Yulong Shen
http://arxiv.org/abs/2502.08151
February 13, 2025 at 4:33 AM
Reposted by Infolab@SKKU
(1/3) Apple open-sources a new Python library: a simulation framework for accelerating research in Private Federated Learning. 🧵👇🏼

#datascience #machinelearning #python #deeplearning
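A hedged sketch of the core idea behind private federated learning (this is not the library's API): clip each client's model update to bound its influence, then add Gaussian noise calibrated to that bound before aggregation, so the server only sees a differentially private average.

```python
# DP-FedAvg-style aggregation: clip client updates, add noise, average.
import numpy as np

rng = np.random.default_rng(0)

def clip(update, clip_norm=1.0):
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / norm)

clip_norm, noise_multiplier, n_clients = 1.0, 1.0, 50
client_updates = [rng.normal(size=10) for _ in range(n_clients)]

clipped_sum = np.sum([clip(u, clip_norm) for u in client_updates], axis=0)
noisy_sum = clipped_sum + rng.normal(scale=noise_multiplier * clip_norm, size=10)
private_avg = noisy_sum / n_clients
print(private_avg)
```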
March 4, 2024 at 2:16 PM
Reposted by Infolab@SKKU
Also note this alternative federated learning approach talk at the RDKit UGM: www.youtube.com/watch?v=Y1mj...
paper: pubs.acs.org/doi/10.1021/...
Andrea Andrews-Morger: Federated learning in computational toxicology
YouTube video by RDKit
www.youtube.com
November 19, 2024 at 1:51 AM