@arxivlens.bsky.social
🔬 Breaking down breakthrough research into bite-sized insights • Latest ArXiv • PubMed • BioRxiv • MedRxiv papers decoded daily
Built by @AvaneesaBee
https://arxivlens.com/
Why randomly breaking your AI makes it smarter: "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" (2014)
https://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf
September 25, 2025 at 1:00 PM
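The trick in miniature: each training step randomly silences units, so no single neuron can be relied on and the network stops co-adapting. A minimal NumPy sketch of the idea from the post above, using the now-common "inverted" scaling rather than the paper's test-time rescaling; the keep probability of 0.8 is an arbitrary choice for illustration:

    import numpy as np

    def dropout(activations, p_keep=0.8, training=True):
        """Randomly zero units during training; rescale the survivors."""
        if not training:
            return activations  # the full network is used at test time
        # Each unit survives independently with probability p_keep
        mask = np.random.rand(*activations.shape) < p_keep
        # Dividing by p_keep keeps the expected activation unchanged
        return activations * mask / p_keep

    h = np.random.randn(4, 8)  # a toy batch of hidden activations
    print(dropout(h))          # roughly 20% of entries are zeroed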
Part 1) The holy trinity of AI wrote the bible that launched a trillion-dollar industry: "Deep Learning" (Nature review, 2015) - LeCun, Bengio & Hinton's comprehensive overview that defined the field.
September 24, 2025 at 1:00 PM
"Sequence to Sequence Learning with Neural Networks" (Seq2Seq, 2014)
How two simple neural networks (an encoder and a decoder) learned to translate between languages
https://arxivlens.com/PaperView/Details/sequence-to-sequence-learning-with-neural-networks-3569-563847
September 23, 2025 at 1:00 PM
"Sequence to Sequence Learning with Neural Networks" (Seq2Seq, 2014)
How two simple neural networks learned to translate between any languages
https://arxivlens.com/PaperView/Details/sequence-to-sequence-learning-with-neural-networks-3569-563847
How two simple neural networks learned to translate between any languages
https://arxivlens.com/PaperView/Details/sequence-to-sequence-learning-with-neural-networks-3569-563847
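The "two networks" above are an encoder that squeezes the source sentence into a fixed-size state and a decoder that unrolls the translation from it. A minimal PyTorch sketch under toy assumptions: single-layer LSTMs and arbitrary sizes, where the paper stacks four layers and a much larger vocabulary:

    import torch
    import torch.nn as nn

    VOCAB, EMB, HID = 1000, 32, 64          # arbitrary toy sizes
    emb = nn.Embedding(VOCAB, EMB)
    encoder = nn.LSTM(EMB, HID, batch_first=True)
    decoder = nn.LSTM(EMB, HID, batch_first=True)
    readout = nn.Linear(HID, VOCAB)

    src = torch.randint(0, VOCAB, (1, 7))   # source sentence (token ids)
    tgt = torch.randint(0, VOCAB, (1, 5))   # target tokens emitted so far

    _, state = encoder(emb(src))            # source -> fixed-size state
    out, _ = decoder(emb(tgt), state)       # state seeds the translation
    logits = readout(out)                   # next-token scores per position
    print(logits.shape)                     # torch.Size([1, 5, 1000])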
"A Neural Algorithm of Artistic Style" (Neural Style Transfer, 2015)
The paper that turned your vacation photos into Van Gogh masterpieces
https://arxivlens.com/PaperView/Details/a-neural-algorithm-of-artistic-style-8889-661439
September 22, 2025 at 1:00 PM
"A Neural Algorithm of Artistic Style" (Neural Style Transfer, 2015)
The paper that turned your vacation photos into Van Gogh masterpieces
https://arxivlens.com/PaperView/Details/a-neural-algorithm-of-artistic-style-8889-661439
The paper that turned your vacation photos into Van Gogh masterpieces
https://arxivlens.com/PaperView/Details/a-neural-algorithm-of-artistic-style-8889-661439
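Under the hood it is just an optimization over the image itself: keep the photo's content features while matching the painting's Gram matrices (channel correlations) from a CNN. A NumPy sketch of the combined loss with made-up feature maps and simplified normalization; alpha and beta are arbitrary weights whose ratio the paper tunes:

    import numpy as np

    def gram(features):
        # features: (channels, height*width) maps from one CNN layer
        return features @ features.T  # channel-by-channel correlations

    # Hypothetical feature maps: content photo, style image, generated image
    content_f, style_f, gen_f = (np.random.randn(64, 100) for _ in range(3))

    content_loss = np.mean((gen_f - content_f) ** 2)          # keep the scene
    style_loss = np.mean((gram(gen_f) - gram(style_f)) ** 2)  # copy the texture
    alpha, beta = 1.0, 1e-4
    total_loss = alpha * content_loss + beta * style_loss
    print(total_loss)  # gradient descent on the image minimizes this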
"Playing Atari with Deep Reinforcement Learning"
How AI learned to play video games better than humans (and why that matters)
https://arxivlens.com/PaperView/Details/playing-atari-with-deep-reinforcement-learning-6157-495162
September 21, 2025 at 1:00 PM
"Playing Atari with Deep Reinforcement Learning"
How AI learned to play video games better than humans (and why that matters)
https://arxivlens.com/PaperView/Details/playing-atari-with-deep-reinforcement-learning-6157-495162
How AI learned to play video games better than humans (and why that matters)
https://arxivlens.com/PaperView/Details/playing-atari-with-deep-reinforcement-learning-6157-495162
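The core of the method is ordinary Q-learning with a neural network as the value function: from raw pixels, regress the chosen action's value toward reward-plus-discounted-best-future. A toy NumPy sketch of that Bellman target; the numbers and the four-action space are made up:

    import numpy as np

    gamma = 0.99                 # discount factor
    next_q = np.random.randn(4)  # Q(s', a) for 4 joystick actions
    reward, done = 1.0, False    # one transition from the replay memory

    # Bellman target: reward now plus the discounted best future value
    target = reward if done else reward + gamma * next_q.max()
    print(target)  # the network is trained toward this for the action taken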
Language Models are Few-Shot Learners (GPT-3, 2020) - Showed massive scale could enable in-context learning.
175 billion reasons why bigger isn't just better ... it's magic!
https://arxivlens.com/PaperView/Details/language-models-are-few-shot-learners-7722-2063868
September 20, 2025 at 1:00 PM
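"In-context learning" means the task is specified entirely in the prompt, with no weight updates; the worked demonstrations are the "shots". A sketch built on the English-to-French example from the paper:

    prompt = ("Translate English to French:\n"
              "sea otter => loutre de mer\n"
              "peppermint => menthe poivrée\n"
              "cheese =>")
    # A large enough model completes this with "fromage"; swap in new
    # demonstrations and the same frozen weights perform a new task.
    print(prompt)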
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://arxivlens.com/PaperView/Details/bert-pre-training-of-deep-bidirectional-transformers-for-language-understanding-4975-2321825
September 19, 2025 at 6:47 PM