Artem Moskalev
@artemmoskalev.bsky.social
Re-imagining drug discovery with AI 🧬. Deep Learning ⚭ Geometry. Previously PhD at the University of Amsterdam. https://amoskalev.github.io/
We test the proposed geometric long convolution on multiple large-molecule property and dynamics prediction tasks for RNA and protein biomolecules. Geometric Hyena is on par with or better than equivariant self-attention at a fraction of its computational cost.

6/8
June 4, 2025 at 8:03 AM
To evaluate the long geometric context capabilities of our models, we introduce a geometric extension of the mechanistic interpretability suite. Specifically, we evaluate equivariant models on equivariant associative recall tasks of increasing complexity.

5/8
June 4, 2025 at 8:03 AM
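A toy instance of what such a task might look like (an illustrative construction, not the paper's benchmark; `make_equivariant_recall` is a hypothetical helper): keys and values are 3D vectors, and the query is one of the keys transformed by a random rotation, so recall must succeed under the group action rather than by exact matching.

```python
import numpy as np

def make_equivariant_recall(num_pairs, rng):
    """Generate one toy equivariant associative recall instance.

    The sequence holds (key, value) pairs of 3D vectors. The query is a
    key acted on by a random rotation Q; the correct answer is the
    matching value acted on by the same Q.
    """
    keys = rng.normal(size=(num_pairs, 3))
    values = rng.normal(size=(num_pairs, 3))
    # random rotation via QR decomposition, with det fixed to +1
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    Q *= np.sign(np.linalg.det(Q))
    idx = int(rng.integers(num_pairs))
    query = keys[idx] @ Q.T
    target = values[idx] @ Q.T
    return keys, values, query, target, idx
```

Because rotations preserve norms, invariant features of the query still identify the matching key, but recovering the target value requires tracking the group action itself.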
Inspired by the recent success of long-context models, long convolutions, and the Hyena hierarchy, we propose their geometric counterpart. We rely on the FFT to push the computational complexity to N log N, adapting it for vector features. The implementation is simple — just 50 lines of code!

4/8
June 4, 2025 at 8:03 AM
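The core of an FFT-based long convolution can be sketched in a few lines (an illustrative reconstruction, not the paper's code; the function name and shapes are assumptions): a shared scalar filter is applied per vector component in the frequency domain, which keeps the operation linear per component — hence commuting with global rotations — and O(N log N).

```python
import numpy as np

def fft_long_conv(x, k):
    """Long convolution via FFT in O(N log N).

    x: (N, 3) sequence of 3D vector features.
    k: (N,) scalar filter shared across vector components, so the
       operation commutes with a global rotation of x.
    """
    N = x.shape[0]
    L = 2 * N  # zero-pad to avoid circular wrap-around
    Kf = np.fft.rfft(k, n=L)           # filter spectrum
    Xf = np.fft.rfft(x, n=L, axis=0)   # per-component spectra
    y = np.fft.irfft(Xf * Kf[:, None], n=L, axis=0)[:N]
    return y
```

Zero-padding to 2N turns the circular FFT convolution into a linear one, matching a direct convolution truncated to the first N outputs.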
In many biological and physical systems, we need equivariance + global context. This leads to quadratic complexity with respect to system size, multiplied by the cost of equivariance. Existing equivariant models are not equipped to work at that scale 🫠.

3/8
June 4, 2025 at 8:03 AM
ICML Spotlight 🚨 Equivariance is too slow and expensive, especially when you need global context. It makes us wonder: is it even worth it? We present Geometric Hyena Networks — a simple equivariant model that is orders of magnitude more memory- and compute-efficient for high-dimensional data.

1/8
June 4, 2025 at 8:03 AM
North Sea weekend 🌊
March 2, 2025 at 7:24 PM
What did we learn? In the presence of severe noise, a simple sequence transformer without any geometry works best, but it requires much more data to converge! At the same time, 3D geometric GNNs are the most vulnerable to geometric noise.
February 3, 2025 at 8:57 AM
We study different types of neural networks on various RNA representations: 1D vs 2D vs 3D. We evaluate property prediction performance, noise robustness, data efficiency, and OOD noise generalization.
February 3, 2025 at 8:56 AM
HELM integrates codon-level hierarchy through hierarchical cross entropy, resulting in an 8% improvement in mRNA property prediction across various types and species. We can also generate more diverse sequences while maintaining biological plausibility.

3/5
January 23, 2025 at 11:52 AM
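One way such a hierarchical cross entropy can be set up (a minimal sketch under assumptions, not the HELM implementation; `hierarchical_ce` and the `parent_of` grouping are hypothetical): combine leaf-level cross entropy over codons with cross entropy over their parent classes, obtained by aggregating leaf probabilities per parent.

```python
import numpy as np

def hierarchical_ce(logits, token_target, parent_of):
    """Two-level hierarchical cross entropy (illustrative sketch).

    logits: (V,) unnormalised scores over V leaf tokens (e.g. codons).
    token_target: index of the correct leaf token.
    parent_of: (V,) array mapping each leaf to its parent class
               (e.g. the amino acid a codon encodes).
    Predictions that hit the right parent but the wrong leaf incur
    less total loss than predictions that miss both levels.
    """
    # stable softmax over leaves
    p = np.exp(logits - logits.max())
    p /= p.sum()
    leaf_ce = -np.log(p[token_target])
    # aggregate leaf probabilities into parent-class probabilities
    num_parents = int(parent_of.max()) + 1
    parent_p = np.zeros(num_parents)
    np.add.at(parent_p, parent_of, p)
    parent_ce = -np.log(parent_p[parent_of[token_target]])
    return leaf_ce + parent_ce
```

Since each parent probability is a sum of its leaves' probabilities, the parent term is never larger than the leaf term, and the hierarchy only softens, never sharpens, the penalty.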
Accepted to ICLR 🚨 Don't treat the language of biology as natural language! Biology speaks in hierarchical patterns that natural language models don't fully capture. Meet HELM: a method to align your bio-Language Model with the intrinsic structure of mRNA sequences.

arxiv.org/abs/2410.12459

1/5
January 23, 2025 at 11:51 AM