April 17, 2025 at 2:48 AM
Thanks for sharing the insights on smoother pooling operations; they give me a lot to think about! And I generally agree about the limitations of filter interpretation. Going over the various attribution methods, most of which were developed in non-biology contexts, it's definitely tricky to gauge their reliability.
April 17, 2025 at 1:43 AM
(And speaking as a student in Peter Koo's lab...) pooling in early layers could help encourage the initial conv filters to capture more complete patterns rather than partial ones.
(But I like the ChromBPNet architecture, and the separation of bias signal in the PISA paper was beautiful!)
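For concreteness, a minimal PyTorch sketch of the "early pooling" idea; the sizes here (a 19-bp first-layer kernel, pool width 8) are illustrative assumptions, not taken from any model discussed in this thread:

```python
import torch
import torch.nn as nn

# Toy sketch: a wide first-layer conv followed immediately by max-pooling.
# The intuition is that pooling discards fine positional detail, so later
# layers can't easily stitch partial motif matches back together -- which
# pushes the first-layer filters to match complete motifs on their own.
first_layer = nn.Sequential(
    nn.Conv1d(in_channels=4, out_channels=64, kernel_size=19, padding=9),  # one-hot DNA input
    nn.ReLU(),
    nn.MaxPool1d(kernel_size=8),  # early pooling right after the first conv
)

x = torch.randn(1, 4, 1000)     # (batch, A/C/G/T channels, sequence length)
print(first_layer(x).shape)     # -> torch.Size([1, 64, 125])
```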
April 17, 2025 at 1:43 AM
I agree it's more of a workaround for pooling architectures than a complete solution. But I'm not sure how to achieve long receptive fields without over-parameterizing the model if pooling isn't an option.
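One common pooling-free route to long receptive fields is a stack of dilated convolutions with exponentially growing dilation, the approach taken by BPNet-family models such as ChromBPNet. A minimal PyTorch sketch, where the depth and channel width are arbitrary illustrative choices:

```python
import torch.nn as nn

# Stacked dilated convs: parameters grow linearly with depth while the
# receptive field grows exponentially, so no pooling is needed.
layers, rf = [], 1
for i in range(9):  # dilations 1, 2, 4, ..., 256 (depth chosen for illustration)
    d = 2 ** i
    # padding=d keeps the sequence length constant for a kernel_size-3 conv
    layers += [nn.Conv1d(64, 64, kernel_size=3, dilation=d, padding=d), nn.ReLU()]
    rf += 2 * d  # each kernel_size-3 dilated conv adds 2*d to the receptive field

stack = nn.Sequential(*layers)
print(rf)                                              # 1023-bp receptive field
print(sum(p.numel() for p in stack.parameters()))      # ~111k weights, no pooling
```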
These strategies reduce technical variance and improve eQTL benchmarking, but substantial variance remains, pointing to the need for boundary-free architectures or more robust training. 2/2