• Part-time at Multimodal Intelligence Lab@EdUHK
• fastai student
• I can speak a little Mandarin
• Loves learning
• Sci-tech geek and space nerd
• Has been to 15 countries
🔗 forbo7.github.io
📍 China
Fusion feels like MS Word: the workflow feels odd, certain functions don't work as you'd expect, others fail for no obvious reason, and certain logical keyboard shortcuts simply don't exist.
Can get a similarly specced (13GB RAM/113GB Storage/2vCPU) RTX A4000 (which has an extra GB of VRAM and outperforms T4) for roughly $1 less!
And then GPU renters provide access to a "whole Linux machine" you can SSH into, whereas Colab is a notebook frontend plus terminal access.
For $10 on Colab: 20.5 hours of V100 use
For $10 on Tensordock: 33 hours of V100
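The hours-per-$10 figures above follow directly from the hourly rates. A quick sketch of that arithmetic (the Colab rate is implied by "20.5 hours for $10", not an official price; the Tensordock rate is the ~$0.30/hr quoted below):

```python
def hours_for_budget(budget_usd: float, rate_per_hour: float) -> float:
    """Hours of GPU time a budget buys at a given hourly rate."""
    return budget_usd / rate_per_hour

# Rates assumed from the posts in this thread, not vendor pricing pages.
colab_v100_rate = 10 / 20.5       # ≈ $0.49/hr, implied by "20.5 hours for $10"
tensordock_v100_rate = 0.30       # ~$0.30/hr, quoted later in the thread

print(round(hours_for_budget(10, colab_v100_rate), 1))       # 20.5
print(round(hours_for_budget(10, tensordock_v100_rate), 1))  # 33.3
```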
I just checked on Tensordock and you can get a V100 there (256GB/32GB/8vCPU) for ~$0.30/hr. Same A100 machine is ~$1.20/hr.
I've only used GPU rent-by-the-hour, and for ref, A4000 is ~$0.11/hr, and an RTX4090 is ~$0.45/hr.
If you bite, tell us how it goes :P
www.mathsisfun.com/combinatoric...
www.mathsisfun.com/data/weighte...