Marcus Botacin
marcusbotacin.bsky.social
CS Assistant Professor at Texas A&M University, USA
CS PhD @ SECRET UFPR , Brazil
CE/CS Master @ UNICAMP, Brazil
#Malware Research; #Reverse Engineering #Antivirus #Forensics #Debugging
@MarcusBotacin@infosec.exchange
Website: marcusbotacin.github.io
Want to know more? Check our work!
July 11, 2025 at 2:32 PM
And there are pretty significant cases of dataset imbalance in popular malware datasets, such as DREBIN. See the results for more than 5K runs with different configurations:
July 11, 2025 at 2:32 PM
This includes false positives (in the drift detection report). We are able to pinpoint, for instance, when an FP occurs because the model did not learn enough due to class imbalance.
July 11, 2025 at 2:32 PM
The result is that this approach can explain what is happening at every drift point.
July 11, 2025 at 2:32 PM
We created an entire taxonomy of when drift happens and when it does not, for the more formally inclined.
July 11, 2025 at 2:32 PM
We also identified that concept drift is directional, i.e., only expansions towards the border cause true drift in the main classifier. Therefore, by measuring directionality we can predict whether a concept expansion will cause a drift in the future and even anticipate it (early retraining).
July 11, 2025 at 2:32 PM
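A minimal sketch of the directionality idea in pure Python (not the paper's implementation — using the direction toward the other class's centroid as a proxy for the boundary direction is my own assumption):

```python
# Hypothetical sketch: decide whether a concept expansion is "directional",
# i.e., moving toward the decision boundary, by projecting the centroid
# shift onto the (approximate) boundary direction.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def drift_toward_boundary(old_pts, new_pts, other_centroid):
    """True if the class centroid moved toward the other class (a proxy
    for the boundary): the shift has a positive projection on the
    direction from the old centroid to the other class's centroid."""
    old_c = centroid(old_pts)
    new_c = centroid(new_pts)
    shift = (new_c[0] - old_c[0], new_c[1] - old_c[1])
    toward = (other_centroid[0] - old_c[0], other_centroid[1] - old_c[1])
    return shift[0] * toward[0] + shift[1] * toward[1] > 0

# Benign cluster near the origin, malicious cluster around (10, 10).
benign_old = [(0, 0), (1, 0), (0, 1)]
benign_new = [(2, 2), (3, 2), (2, 3)]  # expanded toward the other class
print(drift_toward_boundary(benign_old, benign_new, (10, 10)))  # True: retrain early
```

An expansion away from the boundary would return False, so no early retrain would be triggered.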
We detect these cases via an architecture of external meta-models that can be applied to any internal ML model. The meta-models measure the concepts while the main model measures the boundaries. True drifts are reflected by changes in both the meta-models and the boundaries; false ones affect only the boundary.
July 11, 2025 at 2:32 PM
Our insight is that there is a difference between the concept (circles) and the decision boundary (lines) of a classifier. Sometimes samples cross the boundary because of concept expansion (true drift), but sometimes because the line is misplaced (false-positive drift). We want to tell these cases apart.
July 11, 2025 at 2:32 PM
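The concept-vs-boundary distinction can be sketched in a few lines (a toy illustration under my own assumptions — a centroid-plus-radius region standing in for a meta-model, not the actual architecture):

```python
# Toy sketch: the main model gives a boundary decision; a per-class
# "meta-model" (here just centroid + max radius) tracks the concept.
# A boundary crossing with an unchanged concept is a false-positive drift.

import math

class ConceptModel:
    """Toy concept tracker: class centroid plus maximum observed radius."""
    def __init__(self, points):
        n = len(points)
        self.cx = sum(p[0] for p in points) / n
        self.cy = sum(p[1] for p in points) / n
        self.radius = max(math.dist((self.cx, self.cy), p) for p in points)

    def inside(self, p):
        return math.dist((self.cx, self.cy), p) <= self.radius

def classify_drift(sample, crossed_boundary, concept):
    """True drift: boundary crossed AND sample left the concept region.
    False drift: boundary crossed but concept unchanged (misplaced line)."""
    if not crossed_boundary:
        return "no drift"
    return "true drift" if not concept.inside(sample) else "false-positive drift"

benign = ConceptModel([(0, 0), (1, 0), (0, 1), (1, 1)])
print(classify_drift((0.5, 0.5), True, benign))  # false-positive drift
print(classify_drift((5.0, 5.0), True, benign))  # true drift
```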
See you in the next offering!
May 2, 2025 at 12:11 AM
All the vulnerabilities were disclosed to the developers. Many of them (unfortunately not all) responded and even fixed the issues, which is great!
May 2, 2025 at 12:11 AM
I recorded some of the classes, if you are interested: www.youtube.com/watch?v=E8qV...
[SW Security] Random Number Generators: Demo
May 2, 2025 at 12:11 AM
But don't worry: the students were able to patch many of those vulnerabilities and to verify many other patches, such as these escapes:
May 2, 2025 at 12:11 AM
In a more sophisticated attack, one team was able to abuse an Intent to move the window to the foreground while screenshotting it via accessibility services.
May 2, 2025 at 12:11 AM
The previous attack was run against a mobile app. What happens when the app is protected by a password? Well, the students could brute-force it.
May 2, 2025 at 12:11 AM
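For intuition, the brute-force loop is tiny (check_pin below is a hypothetical stand-in for the app's login check, not the actual target; real apps should rate-limit and lock out to prevent exactly this):

```python
# Minimal brute-force sketch over a 4-digit PIN space (10**4 candidates).

from itertools import product
from string import digits

def check_pin(pin):
    """Hypothetical stand-in for the protected app's password check."""
    return pin == "4821"

def brute_force(length=4):
    """Try every digit combination of the given length until one passes."""
    for combo in product(digits, repeat=length):
        guess = "".join(combo)
        if check_pin(guess):
            return guess
    return None

print(brute_force())  # "4821", found after at most 10**4 attempts
```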
In the worst case, one could remotely trigger user deletion by manipulating the client-side requests.
May 2, 2025 at 12:11 AM
So why not set it to the maximum possible value?
May 2, 2025 at 12:11 AM
Another classical attack: MITM. One team identified an application (a game) whose credits were set on the client side and never validated by the server.
May 2, 2025 at 12:11 AM
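To see why client-side credits are dangerous, here is a hedged sketch (the request fields and server logic are made up for illustration): a MITM proxy only needs to rewrite one field before forwarding.

```python
# Illustrative flaw: the game client reports its own credit balance in the
# request, and the server trusts it instead of checking its own records.

def server_apply_purchase(request):
    # Vulnerable: trusts the client-supplied balance.
    if request["credits"] >= request["price"]:
        return "purchase approved"
    return "insufficient credits"

legit = {"user": "alice", "credits": 10, "price": 500}
tampered = dict(legit, credits=2**31 - 1)  # one field rewritten in transit

print(server_apply_purchase(legit))     # insufficient credits
print(server_apply_purchase(tampered))  # purchase approved
```

The fix is the usual one: the server must look up the balance from its own state and treat every client-supplied value as untrusted.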
OK, sometimes the students go a bit overboard with how much payload they add to the requests...
May 2, 2025 at 12:11 AM