#RedTeaming
There are so. So. SO MANY stories of things like this, where even the tiniest bit of redteaming would have prevented so much fucking cleanup work. I could probably make a list of a dozen things they haven't patched the giant abuse potential out of yet, I just don't want to call TOO much attention--
September 26, 2025 at 11:37 AM
I would say that clearly they don't have anyone redteaming but honestly I'm not sure that seeing the obvious abuse potential in that "feature" counts as redteaming.

Like, it's not breaking and entering if the door is unlocked and there's a giant blinking sign over the door that says "VALUABLES."
September 26, 2025 at 3:18 AM
Excited for "Redteaming ADventures in Active Directory" tomorrow
November 12, 2024 at 9:49 PM
i'm genuinely concerned about how little T&S redteaming seems to have been done on the feature before it launched like.............. hellO,
January 24, 2025 at 1:43 PM
🔒 Plongez dans les techniques avancées de #RedTeaming avec cette formation complète donnée par Charles Hamilton.

Inscrivez vous rapidement! Vous avez jusqu'au 29 pour profiter de nos tarifs préférentiels!

#RedTeamTraining

https://nsec.io/training/2024-red-team.html
July 11, 2025 at 10:15 AM
Astroturfing is the practice of sowing fake support for something to sway public opinion. It's commonly seen as an underhanded advertisement tactic.
Non-sequitor = not sequential, as in a disconnected concept.
Redteaming is an aggressive cybersecurity testing method.
September 2, 2025 at 1:54 AM
Day 85 of looking at books through the lens of #CriticalSystemsThinking

💡 #BryceHoffman #RedTeaming advocates challenging of assumptions & structured questioning using multiple methods such as Devil’s Advocate, Pre-Mortems, Alternative Analysis, Think-Write-Share...

#complexity
June 13, 2025 at 10:09 AM
Mit der Übernahme des Sicherheitspionier SPLX will Zscaler den KI-Lebenszyklus absichern

#Bedrohungserkennung #KIAgent #KISicherheit #künstlicheIntelligenz #LargeLanguageModel #MCP #PromptHardening #RedTeaming @SPLX #ZeroTrust @Zscaler @Zscaler_DACH

netzpalaver.de/2025/...
November 5, 2025 at 1:14 PM
CDAO-sponsored #redteaming efforts identifies "over 800 findings of potential vulnerabilities and biases" related to the use of #LLM for #clinical note summarization & #medical advisory chatbot. #AI #DefenseAI
cc @aisupremacy.bsky.social
See: www.defense.gov/News/Release...
CDAO Sponsors Crowdsourced AI Assurance Pilot in the Context of Military Medicine
The Chief Digital and Artificial Intelligence Office has successfully concluded a Crowdsourced AI Red-Teaming Assurance Program pilot focused on the use of Large-Language Model chatbots in the context
www.defense.gov
January 3, 2025 at 7:18 AM
Is anyone using AWS to host redteaming or phishing infrastructure? Have you had infrastructure flagged or been contacted about needing to fill out a Simulated security events form before every test?
Is anyone using AWS to host redteaming or phishing infrastructure? Have you had infrastructure flagged or been contacted about needing to fill out a Simulated security events form before every test?
aws.amazon.com
April 29, 2025 at 2:39 PM
“Caught” isn’t failure — it’s feedback. Dave Spencer of Immersive explains why well-scoped red teaming turns cyber panic into preparedness. #CyberSecurity #RedTeaming #IncidentResponse
Red Teaming: How to turn cyber panic into cyber readiness
Red teaming reveals the truth behind cybersecurity readiness. Dave Spencer, Director of Technical Product Management at Immersive, explores how red team exercises move businesses beyond theoretical defences — testing real-world resilience across people, processes, and technology.
businessquarter.co.uk
September 9, 2025 at 9:43 AM
"The vital role of red teaming in safeguarding AI systems and data" (shorturl.at/JxDol) JK--Link to NIST redteaming risk management frameworks (shorturl.at/x4ucq)
The vital role of red teaming in safeguarding AI systems and data
AI red teaming offers an innovative, proactive method for strengthening AI while mitigating potential risks, helping organizations avoid costly AI incidents. Here’s how it works.
shorturl.at
January 7, 2025 at 4:24 PM
As someone who has done redteaming I am pretty sure this would go to minor nuisance/wontfix bins even if noticed and reported.

Implementing a user-facing UI that correctly presents trains as trains instead of a stream of kingsized cars is likely super low priority unless one can forge it into >
January 15, 2025 at 4:57 PM
🚀 The @salesforce.com #Responsible #AI & Tech team is looking for an experienced #ResponsibleAI Data Scientist w/ expertise in #ethical #RedTeaming. Collaborate with security, engineering, data science, & AI Research teams. Join us! #hiring #dreamJob
salesforce.wd12.myworkdayjobs.com/External_Car...
Senior or Principal Data Scientist - Technical AI Ethicist
To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts. Job Category Data Job Details About Salesforce We’re...
salesforce.wd12.myworkdayjobs.com
March 20, 2025 at 10:39 PM
WEBINAR: Red Teaming

📅 19.03.2025
🕚 11:00 Uhr
⏳ ca. 30 Minuten

Red Teaming. Ein simulierter Angriff auf ein Unternehmen. Diese Methode deckt Schwachstellen in allen Bereichen auf. Das Ziel: Die sensiblen Daten Ihres Unternehmens.

zoom.us/webinar/regi...

#webinar #redteaming #cybersecurity #it
Welcome! You are invited to join a webinar: Red Teaming - Der simulierte Angriff auf Ihr Unternehmen. After registering, you will receive a confirmation email about joining the webinar.
Sie machen sich sicher bereits viele Gedanken, wie Sie die Sicherheit Ihrer IT-Infrastruktur verbessern können. Vielleicht haben Sie einen Prozess der kontinuierlichen Verbesserung eingeführt, um Syst...
zoom.us
March 5, 2025 at 1:20 PM
Red Teaming Reveals AI Blind Spots
• According to Microsoft, 100+ AI products tested
• Simple attacks still work
• Continuous checks urged
AI security never ends
https://arxiv.org/pdf/2501.07238
#AI #Security #RedTeaming
arxiv.org
January 25, 2025 at 8:30 PM
CopyBench (EMNLP 2024, led by @tomchen0112.bsky.social)
Oral at regulatableml.github.io & Poster at redteaming-gen-ai.github.io

tldr: We benchmarked LLMs' literal/non-literal copying of copyrighted content—risks found even in 8B models.

Detais: www.arxiv.org/abs/2407.07087
The 2nd Workshop on Regulatable ML @NeurIPS2024
Towards Bridging the Gaps between Machine Learning Research and Regulations
regulatableml.github.io
December 8, 2024 at 2:55 AM
OpenAI potenzia il proprio piano di sicurezza con grant, AI difensive e monitoraggio agentico per guidare in modo sicuro lo sviluppo dell’AGI.

#agentiAI #AGI #bugbounty #cybersecurity #grantAI #openai #promptinjection #redteaming #resilienti #SpecterOps
www.matricedigitale.it/tech/intelli...
March 29, 2025 at 8:30 AM
I don't know most of those terms... oh geez. Uhm... Can you please explain:

Astro-turf
Sequitor
Redteaming

Why did I make that a list. Oh my gosh. I can even delete it and redo it but for some reason I'm not... What the heck, me?!
September 2, 2025 at 1:52 AM
Agentic AI Red Teaming Playbook: end-to-end methods for agentic layers, covering prompt injection, RAG data exfiltration, tool-chaining, and exploitation techniques. Practical, battle-tested examples. #redteaming #AI #LLMsecurity https://bit.ly/3WEgBMw
October 15, 2025 at 8:07 AM
I'm looking for #pentesting and #redteaming accounts. I'm hoping the offensive tooling #cybersecurity community will move away from Twitter so I can stop going there.
October 21, 2024 at 12:27 AM
I'm speaking as someone who spent 2 years entrenched in antiai activism via things like redteaming. We do not see you as an ally if your work is in improving these systems rather then reducing exploitation. In fact that would make you an enemy of the movement as a whole.
March 29, 2025 at 6:58 PM
how does one honestly fill out the "Request Access to Azure OpenAI Service" form?
I should note I'm using "redteaming" loosely here, as described/suggested in this documentation: MS RAI Impact Assessment Guide
not "offensive" as in "on the attack" per se, more like "gonna ask for concerning things"
November 20, 2023 at 12:27 PM