Matthias Schulze
@percepticon.bsky.social
PhD in political science, studying infosec, cyber conflict & information war at IFSH. Self-taught hacker & blue team.

Blog and podcast about my work over at https://percepticon.de or https://ioc.exchange/@percepticon
Information control on YouTube during Russia’s invasion of Ukraine #cybersecurity #infosec
Information control on YouTube during Russia’s invasion of Ukraine
This research note investigates the aftermath of YouTube’s global ban on Russian state-affiliated media channels in the wake of Russia’s full-scale invasion of Ukraine in 2022. Using over 12 million YouTube comments across 40 Russian-language channels, we analyzed the effectiveness of the ban and the shifts in user activity before and after the platform’s intervention. We found that YouTube, in accordance with its promise, effectively removed user activity across the banned channels. However, the ban did not prevent users from seeking out ideologically similar content on other channels and, in turn, increased user engagement on otherwise less visible pro-Kremlin channels.

By Yevgeniy Golovchenko (Department of Political Science, University of Copenhagen, Denmark), Kristina Aleksandrovna Pedersen (Department of International Economics, Government and Business, Copenhagen Business School, Denmark), Jonas Skjold Raaschou-Pedersen (Copenhagen Center for Social Data Science, University of Copenhagen, Denmark), and Anna Rogers (Computer Science Department, IT University of Copenhagen, Denmark)

Research Questions
* How effective was YouTube’s global ban on Russian state-affiliated channels in reducing (commenting) activity on these channels?
* To what extent did users previously active on banned channels redirect their engagement to other types of political content on YouTube?

Essay Summary
* We collected over 12 million comments across a range of pro- and anti-Kremlin Russian-language YouTube channels during Russia’s full-scale invasion of Ukraine.
* The analysis focuses on YouTube’s global ban on several Kremlin-affiliated YouTube channels and the subsequent changes in commenting activity.
* Comment activity on banned channels dropped sharply to near zero immediately after the ban, indicating that the ban was, in fact, successful in preventing exposure to these channels.
* Users previously engaging with banned channels substantially increased engagement on other (non-blocked) pro-Kremlin channels in the weeks following the ban.
* This suggests a potential “substitution effect,” either through users actively seeking out alternative outlets in the wake of the ban or through YouTube’s algorithmic recommendations.
* These findings have important implications for our understanding of information control as a means of suppressing disinformation sources. Global bans can prevent users from accessing certain content. However, we also show the challenges of such policies by empirically illustrating how bans can redirect at least some of the online engagement toward ideologically similar alternatives.

Implications
The war between Russia and Ukraine takes place on physical battlefields as well as in the information space. This became apparent during Russia’s invasion of Ukraine in 2014 and remains relevant during the full-scale invasion launched on February 24, 2022. Since the beginning of the ongoing war, numerous scholars across various fields have noted that the so-called “information war” plays an important political and military role (Darczewska, 2014; Thornton, 2015). While the scholarly community, as well as the general public, has largely focused on the production and dissemination of content, an important part of the informational struggle also takes place through information control, in Russia and Ukraine and even in the EU (European Commission, 2022; Golovchenko, 2022).
This research note focuses on online user activity on YouTube, one of the world’s most popular social media platforms. YouTube, among other large social media platforms, has long been criticized for allowing hate speech and disinformation to spread without taking much action. These concerns intensified on the heels of Russia’s military aggression and crackdowns on independent media (Milmo, 2022). Simultaneously, YouTube also plays a valuable role in disseminating information and regime-critical opinions in autocracies like Russia (Gainous et al., 2018; Reuter & Szakonyi, 2015). This double-edged nature makes YouTube an important platform in the struggle for “truth” about the war.

Pro-Kremlin disinformation about the war in Ukraine and the Kremlin’s strategic information control have also been met with great concern in the West. Russian state-controlled media, such as Sputnik and RT (formerly Russia Today), are widely recognized among researchers, fact-checkers, and the broader public as active perpetrators in the dissemination of disinformation (BBC, 2019; Elliot, 2019; Golovchenko, 2020; Thornton, 2015; for an overview of the outlets’ reach, see Kling et al., 2022). On March 2, 2022, the European Union responded by banning access to these channels to limit “the Kremlin’s disinformation and information manipulation assets” (European Commission, 2022). On March 11, YouTube took a further step by announcing a block of Russian state media as a whole across the platform, based on its policy against content that “denies, minimizes or trivializes well-documented violent events” (Reuters, 2022). This global ban is the focus of our research note.

Using publicly available data from YouTube’s API, this research note assesses the effectiveness and implications of YouTube’s ban in reducing engagement with Russian state-affiliated media. We restricted our analysis to Russian-language YouTube channels, as these target not only domestic audiences but also Russian speakers abroad, including the Russian diaspora and large Russian-speaking populations in several post-Soviet states (Cheskin & Kachuyevski, 2018). Prior research has demonstrated that state-owned Russian-language media contributed to polarization during parliamentary elections in Ukraine, underscoring the scope of Russian-language political content disseminated by Kremlin-affiliated outlets (Rozenas & Pesakhin, 2018).

We operationalize engagement as the number of comments for each video (for a discussion of the relation between comments and engagement, see Byun et al., 2023). Commenting serves as an important proxy for online activity because a high comment count also implies a high number of views. However, engagement through comments is also an important resource in its own right that can be used to gain even more visibility. While YouTube does not disclose the details of its algorithm, the platform has indicated that video visibility—for example, in search results—is also influenced by engagement (YouTube, n.d.).

Our results suggest that YouTube’s ban on Russian state media almost eliminated online engagement with their videos. However, we also observed a sudden and discontinuous increase in commenting engagement on non-banned pro-Kremlin channels. We corroborated this further by showing that users who were active on blocked pro-Kremlin channels before the ban responded to the policy by increasing their activity on these non-blocked pro-Kremlin channels. The findings have two important implications.
Firstly, we can independently confirm that YouTube did follow through on its effort to limit Russian disinformation. While there is a debate in the literature on the effectiveness of information control policies (Gläßel & Paula, 2020; Gohdes, 2020; Hobbs & Roberts, 2018; Jansen & Martin, 2015; Roberts, 2020; Shadmehr & Bernhardt, 2015), our findings partly support the claim that such policies can limit “undesirable” information (Chen & Yang, 2019; King et al., 2013; Stockmann, 2013; Stern & Hassid, 2012). This is also in line with Santos Okholm et al. (2024), who found that the geo-blocking of the Russian RT and Sputnik within the EU’s territory added friction and reduced the sharing of these outlets on Facebook.

Secondly, the findings also highlight the limits of online bans as a means of fighting disinformation. We show empirically that some of the activity may have moved to channels known for spreading disinformation about Russia’s invasion of Ukraine. The sudden increase in commenting engagement among non-blocked pro-Kremlin channels supports the notion of a “substitution effect,” a pattern where at least some of the engagement from the blocked channels shifted to non-banned parts of the pro-Kremlin media ecology on YouTube. This could be driven either by users’ direct efforts to search for non-blocked alternatives that may offer similar content or indirectly by YouTube’s suggestion algorithms, which introduce new pro-Kremlin content to users based on their viewing history. Substitution of banned or blocked information has previously been documented across different contexts, including in authoritarian regimes’ moderation of online communities and in deplatforming studies investigating migration to alternative platforms (Buntain et al., 2023; Chandrasekharan et al., 2017; Horta Ribeiro et al., 2023; Roberts, 2018; Rogers, 2020). This research note focuses on within-platform migration and substitution of content. While it is not possible to isolate the main mechanism behind this within the scope of this study, our findings emphasize the challenges of online bans: while the initial bans can be effective, they may not be sufficient to fully curb disinformation efforts on a broader scale.

It is outside the scope of this research note to estimate the final net effect of the ban on the pro-Kremlin environment on YouTube as a whole. Theoretically, one can expect that a portion of the audience of the banned channels did not find their way to the non-banned alternatives. In this case, the online activity for pro-Kremlin YouTube content would be reduced overall. It is therefore likely that the ban succeeded in disrupting the pro-Kremlin YouTube media environment, despite the substitution effect captured in this research note. We encourage future research to empirically test whether this is the case. Additionally, further research is encouraged to investigate whether the ban prompted pro-Kremlin audiences to migrate to other platforms in search of the banned pro-Kremlin content.

Furthermore, the findings are limited to engagement through non-deleted comments; they do not reveal to what extent an immediate decline in viewership followed the YouTube ban. The latter was not possible to measure because the data was collected after the ban, and YouTube’s API only provided access to the latest view count rather than historical changes. The exact date or nature of the ban was not, to the best of our knowledge, publicly announced by YouTube in advance.
The advantage of commenting data is that each individual comment is time-stamped, enabling post-hoc historical studies of bans. The analysis does not geolocate the commenting activity, for both pragmatic and ethical reasons. It is possible that the commenting activity on Russian state media channels declined mainly among Russian-speaking audiences outside the Russian Federation but only to a lesser degree within the country, or vice versa.

Despite these limitations, our findings serve as a reminder that similar social media policies should not view state-affiliated channels in isolation but instead consider them as part of a broader ecology that promotes similar propaganda and disinformation narratives, regardless of the actual funding or formal state affiliation. Going beyond the case of Russia’s invasion of Ukraine, it is theoretically possible that bans in other contexts could redirect engagement to non-banned substitutes that are even more prone to spreading disinformation. If policymakers or social media firms choose to combat disinformation through similar bans, it is important that such measures are sufficiently broad in scope from the outset and also encompass more fringe sources, in order to minimize the risk of users substituting harmful content with even more extreme versions of the blocked sources. Perhaps more importantly, one should always consider the additional risk of harmful substitution when making such decisions. This requires not only empirical analysis but also, ideally, data access for independent analysts who can critically examine both the intended and unintended consequences of these interventions. Additionally, this study is agnostic on the appropriateness of removing social media content based on accuracy assessments; rather, we focus on the effects of doing so.

Findings

Finding 1: YouTube’s ban on Russian state-affiliated media successfully reduced activity on the blocked channels.

First, we examined the effects of YouTube’s ban on Russian state-affiliated media. Figure 1 shows the change in the daily number of comments on the videos from the respective channels. We show activity among regime-critical outlets, as well as the relatively apolitical Russian-language entertainment channels, as a baseline. There was a sharp and strong decline in comment engagement for Russian state-affiliated media (bottom left) on the day after YouTube announced its global ban policy. This includes a decline in major, mainstream Kremlin-affiliated media outlets like Rossiya 24 and relatively popular yet more niche outlets like the ultra-conservative Tsargrad and Zvezda, run by the Russian Ministry of Defense (see Appendix B for the complete list of channels). In contrast, we observe no sharp and discontinuous drop among the other channels. The latter supports the interpretation that the drastic decline was likely the result of YouTube’s targeted ban rather than a broader decline in the Russian-speaking YouTube environment.

Figure 1. Change in the number of comments for banned pro-Kremlin media and entertainment channels. February 24 and March 4, 2022, are marked with grey and red lines, respectively.

Looking at the trends in commenting activity within the 40 days prior to the ban, Figure 1 shows a notable increase at the onset of the full-scale invasion, peaking at 250,000 daily comments.
This is followed by a slight drop in commenting activity coinciding with the implementation of heightened censorship measures, before comments level out.[1] Comparing the commenting activity among blocked channels in the 10 days preceding the ban with the first 10 days after the ban (March 12–22), the daily number of comments drops from 12,517 to 23. While commenting activity on the blocked pro-Kremlin channels thus drops to roughly 0.18% of its pre-ban level, it does not disappear completely. A few comments were made on the blocked channels during the period after the ban. Appendix C shows an overview of the post-ban activity of the blocked channels. A deeper investigation of why this activity continued is outside the scope of this research note. However, it is an important factor to keep in mind, as it could indicate that the ban was not fully implemented everywhere (at least not at once). Nevertheless, the findings indicate that engagement among the banned pro-Kremlin channels was severely reduced following the ban.

These findings confirm that YouTube successfully limited the online activity tied to the Russian state-affiliated channels in the sample. Arguably, these are also the most influential Kremlin-affiliated channels. Therefore, while we cannot comment on the effectiveness of the ban on channels outside of this sample, we can reaffirm that the ban did halt the activity on some of the largest spreaders of state-sponsored pro-Kremlin content.

Finding 2: YouTube’s global ban was potentially accompanied by a “substitution effect” where some commenting engagement from the blocked pro-Kremlin channels moved to other non-blocked pro-Kremlin channels.

Our findings suggest that YouTube’s ban on the major pro-Kremlin channels likely increased commenting engagement for other pro-Kremlin channels. As shown in Figure 1, the increase is sudden and sharp around the cut-off date (March 12). Daily engagement with these channels almost doubles after the ban compared to before, increasing from 1,199 during the period before the invasion (Jan 31–Feb 10) to 2,513 during the first ten days after the ban. In contrast, although we observe a slight increase in commenting activity among regime-critical channels, there is little indication that this is caused by the ban. Unlike the jump for the non-banned pro-Kremlin channels, the change appears to occur days before the ban.

The sudden increase among non-blocked pro-Kremlin outlets suggests that some users commonly engaging with Kremlin-associated channels have migrated to non-blocked pro-Kremlin alternatives. As mentioned earlier, this pattern aligns with a “substitution effect,” where users either directly search for replacement channels that still disseminate pro-Kremlin disinformation or are indirectly nudged toward these sources by social media algorithms. To further corroborate this pattern, we examine the activity of users who had posted at least one comment on the blocked pro-Kremlin channels before the ban within the examined period. As shown in Figure 2, the number of comments by these users more than doubled on non-blocked pro-Kremlin channels. While they also become slightly more active on regime-critical channels, there is a much larger influx of comments on pro-Kremlin channels, where daily comment engagement nearly doubles and appears to be driven by users migrating from the blocked channels.
It is worth noting, however, that the commenting activity on both non-blocked pro-Kremlin channels and regime-critical anti-Kremlin channels declines approximately 2–3 weeks after the ban. The drop is likely driven by a reduction in video uploads in the data set (see Figure D3 in the Appendix).

Figure 2. Substitution activity among pre-block followers of pro-Kremlin channels, weekly aggregation.

Methods

Data

The data consists of 12,315,588 YouTube comments tied to 13,950 videos from 40 channels in the 40 days preceding and following March 12, 2022, the day YouTube fully implemented its ban on Russian state media globally. YouTube announced the ban on March 11. Although we do not know precisely when YouTube intended to enforce the ban, we treated the following day (March 12) as the day of implementation for pragmatic reasons. We restricted the sample to Russian-language channels; accordingly, we operated on the assumption that those engaging with the channel content were also predominantly Russian speakers.

The data was collected in late spring 2022 (after the ban was put in place) using the following procedures. First, we identified 10 pro-Kremlin media outlets banned by YouTube,[2] 10 non-banned pro-Kremlin channels, and 10 regime-critical channels. The selection followed systematic inclusion criteria (subscriber counts above 100,000, Russian-language audience content, and established reputations for pro-Kremlin or regime-critical content; see Appendix A for details). We additionally included the 10 most popular entertainment channels in Russia, based on whatstat.ru and br-analytics.ru, as a non-political baseline. It should also be noted that at the time of data collection, the content of the banned channels was no longer accessible through YouTube’s front end. However, their channel front pages (i.e., youtube.com/@username) and associated metadata were still retrievable via the YouTube Data API. We identified the relevant channel user IDs through manual searches and, in turn, collected video and comment metadata from the blocked channels. This information was available during our data collection period but has since become inaccessible through the API.

In the second step, we used the YouTube API to collect historical metadata from all channels, which included the video IDs and posting time of all videos uploaded by the 40 channels between January 24 and April 24. We then used the video metadata to collect all the public comment data on these videos, including the comment text, author IDs, and comment timestamps.[3] The data collection took place from April 6 to May 25, 2022, and the full list of channels is available in Appendix B. The data only includes comments that had not been deleted at the time of data collection. It should be noted that YouTube’s own moderation mechanisms may already have removed some comments prior to collection, which could affect the completeness of the dataset. This presents a considerable limitation to our analysis, as the drop in comments observed in the initial days following the ban could have been driven by this. While this affects our interpretation of the ban’s timing and immediate effectiveness, it is unlikely to affect the findings related to channel migration by commenters.
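As a rough, hypothetical illustration of the collection procedure described above (not the authors’ actual pipeline), the sketch below uses the YouTube Data API v3 via the google-api-python-client package to resolve a channel’s uploads playlist, list its videos, and page through public top-level comment threads; the API key, channel ID, and date window are placeholders.

```python
# Illustrative sketch only (not the authors' code). Assumes a YouTube Data API v3 key
# and the google-api-python-client package; the key, channel ID, and dates are placeholders.
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"                 # placeholder
CHANNEL_ID = "UCxxxxxxxxxxxxxxxxxxxxxx"  # placeholder channel ID found via manual search

youtube = build("youtube", "v3", developerKey=API_KEY)

def uploads_playlist_id(channel_id: str) -> str:
    """Resolve the channel's 'uploads' playlist, which lists all of its public videos."""
    resp = youtube.channels().list(part="contentDetails", id=channel_id).execute()
    return resp["items"][0]["contentDetails"]["relatedPlaylists"]["uploads"]

def iter_videos(playlist_id: str):
    """Yield (video_id, published_at) for every video in the uploads playlist."""
    token = None
    while True:
        resp = youtube.playlistItems().list(
            part="snippet", playlistId=playlist_id, maxResults=50, pageToken=token
        ).execute()
        for item in resp["items"]:
            snip = item["snippet"]
            yield snip["resourceId"]["videoId"], snip["publishedAt"]
        token = resp.get("nextPageToken")
        if not token:
            break

def iter_comments(video_id: str):
    """Yield (author_channel_id, published_at, text) for public top-level comments."""
    token = None
    while True:
        resp = youtube.commentThreads().list(
            part="snippet", videoId=video_id, maxResults=100,
            textFormat="plainText", pageToken=token
        ).execute()
        for item in resp["items"]:
            s = item["snippet"]["topLevelComment"]["snippet"]
            yield s.get("authorChannelId", {}).get("value"), s["publishedAt"], s["textDisplay"]
        token = resp.get("nextPageToken")
        if not token:
            break

# Keep only videos uploaded in the study window, then collect their comments.
for video_id, published_at in iter_videos(uploads_playlist_id(CHANNEL_ID)):
    if "2022-01-24" <= published_at[:10] <= "2022-04-24":
        for author_id, timestamp, text in iter_comments(video_id):
            pass  # store (CHANNEL_ID, video_id, author_id, timestamp, text)
```

In practice, such a script would also need to handle API quota limits, videos with comments disabled (the commentThreads call returns an error for these), and reply threads, which require a separate comments.list request.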
Investigating change in time

Our analysis of commenting activity is descriptive. For our investigation of the effectiveness of YouTube’s own ban, we focused on the comprehensive global ban implemented after March 11. The exact time of the ban, however, was unknown to the public. The sudden decrease to near-zero activity on banned pro-Kremlin channels right after the exogenous ban does warrant a causal interpretation. However, we do not attempt to estimate or claim any causal effects regarding the potential “substitution” or movement to non-banned channels. In this setting, we visualize the commenting activity using an interrupted time series setup, allowing for different slopes before and after the implementation of the ban (a minimal illustrative sketch of this setup follows the notes below). To get a comprehensive overview of the development in commenting activity across channel types, the number of comments is grouped by the day each comment was posted and by channel type—i.e., regime-critical, pro-Kremlin (banned and non-banned), and entertainment.

Notes
[1] A further deep dive into the potential implications of the censorship laws implemented during this time is outside the scope of this paper but is addressed in a separate working paper being finalized by the authors of this research note.
[2] There is one exception in our data: the channel Tsargrad (царьград-тв) was blocked in July 2020 for breaking YouTube guidelines, meaning that the block of this channel had no connection to the invasion in 2022.
[3] The analysis of comment content is outside the scope of this research note; however, the authors address this in a separate working paper.
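As a minimal, hypothetical sketch of the descriptive setup referenced in the Methods section above (daily comment counts per channel type, with separate slopes before and after March 12), assuming the collected comments sit in a pandas DataFrame named comments with published_at and channel_type columns:

```python
# Illustrative sketch only, not the authors' analysis code. Assumes a DataFrame
# `comments` with columns 'published_at' (timestamp) and 'channel_type'
# ('banned pro-Kremlin', 'pro-Kremlin', 'regime-critical', or 'entertainment').
import pandas as pd
import statsmodels.formula.api as smf

BAN_DATE = pd.Timestamp("2022-03-12")  # treated as the implementation day in the note

# Daily number of comments per channel type.
comments["date"] = pd.to_datetime(comments["published_at"]).dt.normalize()
daily = (comments.groupby(["channel_type", "date"])
                 .size()
                 .reset_index(name="n_comments"))

# Descriptive interrupted-time-series-style fit with different slopes before/after the ban.
daily["t"] = (daily["date"] - BAN_DATE).dt.days          # days relative to the ban
daily["post"] = (daily["date"] >= BAN_DATE).astype(int)  # 1 on/after the implementation day

for channel_type, grp in daily.groupby("channel_type"):
    # n_comments ~ intercept + pre-ban slope*t + level shift*post + slope change*(t*post)
    fit = smf.ols("n_comments ~ t + post + t:post", data=grp).fit()
    print(channel_type, fit.params.round(2).to_dict())
```

This reproduces only the descriptive comparison; as the note stresses, it is not a causal estimate of the substitution effect.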
misinforeview.hks.harvard.edu
November 27, 2025 at 3:44 AM
Reposted by Matthias Schulze
“Incrementalism in military aid disrupted Ukraine’s battlefield momentum and provided time for Russian forces to adapt; it gave the Kremlin time to surge Russia’s defense industrial base…reduced domestic pressures on Putin…masked Russia’s weaknesses, and undermined America’s policies.”
As ISW’s Nataliya Bugayova writes, Putin counts on offsets — using operations and partnerships in one region to offset the limits of Russia’s capability in another. (1/7)

Read the full report: isw.pub/SeizingtheIn...
November 25, 2025 at 9:14 PM
Reposted by Matthias Schulze
The US and Europe must show that there are consequences for Russia repeatedly allowing the peace negotiations to come to nothing.

Kyiv Post share.google/UnnxNDKwqCQ6...
The Peace Plan and Its Likely Fallout
Now that a potential peace plan has been aired out, what are the possible outcomes? Political scientist Andreas Umland offers insight in an interview with German daily Der Tagesspiegel.
www.kyivpost.com
November 26, 2025 at 6:47 AM
China’s APT31 linked to hacks on Russian tech firms #cybersecurity #infosec
China’s APT31 linked to hacks on Russian tech firms
Moscow-based Positive Technologies says a China-linked group tracked as APT31 appears to be responsible for breaches of entities in Russia's tech sector.
therecord.media
November 26, 2025 at 3:44 AM
Why we should tax AI #cybersecurity #infosec
Why we should tax AI
Hardly a day goes by without new headlines about how AI is poised to transform the economy. Even if claims that ‘AI is the new electricity’ prove to be exaggerated, we should still prepare for deep change. One of the most powerful and reliable mechanisms for ensuring that AI benefits society is also one of the most familiar: taxation.

What would an AI tax look like in practice? The most practical approach would be to target the key inputs and most tangible metrics of AI development: energy, chips, or compute time. The United States already imposes a 15 per cent fee on sales of specific AI chips to China, and though this is technically an export control, it shows how an AI input tax could work. Alternatively, others have suggested changing how we tax capital to account for AI-driven economic shifts. This would be an AI tax in spirit, but broader in form. The structure of any AI tax would depend on what governments want to achieve. But one thing is clear: the current debate is far more grounded and urgent than it was when Bill Gates raised the idea of a ‘robot tax’ in 2017, echoed later by Bernie Sanders and others.

The case for taxing AI

Of course, some might ask why we should tax AI at all. The answer reflects two fundamentals about tax systems and how AI is changing the economy. First, many countries now tax human workers more heavily than their potential AI competitors in the labour market. In the case of the US, roughly 85 per cent of federal revenue comes from taxing people and their work (through income and payroll taxes), while capital and corporate profits are taxed far less. Technologies like AI benefit from favourable treatment in the form of generous write-offs, low corporate rates, and carve-outs. Second, economists expect AI to increase the financial returns to capital relative to labour, even if it doesn’t cause unemployment. The most extreme version of this would entail AI agents that can design, replicate, and manage themselves, meaning that capital would be performing its own labour. Under current tax policies, such a shift would widen inequality and shrink government revenue as a share of GDP.

An AI tax could help level the field between humans and machines. Earlier this year, Anthropic CEO Dario Amodei warned that AI might eliminate half of all entry-level white-collar jobs and push unemployment to 10-20 per cent within five years. Whether such forecasts are borne out may depend partly on policy. Taxing labour more heavily than capital tilts the scales toward automation that replaces, rather than augments, human workers. At the very least, we shouldn’t let our tax system help put people out of work.

Moreover, at a time when the fiscal outlook is darkening, an AI tax could protect public revenues from technology-induced shocks. If mass job losses or hiring slowdowns do occur, governments that rely on income and payroll taxes could face fiscal crises even if new AI-ready jobs emerge later. More optimistically, the right tax policies – combined with an AI-driven productivity boom – could help fix structural fiscal problems. Rich countries are already struggling to fund health care and pensions for aging populations, while poorer countries face an inverse challenge: educating and employing large young populations despite thin tax bases. AI-generated revenue could be part of the solution for both.
Alternatively, revenue could be directed to AI-related causes. Hypothecated taxes, which send revenue back to the sector they come from, like the US gasoline tax that funds highways or the United Kingdom’s television fee that supports the BBC, underscore that the goal is to enhance the public benefits of the taxed technology. An AI tax could do the same: funding grid upgrades, education technology, worker training, open-source AI models, AI-safety research, or mental-health protections. An AI tax could also bolster unemployment insurance and retraining for displaced workers, or even advance broader AI policy goals. For example, it could discourage excess energy use, greenhouse-gas emissions, ‘AI slop,’ or anticompetitive behaviour, or encourage new energy production and safer models.

Policy needs to keep pace with technology and anticipate change

Taxing AI may sound politically far-fetched. Policymakers do not want to curb innovation or lose ground in the global AI race. But that reluctance may fade as public awareness matures. If ‘winning’ in AI means having healthier people, happier kids, a more capable workforce, and stronger science – not just bigger models or richer companies – an AI tax could help deliver victory.

Nor is such a tax likely to stifle innovation. AI is not a fragile startup industry. It is a 70-year-old technology that is now backed by the world’s largest corporations, with corporate investment exceeding $250 billion in 2024 alone. An AI tax could be structured to ensure that it does not impede national security, market competition, or research.

In any case, crises can change minds fast. If AI is blamed for mass unemployment or fiscal shocks, elected officials and policymakers across the political spectrum will want to act. Better to prepare good options now than improvise later. As OpenAI CEO Sam Altman wrote in 2021, ‘The world will change so rapidly and drastically that an equally drastic change in policy will be needed to distribute this wealth and enable more people to pursue the life they want.’ Altman was speculating about the development of even more advanced artificial general intelligence, but his point already applies: policy needs to keep pace with technology and anticipate change.

One way or another, AI will reshape our economies and societies. But the results are not predetermined. Whether we get a future where people and communities can thrive will come down to the policies we choose. Taxing AI is not about punishing innovation. It’s about ensuring that the rewards are shared and the risks managed in the public interest. The sooner we start that work, the better prepared we will be to use AI to create the future we want.

© Project Syndicate
www.ips-journal.eu
November 25, 2025 at 11:10 PM
Massive Cyberattack Hits Kenyan Ministries, Sites Replaced With Racist Messages #cybersecurity #infosec
Massive Cyberattack Hits Kenyan Ministries, Sites Replaced With Racist Messages
The Government of Kenya cyberattack on Monday morning left several ministry websites defaced with racist and white supremacist messages, disrupting access for hours and prompting an urgent response from national cybersecurity teams. The attack targeted multiple high-profile platforms, raising new concerns about the security of public-sector digital infrastructure. According to officials, the attack affected websites belonging to the ministries of Interior, Health, Education, Energy, Labour, and Water. Users attempting to access the pages were met with extremist messages including “We will rise again,” “White power worldwide,” and “14:88 Heil Hitler.”

Government of Kenya Cyberattack Under Investigation

The Interior Ministry confirmed the attack, stating that a group identifying itself as “PCP@Kenya” is suspected to be behind the intrusion. Several government websites were rendered temporarily inaccessible while national teams worked to secure affected systems. “Preliminary investigations indicate that the attack is suspected to have been carried out by a group identifying itself as 'PCP@Kenya',” the ministry said. “Following the incident, we immediately activated our incident response and recovery procedures, working closely with relevant stakeholders to mitigate the impact and restore access to the affected platforms.” Officials confirmed that the situation has since been contained, with systems placed under continuous monitoring to prevent further disruption. Citizens have been encouraged to reach out to the National KE-CIRT if they have information relevant to the breach.

Regional Cyber Issues Reported Within 24 Hours

The Kenyan incident took place just a day after Somalia reported a cyberattack on its Immigration and Citizenship Agency. Somali officials said they detected a breach involving data from individuals who had entered the country using its e-Visa system. Early findings suggest that leaked data may include names, dates of birth, photos, marital status, email addresses, and home addresses. Authorities are now assessing how many people were affected and how attackers gained access to the system. The U.S. Embassy in Somalia referenced claims from November 11, when hackers alleged they had infiltrated the e-visa system and accessed information belonging to at least 35,000 applicants — potentially including U.S. citizens. “While Embassy Mogadishu is unable to confirm whether an individual’s data is part of the breach, individuals who have applied for a Somali e-visa may be affected,” the embassy said.

No Claim of Responsibility So Far

As of Monday afternoon, no threat group has formally claimed responsibility for either the Kenya or Somalia cyber incidents. Investigators are assessing whether the timing suggests any form of coordination or shared exploitation methods. For now, authorities emphasize that sensitive financial information, core government systems, and essential services in Kenya were not impacted. The attack appears to have been limited to public-facing platforms.
thecyberexpress.com
November 25, 2025 at 6:30 PM
Reposted by Matthias Schulze
Hardly surprising: #Russland accuses #Europa of sabotaging the “peace” and praises the original #Trump plan. #Moskau will block talks on the new version and pile on further demands.

www.politico.eu/article/russ...
Russia trashes Europe’s peace plan — but likes Trump’s Ukraine proposal
Top Kremlin aide scoffs that the European counterproposal “constructively doesn’t fit us at all.”
www.politico.eu
November 25, 2025 at 7:52 AM
Reposted by Matthias Schulze
Every time you share their life online, you risk sharing their personal data with the world. Pause before you post.
November 24, 2025 at 11:58 AM
The news that many MAGA accounts are foreign agents comes as no surprise: this is what Russian information war has looked like since 2016. It's only news because the US dismantled all its counter #disinformation efforts and social media regulation: www.theregister.com/2025/11/24/x...
X promptly catches fire after rolling out location feature
Accuracy errors or inadvertent unmasking of rage-bait trolls? Probably somewhere in between
www.theregister.com
November 25, 2025 at 8:02 AM
Pro-Russian group claims hits on Danish party websites as voters head to polls #cybersecurity #infosec
Pro-Russian group claims hits on Danish party websites as voters head to polls
Voting was not disrupted Tuesday by a wave of DDoS incidents affecting political party and government websites in Denmark a day earlier, officials said.
therecord.media
November 24, 2025 at 11:10 PM
Reposted by Matthias Schulze
"This has created a system where it makes financial sense for people from the entire world to specifically target Americans with highly engaging, divisive content. It pays more. "

www.404media.co/americas-pol...
America’s Polarization Has Become the World's Side Hustle
The 'psyops' revealed by X are entirely the fault of the perverse incentives created by social media monetization programs.
www.404media.co
November 24, 2025 at 6:42 PM
MI5 Issues Spy Alert as Chinese Intelligence Targets UK Parliament Through LinkedIn #cybersecurity #infosec
MI5 Issues Spy Alert as Chinese Intelligence Targets UK Parliament Through LinkedIn
Two headhunters named Amanda Qiu and Shirly Shen appeared on LinkedIn offering lucrative freelance work authoring geopolitical consultancy reports, but MI5 now confirms they served as fronts for China's Ministry of State Security conducting recruitment operations targeting British parliamentarians, staffers, and officials with access to sensitive government information. On Tuesday, Britain's domestic intelligence service issued an espionage alert to MPs, Peers, and Parliamentary staff warning that Chinese intelligence officers are attempting to recruit individuals through professional networking sites in what Security Minister Dan Jarvis characterized as a "covert and calculated attempt by China to interfere with our sovereign affairs". House of Commons Speaker Lindsay Hoyle circulated the MI5 alert warning that Chinese state actors were "relentless" in their efforts to interfere with parliamentary processes and influence activity at Westminster. The alert named two specific LinkedIn profiles believed to be conducting outreach at scale on behalf of Beijing's intelligence apparatus.

Social Engineering Route

MI5 assessed that the Ministry of State Security was using websites like LinkedIn to build relationships with parliamentarians to collect sensitive information on the UK for strategic advantage. The fake headhunter profiles offered consulting opportunities while actually intending to lay groundwork for long-term relationships that could be exploited for intelligence collection. Security Minister Jarvis told Parliament that targets extended beyond parliamentary staff to include economists, think tank consultants, and government officials. "This government's first duty is to keep the country safe, which is why I've announced new action to give security officials the powers and tools they need to help disrupt and deter foreign espionage activity wherever they find it," Jarvis stated. The minister said the espionage alerts represent one of the main tools used to undermine spies' ability to operate, with the public exposure intended to disrupt ongoing recruitment operations and warn potential targets.

Pattern of Hostile Activity

Jarvis noted the LinkedIn recruitment attempts build on a pattern of hostile activity from China, citing Beijing-linked actors targeting parliamentary emails in 2021 and attempted foreign interference activity by Christine Lee in 2022. Lee, a London-based lawyer, was accused by MI5 of facilitating covert donations to British parties and legislators on behalf of foreign nationals coordinating with the Chinese Communist Party's United Front Work Department. The alert arrives weeks after prosecutors abruptly abandoned a case against two British men charged with spying on MPs for Beijing. Christopher Cash, a former parliamentary researcher, and Christopher Berry, an academic, faced charges under the Official Secrets Act 1911, but prosecutors claimed the government's evidence was missing a critical element. That critical element was the government's refusal to call China an "enemy" or "national security threat," which prosecutors said meant they had no option but to collapse the case, since the 1911 Act requires information passed on to be useful to an enemy.

New Counter-Espionage Action Plan

The government announced a comprehensive Counter Political Interference and Espionage Action Plan to disrupt and deter state-sponsored spying.
Intelligence services will deliver security briefings for political parties and issue new guidance to election candidates helping them recognize, resist, and report suspicious activity. Authorities will work with professional networking sites to make them more hostile operating environments for spies, while new Elections Bill provisions will tighten rules on political donations. Jarvis added the government will continue taking further action against China-based actors involved in malicious cyber activity against the UK and allies.

The government committed £170 million to renew sovereign and encrypted technology that civil servants use to safeguard sensitive work. An additional £130 million will fund projects including building Counter Terrorism Policing's ability to enforce the National Security Act and supporting the National Cyber Security Centre's work with critical businesses to protect intellectual property. Jarvis also informed Parliament that the government completed removal of surveillance equipment manufactured by companies subject to China's National Intelligence Law from all sensitive sites operated worldwide by the British government. "As a country with a long and proud history of trading around the world, it's in our interests to continue to seek an economic relationship with China, but this government will always challenge countries whenever they undermine our democratic way of life," Jarvis declared.

The National Security Act provides government power to prosecute those engaging in espionage activity, with offenses including obtaining protected information, assisting a foreign intelligence service, and obtaining material benefit from a foreign intelligence service. The government recently introduced the Cyber Security and Resilience Bill to help protect organizations from cyber threats posed by states like China.
thecyberexpress.com
November 24, 2025 at 3:58 PM
Reposted by Matthias Schulze
lol AI bubble goes brrr
Turns out interest in Metaverse had about a ~9 month half life.
November 24, 2025 at 11:47 AM
Reposted by Matthias Schulze
«Germany has placed its #DigitaleSouveränität in the hands of Donald Trump and his friends at the big tech companies … If Big Tech were ever to stop delivering updates, … it would be as if Germany or Europe were switched off with a remote control.» taz.de/Digitale-Sou...
Digital sovereignty: Breaking out of digital dependence now
Someone has to start somewhere. With an international alliance, open source, and clear rules for US corporations, Europe could break away from Big Tech.
taz.de
November 24, 2025 at 10:30 AM
Reposted by Matthias Schulze
Reuters has confirmed that President Trump's son-in-law Jared Kushner and his special envoy Steve Witkoff held a private meeting in Miami with a sanctioned Kremlin proxy to help shape the new "peace plan" for Ukraine. www.reuters.com/world/europe...
www.reuters.com
November 23, 2025 at 5:12 AM
Reposted by Matthias Schulze
Abuse of power and revenue maximization 😠

Parent company of Facebook and Instagram - Court records: Meta allegedly covered up study on psychological harms

"Der Mutterkonzern der Online-Anwendungen Facebook und Instagram, Meta, hat Gerichtsdokumenten zufolge..."
www.deutschlandfunk.de/gerichtsakte...
Parent company of Facebook and Instagram - Court records: Meta allegedly covered up study on psychological harms
According to court documents, Meta, the parent company of the online applications Facebook and Instagram, stopped a study containing evidence of psychological harm caused by its platforms. This came to light as part...
www.deutschlandfunk.de
November 23, 2025 at 10:55 PM
Reposted by Matthias Schulze
We live in an age where there's a glut of information and scarcity of attention, mediated through platforms that prioritise engagement over accuracy, and emotion captures attention more readily than reason.
This chart (which applies even more to social media than it did to TV) lives in my head rent free.

Social media enveloping traditional media means everything and everyone is now competing in the entertainment market. Boring stuff like policy that affects millions of lives doesn’t stand a chance.
November 23, 2025 at 10:17 PM
While we are once again debating practice fees and telephone hotlines to take pressure off specialists and emergency rooms, China simply has AI do the GP's initial assessment: www.technologynewschina.com/2025/11/chin...
China harnesses AI to bridge healthcare gap
(Xinhua) Getting medical advice in China has never been easier than it is now. Via a simple tap on a smartphone and a brief conversation, an...
www.technologynewschina.com
November 24, 2025 at 9:49 AM
Amazon warns of global rise in specialized cyber-enabled kinetic targeting #cybersecurity #infosec
Amazon warns of global rise in specialized cyber-enabled kinetic targeting
Amazon said the lines between cyberattacks and physical, real-world attacks are blurring quickly — prompting the tech giant to call for a new category of warfare: cyber-enabled kinetic targeting. Nation-states have long understood how logical systems and the physical world interact, but more non-traditional attackers are showcasing expertise in using cyberattacks to enable and amplify the impact of kinetic military operations, according to Amazon Threat Intelligence.

“The collective industry and our customers have to really pay attention to this and change the way we’re doing things,” Steve Schmidt, chief security officer at Amazon, told CyberScoop in a phone interview. “Physical and digital security cannot be treated as separate domains with separate approaches.”

Governments traditionally have requirements for actions to occur or access to specific information, and oftentimes those objectives were treated separately. Yet now, when governments want to achieve military objectives, military planners are asking for more precise details about the target, Schmidt said. While nation-state attackers can compromise networks that contain data identifying those targets, those details are typically generalized. To get more exact information, nation-state attackers are compromising closed-circuit television (CCTV), or security cameras, on the target itself. This allows military planners to “see where the [target] is physically and actually do live adjustments of targeting while you have weapons in flight,” Schmidt said.

Amazon provided two case studies as examples of cyber-enabled kinetic targeting in a blog post Wednesday. The most recent attack involves MuddyWater, a threat group linked to Iran’s Ministry of Intelligence and Security, which provisioned a server in May and used that infrastructure a month later to access another compromised server containing live CCTV streams from Jerusalem. When Iran launched missile attacks on Jerusalem on June 23, Israeli authorities said Iranian forces were using real-time intelligence from compromised security cameras to adjust missile targeting, Amazon said.

Cyber-enabled kinetic targeting employs common tools and tactics that display advanced skills in anonymizing virtual private networks, using their own servers for command-and-control capabilities, compromising enterprise systems such as CCTV systems or maritime platforms, and gaining access to real-time data streams, according to Amazon. These multi-layered, collaborative attacks require critical infrastructure operators and threat intelligence professionals to expand their remit, Schmidt said.

“Traditional cybersecurity frameworks treat the digital and the physical threats as really separate domains, but we realized, through our own internal work and our research, of course, that this separation is not only artificial but actually detrimental,” he said. “You have to think about these things as integrated wholes, because even physical world assets, like a ship, are really a cyber asset as well. And multiple nation-state threat groups are pioneering a new operational model where cyber reconnaissance directly enables kinetic targeting,” Schmidt added.

Amazon said this is a warning and call to action for defenders to consider how compromised systems might be used to support physical attacks and recognize that their systems might be valuable targeting aids for kinetic operations.
This also demonstrates the need for threat intelligence sharing across the private sector and government to work through more complex attribution response frameworks, the company said.

Multiple nation-states will increasingly employ cyber-enabled kinetic targeting, CJ Moses, chief information security officer of Amazon Integrated Security, said in the blog post. “Nation-state actors are recognizing the force multiplier effect of combining digital reconnaissance with physical attacks,” he said. “This trend represents a fundamental evolution in warfare, where the traditional boundaries between cyber and kinetic operations are dissolving.”

Many seemingly espionage-focused attacks that have already been made public might ultimately be an entry point for kinetic targeting, according to Schmidt. Countries that have both advanced cyber capabilities and military strength are most likely to succeed at cyber-enabled kinetic targeting, he said. The most prominent threats come from nation-state attackers who are more specialized in their targeting. “The targeting of maritime navigation systems is a relatively unique skill, and it is different from the targeting of a cryptocurrency exchange,” Schmidt said. “It takes different knowledge, and so you’re seeing groups pop up onto the radar, which we may not have followed before because there wasn’t that volume of activity.”
cyberscoop.com
November 24, 2025 at 3:45 AM