Travis LaCroix
@travislacroix.bsky.social
Dr // Asst. Prof // Philosopher @ Durham University (UK)
(I am also a human being)
Language Origins // AI Ethics // Autism
I will post more summaries of the results of our search later, but the full article / data summary / analysis can be found (open access!) here:
doi.org/10.1007/s112...
July 22, 2025 at 10:42 AM
Examining philosophical works mentioning autism across time, we found: (1) the number of articles has increased significantly in the last decade or two. (2) The rate of change is also trending upward in the last two decades. (3) The majority (> 50%) of the corpus was published in the last decade.
July 22, 2025 at 10:42 AM
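The per-decade tallies described above can be sketched as a simple counting exercise. The publication years below are illustrative placeholders, not the study's corpus data:

```python
# Hedged sketch: counting corpus articles per decade and the share
# published in the most recent decade (2014 onward). The `years` list
# is a hypothetical stand-in, not the actual corpus.
from collections import Counter

def decade_counts(years):
    """Count articles per decade from a list of publication years."""
    return Counter((y // 10) * 10 for y in years)

# hypothetical publication years for illustration only
years = [1995, 2004, 2008, 2012, 2015, 2016, 2018,
         2019, 2020, 2021, 2022, 2023]
by_decade = decade_counts(years)
last_decade_share = sum(1 for y in years if y >= 2014) / len(years)

print(dict(sorted(by_decade.items())))
print(f"share published since 2014: {last_decade_share:.0%}")
```

With the placeholder data, two thirds of the articles fall in the last decade, mirroring the study's finding that the majority of the corpus is recent.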
Normalising by articles per issue, we found that the leading journals are fairly specialist: Neuroethics (0.8958 articles/issue); Rev. Phil. Psyc. (0.8400); Phenomenology and Cog. Sci. (0.7937); PPP (0.6610); Mind & Lang. (0.5114); American Journal of Bioethics (0.4087); and Phil. Psych. (0.4051).
July 22, 2025 at 10:42 AM
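The articles-per-issue normalisation can be sketched as follows. The journal names match the post, but the raw article and issue counts are hypothetical stand-ins, not the study's figures:

```python
# Hedged sketch of the "articles per issue" normalisation.
# Raw counts below are illustrative assumptions only.

def articles_per_issue(counts, issues):
    """Normalise each journal's mention count by its number of issues."""
    return {j: counts[j] / issues[j] for j in counts}

# hypothetical raw numbers for illustration only
counts = {"Neuroethics": 43, "Philosophical Psychology": 64}
issues = {"Neuroethics": 48, "Philosophical Psychology": 158}

normalised = articles_per_issue(counts, issues)
for journal, rate in sorted(normalised.items(), key=lambda kv: -kv[1]):
    print(f"{journal}: {rate:.4f}")
```

Normalising this way controls for the fact that journals publish at very different rates, so a small specialist journal can rank above a high-volume generalist one.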
We searched 67 "leading" philosophy journals for several relevant search terms and created a corpus using specific inclusion/exclusion criteria. The total corpus comprised 1112 articles mentioning autism, published between 1911 and the end of 2023.
July 22, 2025 at 10:42 AM
Negative Claims:
(D6) But many philosophical engagements with autism still lack critical reflection—they rely on unexamined assumptions.
(D7) Even when well-meaning, philosophers often reinforce negative stereotypes about autism—framing it as defective, pathological, or less-than-human.
July 22, 2025 at 10:42 AM
Positive Claims:
(D4) There’s been a notable rise in philosophical work on autism over the past decade.
(D5) This newer work is also more nuanced and sympathetic to autistic perspectives.
July 22, 2025 at 10:42 AM
Limiting claims:
(D1) There’s surprisingly little philosophical engagement with autism.
(D2) What exists tends to focus narrowly on ethics, mind, psychology, or medicine.
(D3) So, if autism is (or should be) its own subfield in philosophy, it’s currently underdeveloped.
July 22, 2025 at 10:42 AM
We reviewed the (mainstream) philosophical literature on autism to analyse some common limiting, positive, and negative claims that have been put forward in the past decade.
July 22, 2025 at 10:42 AM
I also expand on this under the framework of the (structural) value alignment problem in Chapter 9 of my book, which explores the possibility of "measuring degrees of alignment", relevant to the "objectives axis" of the alignment problem (as I describe it).
broadviewpress.com/product/arti...
Artificial Intelligence and the Value Alignment Problem - Broadview Press
June 15, 2025 at 4:47 PM
This paper builds on my prior work (also published in AI and Ethics), which explains how using moral dilemmas as validation mechanisms for "ethical AI", or as ethics benchmarks in AI, is a category mistake.
doi.org/10.1007/s436...
Moral dilemmas for moral machines - AI and Ethics
June 15, 2025 at 4:47 PM
In essence, while benchmarking is sometimes useful for technical aspects of AI (though actually often not [https://arxiv.org/abs/2111.15366]), creating a definitive ethical benchmark for AI is basically impossible—at least without presupposing a metaethical stance.
June 15, 2025 at 4:47 PM
Basically, the term "ethics" is laden with philosophical baggage; what counts as "ethical" is often subjective and context-dependent, varying across cultures, individuals, and situations. So there is no universally accepted set of moral principles that can be used to evaluate AI systems.
June 15, 2025 at 4:47 PM
I plan on taking a deeper look in the next day or two at some of the wording of the bill, but what I have seen so far looks like it has far-reaching implications for individual privacy, which apply far more broadly than the ostensibly narrow scope of a "Strong Borders" bill.
June 5, 2025 at 6:39 PM
Very few people are discussing the privacy and surveillance implications of these provisions yet. (One exception, of course, is Michael Geist, who has published an initial analysis here: www.michaelgeist.ca/2025/06/priv...)
Privacy At Risk: Government Buries Lawful Access Provisions in New Border Bill - Michael Geist
June 5, 2025 at 6:39 PM
But buried in the 140-page, 16-part bill is a proposal for sweeping changes to the Criminal Code pertaining to "timely access to data and information". Importantly, the bill creates a new "information demand" for law enforcement that does not require court oversight.
June 5, 2025 at 6:39 PM
The structural definition captures McQuillan's point: it's impossible to separate the technical aspects of AI from the social contexts in which these models are created, trained, tested, and deployed.
The value alignment problem is neither technical nor normative; it is fundamentally social.
May 5, 2025 at 5:07 PM
Most importantly, a third axis is relative principals. An instance of misalignment can arise when a (human) principal delegates a task to an (artificial) agent. So the principal is central to this definition and to whether or not a system can be said to be adequately aligned.
May 5, 2025 at 5:07 PM
The second axis of the value alignment problem specifies that a problem instance can arise when there are informational asymmetries between the (human) principal and the (artificial) agent. This isn’t captured by the standard description, but is similar to a notion of “inner alignment”.
May 5, 2025 at 5:07 PM