Jonathan (@jonathanerhardt.bsky.social)
🤓 philosophy stuff: http://oxford.academia.edu/JonathanErhardt
⚙️ founder @twinearth.bsky.social (http://twinearth.com)
⚙️ co-founder @ http://morphieslaw.com
November 29, 2024 at 11:52 PM

in their toolbox.

We can train artificial neural networks to determine the extension of concepts across scenarios. Once(!) we’re sufficiently confident that they determine the extension of concepts similarly to how we do it, we can begin to probe their structure... 2/n

using techniques such as the circuits approach (distill.pub/2020/circuit...) and extract their application conditions for concepts. This could bring conceptual analysis into the domain of the empirical sciences. 3/n

It should be noted that philosophers agree to a surprising degree about the extension of a concept in various scenarios - it's just that we typically can't figure out the criteria we use to determine the extension in these cases. 4/n

This seems to make the problem ideal for this indirect approach to conceptual analysis: we can use human judgments to build the training set (consisting of scenario/extension pairs) and then replace philosophers where they have a bad track record... 5/n

namely when it comes to extracting application conditions from the cases.

Obviously a lot more needs to be said about this. For starters, we could think about the following questions: 6/n

- How should we specify the scenarios on which we train the classifiers? Will 2D/3D images or videos work, or do we expect the application conditions for concepts to sit at a different level? 7/n

- When (if ever) can we be sufficiently confident that classifiers identify the extension of a concept similarly to how we do it (and not based on features which merely correlate with the ones we use to determine the extension)? 8/n

- Will this work for philosophically interesting concepts (knowledge, freedom, money, art, ...) in the foreseeable future, or merely for concepts which are relatively close to perception (cats, dogs, houses, etc.)? 9/n

Perhaps we can find some philosophically interesting concepts which are close to perception that could serve as case studies, e.g. composite object, part, statue, lump, etc.?

I'm excited & hope this method will be applied by philosophers in the (relatively) near future. 10/10
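As a very rough illustration of the pipeline from posts 2/n–5/n, here is a toy sketch. Everything in it is invented for illustration: the "heap" concept, the feature names, and the data are made up; a hand-rolled perceptron stands in for the neural networks, and reading off learned weights is only a crude stand-in for genuine circuit-style probing.

```python
# Toy sketch (hypothetical): (1) train a classifier on human-labelled
# scenario/extension pairs, (2) check agreement with held-out human
# judgments, (3) inspect learned parameters as a crude proxy for
# extracting application conditions.

# A "scenario" is a feature vector; the label records a human judgment
# about whether a toy "heap" concept applies. Features are invented.
FEATURES = ["grain_count", "is_piled", "is_scattered"]

train = [  # (scenario, human judgment: does "heap" apply?)
    ({"grain_count": 0.9, "is_piled": 1.0, "is_scattered": 0.0}, 1),
    ({"grain_count": 0.8, "is_piled": 1.0, "is_scattered": 0.0}, 1),
    ({"grain_count": 0.1, "is_piled": 0.0, "is_scattered": 1.0}, 0),
    ({"grain_count": 0.2, "is_piled": 0.0, "is_scattered": 1.0}, 0),
]

def train_perceptron(data, epochs=50, lr=0.1):
    """Fit a linear classifier to scenario/extension pairs."""
    w = {f: 0.0 for f in FEATURES}
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(w[f] * x[f] for f in FEATURES) + b > 0 else 0
            err = y - pred
            for f in FEATURES:
                w[f] += lr * err * x[f]
            b += lr * err
    return w, b

def applies(w, b, x):
    """Does the trained classifier judge that the concept applies to x?"""
    return 1 if sum(w[f] * x[f] for f in FEATURES) + b > 0 else 0

w, b = train_perceptron(train)

# Step 2: measure agreement with human judgments on held-out scenarios.
holdout = [
    ({"grain_count": 0.85, "is_piled": 1.0, "is_scattered": 0.0}, 1),
    ({"grain_count": 0.15, "is_piled": 0.0, "is_scattered": 1.0}, 0),
]
agreement = sum(applies(w, b, x) == y for x, y in holdout) / len(holdout)

# Step 3: crude "probing" - rank features by how much the classifier
# relies on them (real circuit analysis would be far subtler).
conditions = sorted(w, key=lambda f: abs(w[f]), reverse=True)
print(agreement, conditions)
```

The interesting step philosophically is the third one: only if the ranked features track the criteria humans actually use (rather than mere correlates, the worry raised in 8/n) would reading them off count as extracting application conditions.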