This is essentially a rehash of various bits of Rorty and Price that were floating in my head.

Ontology (the theory of what really is, or of what is there ‘objectively’) is what makes conflict between things we say possible. For example, the idea of objectivity has to be invoked for there to be any incompatibility between “Alice thinks there is a saber-toothed tiger coming” and “Bob thinks there is no such tiger coming.”

We can think of this as the social function of the whole concept of objectivity.

Clearly it’s useful in some situations, especially in ‘ordinary empirical descriptive’ (OED) language (e.g. “the frog is on the log”). We would therefore lose the ability to cope with the world in important ways if we adopted a thoroughgoing idealism.

However, outside of the ‘home language game’ of OED vocabulary, do we really need it? Could it be causing unnecessary conflict and confusion when pulled outside of its original motivating context, much like the tooth pain anecdote?

For whatever sociohistorical reasons, objectivity-talk is incredibly pervasive in almost all domains of discourse. It takes serious work to wean ourselves off it, to show that one can coherently and self-respectingly grapple with non-OED concepts without it.

  • For example: we’re primed to think of sentience as some objective phenomenon (because we’re primed to be descriptivists), but this is entirely unnecessary.
  • We can completely sidestep talk of the ontological nature of sentience and get along perfectly fine.
  • Given a real-world problem that turns on this issue (e.g. “Is this AI a sentient being?”), the paradigm shift is a net positive, because it lets us focus our attention on the relevant things: our social practices relating to the AI, rather than some feature purely of its source code or training process.

However, if you want power over others, couching your beliefs in objectivity-talk is useful for gaining authority.