Any religious conviction is an assertion of a reality that exists outside subjective experience. But the assertion that there is only subjective experience and nothing beyond it is itself a metaphysical conjecture. I suspect this is why debate about reality can quickly turn ill-tempered: the heat in the debate rests at the interface between personal identity and its association with metaphysical and existential propositions.
A lot depends on what we mean by subjectivity. I think subjectivity is the name we give to a process of regulation between individual minds and bodies, and a real (i.e. objective) world of matter, society and agency. Many would disagree: those, for example, who postulate 'mental causes' for experience (but who can never say what caused the cause); those who postulate behavioural causes for experience (but who ascribe the cause of behaviour to behaviour, and so discount any 'inner life'). My understanding of subjectivity starts with thinking through how subjectivity may be infused with a reality about which some degree of naturalistic objectivity is possible. There may be other aspects of that reality about which naturalistic objectivity is not possible, but those aspects are nevertheless also causally efficacious on subjective experience. This is, of course, a supernatural conjecture - but one which I am not uncomfortable with - maybe only because my own personal identity, intellectual endeavour and belief have developed along these particular lines.
To begin with, we can characterise a 'game' played between homeostatic systems, whereby the communications of those systems, and the consequent responses from neighbouring systems, are constitutive of their homeostasis. This is the analogue of subjective experience, where individuals maintain homeostasis through making communications, which are themselves constituted by the state of those individuals and constitutive of the environment which in turn is causal on the future states of agents. A simple NetLogo program can represent this sort of game, and the agents and their different types of communications (which are constitutive of the different regulating mechanisms of the agents) are shown in different colours. Some agents are not connected. These are shown red, which means their unmanaged variety is at a critical level: they are in oscillation. Regulation only comes through communication.
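The original model is written in NetLogo; as a minimal sketch of the same idea, here is a Python version in which each agent accumulates 'unmanaged variety' from environmental disturbance, and only communication with a neighbour absorbs it. The class names, the variety numbers, and the absorption rule are all my illustrative assumptions, not the NetLogo code itself.

```python
import random

random.seed(0)  # reproducible run

class Agent:
    """An agent whose homeostasis depends on communication with neighbours."""
    def __init__(self, name):
        self.name = name
        self.neighbours = []   # connected agents
        self.variety = 0.0     # unmanaged variety; critical when high

    def step(self):
        # Environmental disturbance adds variety each tick.
        self.variety += random.uniform(0.0, 1.0)
        if self.neighbours:
            # A successful communication absorbs variety:
            # regulation only comes through communication.
            partner = random.choice(self.neighbours)
            absorbed = min(self.variety, 1.0)
            self.variety -= absorbed
            partner.variety = max(0.0, partner.variety - 0.5 * absorbed)

    def oscillating(self, threshold=5.0):
        # An agent goes 'red' when unmanaged variety passes a critical level.
        return self.variety > threshold

a, b, c = Agent("A"), Agent("B"), Agent("C")
a.neighbours = [b]
b.neighbours = [a]
# c has no neighbours: its unmanaged variety climbs unchecked.
for _ in range(20):
    for agent in (a, b, c):
        agent.step()

print(c.oscillating())  # True: the isolated agent reaches the critical level
print(a.oscillating())  # False: communication keeps A's variety regulated
```

The isolated agent ends up in the 'red' oscillating state while the connected pair stay viable, which is the behaviour the NetLogo display shows in colour.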
I've been thinking about this aspect of it. When we talk about 'learning' in an abstract sense, we reflect on our own experience, and mistakenly abstract a learning individual in isolation from anyone else. It's a similar sort of category error to that of a private language. We cannot conceive of an individual in absolute isolation from any social context. (I remember Margaret Archer arguing with Roy Bhaskar about whether there was always a social element in being. She didn't think so, he did. I think he was right.) In essence, this means that sentient existence and an environment of communication are co-determining in the same way as Lovelock's Gaia theory presents 'life' and 'atmosphere' (again using cybernetic models).
I want to go further and express the 'game' that is played between environment and agency. In my model, each agent has three regulating levels (based on the Viable System Model). Each of these is essentially a number which determines the probability of handling communications appropriate to that level of regulation. With each communication, there is a calculated pay-off. The objective of the game is to maintain homeostasis through continuing to make successful communications. What might this look like mathematically? With fairly arbitrary numbers, it would take this sort of form: A might have a start state which determines three possible game states, from which B might then move.
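One way to put numbers on this is sketched below. Each agent's three regulating levels are represented as probabilities of handling a communication at that level, and the pay-off of a communication is taken to be the joint probability that both sides handle it. The level names, the particular probabilities, and the multiplicative pay-off rule are my assumptions for illustration, not fixed parts of the model.

```python
# Three regulating levels per agent (loosely after the Viable System Model);
# each number is the probability of successfully handling a communication
# addressed to that level. These particular values are illustrative.
A = {"operations": 0.8, "coordination": 0.5, "policy": 0.2}
B = {"operations": 0.4, "coordination": 0.6, "policy": 0.7}

def game_tree(a, b):
    """A's opening move at each level yields a game state; from each
    state, B may respond at any of its three levels. The pay-off is
    the joint success probability of the two communications."""
    return {
        a_level: {b_level: round(a_p * b_p, 2) for b_level, b_p in b.items()}
        for a_level, a_p in a.items()
    }

tree = game_tree(A, B)
for state, moves in tree.items():
    print(state, "->", moves)
# A's start state opens three game states; e.g. after A's 'policy' move:
# policy -> {'operations': 0.08, 'coordination': 0.12, 'policy': 0.14}
```

So a single move by A branches into three states, each of which offers B three weighted responses - nine terminal pay-offs in all, which is the 'sort of form' the game takes with these arbitrary numbers.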
However, this is a repeated game, so there are emergent strategies. I have a bit more work to do on this model! If I start to play with deltas for the emergent strategy, is there a way in which I can identify the 'double binds' which emerge from particular communications?
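One hedged way of operationalising the 'double bind' question: flag a game state from which every available response falls below the pay-off an agent needs to sustain homeostasis, so that whichever way it moves, it loses. The viability threshold, the delta update rule, and the numbers below are all my own assumptions, offered only as a starting point for the model.

```python
DELTA = 0.05
VIABILITY = 0.3   # assumed minimum pay-off needed to sustain homeostasis

def update(levels, chosen, reward):
    """Emergent strategy as a small delta: nudge the probability of the
    level just used up or down, according to how its pay-off compared
    with the viability threshold."""
    levels[chosen] = min(1.0, max(0.0, levels[chosen] + DELTA * (reward - VIABILITY)))

def double_binds(tree, threshold=VIABILITY):
    """Game states from which no response clears the viability threshold."""
    return [state for state, moves in tree.items()
            if all(p < threshold for p in moves.values())]

# Illustrative regulating levels and pay-offs, as before.
A = {"operations": 0.8, "coordination": 0.5, "policy": 0.2}
B = {"operations": 0.4, "coordination": 0.6, "policy": 0.7}
tree = {a_level: {b_level: a_p * b_p for b_level, b_p in B.items()}
        for a_level, a_p in A.items()}

print(double_binds(tree))  # ['policy']: every reply to A's policy move loses
```

Here A's weak 'policy' level produces a state in which all of B's replies pay off below the viability threshold: a candidate double bind emerging from a particular communication.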
The key thing, it seems to me, is to find a way of characterising the agency of someone who might manipulate this model. In my NetLogo model, agents are dragged around the screen (hopefully to places where they are more likely to have successful communications). Agents might learn of these dragging events. They might detect the difference between the agency of dragging for beneficial communicative effect and the agency of dragging for detrimental communicative effect. This might create the grounds for identifying some sort of contradiction, upon which the beginnings of a topology might emerge.
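As a speculative sketch (my own construction, not the NetLogo code): an agent could log each drag event with its communicative success rate before and after, classify the dragger's agency accordingly, and flag a contradiction where the same hand has produced opposite effects. The margin, the event format, and the names 'teacher' and 'peer' are purely hypothetical.

```python
def classify(before, after, margin=0.05):
    """Label a drag by its effect on communicative success."""
    if after > before + margin:
        return "beneficial"
    if after < before - margin:
        return "detrimental"
    return "neutral"

def contradictions(events):
    """Draggers whose recorded effects point in opposite directions."""
    effects = {}
    for dragger, before, after in events:
        effects.setdefault(dragger, set()).add(classify(before, after))
    return [d for d, e in effects.items()
            if {"beneficial", "detrimental"} <= e]

# Hypothetical drag log: (dragger, success rate before, success rate after).
log = [("teacher", 0.2, 0.6),   # drag improved communication
       ("teacher", 0.7, 0.3),   # drag worsened it
       ("peer",    0.4, 0.5)]
print(contradictions(log))  # ['teacher']
```

A dragger who appears in both classes is the kind of contradiction the text gestures at - a point where a topology of agency might start to take shape.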