First off, I’m not a moral realist, and believe that morality is subjective. We will never be able to derive ought from is, but the human mind is sufficiently consistent across its many instantiations that we can reasonably hope, through a process of debate, to devise a moral framework that works for most of us, most of the time. The way to kick-start this debate is to propose some axiomatic assumptions.
My axiom is consequentialism: ends justify means. This should be taken in the broadest possible sense, and it is the full range of consequences that should be counted (out to $t=\infty$), not just the most immediate ones. While I don’t believe that either hedonistic or preference utilitarianism has successfully captured all the consequences we should care about (and both have clearly repugnant implications), I do think that searching for some kind of value function is the right thing to do, and that this function should be exclusively one of present and future states of consciousness. Is value uni- or multi-dimensional? Is it computed by a summation operation or an averaging one? Should it include some measure of diversity or potentiality in addition to magnitude? About such questions I am profoundly unsure, and it is precisely here that the deepest philosophical thinking should be happening. Making peace with our uncertainty may be prudent here, and I’d be particularly interested in seeing what consistent results arise from agnosticism over a large set of reasonable alternatives.
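To make that agnosticism slightly more concrete (this is only an illustrative sketch; the credence-weighting scheme is my own assumption, not an established result): given candidate value functions $V_1, \dots, V_n$ (total sum, average, diversity-weighted, and so on) and credences $p_1, \dots, p_n$ in each, one could act on the expectation

$$V(w) = \sum_{i=1}^{n} p_i \, V_i(w), \qquad \sum_{i=1}^{n} p_i = 1,$$

where $w$ describes a distribution of present and future conscious states. The “consistent results” I have in mind are then the actions that remain highly ranked by $V$ however the credences $p_i$ are shuffled among reasonable alternatives.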
Regardless of our choice of value function, the method of evaluating the outcome of an action (looking at the full range of actual consequences) should differ from the method of appraising the actor (looking at only those consequences they could have reasonably foreseen, evaluated counterfactually with respect to the other options available). Hence to the extent that we should appraise actors’ morality at all, the standards we hold them to should be conditioned on their belief states and abilities. One consequence of this is that, while there is no fundamental reason why we should value those spatiotemporally closer to us more highly, increased certainty with respect to the effects of our treatment of them is a valid reason for prioritising them.
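The distinction between evaluating outcomes and appraising actors can be put semi-formally (a sketch under my own notational assumptions): an action $a$ is evaluated by its actual consequences, $V(a)$, whereas the actor is appraised by something like

$$\mathrm{appraisal}(a) = \mathbb{E}_{s \sim B}\!\left[V(a \mid s)\right] - \max_{a' \in A} \, \mathbb{E}_{s \sim B}\!\left[V(a' \mid s)\right],$$

where $B$ is the actor’s belief state and $A$ the set of options genuinely available to them. An actor who chose the best option relative to what they could reasonably have known scores zero, however badly things actually turned out.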
With all this said, I am not at all convinced that moral judgement of actors (as opposed to actions) is appropriate. This is because while I am willing to accept individual agency as a practically useful heuristic, I reject any strong definition of free will. This rejection is entirely independent of whether the universe is fully deterministic or partly random; in neither case is there room for genuine freedom to be injected. In a similar vein of ego-dissolution, I believe that the notion of persistent identity (of objects, persons or ideas) is incoherent, and there is no such thing. This is probably the single point about which I have the greatest confidence.
As I’ve already said, conscious states are the sole building blocks of value. But what is consciousness, and where and how does it arise? Firstly, I’d say that consciousness is an epiphenomenon, in that it has no causal effect on the functioning and interactions of physical systems, but rather arises as a consequence of them. I am extremely sympathetic to panpsychism, particularly a variant called panprotopsychism, which says that while consciousness as we know it may only exist at our complexity level, more fundamental entities have properties that are precursors to it (and in some sense, just less ‘interesting’ versions of it). By extension, I am also willing to accept that our consciousness is in turn a precursor to higher forms of vitality and awareness. I feel a good guiding principle here is that there is absolutely nothing special about homo sapiens.
Many find the panpsychist view highly counterintuitive, but I don’t find it so at all: it neatly resolves the thorny debate about consciousness in other life forms (“it’s just a matter of degree”) and gives validity to the feeling that cities, social movements and subreddits seem to have minds of their own. There is admittedly a challenge for those seeking to combine the view with utilitarian ethics: if consciousness exists in some sense at many complexity levels, how do we prevent double-counting of the value in a system? I’ve not thought about this in much detail, but perhaps we could spin this as an advantage by allowing it to meaningfully constrain the mathematics of value functions. This constraint might take the form of a kind of additivity or compositionality assumption.
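One way the constraint might look (purely a sketch of my own, not a worked-out proposal): for a system $S$ decomposed into subsystems $S_1, \dots, S_k$, require

$$V(S) = \sum_{i=1}^{k} V(S_i) + \Delta(S),$$

where $\Delta(S)$ counts only the consciousness that exists at the level of the whole and nowhere among the parts. Double-counting is then excluded by construction, and demanding that this hold consistently for every way of partitioning $S$ would substantially narrow the space of admissible value functions.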
I’ve used this rather sloppy term “complexity level” a couple of times as shorthand for the richness of consciousness that a system can harbour. This concept definitely needs firming up, and the solution may be something not entirely unlike Giulio Tononi’s integrated information theory (IIT). I make no claim whatsoever to understand IIT in any depth, but I would comment that this is essentially the first attempt to formally define and quantify the propensity for consciousness, and we tend not to get things right first time.
Could machines (or more generally, systems other than a very particular mushy cocktail of proteins and lipids) be conscious? My acceptance of panpsychism compels me to answer with a firm “yes”, and I’m totally cool with this. Again, there’s nothing special about us. I don’t think the level of consciousness in current AI systems is worth worrying about at all, but this may very well change on a timescale somewhere between five and five billion years. And when this comes, the way we’ll be convinced of it could look rather like Turing’s imitation game, combined with something rather like an IIT measurement. Sceptics will take a while to be convinced, but a couple of generations later machine consciousness will be accepted as culturally normal. God knows what happens next.
And finally, briefly, what is real anyway? This is the question of physicalism vs idealism, and is related to the fashionable concern that we may be living in a simulation. Scanning back through the rest of what I’ve written, I’m not convinced that the answer here has any particular bearing on anything else. For that reason, as a practical chap I don’t think it’s worth spending much time on. What matters is that something exists, that this something is rich, diverse and amenable to change, and that we might be able to agree which of the possible changes are better than others.