Philanthropic antinatalists argue against bringing human beings into existence because it is inevitable that the beings so brought into existence will suffer. If we knew with certainty that we would one day be able to call into existence computers possessed of consciousness and self-awareness, it would be morally incumbent upon us to refrain from doing so wherever such machines could be expected also to be capable of experiencing suffering.
One voice which warned early on about the potential problem of machine consciousness was that of Samuel Butler. At the period when Butler published his ideas on “machine antinatalism”, the steam engine was still the non plus ultra of human technical inventiveness. But Butler could already see a time approaching in which technical development would have progressed so far as to be able to bring forth self-aware machines. To make this thesis plausible to his contemporaries he pointed to the earth at that primeval period when it had been no more than a ball of boiling semi-liquid minerals whose crust was slowly beginning to cool and harden. Who, looking at this red-hot, semi-liquid ball, would have imagined that one day beings endowed with intelligence would walk about upon it? The fact, then – so argued Butler – that our machines currently possess nothing resembling self-awareness is no guarantee that this will always remain the case. A mollusc has only the most basic rudiment of a “consciousness”, and Nature required millions of years to develop human and animal consciousness in the full and specific sense of the term. How much more rapid, by comparison, has been the development of man-made machines, which are, relatively speaking, a product of “the last five minutes” of the earth’s history. Is it not safer, then, asks Butler, in view of a future that may last many more millions of years, to nip the potential calamity of self-aware machinery in the bud and to take steps to prevent the emergence of any such thing as “machine consciousness”? Whereas Butler, however, saw the potential calamity as consisting in self-aware machines gaining sway over those who designed and built them, our concern is a different one: namely, that it must be ensured that no electronic systems with mind-like properties are developed or allowed to arise until the possibility is absolutely excluded that such systems, like naturally living beings, might experience suffering.
See Chapter 23 of Butler’s Erewhon (Penguin Classics, 1985), p. 198 ff.