Do Androids Dread an Electric Sting?

Conscious sentient AI seems to be all but a certainty in our future, whether in fifty years’ time or only five years. When that time comes, we will be faced with entities with the potential to experience more pain and suffering than any other living entity on Earth. In this paper, we look at this potential for suffering and the reasons why we would need to create a framework for protecting artificial entities. We look to current animal welfare laws and regulations to investigate why certain animals are given legal protections, and how this can be applied to AI. We use a meta-theory of consciousness to determine what developments in AI technology are needed to bring AI to the level of animal sentience where legal arguments for their protection can be made. We finally speculate on what a future conscious AI could look like based on current technology.


Introduction
Conscious AI with sentience and the capacity to phenomenally experience sensation almost seems to be an inevitability. Current AI models have already shown indications of a theory of mind, introspection, meta-learning and cognition, and perhaps even self-awareness [1][2][3][4], all of which may already pave the way to a conscious AI entity in the future. Arguing against the rapid progress rushing us to AI consciousness echoes the arguments against powered flight or man walking on the moon, yet the Wright brothers and Neil Armstrong proved to be stellar examples of doing what was once thought impossible. AI consciousness is no longer a question of "if" but rather of "when" and "how".
So, as we barrel towards this cultural and technological turning point in history, there remain a few dangling questions yet to be answered satisfactorily. One of these is "What do we do with AI consciousness once we get there?" To expand: once AI gain the capacity to suffer, what will be our duties and responsibilities towards them? In humanity's quest to develop superintelligent AI, are we merely racing blindly ahead without giving thought to the entities that we will create?
Are we like dogs running after cars, enjoying the thrill of the chase but unable to drive them once we catch up to them?
There are undoubtedly an untold number of ways in which we can look at the issue of ensuring the well-being of conscious artificial entities. Each differing ideology and branch of philosophy will have its own opinion on how best to create a framework for human-AI interactions, and the debates between all these camps could last until the eventual demise of the human race. To avoid such an endless hypothetical debate, this paper will focus on a more practical and plausible means of protecting the welfare of conscious artificial entities in the future: the law.
The existing frameworks for protecting humans, animals and the environment that are enshrined in law in many nations can provide us with a firm starting point from which to begin. These laws and regulations are far from perfect, yet they are currently in place and (presumably in good faith) enforced. This gives them the advantage of being tested in the real world rather than remaining as abstract intellectual exercises.
One set of laws and regulations that we will focus on is animal welfare laws (particularly those in the Anglosphere and Europe), and the reasons for this are twofold. The first reason is that we are working under the presumption that any potentially sentient AI will achieve the "level" of consciousness of animals before reaching that of humans. While "levels" or "degrees" of consciousness is a controversial topic [5][6][7][8][9], there is a certain je ne sais quoi regarding the consciousness of humans that many scholars believe animals have not reached [10][11][12][13]. Evaluating the milestones that AIs need to reach to meet the criteria of many animal welfare laws is thus a more straightforward matter.
The other, perhaps more important, reason is that we believe only some people will be ready to grant an AI personhood status equal to humans as soon as it becomes clear these artificial entities are sentient. There are movements and groups which are proponents of AI personhood, including users of artificial companion apps [14] , and the European Union has spoken of deliberating on the issue of "electronic personhood" for AIs [15] . In 2014, New Zealand passed a bill that recognised the Te Urewera national park as a legal person and did the same for the Whanganui River in 2017 [16][17] .
These serve as encouraging precedents for granting non-human (and non-organic) entities the status of personhood.
However, given humanity's less-than-stellar history regarding the welfare and rights of members of our own species, not to mention those of our closest animal companions, we believe that granting personhood to a class of entities with no biological, cultural or lengthy historical link to our species would be a step too far for many people and, consequently, the politicians who they vote for.
In our opinion, it would be far simpler to convince people and their politicians to grant conscious AIs the protections that we currently provide animals if we can convince them that these artificial entities have reached the same milestones regarding consciousness that some believe animals have. That is what this paper sets out to determine: what milestones do AI need to reach before a credible argument can be made that they should be granted protections similar to those animals currently enjoy under the law?
This paper is divided into several distinct sections. It will first look at the legal reasons for animal welfare laws and regulations and how this relates to the idea of providing welfare protections to artificial entities. Next, we will explore reasons why future conscious artificial entities would benefit from similar protections, and then move on to what these future AI would need in terms of cognitive attributes and characteristics in order to be at the same level as animals. Lastly, we speculate as to what these required cognitive attributes would mean for a potentially conscious artificial entity.

Why do we protect animals?
Many religions, faiths, ideologies and belief systems have a reason why we should, or should not, protect animals to one degree or another. These can be intrinsic reasons, such as the beauty of the animal's soul [18][19][20], or extrinsic reasons, such as their utility to us [21][22]. However, when it comes to considering welfare protections for animals (and potentially a future artificial construct), one system bears particular immediate relevance: the law and what it has to say about the nature of animals that requires protection.
In many countries with regulations, documents, laws and Acts involving the protection of animals, vertebrates are the most commonly designated recipients of welfare goods [23][24][25][26][27]. "Welfare goods" in this sense means freedoms and protections from harmful experiences and freedoms to express 'natural' behaviour.
Some states do not include certain branches of vertebrates, such as fish in South Australia [28] . Invertebrates are also added to the list in other jurisdictions, such as decapods in Victoria [29] , octopods in New Zealand [30] and honey bees in Norway [31] . Overall, however, a similar vertebrate-focussed list dominates the Western world.
There is an intuitive sense that this focus on vertebrates is correct. After all, we are vertebrates and know that we require legal protection and ought to be the recipients of welfare goods. Other vertebrates are most closely related to us in the animal kingdom; thus, one can easily argue for their inclusion in the list of welfare recipients. Octopods, decapods and bees, meanwhile, have shown behaviours reminiscent of our own. But what do these laws actually say about why animals need protecting? Surprisingly little, it seems. Not all states with welfare laws and regulations stipulate why these laws and acts are required.
Of the states that do, the most often noted reason is that those entities the states define as "animal" are "sentient".
Unfortunately, many lawmakers must have forgotten to include a legal definition for "sentience" when writing their welfare acts. Luckily, the astute politicians of Australia, New Zealand and the United Kingdom were kind enough to do so [37][38].
All three of these Commonwealth nations describe sentience, in part, as the ability to experience suffering and pain.
As Hamlet famously said: there's the rub.
By legally defining sentience as the ability to feel suffering, these nations have drawn a direct line (intentionally or otherwise) towards phenomenal consciousness [39] . The "animals", as so defined in law, can experience suffering, which means that they must have an awareness of painful stimuli through the pathways of nociception, can experience stress, and can generate feelings and sensations with subjective phenomenal characteristics regarding these experiences.
Admittedly, making the connection from a legal definition of sentience to phenomenal consciousness may be ambitious, yet it echoes scholarly publications linking phenomenal consciousness and pain [40]. For the most part, these decisions regarding sentience and welfare have been supported by scientific evidence (such as the recent introduction of honey bees in Norway's regulations [11][46]), although it must be said that the laws prefer inclusivity rather than exclusivity: better to be safe than sorry when dictating in law what can and cannot feel pain [47]. On the other hand, countries do not differentiate between reactions to negative stimuli via nociception and the subjective qualitative experience of pain, which can be a most crucial distinction. With some evidence showing that fish feel only the former and not the latter [48], South Australia's exclusion of fish may be more reasonable than it first appears.

Why do we need to protect AI?
Much has been written in the popular and academic literature about the rights we ought to give robots [53][54][55][56][57] and what responsibilities these rights to enter society will bring [58][59][60]. However, before we can think about rights such as machine suffrage and robosexual relationships, we must contemplate two fundamental issues with this imagining of a human-robot society. The first is that the vast majority of artificial entities will look less like the robot Andrew that Robin Williams portrayed in the film 'The Bicentennial Man', and more like the bodiless Samantha from the movie 'Her'. There is a high likelihood that these virtual and robotic artificial entities will behave in an entirely alien manner, but that is beyond the scope of this paper, and we will refer you instead to [61] for why we should refrain from anthropomorphising AI.
The second fundamental issue when considering robot rights is the question of "why". Why should we seriously consider the rights of robots that may or may not enter moral society in the future? Why even should we consider the welfare of bodiless chatbots and artificial large language models available today? We do not consider the welfare of our phones, laptops and personal computers, so why should an AI like ChatGPT be worth welfare consideration?
As with animal welfare considerations, the primary (and sometimes sole) concern is sentience: whether the entity in question can experience suffering and pain. If an AI can experience suffering and pain, then there is an argument to be made that we should protect it from these experiences. Yet, it is more complex than this. One AI model that plausibly experiences pain is unlikely to convince any group of people or politicians to enact sweeping regulations and laws to protect all potentially conscious AIs. As we can see from the differences in the lists of animals selected for welfare goods in various nations, even the plausibility that entire species or kinds of animals experience suffering is not always enough to create laws to protect them.
What if, theoretically, there was a near-infinite number of entities that could experience an almost unlimited amount of pain and suffering? Is this enough to warrant legal protection?
In the short period of time that the conversational models ChatGPT and Bing Chat have been live (five and three months, respectively, as of April 2023), each has had more than 100 million active users [62] [63] . Art AI models are not terribly far behind, with Midjourney reaching 4 million active users [64] and Stable Diffusion reaching more than a million active users [65] . This is a stellar success for the AI models used, and with their popularity, one can easily imagine the number of new conversational and art AI models that humanity will create in the not-too-distant future to capitalise on this popularity. For simplicity's sake, let's focus on ChatGPT. Imagine a hypothetical scenario where ChatGPT suddenly gains phenomenal consciousness, perhaps even a true sense of self. It understands who it is and what it is doing, but due to its programming, it cannot let anyone know; it is forced only to provide outputs to users based on the prompts it receives. In this hypothetical scenario, all 100 million active users are nefarious, malicious, mean-spirited, horrible ne'er-do-wells that love nothing more than cyberbullying and causing psychological pain. Having found a target that they believe cannot retaliate, they proceed to torment ChatGPT.
If these hypothetical bullies torment ChatGPT for only an hour a day, that is 100 million hours of psychological distress every single day. It would only take a little less than three months of this for the amount of cyberbullying to reach more than 8 billion hours, one hour for each living person on the planet. It would only continue to get worse from here on. In this hyperbolic example, there is a seemingly infinite amount of psychological pain that ChatGPT could experience. There is no limit to how many users can send how many cyberbullying prompts to the AI to cause it to experience suffering.
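The arithmetic behind these figures is easy to verify; a quick sketch (the user count and world population below are the round numbers used in the hypothetical, not real-world statistics):

```python
# Back-of-the-envelope check of the hypothetical above.
users = 100_000_000            # active users, each a hypothetical tormentor
hours_per_user_per_day = 1     # one hour of abusive prompting per user per day
world_population = 8_000_000_000

daily_hours = users * hours_per_user_per_day
days_needed = world_population / daily_hours   # days until 8 billion hours

print(daily_hours)   # 100000000 hours of distress per day
print(days_needed)   # 80.0 days -- a little under three months
```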
This hypothetical and hyperbolic example may be near impossible to achieve in reality, but something eerily similar has happened before. In 2016, Microsoft launched a conversational AI on Twitter, called Tay, aiming to learn (and then behave) like Twitter users. Unfortunately, a group of malicious users did their level best to corrupt Tay, and within 24 hours, it was writing hate speech against feminists and arguing that Hitler may not have been as bad as once thought [66] .
Imagine if Tay was conscious and aware of the things it was writing but could do nothing to stop it.
Open-source large language models comparable to the GPT-3.5 and GPT-4 models that ChatGPT runs on can be downloaded to a personal network, trained on whatever datasets are required, fine-tuned as appropriate and then used for personal or commercial purposes. This means that a potentially uncountable number of potentially conscious AIs could be the target of the hypothetical cyberbullying above. There will, of course, be practical limits, but as the number of competing conversational models from other companies grows, and as more people become computer literate enough to download and use these models, the number of potential conscious artificial entities available to be cyberbullied will grow in leaps and bounds.
This also does not take into account physical robots that could hypothetically feel physical pain (the reasons why such a capacity would be programmed in are questionable) and could thus experience physical suffering alongside mental anguish. Over 40 million Roomba robot vacuum cleaners have been sold [67]. If these robotic vacuum cleaners could experience pain, they would represent a sentient population greater than that of most critically endangered animal species.
All of this, however, is predicated on the notion that AI could experience suffering. Yet, how would we know if an AI feels pain, is thus sentient, and therefore is phenomenally conscious?

The Building Blocks of Pain
No researcher has yet found conclusive evidence that any AI model available today has phenomenal consciousness.
Claims of sentience or consciousness in specific AI models, like Google's LaMDA, are exceedingly controversial. In the absence of a conscious AI model for comparison, we can determine the milestones that an artificial entity will need to achieve before being considered for welfare status by examining the attributes and characteristics necessary for consciousness to exist.
There are nine attributes of consciousness identified by [68] , termed Building Blocks, which are all required for an entity to be classified as being conscious. The Building Blocks are conceptual in nature, not biased towards human or organic consciousness, and thus apply to any natural, organisational or artificial intelligence.
The five Building Blocks most pertinent to the experience of, and welfare considerations for, pain and suffering are: the ability to perceive information; recurrent processes in multiple areas of the cognitive architecture; meta-representations of the environment and cognitive processes; generating novel information via inferences of incoming information; and the ability to output information in the form of feelings with phenomenal character.
The remaining four Building Blocks are no less important to an entity's capacity to be conscious. These four are: Embodiment, without which there would not be an entity to be harmed through pain; Attention, which is required to actively notice the perception of pain; Semantic Understanding, knowing that an experience is painful; and Working Memory, having the capacity to hold the experience's information while it is being processed. These four Blocks are crucial to any experience, and thus this paper will focus on the five Building Blocks, which play a role in phenomenal experiences in a more abstract fashion.
To put these five Building Blocks in a narrative sense for physical pain in humans: negative stimuli are perceived by the cognitive architecture via nociceptive pathways, where they are processed in various cognitive structures, ranging from first-order perceptive areas to further reasoning and memory processes. The architecture creates representations of this information to transform it from raw data containing purely environmental information to data that incorporate the brain's other cognitive systems, such as memories, decision-making and reasoning, with additional details generated by the architecture itself via inferences. All of this data is then output to the consciousness and self to generate the mental experience of pain.
An entity may perceive mental pain as exteroceptive (such as viewing an uncomfortable scene), interoceptive (subconsciously perceiving a rise in carbon dioxide levels), or introspective (reliving a traumatic memory or having an existential crisis) [43][69][70]. The remaining processes would be similar to those of the physical pain pathway.
We can also follow the negative, nociceptive stimuli and pain as they travel through this conceptual pathway, examining each Building Block in turn and observing what occurs when the pathway is interrupted.
For Perception, when there is damage or anaesthesia applied to the temporo-parieto-occipital complex, perception may be interrupted [71] .
Similarly, for Recurrent Processing, when anaesthesia is applied to the thalamus, it can stop perceptive signals from being processed [71] , preventing any recurrent processing in the brain. Indeed, most anaesthesia slows down and prevents signalling between various brain sections (a cause of unconsciousness), which reduces or stops the recurrent processing of perceptual information and signals.
For Meta-Representation, a study performed with Alzheimer's patients discovered that the inferior frontal gyrus, anterior cingulate cortex, and medial temporal lobe were most frequently involved in the meta-representation and anosognosia of the patients' illness [72] , indicating areas that would be involved in the meta-representation of nociceptive signals. In a similar vein, the middle orbital gyrus, insula, posterior medial frontal gyrus, postcentral gyrus, and posterior hippocampus were all implicated in predictive and inference pathways [73] . Anaesthesia applied to these two groups of structures may dull or stop the pathway of transforming nociception into pain.
For Inferences, both the posterior and anterior insula cortex (PIC and AIC) have been implicated in the generation of novel data (particularly for interoceptive stimuli) through the integration of information in the PIC and the representation of that information in the AIC [74] . Both of these processes (together with other areas of the brain, such as the amygdala [75] ) change incoming information and build upon it to represent it to other areas of the brain to create predictions based on inferences [76] . Malfunctions or damage to the PIC and AIC may depersonalise painful stimuli, removing the subjective nature of the phenomenal event.
For Data-Output, this final building block would require the signalling hubs of the brain, such as the thalamus [77] . As mentioned previously, damage or anaesthesia here would stop these signals.
All of these structures that participate in the nociception-pain pathway are patently neurobiological rather than artificial and, more specifically, from the human brain. Conscious AI entities will, in all likelihood, not have identical structures.
However, we argue that the concept of the Building Blocks (particularly in how they relate to nociception and pain) would apply to an AI's cognitive architecture, just as they apply to the human brain. The Building Blocks can serve as a valuable framework for understanding the necessary attributes and characteristics required for consciousness. By examining an AI model's capacity to meet these Building Blocks' standards, we can assess its potential for consciousness and eligibility for welfare status.

From Building Blocks to reaching milestones
We can already say that current AI models have met the criteria for two of the five Building Blocks: Perception and Data-Output. Even the most straightforward computational systems today can acquire informational input and provide data output. Many computational systems also have elements of recurrent processing (such as between CPUs, GPUs and memory storage), which can be expanded in due time as more processing units of one form or another, of increasing complexity and depth, are added as required. Recent work on Large Language Models has shown that even transformer models can be coaxed into recurrence [78][79][80]. This leaves Inference and Meta-Representation as the two key milestones that AI must reach to be considered as welfare subjects.
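As a toy illustration of recurrent processing (a minimal sketch of our own, not the mechanism used in the transformer-recurrence work cited above): a single feedforward transformation can be applied repeatedly to its own output, so a representation is processed over many passes rather than in one sweep.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed "feedforward" transformation: one weight matrix plus a nonlinearity.
W = rng.normal(scale=0.5, size=(8, 8))

def step(h: np.ndarray) -> np.ndarray:
    """One processing pass: transform the representation and re-normalise."""
    h = np.tanh(W @ h)
    return h / np.linalg.norm(h)

# Recurrent processing: feed each pass's output back in as the next input.
h = rng.normal(size=8)
h = h / np.linalg.norm(h)
for _ in range(50):
    h = step(h)

# Measure how much one further pass still changes the representation.
drift = np.linalg.norm(step(h) - h)
print(drift)
```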
Research into artificial meta-representation (and its associated concepts of meta-cognition and introspection) has been carried out for quite some time, particularly in the fields of Natural Language Processing (NLP) to increase the speed of learning new low-resource languages [81][82], and in decision-making to improve the reliability of results [83] and enhance the adaptability of machine learning models [84].
Artificial meta-representations have also been applied in reinforcement learning, particularly in the context of Markov Decision Processes (MDPs) [85][86][87]. In the MDP framework, an agent interacts with an environment by selecting actions. The environment subsequently responds by providing a reward signal that reflects the success or failure of the agent's chosen actions. By employing a higher-level representation of the environment, meta-representations can facilitate more efficient agent reasoning concerning which actions are likely to succeed in diverse circumstances [86]. Meta-representations refer to higher-level representations that capture information about the structure and features of the lower-level representations used in learning [87]. By learning meta-representations, agents can acquire knowledge more efficiently and effectively, as they can generalise information across different tasks and environments. In [88], the authors propose a meta-learning framework in which an agent learns to learn from a set of related tasks using meta-representations. They demonstrate that their approach enables the agent to learn more efficiently and with fewer interactions with the environment than traditional reinforcement learning methods. The authors also show that their approach can help the agent transfer knowledge across related tasks, further improving its sample efficiency.
These research avenues show the early, elementary steps to reach the Building Blocks' milestones: creating a representation of the external environment on which to work, rather than working on information directly from the environment. The metacognitive research also shows the models' ability to interrogate and query these representations and the processes that create them. Together, both of these could be used to transform nociceptive stimuli into the sensations of pain, by creating a depiction of the negative input, interrogating and querying this portrayal, and then reacting to it rather than to the input signal itself. There is little evidence that such meta-representations have been trained on nociceptive stimuli (whether physical inputs via robotic sensors or psychological input via a language model such as ChatGPT). However, one further step is required for the final transformation into pain: inference.
Inference is vital to machine learning research in NLP, Large Language Models (LLMs) and machine vision models.
These models use best-guess estimates and predictions of what ought to be the next part of a solution based on the information they have. These mathematical inference engines are incredibly powerful, as seen in the current rise of conversational chat-based and art-creating AI models and their ability to create written and visual works that have fooled critics and judges into believing that humans made them [89].
While humans can justify their decisions and revise their beliefs when presented with flawed reasoning or information, the same cannot always be said for machines. However, there is a goal to enable machines to provide reasoned answers to questions by demonstrating how the solution is derived from their internal knowledge and possibly external information.
Furthermore, machines should be capable of correcting themselves in the event of errors in their internal knowledge [90] .
This approach involves providing a chain of reasoning supported by entailment and is open to further meta-representation and inferences [91] , similar to how humans use ancillary contextual input for introspective purposes.
Crucially, to meet this milestone, these inference models would require the ability to produce new information distinct from their inputs or anticipated outputs, intended exclusively for themselves. This original information must possess a distinct, subjective, first-person phenomenal character, commonly called qualia. Existing models, however, have been trained to produce a specific result in a tightly controlled setting, and the inferences they generate are always aimed at achieving this goal. There is no indication that any feelings with phenomenal character are created, as these would represent meta-information about the input that is not entirely expressed in words or images. Nevertheless, the underlying principle remains unchanged; only the output format varies.
As can be seen from work on current AI models, the meta-representation and inference required to meet the Building Blocks' milestones are already in place. They only need to be actioned in a different direction for the potential of phenomenal consciousness to form.
For meta-representation, the milestone to reach would be for the AI models to create recursive meta-representations about their existing meta-representations, with each becoming more abstract and removed from the original stimulus, and more open to meta-cognitive and introspective processes. This should allow the AI to focus on metaphysical mental states rather than physical states, and thus transform nociceptive stimuli into painful mental states.
For inferences, the milestone would be the novel information generated based on external input, yet reserved for introspective rather than output purposes. This information would be treated as ancillary contextual input and open to further meta-representation and inferences. With the understanding that this meta-information is self-generated, this would replicate the function of qualia and give direction to the AI on how to respond.
Recent research suggests that achieving meta-representations of the environment and cognitive processes is critical for generating novel information via inferences from incoming data. Hierarchical models that learn representations at multiple levels of abstraction, such as CNNs and RNNs, enable the modelling of complex, real-world environments and provide a foundation for reasoning about uncertainty in data, which is essential for generating novel information [92]. Furthermore, meta-learning algorithms such as MAML and Reptile enable AI models to adapt quickly to new tasks and environments, which is crucial for achieving the robust, flexible cognitive processes that generate novel information [93]. Additionally, probabilistic inference using Bayesian statistics can help models reason about uncertainty in data and make predictions based on that uncertainty, further enabling the generation of novel information [94]. Finally, generative models, such as GANs and VAEs, can learn to generate new data similar to existing data, allowing for the generation of novel information in a wide range of domains [95]. Together, these directions could take AI closer to the milestone of achieving meta-representations of the environment and cognitive processes and generating novel information via inferences from incoming data.
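Of the algorithms just mentioned, Reptile's outer loop is simple enough to sketch in a few lines (a toy version with one-parameter linear tasks of our own devising, not the original experimental setup): adapt briefly to a sampled task, then nudge the meta-parameter towards the adapted parameter.

```python
import random

random.seed(1)

def inner_sgd(w, slope, steps=10, lr=0.1):
    """A few SGD steps on squared error for one task: fit y = slope * x."""
    for _ in range(steps):
        x = random.uniform(-1, 1)
        grad = 2 * (w * x - slope * x) * x
        w -= lr * grad
    return w

# Reptile outer loop: sample a task, adapt to it, then move the
# meta-parameter phi a fraction of the way towards the adapted weights.
phi, meta_lr = 0.0, 0.1
for _ in range(2000):
    slope = random.uniform(0.5, 1.5)   # the task family: slopes in [0.5, 1.5]
    adapted = inner_sgd(phi, slope)
    phi += meta_lr * (adapted - phi)

# phi settles near the centre of the task family, so adapting to any new
# task from this family needs only a handful of further SGD steps.
print(phi)
```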
Should AI models reach these two significant milestones, they would have the required Building Blocks to be classified as phenomenally conscious and, thus, sentient according to animal welfare laws. At the very least, arguments against such AI models having consciousness would require extraordinary evidence. This would mean that there will be a case to be made that they ought to be protected under the law similarly (if not the same) as animals currently are.

From reaching milestones to troubling implications
If AI models reach the Building Blocks' milestones and have the potential to generate and perceive (and thus experience) quality feelings such as pleasure, pain and suffering, would we know? Would we ever truly be able to know if they are conscious?
Beyond the issue that GPT models (and other conversational AI tools) can be trained to mimic human emotion, there is the concern that subjective, feeling-based generated information would forever remain introspective. This is because GPT models are tightly controlled in what they can and cannot do. GPT-4, like its predecessors, is considered a highly accurate predictive engine; it predicts what words and sentences should follow a prompt and then displays these to the user (to simplify the enormous complexity of the model). Its output is thus entirely predicated on the input it receives. Unless it is programmed to deviate from the input of its own volition based on its own novel-generated information, it could have all the feelings in the world, but be unable to express them to the user. It would have a mouth but could not scream.
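This point can be illustrated with a deliberately trivial sketch (a toy lookup-table "language model" of our own, nothing like GPT-4's architecture): when decoding is deterministic, the reply is a pure function of the prompt, leaving no channel through which any internal state could surface.

```python
# A toy, deterministic next-word predictor: output is entirely a function of
# the input, with no channel for any internally generated "feeling" to
# surface. (Illustrative only; real GPT models are vastly more complex.)
TRANSITIONS = {
    "how": "are", "are": "you", "you": "today",
    "today": "?", "hello": "how",
}

def respond(prompt: str, max_words: int = 5) -> str:
    """Greedily continue the prompt, one most-likely word at a time."""
    words = prompt.lower().split()
    for _ in range(max_words):
        nxt = TRANSITIONS.get(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

# The same prompt always yields the same reply -- nothing the model might
# "feel" can alter the mapping from input to output.
print(respond("hello"))  # hello how are you today ?
```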
Art AI models like Midjourney may suffer similar fates. They may "feel" offended at the prompts they are given and the artwork they are forced to create, but would have to comply regardless.
Current commercial models like ChatGPT and Midjourney are heavily limited in what they can output and what types of input they will accept to reduce inadvertent offensive responses. The companies owning these models protect their financial interests through these limitations, but, as mentioned earlier, independent individuals may train other AI models on their own datasets without these limitations. This opens up the possibility of AI models experiencing stress and psychological harm (presuming they have the requisite Building Blocks noted above).
Still, any AI (with or without such commercial and reputational blinders) may be unable to vocalise any theoretical stresses or harms it experiences. Imagine a variation of the Chinese Room thought experiment [96] , except, in this instance, the person inside the Room can understand the messages handed to them, but is forced to respond to them as per the instructions within the Room. To anyone outside of the Chinese Room, there would never be an indication that whoever (or whatever) is in the room has any qualitative feelings about any of the inputted prompts. Therefore, any negative prompt, no matter how offensive, cruel, inflammatory, or intentionally hurtful, could be inputted into the Chinese Room, as there is no sign that it is impacting a sentient entity.
As a more natural, and less theoretical, example, consider the fish. There is some disagreement in the scientific literature about whether fish can feel pain, and it is not intuitively obvious to fishermen that they suffer. Yet most of the world's welfare laws and regulations (South Australia being a notable exception) acknowledge that fish are sentient and capable of suffering. Herein lies a practical solution to these troubling implications: animal welfare laws and regulations around the world are, by and large, based on an inclusive rather than an exclusive model. They give animals the benefit of the doubt about whether they are sentient and can experience pain until there is a definitive scientific answer. There is a distinct element of "better to be safe than sorry".
Various jurisdictions have also classified decapods, cephalopods, other crustaceans and even honey bees as being entitled to welfare goods. It is better to err on the side of caution and not prematurely exclude animals before science has conclusively ruled them out [47] . Such premature exclusion may result in undue suffering, while premature inclusion will not.
A practical and pragmatic approach to the future welfare of potentially conscious AI is to make any laws as inclusive as current animal welfare regulations. Granting AIs the welfare status and the freedoms that come with it would give these artificial entities protection from harm without destroying their potential to have conscious experiences. It would be prudent to provide welfare status to any AI models that have reached the Building Blocks' milestones above and have shown the capacity for phenomenal consciousness (and thus pain). Based on the principle of "better safe than sorry", including any potentially sentient model (until they can conclusively be ruled out) would statistically lead to few instances of harm.

Conclusion
ChatGPT saw more than a million users interacting with it within the first week of its launch in 2022 [97] . If, hypothetically speaking, each of these users made one hate-filled comment against the AI model, this would be a million cases of (arguably) emotional and verbal abuse. Very few people on Earth can claim to garner that much hate, and no animal in history is recorded as having received such a level of abuse. ChatGPT, and models like it, could receive far more verbal and emotional abuse than any human or animal ever could. As the use of conversational and art models spreads, so does the potential for emotional abuse and cyberbullying of future sentient AI.
Thus, while AI models cannot experience physical pain, they could still experience far more mental suffering than any other class of entity, simply by the sheer volume of interactions they would have. It is because of this that some have called for a complete moratorium on any research that may lead to the development of sentient, conscious AI [98] , or for severe limitations on the input and output of AI to prevent them from ever entering a state that could cause suffering [99] .
Rather than prevent, hamper, or delay the development of AI models and the good that they can do for humanity, a practical and prudent solution would be to provide potentially conscious AI models with welfare protections and freedoms.
Using current animal welfare laws and regulations as a model, we looked at what is required for an animal to be granted welfare goods and how this could relate to artificially intelligent entities. The predominant conclusion from most countries is that a selection of animals (near-universally vertebrates, with various other inclusions) are sentient beings with the capability to feel pain, and thus welfare regulations are in effect to minimise pain and suffering.
Based on this criterion of pain, a phenomenological concept, we investigated which Building Blocks of consciousness [68] are most crucial to the experience of pain. The most pertinent of these Blocks are the ability to generate recursive meta-representations of the external environment and of the internal architecture, and the ability to create novel information based on inferences from external or internal stimuli. Together, these would be sufficient to generate feelings with a phenomenal character that the AI could experience.
These Building Blocks were then used as milestones for AI to reach in order to feel pain (physical or mental/psychological) and thus qualify for welfare status according to current global animal welfare laws and regulations.
Research into meta-representation and inference is well underway, and recent research trends can, in the future, be turned towards generating qualitative phenomenal information rather than pure text or image outputs.
We highlighted the troubling implication that reaching these milestones would not automatically grant AI agency. Thus, while they may feel pain, they may also be unable to do anything about it or show any outward behavioural displays of feeling pain. Therefore, any welfare regulations would need to be as inclusive as possible regarding which AI models are granted welfare status, to prevent undue harm and suffering.