Philosophy in Technology: Objectives, Questions, Methods, and Issues

Philosophy in technology is a research program that studies the philosophical roots of engineering and technology. It asserts that resolving the conceptual problems raised by modern technology requires an understanding of those philosophical roots. In this paper, we define the objectives of philosophy in technology, the kinds of questions it explores, the methods it uses, and how it differs from the philosophy of technology. We then examine six selected problems to illustrate how the philosophical perspective sheds new light on technology.


Introduction
As a research program, philosophy in technology is concerned with the philosophical roots of engineering and technology. It is not concerned with any specific technical domain but rather with how different technologies can benefit from purely philosophical concepts, how technological domains often unknowingly adapt traditional philosophical concepts to meet their needs, and how, from an abstract metaphysical, ontological, or axiological perspective, philosophy shapes and defines what technology does, how it develops, and how it evolves.
Philosophy in technology also points out the semantic gap between the concepts used by technology and the concepts understood in philosophy.
We claim here that this semantic gap is a source of confusion that causes misunderstandings between philosophers, the general population, and technologists. It also serves to overinflate or underestimate the risks and threats posed by technological development.
We start in the following section by contrasting philosophy in technology with the philosophy of technology. In Section 3, we then discuss the main tenets of philosophy in technology as a research program, before Section 4 outlines its methodological assumptions.
Section 5 then presents exemplary cases of philosophical thinking in technology. Finally, in Section 6, we summarize our observations and suggest that there is a need for open dialogue between philosophers and technologists, although the two groups are actually not as far apart as many seem to think.

Philosophy in technology versus the philosophy of technology
The philosophy of technology can be viewed from many perspectives. For example, it can be seen as (1) a systematic clarification of the nature of technology as an element and product of human culture. Alternatively, it can be regarded as (2) a systematic reflection on technology's consequences for human life or (3) a systematic investigation of the practices involved in inventing, designing, engineering, and making technological artifacts.
In contrast, philosophy in technology (1) seeks out the implicit philosophical ground of technology and engineering and its role in shaping technological solutions; (2) explicates the ontological, metaphysical, axiological, and methodological dimensions of technology; and (3) clarifies the semantic gap between technical and philosophical concepts and attempts to bring them together under one perspective. The latter could involve concepts such as agents, autonomy, intelligence, the mind, ethics, justification, responsibility, phenomenology, selfhood, personhood, knowledge, wisdom, privacy, power, right vs. wrong, ontology, truth conditions, verification, and so on, with this list being virtually endless.

Exemplary study cases
Under the light of philosophy in technology, seemingly exclusively technical problems emerge as multidimensional concepts that draw on ideas from ontology, metaphysics, the philosophy of mind, and the philosophy of nature rather than being purely technological pursuits. More specifically, the following exemplary cases reveal how sticking to the technological perspective acts as a limiting factor on technology itself by enforcing a myopic vision of the technical enterprise that constrains the horizon of possible solutions, much to the detriment of technology itself.

Chinese Room: Searle's Chinese room argument is about whether modern computers based on a Turing machine (TM) operate on a syntactic or semantic level. If Searle's argument holds, computers will never think like humans, so it is impossible to base artificial general intelligence (AGI) on a TM. The insight from philosophy in technology, namely "what is wrong with this discussion," does not come from technology but rather from the philosophy of mind. It holds that the discussion is misguided: A TM model of the human mind is inappropriate because if AI systems are to match human-level intelligence (i.e., AGI), they need to be embodied, embedded, extended, and enactive (i.e., embodied cognition), and this appears to be our best bet so far for modeling the mind. Thus, Searle's Chinese room argument is correct in that computers will never think like humans because the mind is not a TM. But at the same time, Searle's argument is wrong in that the issue is not whether TM systems understand semantics in addition to syntax (as can be used to argue against large language models [LLMs] like ChatGPT 3/4 and similar); the real issue is whether TM-based models are simply incorrect models of the mind. Discussions about whether Searle was right or not are misdirected, yet they still retain their vigor. (For an extended discussion, see, for example, the work of Smith [1998, 2019], Dreyfus [2016], Cole [2020], and Woodridge [2020].)

Synthetic phenomenology: Some in the philosophically minded AI community, such as Thomas Metzinger, have proposed a global moratorium on synthetic phenomenology from 2021 until 2050, although this may be revised. Synthetic phenomenology aims to model, design, and develop conscious systems, including their states and functions, with artificial hardware. The philosophical objections to the development of synthetic phenomenology are twofold: (1) Entities, whether artificial or natural, with consciousness or phenomenal experience will have the capacity to suffer, and we should avoid creating additional entities that would add to the overall level of suffering among conscious beings. (2) Creating entities with an artificial consciousness will essentially create "alien" beings, and we cannot predict how they will act, what moral code they will adopt, or how they will perceive us humans. It seems that engineers do not spend much of their time on these aspects of synthetic phenomenology, but we think that they should. Stuart Russell's (2019) question about AI ("What if AI succeeds?") therefore remains open. (For an extended discussion of this, see, for example, the work of Arabales et al. [2009], Chrisley [2009], Metzinger [2021], Alexander [2022], and Cali [2022].)

Orthogonality thesis and AGI: AI systems still cannot replicate human intelligence and a human agent's ability to cope with reality. One of the cited obstacles here is the orthogonality thesis, which holds that "the final goals and intelligence levels of artificial agents are independent of each other." This is obviously not a computing problem, because the orthogonality thesis is a philosophical concept from the philosophy of mind. On limiting the discussion of the orthogonality thesis to the ethical dimension, it could be restated as "the level of intelligence of an AI agent does not correlate with its ethical capacities." Our experience with psychopaths provides strong confirmation for this. "So what?" you may ask. The philosophical insight is that the orthogonality thesis indicates that human intelligence is a complex of several relatively independent but connected faculties, and developing rational synthetic intelligence (e.g., AGI) will not automatically create moral systems. (For an extended discussion of this, see, for example, the work of Brooks [1991], Minsky [1991], Armstrong [2012], Dreyfus [2016], Wooldridge [2021], and Smith [2018].)
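The independence claimed by the orthogonality thesis can be made concrete with a toy sketch. Everything below is an illustrative assumption of ours (the function names, goals, and numbers do not come from the paper or the cited literature): the same generic optimizer can be handed any objective, so its optimization capability floats free of its final goal.

```python
# Toy illustration of the orthogonality thesis: optimization power
# and the objective being optimized are independent parameters.
# All names and numbers here are illustrative assumptions.

def hill_climb(objective, start, step=0.1, iters=500):
    """A generic local optimizer. Its capability (step, iters) is one
    knob; the objective it serves is a completely separate knob."""
    x = start
    for _ in range(iters):
        # Greedily move to whichever neighbor scores best.
        x = max((x - step, x, x + step), key=objective)
    return x

# Two arbitrary "final goals" served by the very same machinery.
goal_a = lambda x: -(x - 3.0) ** 2   # prefers x near 3
goal_b = lambda x: -(x + 7.0) ** 2   # prefers x near -7

print(round(hill_climb(goal_a, 0.0), 1))  # -> 3.0
print(round(hill_climb(goal_b, 0.0), 1))  # -> -7.0
```

Nothing about the optimizer's competence constrains which goal it pursues, which is the thesis in miniature; whether the analogy carries over to moral capacities is, as the section argues, a philosophical rather than a computational question.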

AI and Ethics:
Today's ethics in AI systems (robots, bots, etc.) are implemented in the abstract computational model of the Turing machine (TM), as all computer software is. From the ethical perspective, AI systems based on the TM model can only be behavioral systems. Engineers say that this will do, for them at least, as they continue to elaborate solutions to harebrained ethical problems like the trolley problem. Nevertheless, the behavioral approach to morality has long been discounted, perhaps with the exception of the Big Five. Thus, TM-based bots and the like implement the wrong ethical model under the wrong computational paradigm. The philosophical insight here is that one of the critical differences, although not the only one, between synthetic ethical systems as they are currently designed and those of humans lies in their ethical decision methods. Higher levels of ethical proficiency in AI systems may be realized by adapting the Aristotelian concept of phronesis, which introduces the concepts of the telos of ethics (the ultimate goal of ethics) and eudaimonia (the best good). In the case of AI, the telos of their ethical decisions is the best good of the human actors involved rather than the AI entities themselves. (For a lengthier discussion, see the work of Aristotle [2004], Wallach [2004], Wallach et al. [2010], Leslie [2019], Russell [2019], Coeckelbergh [2020], Polak and Krzanowski [2020], Dubber et al. [2020], Powers and Gansacia [2020], Muller [2021], and Veliz [2021].)
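Since the argument above leans on the Turing machine model, a minimal sketch may make the notion concrete. The machine below is entirely our illustrative assumption (not taken from the paper or any cited work): it is pure rule-driven symbol rewriting, and it "behaves" correctly without any grasp of meaning, which is exactly the behavioral character the section attributes to TM-based systems.

```python
# A minimal Turing machine: purely syntactic symbol manipulation.
# The example machine (an illustrative assumption) flips a binary
# string; it acts correctly with no understanding of what bits mean.

def run_tm(tape, rules, state="start", blank="_", max_steps=1000):
    """Run a one-tape Turing machine. `rules` maps (state, symbol)
    to (new_state, new_symbol, move), where move is -1, 0, or +1."""
    cells = dict(enumerate(tape))  # sparse tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rules that invert every bit, then halt at the first blank cell.
invert = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_tm("1011", invert))  # -> 0100
```

The machine's entire "competence" is the `rules` table; whether such syntax-only rewriting can ever amount to semantics is precisely what the Chinese room debate contests.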

Whole brain emulation:
A whole brain emulation (WBE) recreates a fully functional brain, such that it is functionally indistinguishable from the original mind. But we are of course not there yet, and no one knows if and when we will ever be there. But is WBE at least a tenable idea? How can we answer this question? A possible answer may come from philosophy. As it is currently conceptualized, WBE is based on eight assumptions. For example, the third assumption is that the relevant functions of the brain are Turing-computable, while the seventh assumption states that at the emulated level, the simulated components can be realized in an operational computer (i.e., a TM). The eighth assumption is that while a WBE must reproduce the original brain's functions, it need not necessarily replicate all of the bodily functions, so a WBE does not need to be a fully embodied brain. These assumptions are philosophical rather than technical. They assume there is a specific model for the mind/brain, one based on physical reductionism where a TM can host a computational model of the mind. The probity of these models has to be addressed before we can pass judgment on the whole WBE project. Thus, the answers to WBE's feasibility question lie in the philosophy of mind, phenomenology, neurology, and so on rather than in technology. Philosophy does not have these answers yet, but neither does technology. (For an extended discussion, see, for example, the work of Koene [2006, 2012, 2013], Sandberg and Bostrom [2008], Sandberg [2013], and Shanahan [2015].)

Meta-ontology in robotics: The AI systems we currently design and implement cannot replicate a human agent's ability to cope with reality. One of the reasons for this failing is, it seems, that these AI systems lack a proper ontology, or representation of the real world. The computational sciences redefine what philosophical ontology, or the representation of the world, is about, and it loses its essence of questioning "what is" by focusing instead on abstract internal representations. So, what sort of ontology should AI systems have to replicate the ability of human agents to cope with real-life situations? This is a meta-ontological question. Such AI systems should have an ontological commitment to the real world, and the truth conditions (ontology) of these AI systems should be consistent not with internal AI-based ontological theories but rather with the real world, such that they can be verified through operational success. (Operational or performative success refers to the ability to cope with specific tasks.) Finally, the ontology of these AI systems must account for the dynamic environment of the real world rather than using static internal representations of the world. (For an extended discussion, see, for example, the work of Brooks [1991], Berto and Plebany [2015], Dreyfus [1991], Hutchins [1995], Minsky [1991], Roitblat [2020], Smith [1998, 2019], and Krzanowski and Polak [2022].)

Conclusions
The list of philosophical problems in technology presented here is selective, limited, and biased toward computing technology and AI, which are the hot topics of today. Nevertheless, this selection should not imply that philosophical problems in technology are specific to just these technologies.
The lessons we can draw from this discussion are as follows: 1. Technology tends to substitute its own meanings for terms with traditional connotations in philosophy. For example, synthetic ethics is not ethics as generally understood, and synthetic phenomenology is not phenomenology. Synthetic ontology is not ontology, an autonomous agent is not an agent, AI intelligence is not intelligence, and so on. As such, there is a wide gap between what engineers claim to have done and what they have actually produced. The differences between technological and philosophical concepts are often so significant that they may denote completely different things, as with ethics, ethical behavior, justice, agency, autonomy, intelligence, the mind, and so on. In this way, the meaning of such terms becomes obfuscated and lost, having taken on a limited, narrow interpretation. Philosophy in technology (PinT) therefore needs to reopen this lost semantic horizon.
2. Semantics matter: The meanings we attribute to specific terms like ethics, justice, the mind, intelligence, phenomenology, and so on matter because they define the horizon of research and study objectives.Incorrect meanings lead to a myopic vision of technology.
3. Technologists must understand that cannibalizing the deep philosophical meanings of many concepts works to their own detriment, while philosophers need to realize that their musings are often perceived as just that: musings without relevance to technology. Both sides are wrong, so both sides should seek to understand each other.
4. There should be an open dialogue where both sides (i.e., technologists with a philosophical bent and philosophers with a technological understanding) can freely exchange their ideas without fear of being shouted down as ignoramuses or simpletons.
But what, in summary, is philosophy in technology? (A) Philosophy in technology (PinT) is a reflection on classical philosophical concepts in technology. It is analogous to philosophy in science: we propose tracking the presence and role of the big classical philosophical questions in technology, such as the nature of free will, the mind, and autonomous agents, so we can identify and analyze references to classical philosophical concepts like matter and time. Furthermore, philosophy in technology explores how classical philosophical concepts can be adapted to meet the needs of technology. (For example, Aristotelian phronetic ethics has been adapted for machine ethics, and the utilitarian and deontic ethical schools have been used in AI.) (B) Philosophy in technology is a disclosure and critical analysis of technology that reveals philosophical prejudices and assumptions in technology, reconstructs the philosophical concepts accepted in technology and engineering, and clarifies unclear uses of concepts. (C) Philosophy in technology analyzes the consequences of philosophical prejudices in technology, thus revealing philosophical assumptions in technology, determining their role in specific technical realizations, and analyzing the consequences and possible postulates for changes in philosophical underpinnings.
Qeios, CC-BY 4.0 • Article, January 15, 2024 • Qeios ID: D0RDW7 • https://doi.org/10.32388/D0RDW7