The Algorithmic Philosophy — A Synthetic and Social Philosophy

This paper, the introduction to the forthcoming book of the same name, shows how the Algorithmic Thinking Theory, a theory of the human mind, can be used to solve many philosophical puzzles and thus form a “grand synthesis”, or unification, of the various existing philosophies. Innate, discrete thinking tools process information or data from the external world serially, selectively, repetitively, roundaboutly, and economically, leading to consequences such as “mental distortion”, knowledge solidification, and combinatorial explosion, and thereby forming a dynamic, pluralistic, embracive, and expansive knowledge system in which “Being” and “opinions”, ontology and epistemology, rationality and irrationality, and so on are all synthesized. Thoughtful entities exist, move, change, combine, interact, and multiply, producing social phenomena as an “independent third party” between humans and the world, which implies both the differences and the compatibility between the natural and the social sciences. Democracy, freedom, markets, institutions, and organizational power are explained with a logical consistency never achieved before. All major philosophical branches, schools, and scholars find a place in this concise panorama, which stems from a single formula: thinking = computation = (Instruction + information) × speed × time. This highly original approach was inspired by computer science and customized for building the principles of the humanities and social sciences.


Introduction
The purpose of this book 2 is to introduce a highly original theory of mind, or a "thinking theory". The theory and its corollaries form a series of principles that did not exist before. When it is introduced into philosophy, many of the major puzzles that have long confounded philosophers are answered one after another, and the existing branches and schools of philosophy all merge into a single whole. This is the "Grand Synthesis", and the logic presented in it can also be used to construct a unified knowledge system for humankind and lay the real foundations for a unified social science. Therefore, this "Algorithmic Philosophy" is both a synthetic philosophy and a social philosophy.
In this era when pluralism is prevalent, theories of grand unification have come to seem doubtful and hard to sell.
Nonetheless, with these fresh but self-evident logics, this series of principles indicates how the unification can be established in an acceptable (and, of course, critical) way while encompassing plurality, conflict, subjectivity, relativity, uncertainty, and development; crucially, any logical inconsistencies may exist only temporarily or locally, so there is something here that can be called "Higher-Order Consistency". I discovered this theory at the beginning of this century while trying to solve some fundamental problems in economic theory, and simultaneously found that it could be used to synthesize all the existing social sciences and humanities. However, its philosophical application progressed more slowly. Only in recent years did I discover that, when it is used to synthesize the various philosophies, it is just as strikingly satisfactory.
The Algorithmic Thinking Theory
It should not be surprising to say so, as it is, at bottom, just a theory of mind. From ancient times to the present, humankind has not had a satisfactory theory of thinking. The history of philosophy can be read as a search for an appropriate thinking theory. The existing doctrines of thinking are fragmentary, or cast in the style of natural science. One criterion for an "appropriate thinking theory", if any, could be "softwareization"; that is, the mind should be able to explain itself in its own language, without recourse to natural-science terms such as molecules, atoms, and neurons. I was surprised to find that the solution to this seemingly difficult task was almost ready-made, having lain in obscurity for decades in the most basic computer textbooks. Given how many gifts computers have already offered humankind, how fitting that they should contribute one more, so precious, to philosophy, the humanities, and the social sciences! This is the principle of "computation = instruction + information" 3 . An "instruction" is one of the dozens of basic core operations in a computer. It is what the user "tells" the computer to perform as a minimal task, and so it reflects a type of basic thinking activity that can be carried out in the mind of the user, the human being.
A computer simply uses physical means to simulate these types of thinking activity and to run them faster than the human brain does. The core types of these instructions are identical across all computers. Viewed this way, isn't an instruction precisely the kind of basic innate thinking tool in the human brain that Immanuel Kant suggested? Or doesn't it reflect a concrete capacity of thinking in the human brain? Aren't different people able to communicate with each other because they share the same basic abilities of thinking? Clearly, the concept of instruction has been severely neglected by the intelligentsia. Instruction can be defined and paired with external information; only from this perspective can we come to know what Instruction and information respectively are. This pair constitutes a relatively complete set of concepts that provides the necessary bottom structure for a theory of mind. An instruction operates on no more than two data (or two pieces of information) at a time, like a machine processing raw materials; this constitutes the minimal unit of mental activity ("Meta-computation", or "an operation"). The human brain can perform only one such operation at a time, so operations must be strung together into a stream of behaviors. This naturally introduces the temporal dimension into thinking. The data resulting from computations must be stored "alongside" for intermittent reuse later. This naturally introduces the spatial dimension into thinking. This is what economists call the "roundabout method of production" 4 . Thinking works in cycles, both requiring and generating stocks of knowledge. In this way the concept of "knowledge", as computing results, is formed, which differs in principle both from instructions and from original information.
Knowledge exists as a relatively "independent third party" in the human brain, or in books, databases, and so on. This is the "Algorithmic Thinking Theory" (also the "Algorithmic Theory", "Algorithm Framework Theory", or "Algorithmic Framework"; hereinafter "ATT") that I propose. It can be shorthanded into a formula: thinking = computation = (Instruction + information) × speed × time. In natural language: when a human thinks, he or she computes, i.e., uses the Instructions (the capitalized first letter indicates that they are the human's rather than the computer's) inherent in the brain, universal to everyone, to process information from the outside world serially, selectively, roundaboutly, and repetitively. Information can be reprocessed after processing, hence "repetitively". The Instructions can be understood as, and equated with, the verbs in natural language that refer to mental actions. Moreover, we can expand the scope of Instruction to any verb that refers to a mental action and that we believe or assume is also carried out in the form of "Instruction + information", or "verb + object", relatively independently, even if it cannot yet be simulated by a computer (the "Manual Instruction"). Therefore, ATT is a theory of human thinking rather than of computers, and apart from computers it can be used directly in the traditional manual study of the humanities and social sciences. In the above, an "Algorithm" is a method by which Instructions are selected and sequenced to process information or data.
It is evidently the hard core of intelligence; hence I use it to title this thinking theory, and use words such as "Algorithmic" and "Algorithmical" to carry several meanings: "of Algorithm", "of ATT", "under ATT", and so forth.
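To make the notion of meta-computation concrete, here is a minimal, purely illustrative sketch. It is my own construction, not part of ATT's formal apparatus: the instruction names and the `think` function are hypothetical. Each Instruction processes at most two data, operations run serially, and results are stored "alongside" as a knowledge stock for roundabout reuse.

```python
# Illustrative sketch only: the instruction set and the `think` function
# are hypothetical examples, not the author's formal definitions.

INSTRUCTIONS = {                       # a small, fixed, "innate" instruction set
    "add": lambda a, b: a + b,
    "compare": lambda a, b: a == b,
    "negate": lambda a: not a,
}

def think(program, knowledge):
    """Run a serial stream of meta-computations.

    Each step is one "Instruction + information" unit acting on at most
    two operands; operands may name earlier results, so computation is
    roundabout: stored knowledge is reused by later operations.
    """
    for name, instr, operands in program:
        args = [knowledge.get(op, op) for op in operands]  # reuse stored results
        knowledge[name] = INSTRUCTIONS[instr](*args)       # one meta-computation
    return knowledge

store = think(
    [("sum", "add", [2, 3]),
     ("same", "compare", ["sum", 5]),   # refers back to stored knowledge
     ("diff", "negate", ["same"])],
    knowledge={},
)
# store == {"sum": 5, "same": True, "diff": False}
```

The point of the sketch is only structural: one operation at a time, a temporal sequence, and a growing spatial store of results.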

The Algorithmic Principles
This theory is bound to be questioned by opponents of artificial intelligence (AI) or of computationalism. However, significant advances in AI show that computers are increasingly capable of performing tasks that are distinctly subjective, just as human brains do. Presumably, these developments have sent shockwaves through the opponents. I argue that, just as the concept of instruction has been neglected, computers can actually behave more like humans than they once appeared to; programmers usually just prevent them from doing so. The similarity between a computer and a human brain lies not only in the fact that the computer can perform meaningful high-level operations that the human brain can, but also in the fact that the computer experiences the same "uncertainty", "confusions", and failures as the human being; these are additional evidence that the human brain runs in much the same way as a computer. Moreover, the assumption of the above-mentioned Manual Instruction already makes ATT largely free from the controversy over AI.
In particular, the effectiveness of ATT is reflected in the surprising and significant series of inferences it yields (together with ATT itself, they are called the "Algorithmic Principles"). Some of these inferences previously existed as relatively independent propositions awaiting theoretical justification, while others constitute brand-new principles and knowledge. Readers are generally unaware that clear causal connections run between the simple theory of mind above and these propositions.
The computing activities may seem mechanical and "chilly"; however, meta-computation works, or produces, in a sea of data, which means that information, data, and knowledge are all "real entities", "substances", or "realities", and that computation is a kind of "behavior" similar to, juxtaposed with, and interactive with bodily movements. Computer principles help us clarify how these entities "exist", how they relate to their material carriers, how they arise, move, change, and disappear, and how they bind, separate, or interact with other entities. Therefore, actors need to consider the costs and benefits of computations, apply economic analysis to thinking activities 5 , and arrange the time sequence of computing operations ("Algorithmic Logic"). Although the deductive method, as an Instruction or Algorithm, can produce reliable results, its conditions are strict and its processes often lengthy, so it is often not economically desirable; other less reliable but relatively simple and rapid methods such as induction, analogy, experimentation, lottery, association, and imagination (the "Alternative Algorithms") can thus come in handy and compete with deduction 6 . Decision-making in the spatio-temporal environment is often time-limited, yet actors must consider as many factors as possible ("factor completeness"), so they have to trade off computing efficiency against the quality of results, concocting combinations of all the above-mentioned methods in order to close computations and make decisions in time ("forced closure of computations"). Therefore, the computing results, as knowledge, are inevitably just "makeshifts" (e.g., attitudes, beliefs, values, etc.) with varying degrees of effectiveness. This is the "Mental Distortion", or the subjective turn, of computations, deflecting them from the mainstream deductive and perfect tracks. The so-called "purpose" is generally also a result of mental distortion under the serial processing method. Although these computational results are crude and heterogeneous, one has to store them selectively for future use (the "sedimentation" of thinking). In subsequent computations, actors often have to refer to these ready-made results; otherwise they would be even more helpless. Therefore, knowledge as a stock consists mostly of arbitrary and rigid things, which only provide fixed answers to problems (the "solidification" of knowledge) and cannot take full and flexible account of their applicability in specific current operations. Compared with the huge stocks of knowledge left over from history, current computing power is extremely limited; whether current operations are used to develop new knowledge in real time on the spot of problem-solving or to revise existing old knowledge, they can only proceed marginally and gradually, over a very small proportion of the stocks.
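The "forced closure of computations" can be loosely pictured as what computer scientists call an anytime procedure. The following is a hedged sketch of my own (the function name and numbers are hypothetical, not the author's): a decision process scans options serially and must return whatever best makeshift answer it holds when its time budget runs out.

```python
# Hedged illustration of "forced closure": a serial search that must
# return its current best answer when the deadline arrives. The setup
# is a hypothetical example, not a definition from the text.

import time

def decide(candidates, score, budget_s):
    """Scan options serially; stop when the deadline forces closure."""
    deadline = time.monotonic() + budget_s
    best, best_score = None, float("-inf")
    for c in candidates:
        if time.monotonic() >= deadline:     # forced closure: answer now,
            break                            # however crude the result
        s = score(c)
        if s > best_score:
            best, best_score = c, s
    return best                              # a "makeshift", not an optimum

# A generous budget lets the whole list be scanned; a tiny budget may
# close the computation after only a few candidates.
choice = decide(range(1000), lambda x: -(x - 700) ** 2, budget_s=0.05)
```

The quality of the returned "makeshift" thus depends jointly on the time budget and on the order in which options happen to be examined, which mirrors the trade-off between computing efficiency and resultant quality described above.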
The considerations of prudence and completeness force a thinker to summarize the entire world with limited computing power, and then close the computations and draw conclusions.In this way, it forms a specific version of knowledge about the world.Different individuals can concurrently hold their different versions of knowledge.The improvement of imperfect existing knowledge, or the use of new information to form new knowledge, can lead to innovations.Accumulation of small innovations result in a new version of knowledge that can replace the earlier versions.Furthermore, during the continuous computations, an almost infinite number of combinations can be formed between Instructions and massive amounts of information, which is called the "Combinatorial Explosion", indicating the infinite potential for knowledge development.
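The scale of the Combinatorial Explosion is easy to see with a toy calculation (the numbers below are my own illustrative assumptions, not figures from the text): if each serial step chooses one instruction and one datum, the number of distinct instruction sequences multiplies at every step.

```python
# A toy calculation (illustrative numbers only) of the "Combinatorial
# Explosion": the number of distinct serial instruction sequences over
# a stock of data grows exponentially with the length of the sequence.

def sequence_count(n_instructions, n_data, depth):
    # At each serial step, one of n_instructions is applied to one of
    # n_data operands; the choices multiply across the chain's depth.
    return (n_instructions * n_data) ** depth

for depth in (1, 2, 4, 8):
    print(depth, sequence_count(10, 100, depth))
# a chain of depth 8 already yields 10**24 possible sequences
```

Even with only ten instructions and a hundred data, exhaustive exploration is hopeless almost immediately, which is exactly why knowledge development remains open-ended.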
Therefore, knowledge development must be an endless process intertwining the improvement of knowledge quality with the expansion of its quantity. This is one of the most important discoveries ATT contributes. Under the premise of infinite development, convergent and divergent processes, equilibria and disequilibria, are mixed, and the related discreteness, plurality, heterogeneity, "softness", individuality, and differences also exist as Algorithmical inferences. Since ATT accurately describes the specific structure, form, existence, movement, change, and development of human thinking, it constitutes a precise theory of bounded rationality (or "concrete rationality"). All phenomena, and thus all states of the world, can be seen as consequences and manifestations of this "concrete reason" or "concrete rationality".
A specific extension of the above Algorithmic principles lies in the field of psychology. Since it has been shown that so-called "rational thinking" is inevitably "distortive" in one way or another, there is no essential difference between thinking or computational phenomena and psychological phenomena such as emotions, desires, instincts, impulses, and so forth.
Since meta-computing requires a stock of knowledge, an individual cannot carry just the Instruction system into the world; some minimal knowledge must be born along with the Instruction system, like "hard software" pre-installed in a computer before it leaves the factory. The mechanism of biological inheritance, like the arrangement of "hard software" in the computer, should therefore logically be used to transfer knowledge to descendants. This "hard software" knowledge supports a descendant in making basic decisions after birth and buys time for his or her development or acquisition of the necessary "purely soft" knowledge. However, because this "hard software" cannot be updated after mother-baby separation, a widening rift opens between it and acquired knowledge, so that it is eventually deemed "irrational".

The Philosophical Implications
The philosophical application of the Algorithmic principles has actually begun above; below are some of the major points, explained directly and briefly.
The fundamental problem facing philosophy for more than 2,000 years can be considered the division between "Being" and "opinions" introduced by Parmenides. From Being to Plato's Ideas, to God, to the Absolute, to modern science, this "correct knowledge" under different names was strongly implied to engulf all other human thoughts and then converge into some simple and internally consistent "ultimate truth". Kant's "Copernican revolution" actually means that thought is to be seen as something concrete that is independent of, and juxtaposed with, things. After Hegel's failed attempt to pull this subjective-objective split back into monist philosophy, his critics basically followed two lines: one entered analytic philosophy in pursuit of precision, which eventually led to the creation of computers; the other emphasized the importance of all kinds of intellectual activities and knowledge other than Being (or science). These two lines can converge into the Algorithmic Thinking Theory: the former as the tool for making ATT, the latter as its inferences. Most elements of the Algorithmic principles were ready-made; I just assembled them in a logical order and completed the "last mile" of the grand synthesis.
What exists a priori in the human mind need not be assumed to be all knowledge, as Plato did, but primarily the thinking tools such as Instructions (with the exception of the above-mentioned "hard software"). Such thinking tools need not be "perfect", or preloaded with ciphers of the world, or even numerous (since a single Instruction can develop into multiple Instructions; see §30), as long as they are finite and concrete. Computer science provides a principle that allows us to understand how concrete spiritual beings, such as Instructions, interact with foreign objects in a specific way in a spatiotemporal environment; this solves the ancient problem that ideas have been considered incapable of being placed as objects alongside foreign physical objects. The concreteness and finitude of Instructions lead to the independent and discrete existence of meta-computation as the smallest unit of thinking activity, and different meta-computations can then objectify each other, or themselves, in a serial way. This further enables thoughtful entities to actually exist in the world as one among many types of entities, and thinking activities to be a real kind of activity. Now we have a basic completeness in the identification of entities: any "activity" is the real activity of one of these real entities, and there is no human activity outside of them.
An activity of human thinking is an "encounter" between two "strangers", namely the thinking tool and the external object, and thus its information, in a serial manner. This concept can cure all the major ills of philosophy to date in one fell swoop. Such encounters, like chemical reactions between elementary particles, inevitably first form a large amount of uneven knowledge, which exists as an "independent third party" between humans and things. Different pieces of knowledge are compared with each other, giving rise to their qualitative differences, namely the differences between right and wrong, consistency and conflict, good and bad, beauty and ugliness, and so on. The concreteness and definiteness of thinking tools mean that the result of processing specific information by a specific Instruction is always definite and certain; this is the "a priori certainty": people hold certain definite concepts of right and wrong before they go on to know foreign objects. It is typically reflected in logic and mathematics. This is like the self-test, drill, or rehearsal of a machine, which manifests the constancy and interpersonal universality of the functions of Instructions. However, the perspective of meta-computation allows us to recognize that the elements contained in any real object or real problem are in principle infinite; therefore the empirical knowledge formed by processing specific foreign information in the course of time can generally only be local, the depth of the processing is also limited, and so its correctness can only be limited and relative. Even if some knowledge has long held the winner's position in competition with other knowledge, its ultimate position in the knowledge system cannot be certified, as Hume argued. But, on the other hand, such encounters and long-term, large-scale experiments will not come to nothing: the law of probability ensures that both high-quality and low-quality knowledge will be continuously generated, and the knowledge system must in principle contain a mixture of all these various components. Some knowledge, initially fragmented and abundant, may later be refined into knowledge that is very simple in form (such as scientific theories), yielding great economy; it is then worshiped and particularly, deliberately sought after. Hence traditional philosophy speculates that all knowledge will eventually be refined into this concise form. However, as soon as we perceive that this is actually a knee-jerk obsession with computational economy, we further realize that there is no conclusive objective evidence to support the speculation. On the contrary, because the above convergent processes conserve computing resources and thus allow new computations to be launched, the total amount of human knowledge continues to grow. This Algorithmical thinking can be used not only to know, but also to construct and transcend, to propose new goals and initiate human engineering, so as to move towards an infinite future. All of these elements and activities sit within one mixed, interconnected framework. It is difficult to explain them separately or in isolation; only by merging them together can they be interpreted once and for all. The basic form of philosophy can now be transformed from "The Great Convergence" to "The Big Bang", as in astrophysics. Long-run comparison helps discern the results: actors can generally distinguish between subjective opinions and objective "facts" on their own. The latter, although ultimately beliefs or assumptions, have come to be considered relatively reliable. Actors conceive of this type of belief as "realities", distinct from subjective opinions, which brings computational convenience and economy. This leads to the principle of the "universal in things". On the other hand, since the result of processing individual information by an individual Instruction is deterministic and unchangeable, we can regard all high-quality and low-quality knowledge as, without exception, "predetermined"; thus we can conjecture the existence of a super-infinite "human knowledge thesaurus" composed of the results of all Instructions processing all information. It exists "among people" and depends on the existence of humankind as a whole. Each person possesses only a part of it, moving along routes within it similar to, or different from, those of others. This explains the principle of the "universal before things". These principles can be used again to synthesize absoluteness and relativity, determinism and freedom, and ontology and epistemology.

The Social Philosophy
Now let us turn to social philosophy.
By applying the above ontology of minds to human individuals as natural beings, we are led "naturally" to the society that coexists with nature, and to the humanities and social sciences that coexist with the natural sciences. The reason the existing natural sciences could not be logically extended into the social sciences and humanities is that human minds had not been able to become real beings in the limited, concrete, and characteristic way described above, and thereby failed to become objects of social and humanistic study. Such individuals, at once objective and subjective, think, make decisions, and act in the world, and thereby produce the various social existences and phenomena.
The discrete existence of individuals with independent thoughts under the conditions of time and space creates the necessity of exchanging their thoughts. As a kind of software mainly for interpersonal communication, language is acquired after birth, attached to the thinking system pursuant to the communication principles provided by computer science, and combined with physical media in a multimedia manner. This should be the right way to define language, and with it the philosophy of language.
ATT provides explicit or implicit foundations for the endogeny of basic social issues such as freedom, democracy, the market, and justice. Meanwhile, it also provides the soil from which the opposite issues, and many others, emerge. Eventually it reaches a comprehensive and integrated framework.
From the principle of the "solidification of knowledge", rules and institutions can be deduced: they lay fixed tracks on roads with enormous branching, speeding up computations but losing a certain quality, accuracy, and opportunity. This is the Algorithmical perspective on institutions. Society, like an individual, has limited current computing power, so only by adopting a variety of measures, including institutions, to fix and coordinate many variables can current computations achieve better results. However, under the condition of bounded rationality, institutionalized measures based on ex-ante rules are often not enough for these adjustments; in an environment of subjectivity and conflict, it is sometimes necessary to adopt on-site, real-time adjustments by establishing a hierarchical organization through buy-offs or negotiations, in which an individual leader commands the organization's members in real time. This is the executive or administrative power that differs from the judicial system, and it can make up for the flaws of rules to a certain extent. The theoretical basis is that, under the condition of discrete computing, and lacking the neural connections between people that are available only within a single human body, an individual's self-consistency is relatively higher than a group's; hence the individual achieves certain competitive advantages over the group. This is the root cause of power and even of dictatorship.
The materialization of computation and communication results in limited and specific costs and benefits for a particular organization. As organizations change in size and function, organizing is not necessarily economical or desirable. Societies made up of dispersed free individuals can bring diversity, abundance, development, speed, and flexibility in the results of computations. Free individuals can also establish equal cooperative relations through consultation, contracting, voting, networking, and other means. Common sense, customs, morality, religion, education, media, associations, and other means are also helpful for achieving individual cooperation, but they are all based on the materialization of ideas, which guarantees that each of them has relatively independent logical and theoretical significance. The key to understanding ethics and morality, the "informal institutions", is to recognize their solidification, modularity (or patterning), and imperfection; hence they, too, are ultimately makeshifts. Fairness and justice, like scientific knowledge, are revered not because they are absolutely perfect, but because they are relatively among the most reliable parts of the knowledge of social engineering. The computing economy requires that such knowledge be neither too little nor too much.
However, no matter how the stock of knowledge develops, the degree of freedom left for current computations remains great.
The barriers of limited computing power and big data mean that coercive management by an authority can be confined only to limited scopes and aspects; it cannot cover everything. Because a rising tide lifts all boats, this impossibility is endogenous and persistent. It is equally obvious that radical changes in the stock of knowledge, and the establishment of an absolutely stable order, are impossible as well. Society is a "Neurath's ship" that sails in a controlled way into the unknown.
The limitations of public administration ensure that the market is the primary mode of organizing economic activities. Significant computing costs lead to the creation of money, which is used to simplify commodity valuation and trading. The materiality of trading activity means that its scope and strength can only be limited, that the meanings and uses of price information are also limited, and that trading and non-trading activities, and thus economic and non-economic activities, are both independent and interrelated.
In short, in the literature of extreme rationalism, the mind is in a weightless, "non-existent" special state. ATT is like giving weight and "volume" to the mind, so that minds "exist", and thousands of social phenomena, as well as logical and compelling social theories, can henceforth be produced.


The Philosophy of Science and Methodology
Finally, an outline of the philosophy of science and of methodology.
Distortions of thinking lead to thoughtful interpersonal differences as a primary and common Algorithmical phenomenon, and the forced closure of computing leads to the modularity, diversity, and plurality of knowledge. For example, in addition to cognitive knowledge, people form various kinds of engineering knowledge to solve practical problems; scattered common-sense knowledge; witchcraft and religion to answer the ultimate questions of the world and of their lifetimes; culture and art to entertain and to express feelings and emotions; and so on. These types of knowledge were made with different methods, at different stages of mental activity, or in different areas; they vary in the closeness of their internal and mutual relationships ("Soft Quantitative Analysis"), each has a specific and limited function, and they differ in quality.
Together, they make up the whole body of human knowledge.
The combination of limited computing power and the thinking economy ensures that the development of knowledge is divided to a certain extent. In this system of division of labor, it is easy for us to understand the nature, role, and function of science.
Science is the cognitive knowledge that is of relatively high quality or reliability and is suitable for development and teaching by professional intellectuals. The knowledge of ordinary people is mainly for their own use, while scientific knowledge must be published and disseminated for use by society. The latter implies that science focuses on revealing such properties of objects as universality, certainty, and constancy. But, like any other kind of knowledge, the scientific knowledge existing in any era is limited in quality and quantity and cannot be completely self-consistent. It must rest on common sense and on certain philosophical assumptions. Its development is achieved through intensive investment, using a conservative strategy; that is, it does not go ahead if a certain standard is not met. As a result, a series of technical and detailed standards is established that distinguishes science from other kinds of knowledge.
However, the core truth is that science must be differentiated from common sense and other knowledge in order to be sold to ordinary people. Ordinary people can learn from science and then turn it into common sense; thus, in the final analysis, there is no essential difference between science and other kinds of knowledge. It is the intensive R&D investment activities that make the intertemporal and technical differences between them, and these differences are secured only dynamically.
Since the methods used by scientists can in principle also be used by ordinary people, scientific research cannot have a completely unique method; it only biases some methods over others. Common actors have been studying the world as well. Scientists and common actors complement, compete, and collaborate with one another; hence scientists become a particular category of actors. This means the integration of ontology and epistemology. From such Algorithmic conclusions we can deduce an appropriate social-science methodology, and at the same time we can see that the Kuhnian "paradigm" is nothing more than a collection of many relatively closely related elements in the scientific system, which inevitably contains subjectivity; and that the paradigm shift, or "scientific revolution" 7 , like the worldview transformation of ordinary people, or the change of a social system, can occur only occasionally and intermittently on the basis of marginal accumulation.

Conclusion
All of the above discussions are inseparable from the Algorithmic Thinking Theory and the Algorithmic Principles as their foundation. The reason the theory and the principles are so useful is obviously that they fill in many elements that the existing knowledge system has lacked. The main parts of the existing knowledge system are still valuable and important; when these Algorithmical elements are added, it is as if a catalyst were injected, and after a series of active chemical reactions, everything merges into a new whole. This "Algorithmic Approach" aims to make use of computer principles while remaining independent of computer science, and so to become a special tool for philosophy, the humanities, and the social sciences, and a basic method of theoretical deduction in these fields. Readers who do not understand the principles of computers should still be able to use it. Moreover, I believe that, by elucidating many non-traditional mechanisms and characteristics of the human mind, the theory may also be useful for the study of computers and artificial intelligence.
Hopefully, the use of terms such as "computation", "Instruction", and "Algorithm" will not make philosophical and humanistic scholars uncomfortable. In fact, while expanding the rationality and scientificity of the relevant fields, ATT has also turned the social sciences humanistic. This synthesis and interpenetration can lead to a unified view of the world and society from some unprecedentedly interesting perspectives, thereby prospectively arousing a great deal of practical research work. It has the potential to make philosophy, the humanities, and the social sciences all distinctly productive and creative, and it will open up space for our imagination in the new century.
The Algorithmic Theory can also be used to prove that, in the infinitely developing knowledge system, philosophy, with characteristics such as fundamentality, subjectivity, and fuzziness, is indispensable; therefore the answers to existing philosophical puzzles will finally give way to new ones, and hence to the initiation of a new philosophy.
Footnotes
1 Bin Li, a visiting scholar of the Center for Urban & Regional Studies, University of North Carolina at Chapel Hill, used to be