Towards a trans-epistemic society

The article schematically presents four types of codes associated with four forms of intelligence of the living (genomic, sensorimotor, symbolic and digital). It is particularly interested in the feedback effects of each higher level on previous levels and in the sustained trend towards externalization. The human species is now reshaping these various forms of intelligence. The boundaries between the natural, the cultural and the technical are blurring. A trans-epistemic society is emerging, one that expressly includes the full depth of its relations with the biosphere.


Introduction
How should we interpret the present digital and neurobiological revolutions? So far, reflection has been oriented towards technical mediations and their societal consequences. This article integrates such an orientation into a broader perspective, by articulating various types of codes and intelligences.
The notion of intelligence, in its most general sense, refers to the ability of an epistemic subject to process information in order to generate knowledge orienting its activity. Knowledge is not necessarily accurate, relevant, useful or self-transformative; it can lead to maladjustment or even self-destruction. Intelligence takes very different forms: symbolic, sensorimotor and, more basically, genomic (Piaget, 1967). For nearly a century, a new digital level has been emerging. In this text, the term embodiment refers to the biological and neurological foundations of intelligences, and the term externalization to their social, cultural and technical inscriptions.
Codes combine simple elements to designate more complex elements and processes. Such a formal definition has the disadvantage of applying equally to Morse code and to the DNA code. However, we know that the former always performs the same function in a stable and conventional way, whereas the latter induces a gigantic deployment of life forms. In this article, attention focuses on codes that are associated with leaps in complexity.
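The formal definition above can be made concrete in a few lines. The sketch below (an illustrative fragment, not part of the original article) implements a conventional code, Morse, showing how simple elements (dots and dashes) are combined to designate more complex ones (letters, words) while always performing the same stable function:

```python
# A small fragment of the International Morse Code: simple elements
# (dots and dashes) combine to designate letters by pure convention.
MORSE = {
    "A": ".-", "C": "-.-.", "D": "-..", "E": ".", "N": "-.", "O": "---",
}

def encode(word):
    """Encode a word letter by letter; the mapping is stable and conventional."""
    return " ".join(MORSE[letter] for letter in word.upper())

print(encode("code"))  # -.-. --- -.. .
```

Unlike the DNA code, nothing new can emerge from this table: the mapping does exactly one thing, always in the same way, which is precisely the contrast the paragraph draws.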

DNA code and genomic intelligence
Living organisms mobilize the informational properties of matter to generate their knowledge. How does genomic intelligence develop and apply new knowledge? The dominant answer is provided by the synthetic theory of evolution, which is regularly criticized and updated (Laland et al., 2014). The central epistemic model of biological knowledge is expressed in terms of randomness and selective pressure, with an emphasis at times on the first pole (neutral mutations, advantageous or disadvantageous mutations of ordinary genes, etc.). The informational properties of matter are used to develop, store and mobilize knowledge. Physicalist approaches confound the two levels. Shannon's work (1948) on the loss of information in physical transmission systems has contributed significantly to blurring representations. Georges Chapouthier rightly criticizes this reductionist confusion, also nourished by Brillouin (1959) and more recently by Laszlo (2017), between the « amount of information per symbol in a message » (Shannon) and the notion of information as a « piece of knowledge » (2001, 28).
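Chapouthier's distinction can be made tangible. Shannon's « amount of information per symbol » is a purely statistical quantity, H = -Σ p·log2(p) over symbol frequencies, computable with no reference whatsoever to meaning. The sketch below (a standard textbook formula, not a model from the article) shows why it cannot be equated with a « piece of knowledge »:

```python
from collections import Counter
from math import log2

def entropy_per_symbol(message):
    """Shannon entropy H = -sum(p * log2(p)) over symbol frequencies.
    It measures statistical surprise, not meaning."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Two strings with identical symbol statistics carry exactly the same
# Shannon information, whatever they may (or may not) mean:
print(entropy_per_symbol("GATTACA"))
print(entropy_per_symbol("TACAGAT"))  # anagram: same frequencies, same H
```

An anagram scrambles any biological or semantic content, yet leaves the Shannon measure untouched: the physical and the epistemic levels are indeed distinct.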
The emergence of the living transforms the non-living. This feedback on the environment is particularly spectacular if we consider the atmosphere. The first unicellular organisms (around 4 billion years ago) appeared in an oxygen-free atmosphere. For the cyanobacteria of that time, oxygen was a toxic waste... which much later, during the Great Oxygenation Event (around 2.4 billion years ago), favored the appearance of multicellular aerobic life forms (around 2.1 billion years ago). We are here in a long-term dialogue between living forms and their physicochemical deployment frameworks. This dialogue has continued uninterruptedly up to its most recent avatar, the Anthropocene.

Neural codes and sensorimotor intelligence
For multicellular organisms to exist, the units they are composed of must cooperate and therefore communicate. The following remarks leave aside internal and external communications in plants (a booming field, see Muday & Brown-Harding, 2018) and neurovegetative systems in animals. They focus only on sensory and motor interactions with the outside world and their neural processing.
The neural capacities of living beings bridge the gap between sensory receptors and motor expressions. Together, these three dimensions form the incredibly diverse sensorimotor intelligences, which have gradually redefined the biosphere, adding a level to the unicellular world and inventing all varieties of perceptual refinements, motor differentials and interspecies relationships.
The most elementary modality of sensorimotor intelligence consists of preconstructed reflexes that are inscribed in DNA and which command a given reaction for a given perceptual configuration. In most animals, this first level is combined with one or more forms of learning/unlearning (Pavlovian and Skinnerian conditioning, and more complex cognitive patterns). Reactions are therefore adaptable to life experiences, in ways that are distinct from epigenetic modifications. This adaptive finesse directly depends on neural developments, as well as the processing, storage and recall capabilities of each organism's biographical experiences.
The development of sensorimotor intelligences implies the development of multiple categorizations of the outside world: time, space, movement, light, texture, color, sound, smell, taste, etc. This epistemic inventiveness is coupled with sensory tools, neural processing and motor skills. Ethologists and neurologists have begun to identify the many ways in which the outside world can be shaped. Epistemology, traditionally limited to human knowledge, must now be extended to all living beings.
Below, a few brief reminders about neural functioning. Rigid, computational representations of the brain are now superseded by a more flexible vision, which emphasizes neuroplasticity. It is made apparent, at both the meso and micro levels, by the production of new neurons, the modification of neural networks, the variation of neural activity within networks, the number and activity of synapses, the creation and destruction of synaptic connections, the modulation of synaptic receptors, the modification of DNA expression in neurons, etc. In short, the brain is a dynamic organ in permanent transformation.
These plastic properties of the brain, combined with the fact that neural networks include millions of cells communicating with each other through thousands of synapses and receptors, make decoding these exchanges particularly difficult. Decoding was initially attempted on tiny nervous systems, such as that of the worm C. elegans (302 neurons), whose complete connectome (a map of neural connections) has been established. Recently, remarkable breakthroughs have been made: the code for the perception of human faces by monkeys has been identified (Chang & Tsao, 2017); and the neural code for space in rats is being deciphered (Moser et al., 2014). In both cases, several types of neurons are at work, at times specialized in one dimension or another, and at times responsible for synthesizing or linking with other perceptions or memories.
Using the technique of optogenetics, a number of laboratories have been able to modify, delete and directly introduce memories in rats. The junction with computer simulation establishes a close interactive dialogue between in vivo observations and in silico reconstructions. In 2018, the Blue Brain Project completed a Cell Atlas of the Mouse Brain (Erö et al., 2018).
The gap between the functioning of genomic intelligence and that of sensorimotor intelligence is striking. The first is largely dominated by chance and trial and error, the second by learning/unlearning dynamics. The key difference lies in the flexibility of adaptation: unlike learning, a DNA mutation is not reversible. However, let us not forget that sensorimotor intelligence is itself rooted in genomic intelligence. With a twofold consequence: on the one hand, it depends on the mutations that may affect perceptual, motor and neural capacities; on the other hand, it participates in their selection according to its own logic. Thus, sensorimotor behavior becomes a driving force of evolution. A force that expresses the tendency towards externalization in two ways: by refining knowledge of the outside world, and by increasing the flexibility of adaptation. The feedback on the physicochemical environment has resulted in a remarkable transformation of landscapes, which in turn has reinforced these trends; sand and rocks were gradually covered with humus and vegetation, which laid the foundations for a great diversification of animals. The terms « embedding » and « subsystem » come to mind to qualify the articulation of these two epistemic logics, provided they are connected to that of emergence, which emphasizes the irreducibility and relative autonomy of one level towards the other. The present text accepts the idea of distinct levels of reality with distinct properties, but without claiming to decide what to attribute to our ignorance or to the nature of things (on the concept of emergence, see O'Connor & Wong, 2015).
Finally, sensorimotor intelligence makes it possible to modify behaviors across generations without changing the DNA, at least for a certain amount of time. For example, a variation in the song or the nuptial parade of some birds can lead to a reproductive subset and possibly, in the long term, to a new species. Thus, when such a variation is learned and practiced by a particular « tribe » before it is integrated into the DNA, it is a first step towards externalizing know-how into a transmissible ritual, which is embodied through learning. The autonomy of the sensorimotor leads to the even greater autonomy of the symbolic.

Codes and symbolic intelligence
It is necessary to distinguish two levels of the symbolic. The first refers to the ability to represent an object in its absence; the second refers to the ability to express a representation by means of a sign or a symbol.
The individual mental representation (visual, phonic, etc.) of an object in its absence was acquired well before the invention of symbolic codes. It implies a more or less strong permanence of the object independently of immediate perceptions. Consequently, hic et nunc interactions recall, not only unconsciously but also consciously, a background of mental images and experiences, which enhances the spatiotemporal depth of relations. We can relate to this initial level of symbolic intelligence the first socio-symbolic categorizations (of kinship, dominance, etc.), technical inventions and their cumulative transmission, and also rudiments of non-verbal language (Boesch, 2012; De Waal, 2005). These elements, combined with the social configurations that are specific to each group, justify the use of the term « culture », at least for chimpanzees.
Verbal language implies new relationships between the individual and the collective. It provides a dimension of collective expression and communication. Phonic images expressed publicly are regularly associated with other images (of objects, actions or operations), which thereafter designate common referents. The sign establishes an articulation between individual and collective representations. This implies their regular use in a group, their learning and their normative control.
Verbal language was not born all of a sudden. Most likely it went through a protolanguage stage with words and word associations (Bickerton, 1990; Everett, 2017). This first phase goes back very far in the evolution of our species, at least to Homo habilis, i.e. more than 2 million years ago. Language with more complex syntax, sentences and narratives appeared later, not without analogy to the child's acquisition of language (Dortier, 2004). This vision is also consistent with the relative autonomy between the lexical and the syntactic.
Words, sentences, narratives: these symbolic interindividual bridges rise from the inescapable sensorimotor rooting of the signifier. It is at the articulation between the signified and the signifier that our cultural autonomization vis-à-vis our biological substratum is played out. Such an autonomization is made possible by our neuroplasticity and its ability to accommodate any lexicon, syntax, category or institution.
The so-called arbitrariness of the sign is at the root of both the collective dimension of symbolic codes and the infinite plurality of cultures within the human species. This intercultural plurality is accompanied by an intra-cultural plurality, well documented by sociologists and sociolinguists. A degree of heterogeneity is the rule, even in the simplest conversational exchanges (Gonzalez-Martinez, 2017).
In addition to plurality and sociality, symbolic codes have a third characteristic: the autonomy between sign and referent. Children are amused when they discover it: they can call an elephant a mouse, for example. This disconnection is made possible by the decoupling of representations and perceptions. As a result, ordinary communication is composed of relatively precise agreements as well as of approximations, misunderstandings, errors, unspoken words, lies, doubts and requests for clarification. The luxuriant construction of reality and interpretative plurality prevail over objective and univocal exchange. The distinction between « true » and « false » becomes the subject of endless debate. Our current « fake news » have distant ancestors… Whereas intra-species sensorimotor communications are mainly hereditary (pheromones, cries of alarm...), the uniqueness of symbolic communication is paradoxically the possibility of non-sharing on the basis of common codes, whether in terms of a distorted construction of reality or of interpretative violence. On the symbolic level, what is common is inextricably linked to the virtuality of non-sharing. This explains why socio-symbolic life is full of conflicts. And this is also why the normative dimension, dedicated to the channeling of possibilities, imposes itself through constant and sometimes violent power relations.
In terms of adaptive flexibility, the improvement is considerable: a single message (there is food there, X is dead or injured, etc.) can change the behavior of a single person or of an entire group of people. The variety of messages is virtually infinite and goes far beyond the cries of alarm or the dance of bees. Symbolic language also provides a considerable extension of social learning, previously focused on direct perception. However, these improvements are inseparable from the uncertainties of symbolic communication (errors, lies, etc.).
In the same way that a species evolves by micro-changes, language evolves through differences in sound and meaning first appropriated by minorities and then by the majority. These formally converging micro-macro dialectics are obviously not identical, with linguistic differences being the subject of tension concerning their legitimacy or illegitimacy.
Individual variations in symbolic experiences are also unavoidable and result in individual identities that are always marked by plurality and relative heterogeneity. These variations extend the processes of individualization already present in DNA and neural codes. Each semiotic individual at the same time resembles and differs from the others, an identity that varies throughout life, sometimes spectacularly.
The socializing dimension of symbolic languages is expressed in the massive externalization of knowledge and values in social practices, divisions of labour, techniques, rituals, stories and institutions. This externalization has resulted in major advances in civilization. The multiplicity and heterogeneity within each culture goes hand in hand with a multiplicity and heterogeneity of embodiments into individual brains. This correlation is ensured by the fact that learning and educational pressures take place within the framework of current social relationships and interactions. Thus, both social convergences and social divergences are transmitted.
The more knowledge and socialized memories expand, the more their learning must find a space in individual brains. Thus, the joint dynamics of embodiment and externalization require special attention. On this frontier, we should mention the interesting « social brain hypothesis », which postulates reciprocal influences between neurobiology and social processes, the latter shaping our neural developments and vice-versa. It should be noted in this respect that the hypothesis of a neural module of language has been considerably watered down. The brain does not start all over again, but it mobilizes previous structures, and in particular all the sensorimotor devices on which language is grafted.
Apparently, there is no pre-wiring to support the perception or expression of a particular range of sounds (about 800 phonemes listed among the 6,000 known languages). Babies begin to learn a language by detecting frequencies of sounds (first vowels, then consonants) and associations of sounds (words, sequences of words) (Le Calvez, Peperkamp & Dupoux, 2007; Kuhl, 2014). They learn regularities and their opposite, more or less exceptional figures. This way of entering into language through frequency-based detection is in accordance with the functioning of neural codes which, from the very first days, record regularities/irregularities of all kinds (Rochat, 2006; Dehaene, 2014). Instead of pre-wiring or pre-defined modules, neural networks are formed at the same time as language is acquired. This is also true for the acquisition and practice of reading, which also lead to « structural changes in brain connectivity » (Kolinsky et al., 2014, 178). Permanent wiring is indeed under construction, but in close connection with learning and verbal interactions (Changeux, 2009; Duffau, 2016). If the context is deficient, the linguistic brain will most likely be deficient. Joëlle Proust (2011, 17) goes one step further: « Differentiation between cultures also implies a differentiation of brains. » The gradual transformation of Noam Chomsky's initial thesis, which gives increasing importance to the general cognitive tools children use in language learning (Tomasello, 2003; Ghalebi & Sadighi, 2015), testifies to the decline of inherited modular designs in favor of an interactive co-construction of brain networks during socialization. This flexibility allows for luxuriance and variations in symbolic intra-species subdivisions.
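The frequency-based entry into language can be illustrated with a classic statistical-learning toy model (a sketch inspired by word-segmentation experiments, not the authors' own model, with an invented two-word « language »): transitional probabilities between syllables are high inside words and drop at word boundaries, giving a purely frequency-driven learner a cue for carving words out of a continuous stream.

```python
from collections import defaultdict

def transitional_probabilities(syllable_stream):
    """P(next | current), estimated from co-occurrence frequencies:
    high within words, lower across word boundaries."""
    pair_counts = defaultdict(int)
    first_counts = defaultdict(int)
    for a, b in zip(syllable_stream, syllable_stream[1:]):
        pair_counts[(a, b)] += 1
        first_counts[a] += 1
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# A toy "language" of two words, ba-bi-bu and go-la-tu, heard without pauses:
stream = "ba-bi-bu-go-la-tu-ba-bi-bu-ba-bi-bu-go-la-tu".split("-")
tp = transitional_probabilities(stream)
print(tp[("ba", "bi")])  # within-word transition: 1.0
print(tp[("bu", "go")])  # across a word boundary: lower
```

No lexicon, no module, no meaning is presupposed: the regularities alone suffice to make the word boundaries statistically visible, in line with the paragraph's claim.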
Should we relate the multiplication of symbolic categorizations to more fundamental pre-wired cognitive expectations that support primitive forms of relationships (domination vs submission, conflict vs cooperation, selfishness vs altruism, etc.) (cf. Kaufmann & Cordonnier, 2011)? Or should we consider their embodiment through learning in a manner similar to that applying to language? That is to say, less in terms of pre-wiring than in terms of wiring generated in connection with socialization, with a social brain that constantly reworks previous networks, in particular in terms of sensorimotor strength and cunning, opening them to new relational possibilities as development progresses (Grossi, 2017).

Engineering codes and digital intelligence
With the invention of writing, individual and collective oral memories found a stable, externalized material support, more precise than any other artifactual expression (painting, sculpture, tools, ...). Writing has contributed to the constitution of large political formations, and thus to an increased division of labour and of correlative embodiments. It has also proven to be an incomparable tool for meta-reflection on existing languages: on their lack of precision, their polysemy, their sensitivity to contexts, etc. Considerable efforts have gone into creating languages free of these ordinary contingencies, from which logic and mathematics ultimately derive. A number of mental operations were described and formalized, and calculation tools were invented. Then, following the Pascaline (1645), sustained efforts were made to translate them into calculating machines.
Writing ensures the regular capture of already externalized oral cultural products. Mechanical calculators, and by extension the computer, take an additional step by opening the way to the externalization of mental operations and their logico-mathematical derivations. The book describes procedures and results, without performing them itself. The computer, on the contrary, applies modifiable programs, i.e. various sets of operations, to data formatted to be compatible with the expected processing. Once again, codes serve as levers to extend the externalization process. Binary codes first transcribed letters, numbers and logical operations, and then colours, sounds, space, etc.
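The binary transcription of letters mentioned above is easily shown; the sketch below uses the standard ASCII/UTF-8 convention (one illustrative choice among the many binary codes in use):

```python
def to_bits(text):
    """Transcribe each character into its 8-bit binary (ASCII/UTF-8) code."""
    return " ".join(format(byte, "08b") for byte in text.encode("utf-8"))

print(to_bits("Hi"))  # 01001000 01101001
```

The same mechanism, extended to pixels and audio samples, is what later brought colours and sounds into the binary fold.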
The history of the computer explosion is of interest here. Presently, the most important developments are occurring in evolutionary robotics and deep learning (Fuchs & Reichert, 2018; Silver et al., 2018). Research aims to increase the degree of self-learning in computers (learning from big data, learning based on auto-reinforcement, groups of robots working in cooperation to achieve optimal search procedures). Such software allows dynamic interactions with the outside world. This new stage in digital intelligence extends the externalization of our cognitive functions even further and raises the question of the autonomy of computers.
The autonomous robot is the fantasy of a complete externalization. In line with the perspective of this article, autonomy and its various degrees must be assessed in reference to biological organisms, which are capable of self-replication and self-transformation without any external engineering. Today, robots are far from attaining such complete and ideal autonomy. By multiplying the range of their sensory and motor possibilities (multi-purpose devices, available tool-kits, tool-making, manufacturing of parts for self-repair, autonomous search for energy resources, etc.) and by introducing adaptive feedback on the learning algorithms themselves, we will come closer to the autonomy of the living. However, we are still far from creating robots that are self-built, self-replicating and self-determined in their goals and means. The engineer, in the sense of competence rather than diploma, is omnipresent: he defines the sensors, the motor skills and the computer processing (Varela, 1993; Damasio, 2016). The resources he mobilizes come from symbolic intelligence and its unavoidable neural embodiment. The expression « digital intelligence » should therefore be reserved for the couple formed by the engineer and the computer and, more broadly, for engineers and interconnected computer systems.
Externalization is not a process that can be thought of only in terms of machines and mathematical algorithms. Putting the human dimension in brackets leads to an incomplete approach. The computer does not know what it is doing; only those who programmed it do. The machine manipulates bits, the engineer mobilizes knowledge: an admirably efficient division of tasks. Searle's critique (1980) of the Turing test, the thought experiment of the Chinese room, is therefore correct. However, his arguments share the same flaw as Turing's in their limitation to the semantic level: neither raises the perspective of the intelligences of the living and their auto-constructive autonomy.
In accordance with the perspective of this article, we can also say that computers represent, through us, a second domestication of physical information by living beings. In the same way that living organisms have subordinated the laws of physics to the needs of their own codes and operating principles, we have, for the second time, subordinated the laws of physics to the needs of machines that operate according to software resulting from our inventiveness. With a considerable advantage: hardware and software can be changed at an accelerated pace; in fact, if one considers the last few decades, at a frenzied pace. Anchored in physics, and therefore on a plane that serves as a foundation for the living worlds, digital intelligence can act in return upon our previous epistemic worlds. This is one of the reasons why some believe that we are entering a second great era of evolution.
Digital externalization is better understood by considering the computer and its global networks within the history of techniques. Techniques operate as mediations of social self-construction and, in particular, they display the division of labour. Tools both extend sensorimotor and symbolic capacities and guide learning (degree 1 of externalization). Machines integrate some of the craftsman's tools and tricks and shift the effort to the maintenance and design of new machines (degree 2). Computers move the externalization process to a higher level, by integrating large parts of our symbolic operations into algorithms (degree 3) and by opening the door to interventions on genomic and sensorimotor intelligences (degree 4). At this fourth degree, the classical conception of technique as an extension of the body is reversed: here, living bodies potentially become extensions of technique. As a result, all life forms, and the biosphere as a whole, potentially become bioengineering products.
In the same way as previous forms of intelligence, digital intelligence constitutes itself by constituting the world (Zarsky, 2016). Of course, the new algorithmic rationalities incorporate « habits, desires, skills and values » (Jaton, 2017, 811). The impacts of computers on socio-symbolic activities are the subject of constant research and commentary, which we do not consider here. In the past decades, digital intelligence has more or less penetrated all sectors of social life, and we are only at the beginning of a long process (Jauréguiberry & Proulx, 2011; Meyer & Schroeder, 2015; D'Agostino & Durante, 2018). With social networks, algorithms give our exchanges hitherto unknown forms: they shift private/public boundaries, format messages, induce rapid flows of information, modify the tempo of our relationships, give virtual reality a greater place, strengthen the collective dimension of intelligence, etc. The impacts on genomic and sensorimotor intelligences are also significant. They are based on the link between computing power and discoveries in biology and neuroscience. The techniques of investigation, data-processing and intervention on living organisms integrate the computer tool almost entirely. Digital intelligence is a digital-scientific intelligence.
Genetic manipulations have become routine. For Craig Venter (2014), life is a game of 0s and 1s. Thanks to new software and mathematical models of cellular functioning, synthetic biology is moving from initial bricolage to a true engineering of life, with, for example, the search for a basic DNA for a minimal bacterium, the search for an alternative nucleic acid to DNA, the biosynthesis of multiple molecules, etc. (Hutchison III, 2016; Malyshev et al., 2014). Automatic genome sequencing makes it possible to revisit the history of living organisms and to access the innumerable variations of DNA within different species. One possible application is the individualization of medicine. George Church, with his Human Genome Project-Write, goes still further by planning to rewrite the entire human genome in ten years. The project is no longer limited to babies à la carte. It aims to create a new human species, a frontier that only a minority of people are currently considering crossing.
Once the neural codes have revealed their secrets, neurologists will be able to implement effective interfaces capable of reading neural activities, interpreting them and connecting them directly to external devices. The accuracy of brain readers is increasing; it is already possible today to make a paraplegic walk by interpreting the signals of his motor neurons.
Optogenetics has paved the way for discoveries that open the door to deliberate interventions on the reinforcement and obliteration of memories. This technique, which activates/inactivates a neural network associated with a given memory, has made it possible to clarify the mechanisms of memorization, such as the sharing of neurons common to several memories or the neural segregation of memories. Neurologists have inactivated and reactivated a specific memory in mice (Nabavi et al., 2014), created false memories that are as effective as real ones, i.e. with the same behavioral consequences (Ramirez et al., 2013), and made and broken a link between two previously distinct memories (Ohkawa et al., 2015; Cai et al., 2016).
The perspective of an engineering of remembrance and forgetfulness applied to humans is already emerging, even before all the mechanisms of memory have been discovered. In this respect, neuroscience opens a perspective of re-construction of oneself and others, for therapeutic purposes at first of course, and then for other more disturbing ones, such as accessing other people's memories, modifying them, suppressing certain beliefs, etc. When completed, the process of convergence between the in silico and the in vivo will make it possible to bypass perceptual organs (directly introducing images, sounds, etc., i.e. virtual worlds as powerful as the real ones), to create sensory prostheses that are more effective than natural organs, to converse silently with a computer and to directly activate external systems. This is only partly science fiction… In short, digital intelligence is imposing its own epistemic logic on all previous forms of intelligence. This shift will probably be placed under the sign of interface codes between natural and artificial codes: Janus codes facilitating exchanges and reciprocal transformations. Recent convergences between computer software and biological processes (artificial neural networks, artificial immune systems, genetic algorithms, distributed intelligence systems, ...) foreshadow such an evolution (Bruder, 2017; Delahaye, 2017; Yazdani & Jolai, 2016).
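Among the biologically inspired software just mentioned, the genetic algorithm is the most direct echo of genomic intelligence. The minimal sketch below (a toy fitness function and illustrative parameters, not drawn from any of the cited works) shows how undirected random mutation plus selective pressure alone can reconstruct a target « genome »:

```python
import random

random.seed(0)  # fixed seed for a reproducible run
TARGET = "GATTACA"
ALPHABET = "ACGT"

def fitness(genome):
    """Selective pressure: number of positions matching the target."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.2):
    """Random, undirected variation of individual « nucleotides »."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else g
                   for g in genome)

# A random initial population of 30 seven-letter genomes.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(30)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # The fittest half survives unchanged; its mutated copies replace the rest.
    population = population[:15] + [mutate(g) for g in population[:15]]

print(population[0])
```

Nothing in the code « knows » the target semantically; blind variation filtered by selection suffices, which is precisely the epistemic logic that these computational techniques borrow from the genomic level.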

Conclusions
This article aims solely at suggesting a line of thought for understanding today's historical turning point. The types of intelligence selected are very schematically described, their intermediate forms are neglected, their articulations barely sketched: all reasons to open the text to criticism. However, these limits should not obscure the central objective. The approach intertwines three main dimensions: the codes and the forms of intelligence they allow, the different relationships between the embodiment and externalization of intelligent processes, and the feedback effects of emerging intelligences on preceding ones.
Codes of the living are linked to emergence phenomena. Genomic intelligence can be deployed thanks to the DNA code and the marginal instability of the genome. Sensorimotor intelligence of multicellulars directly depends upon the coded neural processing of biographical experiences. Symbolic intelligence is based on phonological, graphic and, more broadly, on semiotic codes. Finally, digital intelligence uses information processing machines that can be programmed with different coded languages. In short, living codes are operators of complexity.
These different epistemic logics are intertwined and remain linked to the living body, with a tendency towards externalization, low at the sensorimotor level, massive at the symbolic level, integrated into operational machines at the computer level. Each expansion of externalization provides a capacity to feed back onto previous levels. The cyanobacteria modified the atmosphere and paved the way for other types of metabolism. Sensorimotor intelligence has guided the evolution of genomes and fostered the emergence of a multitude of plants and animals that have profoundly altered the Earth's surface. The symbolic revolution has led to the plurality and heterogeneity of cultures, while retroacting on our neurobiological abilities. Finally, digital-scientific mediation leads to the possibility of redefining previous embodied codes and inducing a broad automaticity in the transformation of the non-living.
At the outset, living organisms domesticated certain properties of physicochemical nature for their own purposes. At the end of an externalization process, which is also a new starting point, the engineer likewise subjugates certain informational properties of matter; by entrusting machines with appropriate intelligent operations, the engineer can intervene in the ways in which the living domesticates the laws of nature.
Biotechnologies are undermining the transcendent ontologies of Nature. By becoming technical objects, living entities become extensions of economic and political issues (Lafontaine, 2008, 2014; Le Dévédec, 2015; Turner & Knoepfler, 2016). The opposition between Nature and Culture is disintegrating, modifying our representations and challenging the scientific division of labour.
In this process, all knowledge is concerned. The shock is general in both the social and natural sciences, mainly in terms of ecology and sustainable development (Blanc, Demeulenaere & Feuerhahn, 2017). The deliberate feedback of the living upon itself is changing our relationship vis-à-vis knowledge, from the straight line of progress towards retroactive and inclusive loops. Dozens of scientific journals, mainly in English, testify to the immense effort that is now being made to reflect upon ecosystems in their interdisciplinary complexity. Human society is changing in depth by extending its field of activity to its own biological levels and to the whole planet's biosphere.
The coming society is a trans-epistemic society. This concept, proposed by Knorr-Cetina (1982) to designate the plurality of logics at work in scientific research, is extended here to all the intelligences of the living. This society, at the same time, uses genetically modified organisms and synthetic biology, manages the embodiment processes, redistributes the relationships between embodied and externalized operations and memories, improves and generalizes the interfaces between the in vivo and the in silico.
The thick borders between humans endowed with consciousness and other living beings relegated to the state of biological mechanisms are gradually dissolving. Not only do we descend from a primate ancestor, but we also share epistemic processes with the most elementary cells of life. Epistemology cannot ignore such evidence. We are not alone in building notions of space, time, number and identity: ants, bees, birds, mice and others invented solutions long before we even existed. The perspective now before us is that of an epistemology of all of the living, of which humans, with their specific characteristics, constitute only one branch.
The situated articulations between humans and non-humans proposed by Latour have advantageously replaced the overly static opposition between Culture and Nature and have given rise to interesting research. However, this approach initially ignored the distinction between the living and the non-living, a flaw now recognized by its author (Akrich, Callon & Latour, 2006; Charbonnier, Latour & Morizot, 2017). It also has the disadvantage of keeping humans at the centre of attention. Today, as we actively transform the various forms of intelligence, it is important to distinguish them clearly and to explore their articulations. In the same way, it is important not to confuse knowledge with the physical quantity of information.
Technology as the extension of the body reverses into the body as an extension of technology. The history of the living is thereby relegated to the background. The long-term historical contingency of organisms is replaced by a fast-tempo facticity. The very definition of the epistemic logics of living organisms is altered, since genomic mutations and neuronal functioning have become accessible. Another story is beginning: that of the living and the planet's biosphere as a field and expression of our technical interventions.
The social sciences, born in the upheavals of industrialization, are now facing an even more profound revolution, one which encompasses all the epistemic logics of life. Just as Saint-Simon, Marx and Durkheim could not ignore the mechanical realities of their time, neither can we ignore the processes through which we now transform the living and the non-living, processes that are at the same time constituting a new society.
The bio-economy is now an essential actor in redefining the relationships between the planet's various intelligences. It is a knowledge-based economy in a twofold sense: it masters biological engineering, and it intervenes with market logics in the processes of plant and animal knowledge and inter-knowledge. However, these logics are more interested in modifying nucleotide polymers than in evaluating their long-term effects on the biosphere. The Anthropocene (Bonneuil & Fressoz, 2013) thus presents itself largely as a Capitalocene (Haraway, 2015), which, fortunately, has to cope with struggles and, unfortunately, benefits from insufficient ecological regulations.
The evaluation of the risks incurred in our trans-epistemic society is at the heart of normative postures and their justifications: high risk is associated with absolute condemnation of bioengineering, medium risk with conditional acceptance, low risk with utopian enchantment. If we set the objective of acting in full knowledge of the facts, we must subordinate the applications of the universalist approach of science to the largely contingent and singular requirements of our various forms of intelligence.