The Chinese Room (CR) argument outlines one of the core problems with formal symbolic logic: the symbol-grounding problem. That is, how do symbolic referents come to correspond with semantic content? Symbolicism thus has grave implications for machine intelligence and natural language understanding. While human-like imitation may permit a machine to pass the Turing test, imitation alone does not permit a machine to experience meaning (as per the CR argument). An alternative to the Turing test, the Winograd Schema Challenge (WSC), illustrates the ways in which a human agent can outperform a machine intelligence on problems of meaning that lack defined statistical solutions. The WSC supports the view that human language understanding cannot be fully explained by formal symbolic logic.
Supported by UC Berkeley’s Neural Theory of Language (NTL), I present an alternative. I show how human agents are able to solve Winograd problems almost effortlessly, and what we might achieve with machine intelligence if we reconsider the nature of natural language. I use evolutionary psychology to design a ‘thought experiment’ based on ‘evaluative perception’. Here I argue that meaning is grounded in embodied sensorimotor experience. More specifically, I argue that for a machine to experience meaning, we must first develop a parametric model of value grounded in the material conditions of embodied experience. Most crucially, I offer a dangerous solution: that a machine must know threat and death if it is ever to know what it means to really mean anything.
Meaning and the Machine: Beyond the Symbol-Grounding Problem
1. Meaning and the Machine: Beyond the Symbol-Grounding Problem
Terry McDonough
2. The Chinese Room Argument (Searle, 1980)
Precedents: Leibniz’s ‘Mill’ (1714) and Turing’s ‘Paper Machine’ (1948)
(i) Locked in a room with only I/O
(ii) Chinese characters received
(iii) Access to a set of ‘correlating rules’
(iv) Chinese characters returned
(v) I/O is coherent to the external observer
(vi) I/O is incoherent to the internal processor
Problem: symbolic theory of natural (amodal) language
3. A Neural Theory of Language
How does the brain compute the mind? (ICBS/ICSI, UC Berkeley)
• Views language as ‘an embodied neural system’ (Feldman, 2015).
• ‘…we understand language by simulating in our minds what it would be like to experience the things that the language describes’ (Bergen, 2012).
• Uses Embodied Construction Grammar (ECG) to specify ‘schematic idealizations that capture recurrent patterns of sensorimotor experience’ (Bergen and Chang, 2005).
Solution: indexical theory of natural (modal) language
4. Embodied Construction Grammar
• CxG: form-meaning pairs (usage-based)
• Deep semantic specification (Semspec): embodied schemas (incl. metaphor)
• Parameterize active simulations
• Lattice-based configuration
• Computational formalism: ECG2 Workbench (NLU)
• Stochastic Petri (neural) nets
• Linguistic utterance(s) evoke the enactment of a simulation
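The ECG pipeline above (form-meaning pairing, Semspec, parameterized simulation) can be sketched in Python. This is a minimal sketch: the class names and schema fields are illustrative assumptions, not the ECG2 Workbench formalism itself.

```python
from dataclasses import dataclass, field

@dataclass
class Schema:
    """An embodied schema: a recurrent pattern of sensorimotor experience."""
    name: str
    roles: dict = field(default_factory=dict)

@dataclass
class Construction:
    """A form-meaning pair: a surface form linked to a semantic specification."""
    form: str        # surface pattern, e.g. "X grasps Y"
    semspec: Schema  # deep semantic specification (Semspec)

    def parameterize(self, **bindings):
        """Bind roles to fillers, yielding parameters for an active simulation."""
        sim = Schema(self.semspec.name, dict(self.semspec.roles))
        sim.roles.update(bindings)
        return sim

# A hypothetical GRASP construction with unfilled agent/patient roles.
grasp = Construction("X grasps Y", Schema("GRASP", {"agent": None, "patient": None}))
sim = grasp.parameterize(agent="hand", patient="apple")
print(sim.roles["agent"], sim.roles["patient"])  # hand apple
```

An utterance thus evokes a schema and supplies the parameters under which a simulation is enacted, rather than being matched against amodal symbols.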
6. The Winograd Schema Challenge (Levesque, 2011)
• Machine intelligence test (with a prize of $25,000)
• Highest score: 66.7% (Liu et al., 2016)
1. Two parties are mentioned in a sentence by noun phrases.
2. A pronoun or possessive adjective is used in the sentence in reference to one of the parties.
3. The question involves determining the referent of the pronoun or possessive adjective.
4. There is a word (called the special word) that appears in the sentence and possibly the question. When it is replaced by another word (called the alternate word), everything still makes perfect sense, but the answer changes.
8. Test one: Reference and coreference
The town councillors refused to grant the angry protesters a permit because they feared violence.
Who feared violence?
0. The town councillors
1. The angry protesters
9. Test two: Reference and coreference
The town councillors refused to grant the angry protesters a permit because they advocated violence.
Who advocated violence?
0. The town councillors
1. The angry protesters
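The two tests above differ only in the special word (feared vs. advocated), yet the referent of ‘they’ flips. A minimal sketch of that schema structure, where a hypothetical lookup table stands in for the world knowledge a solver would actually need:

```python
# The councillors/protesters schema, following the four-part definition above.
SCHEMA = {
    "sentence": ("The town councillors refused to grant the angry protesters "
                 "a permit because they {verb} violence."),
    "parties": ["the town councillors", "the angry protesters"],
    "special": "feared",
    "alternate": "advocated",
}

# Stand-in world knowledge: which party a verb plausibly applies to,
# given the refusal frame (an illustrative assumption, not a solver).
PLAUSIBLE_REFERENT = {"feared": 0, "advocated": 1}

def resolve(schema, word):
    """Return (referent index, instantiated sentence) for a given special/alternate word."""
    sentence = schema["sentence"].format(verb=word)
    return PLAUSIBLE_REFERENT[word], sentence

idx, _ = resolve(SCHEMA, SCHEMA["special"])
print(SCHEMA["parties"][idx])   # the town councillors
idx, _ = resolve(SCHEMA, SCHEMA["alternate"])
print(SCHEMA["parties"][idx])   # the angry protesters
```

The hard part, of course, is that no such table exists in advance: the challenge is precisely that the answer has no defined statistical solution.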
10. Grounding Meaning in Embodied Experience
• Ontological congruence: The agent/patient must be ontologically compatible with its corresponding verb.
• Agent and patient properties: The agent and/or patient may have properties that make it specifically consistent with a verb.
• Relationship frames: Common frames specifying the relationships between the agent and patient.
• ECG solution: (i) Enforce ontological constraints using grammatically marked object property constraints and the novel idea of bridging schemas. (ii) Account for the ontological categories of the referents and the constraints imposed by the evoked embodied schema.
(Ragurham et al., 2017)
…but, how can a machine intelligence intuit a human ontology?
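The ontological-congruence constraint above can be sketched as a simple category check. The ontology and verb constraints here are illustrative assumptions, not the ECG formalism:

```python
# A toy ontology: each referent carries a set of ontological categories.
ONTOLOGY = {
    "councillors": {"animate", "human", "authority"},
    "protesters": {"animate", "human", "group"},
    "permit": {"inanimate", "artifact"},
}

# Per-verb role constraints: the categories a filler must possess.
VERB_CONSTRAINTS = {
    "fear":     {"agent": {"animate"}},   # only animates can fear
    "advocate": {"agent": {"animate"}},
    "grant":    {"agent": {"authority"}, "patient": {"artifact"}},
}

def congruent(verb, role, filler):
    """True if the filler's categories satisfy the verb's constraints for this role."""
    required = VERB_CONSTRAINTS[verb].get(role, set())
    return required <= ONTOLOGY[filler]

print(congruent("grant", "agent", "councillors"))  # True
print(congruent("grant", "agent", "protesters"))   # False: lacks 'authority'
```

The question the slide ends on is exactly where this sketch breaks down: the table itself encodes a human ontology that the machine has no embodied route to intuiting.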
12. The ‘Last Apple’ Argument (McDonough, 2018)
Premise:
You are starving in a wasteland. On the ground are a fresh apple and a sharp rock. Another starving wanderer arrives. You both know that it is the last apple on Earth.
Question:
0. Do you share the apple?
1. Or not?
20. Homeostasis and the Hypothalamus
• Homeostasis is the powerful, unthought, unspoken imperative whose discharge implies, for every living organism, small or large, nothing less than enduring and prevailing (Damasio, 2018).
• Homeostasis has been the basis for the value behind natural selection, which in turn favors the genes—and consequently the kinds of organisms—that exhibit the most innovative and efficient homeostasis (Damasio, 2018).
• The hypothalamus (and the telencephalon) links the CNS with the endocrine system, regulating drives, motivations, and emotions (Damasio, 2018).
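Homeostatic regulation of the kind described above can be sketched as a negative-feedback loop. The setpoint, gain, and temperature-like variable are illustrative assumptions:

```python
def regulate(value, setpoint=37.0, gain=0.5, steps=20):
    """Drive a regulated variable back toward its homeostatic setpoint."""
    history = [value]
    for _ in range(steps):
        error = setpoint - value  # deviation from the setpoint
        value += gain * error     # corrective drive (e.g. shivering, sweating)
        history.append(value)
    return history

# A perturbed state (e.g. hypothermia) is pulled back to the setpoint.
trace = regulate(32.0)
print(round(trace[-1], 3))  # converges toward 37.0
```

The point of the sketch is that the loop generates value without symbols: any state is ‘good’ or ‘bad’ only relative to the imperative of enduring and prevailing.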
21. Homeostasis and the Hypothalamus
• Connectionist models often concentrate on modelling higher cortical functions (perception, motor control, memory, etc.);
• Little attention is paid to modelling lower-level, subcortical functions (e.g. hypothalamic regulation);
• For AGI, experience is seen as a property of semantic memory (‘out there’: symbolicism) rather than a property generated by a more ‘primitive’ ontology (‘in there’: indexicality).
23. Parameterising Death
• Death as a result of natural entropy, failed homeostasis, or exposure to external extremes:
(i) conservation of energy (first law of thermodynamics)
(ii) risk-reward assessment and risk avoidance
(iii) perception, avoidance, or annihilation of threat(s)
• Parameterised value assignments as a ‘primitive’, ontologically-based schematic complex (or lattice)
• Simulated hypothalamic homeostasis
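A minimal sketch of the value parameterisation above, combining (i) energetic cost, (ii) risk-reward assessment, and (iii) threat perception. All weights and state fields are illustrative assumptions; the shared-apple figures merely echo the ‘Last Apple’ scenario:

```python
from dataclasses import dataclass

@dataclass
class AgentState:
    energy: float  # (i) available energy, depleted by entropy
    threat: float  # (iii) perceived threat level, 0..1

def value(reward, cost, risk, state):
    """Score an action: reward minus energetic cost, with risk and threat
    weighted more heavily as the agent nears failed homeostasis."""
    urgency = 1.0 / max(state.energy, 0.1)  # nearer to death => higher stakes
    return reward - cost - urgency * (risk + state.threat)

starving = AgentState(energy=0.2, threat=0.3)
sated = AgentState(energy=5.0, threat=0.3)

# 'Last Apple' choices: share (low reward, low risk) vs. fight (high reward, high risk).
share = value(0.5, 0.1, 0.1, starving)
fight = value(1.0, 0.3, 0.8, starving)
print(share > fight)  # True: the starving agent avoids the lethal risk
```

Under these toy weights the same action set ranks differently for the sated agent, which is the point: value assignments are indexed to the organism’s material condition, not to a fixed symbol table.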
24. Resolving the Problem of Machine Intelligence
• Parameterising death would (arguably) generate:
(i) Ontological congruence
(ii) Categorical constraints
(iii) Connectivity (bridging schemas) between relational equivalents
• Symbolicism not required: instead, indexical reference to initiate schematic complexes
• Meaningful interactions with I/O states
26. A Dangerous Solution…?
What if…
• A machine feared death (entropy, risk, threat)?
• A machine self-preserved?
• A machine intuited properties based on internal parameters?
• A machine assessed attributes based on value assignments?
Would we create a general adaptive intelligence (GAI)?