Weekly Submissions

 

---

 

Week 3

 

Alex Leon

 

The Chinese Room thought experiment is meant to undermine the idea that a physical symbol system can be intelligent based only upon the manipulation of symbols; “the stuff matters” is again what Searle attempts to assert. One particular answer to this is simply that we have to look to the level of description that will best allow us to recognize a device (the Chinese Room) as a “cognitive engine.” So, while the man inside the room doesn’t know a damn thing about the meaning of the odd squiggles he’s looking up and responding to, the room as a unit seems intelligent because it receives communication and outputs responses that follow a socially accepted form of dialogue.

 

However, as outlined in chapter one, isn’t consciousness a key part of the definition of intelligence/cognition we are using? If so, is the room really capable of experiencing qualia? If we are forced to look at the room as a unit in order to (conveniently) assert that it is a cognitive, symbol-crunching device, then we are also forced to examine whether it, as a whole, can have its own experiences.

 

 

 

Shawn Brady

 

It has been posited that the mind entails a wealth of understanding which has not been, and cannot be, accounted for in a physical symbol system. For example, as opposed to the scripts in Schank’s program, which were used to suggest certain behaviors for certain situations, humans seem able to respond to unlimited situations in an unlimited number of ways. It is doubtful that enough scripts could ever be entered into Schank’s program for it to know what to do in any given situation. Why are humans able to do this? Hubert Dreyfus’ answer is that a human’s know-how is derived from “a kind of pattern recognition ability honed by extensive bodily and real-world experience” (37).

 

 

But does it simply come down to this? Does a physical-symbol system fall short of exhibiting true intelligence merely because the mind and its broad knowledge base are flexible in this experiential way? Can a human’s experiences lead to and account for their every action and thought? What if someone is presented with a completely unique situation about which they have never learned, thought, or experienced? Would they simply piece together bits of information from related experiences so as to determine what the best choice might be? If so, how is that different from the “guessing” done by Schank’s program?

 

 

James Durand

 

Because our minds are software run within our brains, it is not hard to believe that there is more than one process at work at any given time. Can’t we walk and chat with our friends in the same manner as a computer can surf the internet while playing music? Truly, our minds seem to have a whole lot of subprograms running: one for keeping balance, one for scanning for dangerous or out-of-place objects, and one for playing the chorus of a song over and over in your head. Each section is mildly autonomous and can keep going without your concentrating on it.

 

Regardless of the autonomy of the subprograms, something in there is still deciding which subprograms get to be running. Thus this multi-mind theory does not explain what your mind (core mind?) is.
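
A minimal sketch (my own illustration, not from the reading) of the “subprograms” analogy: a couple of mildly autonomous routines running at once, while the main thread still has to decide which ones get started in the first place.

```python
import threading
import time

def keep_balance():
    # A mildly autonomous "subprogram": it keeps going without the main thread attending to it.
    for _ in range(3):
        time.sleep(0.1)
        print("adjusting balance")

def replay_chorus():
    # Another subprogram looping on its own, like a song stuck in your head.
    for _ in range(3):
        time.sleep(0.1)
        print("na na na...")

# Something still has to decide which subprograms get to run at all.
chosen = [keep_balance, replay_chorus]
threads = [threading.Thread(target=f) for f in chosen]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("main thread: whatever did the choosing remains unexplained")
```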

 

Ashley Murphy

 

Commonsense psychology is the practice of combining attitudes with propositions in order to describe the motivation for human actions (Clark 45). Everything is focused on one’s desires and beliefs. It would be like someone going to a soccer pitch because of his desire to play soccer and his belief that there is a game to be played. This theory reminds me of the Euthyphro problem, from Plato’s dialogue Euthyphro. Socrates and Euthyphro are having a discussion, basically, on what is right and what is wrong according to the gods. Socrates poses the question, “Is the pious loved by the gods because it is pious? Or is it pious because it is loved by the gods?” Do the pious act out of their desire to be loved and their belief that by being pious they will be loved?
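
A toy sketch of the desire-plus-belief schema using the soccer example above; the dictionary and the rule are my own invention for illustration, not anything from Clark.

```python
# Invented attitude/proposition pairs for the soccer example above.
beliefs = {"there is a game to be played": True}
desires = {"play soccer"}

def predict_action(beliefs, desires):
    # Commonsense psychology as a crude rule: a desire plus a supporting belief yields an action.
    if "play soccer" in desires and beliefs.get("there is a game to be played"):
        return "go to the soccer pitch"
    return "stay home"

print(predict_action(beliefs, desires))  # -> go to the soccer pitch
```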

 

“Explanations presuppose laws…” This means that every explanation a person has is to be accepted as law because it is so. People explain other people’s behavior with generalizations. He is upset because his team lost a huge game. This is an explanation of his down-in-the-dumps attitude and his irritable behavior. Everything we do is generalized just like this. There will always be an explanation for everything a person does, owing to his experiences, beliefs, and desires. Can there be belief without desire? Can there be desire without belief?

 

 

 

 

 

 

Mark Gerken

 

According to the description given, Schank’s program receives a brief outline of events, then is asked a general question about the story that wasn’t explicitly covered. This sounds like it is coming to some conclusion about what actually happened, but it isn’t. The program has a large set of “fuller scenarios” which it uses to fill in the holes in the story. For example, in the restaurant script, the fact that you never tell the program you ate your food is irrelevant, because it sees you were in a restaurant and knows what takes place inside, according to its fuller scripts. To combine its previous knowledge and your story, it could simply merge them together, creating one large series of events. After finding all of the fuller scripts that fit within the restaurant script, the program need only search for all permutations of a substring from your question. After that, the program would respond accordingly.

 

 

Why is/was this considered artificial intelligence, when all he’s done is automate the process of copy-pasting stories together into Microsoft Word, then hitting Control-F to find a word or set of words?
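
A rough sketch of what that automated “copy-paste plus Control-F” picture might look like; the script text, merging rule, and keyword matching here are invented for illustration and are not Schank’s actual program.

```python
# An invented restaurant "script" standing in for one of the fuller scenarios.
restaurant_script = [
    "customer enters restaurant",
    "customer orders food",
    "customer eats food",
    "customer pays bill",
    "customer leaves",
]

# The brief story given to the program; eating is never mentioned.
story = ["customer enters restaurant", "customer orders hamburger", "customer leaves"]

def fill_in(story, script):
    # Merge the story with the script, letting the script fill the holes in the story.
    merged = list(story)
    for event in script:
        verb = event.split()[1]
        if not any(verb in line for line in merged):
            merged.append(event)
    return merged

def answer(question, events):
    # Crude keyword search over the merged events, Control-F style.
    keyword = question.rstrip("?").split()[-1]
    hits = [e for e in events if keyword in e]
    return hits[0] if hits else "no idea"

events = fill_in(story, restaurant_script)
print(answer("did the customer eat?", events))  # -> "customer eats food"
```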

 

 

Alex Hyatt

 

To focus on what exactly understanding is, various artificial intelligence examples are given, as well as a famous example from John Searle. The example is known as the “Chinese Room” and creates a scenario in which a monolingual English speaker is placed in a room with papers full of Chinese symbols set in front of him. The person then manipulates the symbols by following English instructions. It is an example that follows the actions of a Turing machine. The point is that there is no real understanding of Chinese, yet the person appears to be able to converse in Chinese.
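
A minimal sketch of the rule-following picture described above: a lookup table does all the work, and nothing in it understands anything. The particular input/output pairings are made up for illustration.

```python
# A made-up rulebook: input squiggles map to output squiggles.
# The person in the room only matches shapes; the meanings never enter into it.
rulebook = {
    "你好吗": "我很好",
    "你吃了吗": "吃了，谢谢",
}

def chinese_room(input_symbols):
    # Pure symbol manipulation: match the incoming string, hand back the listed reply.
    return rulebook.get(input_symbols, "请再说一遍")

print(chinese_room("你好吗"))
```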

 

I think, however, that the person does have to have some level of understanding of at least where to put the symbols to create the conversation. Does there not have to be some understanding to read the English and use it to move around the symbols to make some sense? If not, what then is understanding? And also, does one need feelings or experience for understanding, even on the most basic level?

 

 

Andy Tucker

 

We start out saying that intelligence and understanding are exhibited in any entity that uses a physical symbol system, but we come to refute this because it equates understanding and intelligence with too simple a method, one that could not possibly do everything of which our brains are capable (experiential learning). We then add some complexity until we have a model like the SOAR system, where we add a single working memory and a database memory from which the system pulls information. This is again refuted because it is not complex enough. In the human mind it seems that we have many of these processes occurring simultaneously and producing a multitude of unpredictable outcomes. To reproduce this functionality, something with many parallel systems communicating with one another would have to be created, and intelligent behavior would have to be “coaxed” out of the system.

 

How is this intelligent behavior, when earlier it was said that you must have deliberate thought for intelligence? It seems more like random chance that you would get an intelligent reaction.

 

Jenessa Strickland

 

Dennett describes folk psychology as the “intentional stance” (47), which we take when we treat something (or someone) as having beliefs, intentions, etc. A rational (or intelligent) system is, according to Dennett, a system whose behavior can be successfully explained or predicted in terms of having beliefs, goals, desires, etc.

 

Dennett, as always, is too behavioristic for my taste. Clark quotes him on p. 47 as saying that to be a believer “in the fullest sense” is to behave in ways that are explainable/predictable from the intentional stance, i.e. to display behavior that can be explained by positing the existence of beliefs. But this seems more like what it is to be a believer in the NARROWEST sense. As has been pointed out, Dennett’s test for believer-hood can be passed by a long list of things we would generally not want to call believers. It just seems obvious that there is something more to having beliefs than displaying a certain pattern of behavior. For example, when I wonder whether or not my dog has beliefs, I simply am not wondering if her behavior can be explained/predicted by the intentional stance. I am wondering about an element of her mental life, not her outward behavior. Isn’t there something to belief other than believer-like behavior?

 

 

Jenna Williams

 

As a criticism of symbolic A.I., the problem of human experience is addressed in relation to stored knowledge. The philosopher Hubert Dreyfus is the primary reference; Dreyfus suggests that strictly symbol-crunching technology cannot accurately reproduce intelligence. However, Dreyfus feels that if a focus is placed on pattern-recognition software in order to imitate the process of human expertise, then a better A.I. could be developed.

 

What other ingredients are missing from the A.I. recipe? Could artificial intelligence help us learn more about our own mental capabilities, and if so, how?

 

David Murphy

 

Searle’s thought experiment of the Chinese Room was meant to show that the way computational machines produce information for us does not amount to understanding. These machines use a physical symbol system to do whatever task is assigned to them, and the man in the Chinese Room does the same. He has no idea what the Chinese symbols mean, or even that they are Chinese, yet he uses a guide, similar to the programs in our computers, to produce an intelligible reply. Searle says that even though the responses are correct, there is no semantic understanding.

 

How can we say there is no understanding? Sure, the system may not recognize it as Chinese and may not understand it in the same way, but I think there is an understanding. The man may not officially grasp the symbolic meaning of what the symbols represent, but I think they would have some meaning to him, since he has to recognize and then respond to them.

 

 

Matt Rockwell

 

In chapter 3, Andy Clark explores the method folk psychology uses to describe the behavior of adults: combining attitudes (“I don’t like to get wet”) with input propositions (“It is going to rain”) to output a determined behavior (“I am going inside”). Fodor, Churchland, and Dennett have provided the three different theories Clark discusses in this chapter. Fodor believes that folk psychology’s method for determining behavior is correct not just pragmatically, but because the physical brain contains a syntactic structure and content structure that match folk psychology. Churchland, on the other hand, doesn’t believe that the brain’s structure will match the Representational Theory of Mind (Fodor’s theory) and denounces folk psychology as incorrect. The final theory, Dennett’s, is similar to Churchland’s in holding that no structure will be found in the brain that matches the R.T.M., but it maintains that folk psychology is good at predicting behavior and hence at understanding how the brain functions.

 

 

 

If a person ingests a hallucinogen, if the body releases certain hormones, if a person has a healthy breakfast, or if pain is occurring in a foot, the person will have different states of consciousness. Could the lack of a universally accepted explanation of the human experience of consciousness (one that, in my opinion, should include the body as conscious) lead to this debate by compartmentalizing the mind’s functioning and, in so doing, removing the body as an integral part of decision making?

 

Mike Sterrett

 

The ability of the mind to effectively cope with everyday situations is perhaps the most intriguing aspect of the notion that our minds are merely software running on the hardware of our brain-meat. While the fact that pattern recognition is the way we expertly deal with things is not altogether shocking, it is not exactly widely known either. Thinking back on things in my life in which I am now an expert, the learning process always began with following certain rules until they became second nature to me. What I realize after this reading is that they became second nature because I switched from simply following certain prescribed rules and began recognizing patterns. The question remains, though: how does the mind go from following a set of rules to reacting to patterns?

 

 

Armand Duncan

 

I think it is highly unlikely that there is any completely homogeneous system, whether functional or structural, that would be able to offer a complete explanation of human intelligence and cognitive functioning. As pointed out in this chapter, human intelligence and interaction with the world consist of any number of different and, it would seem, disparate properties. These include sensory experience, emotions, physical and mental coordination, language, memory, and the ability to quickly and creatively learn and adapt. The more functions a theory of intelligence is required to explain, the more complicated and multifaceted the explanation will have to be. This makes it extremely unlikely that a single model of cognition could be generated that will explain all of the abilities of the human intellect. It also makes it extremely unlikely that a single medium could provide the structural arrangements necessary to support such complex cognitive functions. Multiple models of cognition, each interacting and overlapping with one another, may provide a more workable idea of human intelligence.

 

Zach

 


 

The question in this section is: what constitutes intelligence? Are we able to replicate our intelligence by using physical symbol systems? Physical symbol systems identify/pick out objects and produce responses based on our understanding of the object. We use our physical system to identify cognitive bands or production memory. Our cognitive band and production memory allow us to contemplate the appropriate decision based on our past experiences. Computers can manipulate their code similarly by using IF statements. These if statements create conditions for appropriate responses. A simple example would be a greeting to a computer: if the computer is given “hello computer,” it responds with the same greeting. In this case a computer is able to compute appropriate responses based on symbol recognition. In the section “Everyday Coping” we see that it can be extremely difficult for computers to replicate our intelligence in everyday contexts. The depth of understanding of a computer is called into question. If a computer can recognize symbols, does it understand what they mean with the same depth of understanding?
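
In code, the greeting example above might look like the following sketch (the condition and responses are invented for illustration):

```python
def respond(message):
    # A condition/response pair: if the input contains the greeting symbol, echo a greeting back.
    if "hello" in message.lower():
        return "hello"
    return "I do not recognize that symbol pattern"

print(respond("hello computer"))  # -> hello
```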

 

 

 

 

Jordan Kiper

I wish to press the issue of intelligence and engage those who have misgivings about intelligence as a physical-symbol system. According to Newell, Simon, and Clark, if a device is physical and contains both a set of interpretable and combinable symbols and a set of processes that can operate on instructions, then that device is a physical-symbol system. Moreover, because such a device is computational, in that its behavior realizes a function for which there is an effective procedure or algorithm for calculating a solution, that device is intelligent. Granted these premises, if an object is not a physical-symbol system, then it is not intelligent, since, empirically, a necessary and sufficient condition for intelligence is being a sufficiently computing physical-symbol system.

 

The strength of this argument, I think, resides in its definition of intelligence, namely, that utilizing an algorithm to calculate a solution is intelligent behavior. Yet two likely objections ensue. Firstly, intelligence is not just limited to the process of specific input yielding particular output, for processes in themselves do not understand their very own functioning (Chinese Room Argument). Secondly, we intuitively recognize that while organic beings exemplify intelligent behavior, an inorganic device, and its underlying processing, seems to lack anything resembling actual cogitating, feeling, aspiring, and so forth (Chinese Nation). But notice the ambiguity, if not impracticality, of both objections. Regarding the former, it begs the question of ‘understanding’ since the objection merely gainsays computation by assuming a commonplace definition of understanding--but what else could understanding mean besides the interpreting and combining of symbols to create solutions? If you say understanding is something more, such as (say) the ability to show insight or sympathy, then you ignore the fact that the ability to show insight relies on interpreting symbols that signify impending adversity; likewise sympathy is interpreting signs of another’s misfortune and outputting solace--or, for that matter, outputting smugness! In any case, each of our behaviors is apparently the output of an underlying physical-symbol system. Regarding the latter objection, indeed organisms seem different from inorganic things, like computers and their functions, but our intuition that a biological ‘inner-view’ is necessary for cognition is impractical; for consider what would be the case if we had to know that a device had an inner-view before we could attribute intelligence to it. Since we cannot know with certainty the inner-view of any being, we would never attribute intelligence to any device operating on inner symbols, whether a computer, a favorite pet, or a local pizza deliverer. My question is therefore the following: what else could intelligence be besides a physical-symbol system?

 

With classical symbol-crunching A.I. it is assumed that the study of the human mind can be done without describing and understanding all of our mind’s functions on a neurological level. We know from studying the brain that there are multiple memory systems that work differently and are totally separate from each other. There are multiple algorithms to do the same thing, so is it possible that our brain has several algorithms supporting the same mental state in different ways? It almost seems naïve, given this evidence, to attempt to model psychological functions without further studying the neural implementations of these things. We can also challenge the idea of uniformity and simplicity by Rosenbloom. It is becoming more accepted that our brain is more like a grab bag of knowledge. While it now seems nearly impossible to replicate the activities of the mind in a symbol-crunching system, it is not out of the question that someday there will be one that acts more like the human brain, especially as we continue to learn more about its neural implementation.

 

 

William Moore

 

I like the idea of the human mind as a “bag of tricks” for a few reasons. First off, it seems feasible that we evolved in such a way as to have specialized “modules” in the brain that deal with specific functions. Secondly, it gets us away from the more controversial issue of whether or not machines can be conscious. Are there specific “tricks” (e.g. spatial reasoning) that humans have, which may correspond to some “module,” and which cannot be reproduced effectively in a machine (or in software, if you prefer), divorced from issues of consciousness?

 

 

Whit Musick

 

Now that we have concluded that thought uses mental representations, or symbols (see my comment 1) to operate and produce 'Y' following an 'X' input, the question of what these inner mental symbols contain arises. That is to say, what is the precise definition of _King Lear_? How does _King Lear_ come to exist in a human being? How does one go about determining the whole of information contained within and composing _King Lear_?

 

A reasonable place to begin is to hypothesize that the mind is not structured to receive one kind of input, but rather to tolerate a variety of inputs, in the same way a newly discovered organic alien pocket calculator can process 2+2=4 but also choose strong moves in a chess game. But how is it that the content of this alien calculator came to be? Well, there are two main possibilities:

 

 

Content is itself fixed by the local properties of the system; that is to say, it is intrinsic.

or

 

Content varies depending on broader properties, such as the history of the system, its relations within itself, and its relations with the external world.

 

 

 

Jeff Pauck

 

In chapter 2, we are introduced to Newell and Simon’s definition of a physical-symbol system. This definition says that all PSSs “have the necessary and sufficient means of general intelligent action.” But this type of definition can have many weird results, including the Chinese Room example. Another unique outcome arrives when we consider the SOAR machine. SOAR attempts to replicate general intelligence by preserving a large amount of symbols, facts, knowledge, etc., and remembering their functions/uses. When SOAR arrives at a new or different situation, it examines every possible solution that it has stored and then chooses the most appropriate decision. By doing this, SOAR can complete both short-term and long-term goals, and can do so relatively well, depending on how much is stored in its memory. So, all in all, SOAR is a physical-symbol system that can work at or close to a level of deliberative thought. But the question is: does SOAR really understand what it is doing, or is it just a more advanced form of mimicry? Is this even how we process our own facts and memories (neuroscientists would say no)? Can the single type of long-term memory that SOAR possesses ever work at the same level of efficiency, and with the apparent randomness or grab-bagness, of the human mind? And although there are many shortcomings, what can we learn and use from SOAR?
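
A toy sketch of the stored-knowledge picture described above, loosely following the working-memory/production-memory idea; the rules and scoring here are invented and are not SOAR’s actual mechanism.

```python
# Invented "production memory": each entry pairs a condition on working memory
# with an (action, preference score). This is not SOAR's real rule format.
production_memory = [
    (lambda wm: "hungry" in wm, ("find food", 5)),
    (lambda wm: "hungry" in wm and "restaurant nearby" in wm, ("enter restaurant", 9)),
    (lambda wm: "tired" in wm, ("rest", 3)),
]

def decide(working_memory):
    # Examine every stored production that matches the current situation,
    # then pick the best-scoring action; how well this works depends on what is stored.
    candidates = [action for condition, action in production_memory if condition(working_memory)]
    if not candidates:
        return "no stored knowledge applies"
    return max(candidates, key=lambda pair: pair[1])[0]

print(decide({"hungry", "restaurant nearby"}))  # -> enter restaurant
```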

 

Lex Pulos

 

Symbol-system A.I. has been attempting to recreate “the pattern of relationships between all its partial representations” (pg. 41). Each artificial approach has fallen short in the attempt to accomplish this. While each PSS has its benefits, such as recalling data, like SOAR, or having a “bag of tricks” from which the machine can draw different inferences, one item that appears to have been overlooked is the idea of rationality. Each A.I. must be given different ways of computing information, just as each child is given different cultural ways of learning. Culturally we are able to process symbols differently, and often this goes against the hegemony of society. Can this irrationality of thought then be seen as rational? And if so, can the irregular patterns of A.I. be seen as acting in accordance with different cultural interactions? I am not talking about the simple rationality of getting out of the rain when it is raining, because humans often choose to stay/play/walk in the rain.


 

 

 

Week 4


 

 

Week 5


 

 

Week 6


 

 

Week 7
