 

Weekly Submissions Archives


 

 

 

 

Lex Pulos

This chapter covers the material extensively, illustrating the complexity of personal identity. One aspect that interested me was the notion of the person-stage: “a set of simultaneous experiences all of which belong to one person.” I wonder whether this concept is even possible. Perry discusses examples of why this cannot occur, because we cannot remember everything. I wonder whether it is possible even on a short-term conception: in the moment, can we experience all of the ideas or thoughts that we are thinking in the time from 10:01 to 10:03?

I would say that we cannot, as there are hundreds of concepts directing us in different directions, and the thought of our identity may shift several times from time A to time B. There also appears to be too much emphasis on memory in the discussion, as there are multiple elements beyond memory that add to identity: situation, context, audience, and so on.

 

Jenessa Strickland

In the first chapter of ‘Personal Identity’, Perry discusses Locke’s memory theory of identity. Locke argued that to have a personal identity, one must be “an intelligent being, that has reason and reflection, and can consider itself as itself, the same thinking thing, in different times and places” (Perry p. 12). This strikes me as a pretty reasonable requirement to begin with. My question is this: this memory requirement is framed in terms of a capacity for reflection, but what if that capacity is never exercised? Do I still have a personal identity if I literally never reflect on myself as myself “in different times and places”? Or to put it in slightly different terms: Perry describes the possibility of body-transplants, in which one could possibly retain this capacity for reflection about oneself, even in another body (p. 4-6). Is someone suffering from amnesia, someone in the same body but with no memory (or even no capacity for memory) of her previous experiences, a completely new person? Along the same line of thinking, research has shown that, in general, the more we remember events from our pasts, the less accurate our memories become. Do our ideas about who we are as persons also become inaccurate the more we reflect on our previous experiences? (I don’t pretend to have any answers to any of these questions I’m posing; I just find them thought-provoking.)

 

I like all of these questions. I really don't have any of the answers to any of them either, but I liked what the author said about thinking that personal identity is in some way tied to the physical body. It seems like that would kind of fit with what we learned in Mindware. Even in Dr. Azari's talk with us, she mentioned that we have a front part of the brain that no other animal seems to have, and which seems to help us to develop the mental faculties and capabilities that we have. I really don't know a whole lot about the subject of personal identity, but so far I like the idea that there is a physical base involved with our identities. If this is true, I think it could go a long way towards helping to answer the questions you ask, but on the other hand, it would probably just bring up even more questions. Oh well...

 

Andy Tucker

The paper on dynamic systems examines a number of systems that can be considered dynamic. It shows how these systems are quantified and how these quantifications, in the form of equations, can be used to predict the behaviors of the dynamic systems they represent. I am assuming that, in taking the mind to be a dynamic system, we are trying to draw a connection between how the dynamic systems in the paper were quantified and how we could quantify the mind. Yet all of the systems in the paper were physical systems that could be observed, and the mind is not such an observable system.

This raises the question of how we could go about quantifying the mind the way these dynamic systems in the paper were quantified, if the mind cannot be observed.

 

I think your concern gets to the heart of it, but I took the paper to imply that there are systems whose behavior cannot be predicted, but only modeled. They're quantifiable, but by no means concrete. All this takes is a relatively simple function, x_{n+1} = r·x_n·(1 − x_n) (x = population, n = generation, r = growth rate), modeling population growth. (Read: screwing like rabbits = chaos.) It goes from one answer to two to... lots. So where in that little equation does the craziness come in? And intelligent behavior is a perfect example (read: unpredictability = intelligence). Even if we could account for all the inputs and outputs, all we could do would be to try to make up a story relating the two. But it's not that the mind is not an observable system at all. We do have neuroimaging and behavior as manifestations of the physical system. The question is whether we can make any more sense of it all by thinking of things like strange attractors, which is just another way of talking about what kinds of overall patterns tend to emerge (note, however, the modesty of the plurality of both 'kinds' and 'patterns').

-Jared
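(A minimal sketch of the function above, assuming it is the standard logistic map; the code and parameter values are illustrative, not from the paper.)

```python
# Iterating the logistic map x_{n+1} = r * x_n * (1 - x_n).
# For small r the population settles on one value; as r grows it
# oscillates between two values, then four, then becomes chaotic.

def long_run_values(r, x=0.5, warmup=500, keep=8):
    for _ in range(warmup):        # let transients die out
        x = r * x * (1 - x)
    values = []
    for _ in range(keep):          # sample the long-run behavior
        x = r * x * (1 - x)
        values.append(round(x, 4))
    return values

for r in (2.8, 3.2, 3.5, 3.9):
    print(r, long_run_values(r))
# 2.8 -> one repeated value, 3.2 -> two, 3.5 -> four, 3.9 -> no visible pattern
```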

 

 

Personal Identity

 

Shawn Brady

In the section titled “Locke, Quinton, and Grice”, the phrase “could contain” is used repeatedly. “[Locke] speaks, not simply of the past thoughts and actions to which we extend our consciousness, but those to which it could be so extended” (p. 16). However, this seems to be begging the question. Sure, Wilson could contain the memory about the baseball game. But this only shows that it is not a priori impossible. The real crux of the problem is whether Wilson does actually contain such memories.

For instance, let us say I went to a basketball game. After the game, my friend Rita wants to know if I am the same person as the one she sat next to in the arena. So, she asks, “Do you remember when Shaquille O’Neal dunked the ball in the second quarter?” I respond, “No.” According to Locke, Rita is allowed to conclude that I am the same person simply by doing the following. She could say, “Well, you were at the game, so it is possible that you could contain such a memory. Thus, you are the same person.” However, what is this “could contain” based upon? After all, I said that I did not remember the dunk. Is this begging the question?

 

Zach Farrell

Personal Identity: Chapter 1

“Event belonging to the personal history of a person.” This idea is like the one brought up by Locke, where memories or experiences are the criteria for identity. By reflection and introspection we place ourselves in different times and places, but only our experiences and knowledge will influence our decision. These memories and inner reflection provide the differences between us and make up our own personal identity. It seems that many memories we have are not accurate memories, and many are partial memories. If we are not able to vividly recall a situation, such as a repressed memory, how do we account for this in our decision-making processes, and how does it affect our identity?

 
David Murphy
My question is only partially connected to Personal Identity. Instead it ties in with Mindware. In the first chapter, Perry brings up several different ideas that try to help us understand what personal identity is and what it constitutes. If our brains/minds really do use the environment as a dynamic partner in cognition, is it really possible to have personal identity? If the environment plays such a role in how we think, doesn't it universalize our minds to some extent? Even if the universal qualities the environment supplies us are very, very small, because they are there, can we really say that we have our own personal identity, rather than an identity shared with people in the same situations, though with some unique features too?
 
Jenessa Strickland (partially in response to David Murphy, above)
I think David's post is related to one thought I have had, both when considering identity and some of the material from Mindware.  It seems like one aspect of biological systems that might be related to identity is that living systems, even very simple ones, seem to defend their own boundaries.  But when we think about humans interacting with their environment, as discussed in the last couple of chapters in Mindware, it becomes clear that the self-boundary for humans is VERY unclear.  We do things to defend our self-conceptions (not just our physical, bodily boundaries), and this includes a lot of environmental aspects beyond our bodies.  So are these environmental aspects part of a person's identity?  If a person is transported to a completely new environment, do they have a different identity?  For anyone who has stepped completely outside of their comfort zone, it is indeed tempting to say something LIKE this has happened.  Furthermore, and as I think you're getting at, David, our "selves" involve not just interaction with our environment but with other people, so are those other people also part of our personal identities?  In other words, just how "personal" is your personal identity?  (I guess I'm simultaneously commenting on David's post AND posting a second question on Perry for this week, since I got ahead last week and posted on the first chapter in Perry then as well.)
 
Mark Gerken

While I agree that one's problem-solving ability is increased through externalizing information, this experiment seems to me to be less accurate than it first appears. Mainly, as far as I understand it, people do not have photographic memories. Looking at the duck/rabbit image, I could recreate it by remembering that it's a representation of a duck and is looking left. I could not create a detailed mental replica, even with advance warning. However, since I know some of its traits I can approximately replicate it on a piece of paper. Once I have a hard copy I could find the second interpretation of the image. This shows the correlation above. However, I believe this to be greatly influenced by the way we perceive objects, mainly the traits I'm looking for, or have time to find, while looking at the image. If I showed you a detailed car picture and asked you to tell me what color the sky was, you most likely would not know. This is not because you need to externalize the information, but because the color of the sky was not one of the main traits you were looking for. Why is this not an equally valid explanation of this phenomenon?

 

Matt Sidinger

 

Perry goes through many different propositions as to how one can define experience relating to a person. When he gets to the part about it being a memory, he continually argues that if someone had the experience and can’t remember it, they are still the same person. My question to that is: what about people who have lost their memories? Are they still the same person? They have the same body; but what identity do they have?

 

 

 

I think that the person who loses their memory is definitely the same person.  Whether they lost the memory or not, they still went through the process of having the experience, developing the memory, and then losing the memory.  Whether the person remembers it or not, they are still the same individual; it just becomes a new part of who they are that they forgot the memory.  They still had it at one point.  So they no longer have the same identity, but they are the same person.  I think that for Perry, having ever had the memory or having the capacity for the memory is all that one needs to be the same person. -Andy Tucker

 

Andy Tucker

One idea that is presented by Perry is how Locke’s conception of personal identity is not a case of identity in the “strict and philosophical sense” of the word (p. 22). I don’t think that there is anything wrong with this. Just as Perry goes on to show, there is no possible way that a believable conception of personal identity could conform to this principle of the “strict and philosophical sense”, because the person that we are today does not have all the same properties that we had yesterday or will have tomorrow. Yet none of us would disagree that these three people with different properties are one individual.
If following the principle of the “strict and philosophical sense” of identity is too strict to produce a working definition of personal identity and we don’t want to follow too lenient of a principle, then what criterion should be followed for constructing such a definition?
 
Armand Duncan
My question this week is in relation to Locke’s theory that a person’s memories are the defining aspect of their identity.  A paradox arises when we consider that people’s memories change over time and that there may be times that they don’t have memories of, say because of amnesia or even consumption of alcohol or narcotics. The paradox is: does this absence of memory commit Locke to denying that they remained the same person during these periods of memory lapse?  My question is, could memory be stated in a negative sense?  That is, could the remembrance of an event which triggered a loss of memory be a kind of memory?  Could the loss of memory be a facet of a subject’s memory of itself?  This would seem to preserve Locke’s definition by defending him against some of his detractors.
 

 

 

 

Mike Sterrett

 

When considering what it is that defines a person's identity, there are three basic factors which must be considered: mind, body, and soul. For people who are not dualists, that eliminates the prospect of soul, since something which cannot be proven to exist must not be the basis of something which does. That only leaves mind and body. What about these makes up a person's identity? Some claim that it is the consistency of the two, or even of mind alone, which makes up identity. There are those who place the whole of identity in the hands of the body, but that leaves out of account the memories and personality of the person whose identity is in question. Consistency of mind is therefore crucial in making up a person's identity. But what does it mean to be consistent? Does the mind have to be consistent with itself compared to a day ago, or is it a week, or an hour? A lot can happen in a day or a week, even an hour. Is consistency from moment to moment necessary? How strong does the consistency have to be? Are there any definite answers about what identity is?

 

Lex Pulos

We have been discussing the notion of pain, and consistently this has been related to the damage or the possible damage of tissue. Pain may also then have degrees to it depending on the type of damage that is incurred. What we seem to have avoided is the discussion of pain that is not physically damaging. Emotional pain is a type of pain that we feel, and this can also cause physical pain or lead to it in some way; i.e., causing the belief of pain, damaging yourself physically to relieve the emotional pain, etc. There is this complex notion of emotional pain, which I have barely covered, that we have not yet discussed. How then might we discuss this type of pain, or bring it into discussion?

 

 

 
I think that emergence offers a viable model of cognitive behavior. Clark describes different levels of emergent behavior, among them collective self-organization, unprogrammed functionality, and interactive complexity. Each of these properties of emergent systems could be used to describe different aspects of cognitive life. Interactive complexity seems like it could provide a model of how an organism receives input from its environment, which in turn stimulates internal processes. These processes then provide feedback which stimulates further interaction with the environment.
The processes which are stimulated by the interactive complexity could possibly be described using a combination of both unprogrammed functionality and collective self-organization. The causal relationship between these aspects of an emergent cognitive system is an admittedly difficult problem to solve, but it seems as if a connectionist network could be used to describe such a relationship. I also think that an emergent interpretation of a cognitive system would speak against a folk-psychological interpretation of cognition. Does emerging neuroscience reinforce this holistic picture of mental life? And if so, could the complexities of such a picture be worked out and simulated in order to provide the predictive power necessary for the development of artificial intelligence?

Week 3

 

Alex Leon

 

The Chinese room thought experiment is meant to undermine the idea that a physical symbol system can be intelligent based only upon the manipulation of symbols; “the stuff matters” is again what Searle attempts to assert. One particular answer to this is simply that we have to look to the level of description that will best allow us to appropriately recognize a device (the Chinese room) as a “cognitive engine.” So, while the man inside the room doesn’t know a damn thing about the meaning of the odd squiggles he’s looking up and responding to, the room as a unit seems to have intelligence due to its reception of communication and output of responses that follow a socially accepted form of dialogue.

 

However, as outlined in chapter one, isn’t consciousness a key part to the definition of intelligence/cognition we are using? If so, is the room really capable of experiencing qualia? If we are forced to look at the room as a unit in order to (conveniently) assert that it is a cognitive, symbol-crunching device, then we are also forced to examine whether it, as a whole, can have its own experiences.

 

Kate Connelly

 

In chapter two of his book Mindware, Clark claims that by programming machines with a physical symbol system, they can be programmed with knowledge, goals and desires. In addition to the requirements for such a task to succeed, e.g. the use of a symbolic code to store all the system's long-term knowledge, it is required that "intelligence resides at or close to the level of deliberate thought". This is a good definition of intelligence for machines, but it does not prove to me that they are any closer to thinking like humans. Much of human thought is not deliberate. We are often powered by emotion and sometimes irrationality. So, using PSS-inspired intelligence, is it possible to program machines to feel?

 

I agree that, with all of the advances in various processes of artificial intelligence, we are still very far away from getting very close to thinking like humans. It seems that our thinking contains so many more variables than a plug-and-chug system trying to come up with outputs that would be adequate responses for situations or problems. I too am curious whether it is possible for a programmed machine to actually feel.

 

---Alex Hyatt

 

Shawn Brady

 

It has been posited that the mind entails a wealth of understanding which has not been, and cannot be, accounted for in a physical symbol system. For example, as opposed to the scripts in Schank’s program which were used to suggest certain behaviors for certain situations, humans seem to be able to respond to unlimited situations in an unlimited number of ways. It is doubtful that enough scripts could ever be entered into Schank’s program so that it would know what to do in any given situation. Why are humans able to do this? Hubert Dreyfus’ answer is that humans' know-how is derived from “a kind of pattern recognition ability honed by extensive bodily and real-world experience” (37).

 

 

But does it simply come down to this? Does a physical-symbol system fall short of exhibiting true intelligence merely because the mind and its broad knowledge base are flexible in this experiential way? Can a human’s experiences lead to and account for their every action and thought? What if someone is presented with a completely unique situation about which they have never learned, thought, or experienced? Would they simply piece together bits of information from related experiences so as to determine what the best choice might be? If so, how is that different from the “guessing” done by Schank’s program?

 

 

James Durand

 

Because our minds are software run within our brains, it is not hard to believe that there is more than one process at work at any given time. Can't we walk and chat with our friends in the same manner as a computer can surf the internet while playing music? Truly, our minds seem to have a whole lot of subprograms running: one for keeping balance, one for scanning for dangerous or out-of-place objects, and one for playing a chorus of a song over and over in your head. Each section is mildly autonomous, and can keep on going without you concentrating on it.

 

Regardless of the autonomy of the subprograms, something in there is still deciding which subprograms get to be running. Thus this multi-mind theory does not explain what your mind (core mind?) is.

 

I think the point of a lot of research coming out right now is that there really doesn't have to be a core mind. The best/simplest example of this that I know of is ant colonies. If you look at individual ant behavior, it seems impossibly stupid. Individual ants perform very simple tasks based on very simple chemical signals they receive from the other ants they encounter. They are extremely simple input-output machines. (Don't they have just one neuron or something?) But somehow all of these tiny simple parts add up to something very intelligent and complex. Ant colonies, as a whole, perform complex tasks of building, gathering food, etc. etc., even though there is no boss and no one is making any executive decisions at all. It's hard to imagine how this kind of situation in the brain could give rise to phenomenal consciousness (i.e. qualia), but ant colonies and other very simple systems do demonstrate how complex and at least apparently intelligent behavior can arise. To me, it's kind of like the evolution vs. design debate. Paley and others argued that you couldn't have design without a designer, but evolution by natural selection shows how you can. With consciousness a common intuition is that you can't have control without a controller, or organization without an organizer, etc., but we're currently working out theories to show how you can. -- Jenessa Strickland
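(A toy illustration of the "simple parts, smart colony" point; the rules and numbers below are invented for the sketch, not taken from any real ant model.)

```python
import random

# Each "ant" follows one dumb local rule: pick a trail in proportion to its
# pheromone level. Returning ants reinforce the trail they used, and trails
# slowly evaporate. No ant knows which food source is better, and nothing
# plays the role of a boss, yet the colony converges on the richer source.

sources = {"rich": 1.0, "poor": 0.3}       # payoff of each food source
pheromone = {"rich": 1.0, "poor": 1.0}     # start with no preference

def choose_trail():
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    for trail, level in pheromone.items():
        r -= level
        if r <= 0:
            return trail
    return trail

for _ in range(2000):                      # 2000 foraging trips
    trail = choose_trail()
    pheromone[trail] += sources[trail]     # reinforcement on return
    for t in pheromone:
        pheromone[t] *= 0.999              # evaporation

print(pheromone)   # the "rich" trail ends up with almost all the pheromone
```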

 

Ashley Murphy

 

Commonsense psychology is the act of combining attitudes with propositions in order to describe the motivation for human actions (Clark 45). Everything is focused on one’s desires and beliefs. It would be like someone going to a soccer pitch driven by his desire to play soccer and his belief that there is a game to be played. This theory reminds me of the Euthyphro problem, written by Plato. Socrates and Euthyphro are having a discussion, basically, on what is right and what is wrong according to the gods. Socrates poses the question, “Is the pious loved by the gods because it is pious? Or is it pious because it is loved by the gods?” Are the pious loved by their desire to be loved and their belief that by being pious, they will be loved?

 

“Explanations presuppose laws…” This means that every explanation that a person has is to be accepted as law because it is so. People explain other people’s behavior with generalizations. He is upset because his team lost a huge game. This is an explanation of his down-in-the-dumps attitude and his irritable behavior. Everything we do is generalized just like this. There will always be an explanation for everything a person does, due to his experiences, beliefs, and desires. Can there be belief without desire? Can there be desire without belief?

 

 

 

 

 

 

Mark Gerken

 

According to the description given, Schank's program receives a brief outline of events, then is asked a general question about the story that wasn't explicitly covered. This sounds like it is coming to some conclusion about what actually happened, but it isn't. The program has a large set of “fuller scenarios” which it uses to fill in the holes in the story. For example, in the restaurant script, not telling the program you ate your food is irrelevant, because it sees you were in a restaurant and knows what takes place inside, according to its “fuller scripts”. To combine its previous knowledge and your story it could simply merge them together, creating one large series of events. After finding all of the fuller scripts that fit within the restaurant script, the program need only search for all permutations of a substring from your question, after which the program would respond accordingly.
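(A rough sketch of the merge-then-search mechanism described above; the script contents and matching rule are my own stand-ins, not Schank's actual system.)

```python
# Toy script-based question answering: merge the told story with a canned
# "fuller script", then answer by keyword overlap against the merged events.
# There is no understanding here, only storage and string matching.

RESTAURANT_SCRIPT = [
    "customer enters restaurant",
    "customer orders food",
    "customer eats food",
    "customer pays bill",
    "customer leaves restaurant",
]

def answer(story_events, question):
    merged = story_events + [e for e in RESTAURANT_SCRIPT if e not in story_events]
    words = set(question.lower().replace("?", "").split())
    best = max(merged, key=lambda event: len(words & set(event.split())))
    if len(words & set(best.split())) >= 2:    # crude relevance threshold
        return "Yes: " + best + "."
    return "I don't know."

story = ["customer enters restaurant", "customer orders food"]
print(answer(story, "Did the customer pay the bill?"))
# -> "Yes: customer pays bill."  (the script, not the story, supplied that)
```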

 

 

Why is/was this considered artificial intelligence, when all he's done is automate the process of copy-pasting stories together into Microsoft Word, then hitting Control-F to find a word, or set of words?

 

 

Alex Hyatt

 

To try to pin down what exactly understanding is, various artificial intelligence examples are given, as well as a famous example from John Searle. The example is known as the “Chinese Room” and creates a scenario in which a monolingual English speaker is placed in a room with papers filled with foreign Chinese symbols placed in front of him. The person then manipulates the symbols following English instructions. It is an example that follows the actions of a Turing machine. The point is that there is no real understanding of Chinese, yet the person appears to be able to converse in Chinese.

 

I think, however, that the person does have to have some level of understanding of at least where to put the symbols to create the conversation. Does there not have to be some understanding to read the English and use it to move around the symbols to make some sense? If not, what then is understanding? And also, does one need feelings or experience for understanding, even on the most basic level?

 

 

Andy Tucker

 

We start out saying that intelligence and understanding are exhibited in any entity that uses a Physical Symbol System, but come to refute it because it equates understanding and intelligence to too simple a method, one that could not possibly do everything of which our brains are capable (experiential learning). We then add some complexity until we have a model like the SOAR system, where we add a single working memory and a database memory from which the system pulls information. This is again refuted because it is not complex enough. In the human mind it seems that we have many of these processes occurring simultaneously and producing a multitude of unpredictable outcomes. To reproduce this functionality, something with many parallel systems communicating with one another would have to be created, and intelligible behavior would have to be “coaxed” out of the system.

 

How is this intelligent behavior when earlier it is said that you must have deliberate thought for intelligence? It seems more like random chance that you would get an intelligent reaction.

 

Jenessa Strickland

 

Dennett describes folk psychology as the “intentional stance” (47), which we take when we treat something (or someone) as having beliefs, intentions, etc.

A rational (or intelligent) system is, according to Dennett, a system whose behavior can be successfully explained or predicted in terms of having beliefs, goals, desires, etc.

 

Dennett, as always, is too behavioristic for my taste. Clark quotes him on p. 47 as saying that to be a believer “in the fullest sense” is to behave in ways that are explainable/predictable from the intentional stance, i.e. to display behavior that can be explained by positing the existence of beliefs. But this seems more like what it is to be a believer in the NARROWEST sense. As has been pointed out, Dennett’s test for believer-hood can be passed by a long list of things we would generally not want to call believers. It just seems obvious that there is something more to having beliefs than displaying a certain pattern of behavior. For example, when I wonder whether or not my dog has beliefs, I simply am not wondering if her behavior can be explained/predicted by the intentional stance. I am wondering about an element of her mental life, not her outward behavior. Isn’t there something to belief other than believer-like behavior?

 

 

Jenna Williams

 

As a criticism of Symbolic A.I., the problem of human experience is addressed in relation to stored knowledge. The philosopher Hubert Dreyfus is the primary reference; Dreyfus suggests that the use of strictly symbol-crunching technology cannot accurately simulate intelligence. However, Dreyfus feels that if a focus is placed on pattern-recognition software in order to imitate the process of human expertise, then a better A.I. could be developed.

 

What other ingredients are missing from the A.I. recipe? Could Artificial Intelligence help us learn more about our own mental capabilities, how so?

 

I think there is tons of stuff missing. As chapter 4 discusses, the main problem with the A.I. models we have looked at so far is that they don't give us a system that could deal with real biological application of cognition. This leads me to your second question. I don't think that the order of discovery will be us creating a successful A.I. and it teaching us about our mental capabilities; rather, I think in order to create a successful A.I. system we will have to have a much greater knowledge of our mental capabilities. The attempt at creating this A.I. will drive our exploration of our mental capabilities. Andy Tucker

 

 

 

David Murphy

 

Searle’s thought experiment of the Chinese Room was meant to show that the way computational machines are used to help produce information for us does not amount to understanding. These machines use a physical symbol system to do whatever task is assigned to them, and similarly, the man in the Chinese Room does the same. He has no idea what the Chinese symbols mean, or even that they are Chinese, yet he uses a guide, similar to the programs in our computers, to produce an intelligible reply. Searle says that even though the responses are correct, there is no semantic understanding.

 

How can we say there is no understanding? Sure, the system may not recognize it as Chinese and not understand it in the same way, but I think there is an understanding. The man may not get the symbolic meaning of what the symbols represent officially, but I think they would have some meaning to him since he has to recognize and then respond to them.

 

 

Matt Rockwell

 

In chapter 3, Andy Clark explores the method folk psychology uses in describing the behavior of adults: combining attitudes (“I don’t like to get wet”) with input propositions (“It is going to rain”) to output a determined behavior (“I am going inside”). Fodor, Churchland, and Dennett have provided the three different theories Clark discusses in this chapter. Fodor believes that folk psychology’s method for determining behavior is correct not just pragmatically, but that the physical brain contains a syntactic structure and content structure that match folk psychology. Churchland, on the other hand, doesn’t believe that the brain’s structure will match the Representational Theory of Mind (Fodor’s theory) and denounces folk psychology as incorrect. The final theory, Dennett's, is similar to Churchland's in that there will be no structure found in the brain that will match the R.T.M., but folk psychology is good at predicting behavior and hence at understanding how the brain functions.

 

 

 

If a person ingests a hallucinogen, the body releases certain hormones, a person has a healthy breakfast, or pain is occurring in a foot, the human will have different states of consciousness. Could a lack of a universally accepted explanation of the human experience of consciousness (which in my opinion should include the body as conscious) lead to this debate by compartmentalizing the mind's functioning and, in so doing, removing the body as an integral part of decision making?

 

Mike Sterrett

 

The ability of the mind to effectively cope with everyday situations is perhaps the most intriguing aspect of the notion that our minds are merely software running on the hardware of our brain-meat. While the fact that pattern recognition is the way that we expertly deal with things is not altogether shocking, it is not exactly widely known either. Thinking back on things in my life which I am now an expert in, the learning process always began with following certain rules in doing things until they became second nature to me. What I realize after this reading is that it became second nature because I switched from simply following certain prescribed rules and began recognizing patterns. The question remains, though: how does the mind go from following a set of rules to reacting to patterns?

 

I think that a combination of neurobiological development and genetic algorithms could account for the eventual development of a neural network which is complex enough to explain human cognition. This might be cited as an example of scattered causation: the extremely simple biological and algorithmic structures which are present at an organism's birth could combine with natural physiological growth and environmental input to build up a sufficiently complex connectionist network. This network could provide the underlying structure supporting forms of folk psychology and symbol systems discussed by Fodor. - Armand Duncan
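(A minimal sketch of the kind of combination gestured at above, under my own simplifying assumptions: a tiny fixed-architecture network whose weights are found by a toy genetic algorithm rather than by any biologically documented mechanism; the task, XOR, is chosen purely for illustration.)

```python
import math, random

# Toy genetic algorithm evolving the 9 weights of a 2-2-1 sigmoid network on XOR.
# Nothing biological is claimed; it just shows variation-plus-selection building
# up a connectionist solution with no explicit instructions for the task.

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    sig = lambda z: 1 / (1 + math.exp(-z))
    h1 = sig(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sig(w[3] * x[0] + w[4] * x[1] + w[5])
    return sig(w[6] * h1 + w[7] * h2 + w[8])

def error(w):
    return sum((forward(w, x) - y) ** 2 for x, y in CASES)

pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(60)]
for gen in range(300):
    pop.sort(key=error)
    parents = pop[:20]                                               # selection
    pop = parents + [
        [g + random.gauss(0, 0.3) for g in random.choice(parents)]   # mutation
        for _ in range(40)
    ]

best = min(pop, key=error)
print([round(forward(best, x)) for x, _ in CASES])   # hopefully [0, 1, 1, 0]
```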

 

 

Armand Duncan

 

I think it is highly unlikely that there is any completely homogenous system, whether functional or structural, which would be able to offer a complete explanation of human intelligence and cognitive functioning. As pointed out in this chapter, human intelligence and interaction with the world consists of any number of different and, it would seem, disparate properties. These include sensory experience, emotions, physical and mental coordination, language, memory, and the ability to quickly and creatively learn and adapt. The more functions which a theory of intelligence is required to explain, the more complicated and multifaceted the explanation will have to be. This makes it extremely unlikely that a single model of cognition could be generated which will explain all of the abilities of the human intellect. It also makes it extremely unlikely that a single medium could provide the structural arrangements necessary to support such complex cognitive functions. Multiple models of cognition, each interacting and overlapping with one another, may provide a more workable idea of human intelligence.

 

Zach

 

The question in this section is: what constitutes intelligence? Are we able to replicate our intelligence by using physical symbol systems? Physical symbol systems identify/pick out objects and produce responses based on our understanding of the object. We use our physical system to identify cognitive bands or production memory. Our cognitive band and production memory allow us to contemplate the appropriate decision based on our past experiences. Computers can manipulate their code similarly by using IF statements. These if statements create conditions for appropriate responses. A simple example would be a greeting to a computer: "hello computer". If the computer is given "hello", it responds with the same greeting. In this case a computer is able to compute appropriate responses based on symbol recognition. In the section "Everyday Coping" we see that it can be extremely difficult for computers to replicate our intelligence in everyday contexts. The depth of understanding of a computer is called into question. If a computer can recognize symbols, does it then understand what they mean with the same depth of understanding?
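(A literal rendering of the greeting example above, as hypothetical code: the response comes out "appropriate" without anything we would want to call depth of understanding.)

```python
# Symbol recognition by conditionals: the output is an appropriate response,
# but nothing in here "knows" what a greeting is.

def respond(message):
    if "hello" in message.lower():
        return "Hello!"
    elif "goodbye" in message.lower():
        return "Goodbye!"
    else:
        return "I do not understand."

print(respond("hello computer"))   # -> "Hello!"
```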

 

 

 

 

Jordan Kiper

I wish to press the issue of intelligence and engage those who have misgivings about intelligence as a physical-symbol system. According to Newell, Simon, and Clark, if a device is physical and contains both a set of interpretable and combinable symbols, as well as a set of processes that can operate on instructions, then that device is a physical-symbol system. Moreover, because such a device is computational, whereby its objective behavior underscores a function for which there is an effective procedure or algorithm for calculating a solution, that device is intelligent. Granted these premises, if an object is not a physical-symbol system, then it is not intelligent, since, empirically, a necessary and sufficient condition for intelligence is being a sufficiently computing physical-symbol system.

 

The strength of this argument, I think, resides in its definition of intelligence, namely, that utilizing an algorithm to calculate a solution is intelligent behavior. Yet two likely objections ensue. Firstly, intelligence is not just limited to the process of specific input yielding particular output, for processes in themselves do not understand their very own functioning (Chinese Room Argument). Secondly, we intuitively recognize that while organic beings exemplify intelligent behavior, an inorganic device, and its underlying processing, seems to lack anything resembling actual cogitating, feeling, aspiring, and so forth (Chinese Nation). But notice the ambiguity, if not impracticality, of both objections. Regarding the former, it begs the question of ‘understanding’ since the objection merely gainsays computation by assuming a commonplace definition of understanding--but what else could understanding mean besides the interpreting and combining of symbols to create solutions? If you say understanding is something more, such as (say) the ability to show insight or sympathy, then you ignore the fact that the ability to show insight relies on interpreting symbols that signify impending adversity; likewise sympathy is interpreting signs of another’s misfortune and outputting solace--or, for that matter, outputting smugness! In any case, each of our behaviors is apparently the output of an underlying physical-symbol system. Regarding the latter objection, indeed organisms seem different from inorganic things, like computers and their functions, but our intuition that a biological ‘inner-view’ is necessary for cognition is impractical; for consider what would be the case if we had to know that a device had an inner-view before we could attribute intelligence to it.

 

Since we cannot know with certainty the inner-view of any being, we would never attribute intelligence to any device operating on inner symbols, whether a computer, a favorite pet, or a local pizza deliverer. My question is therefore the following: what else could intelligence be besides a physical-symbol system?

 

With classical symbol-crunching A.I. it is assumed that study of the human mind can be done without describing and understanding all of our mind's functions on a neurological level. We know from studying the brain that there are multiple memory systems that work differently and are totally separate from each other. There are multiple algorithms to do the same thing, so is it possible that our brain has several algorithms supporting the same mental state in different ways? It almost seems naïve with this evidence to attempt to model psychological functions without further studying the neural implementations of these things. We can also challenge the idea of uniformity and simplicity put forward by Rosenbloom. It is becoming more accepted that our brain is more like a grab bag of knowledge. While it now seems nearly impossible to replicate the activities of the mind in a symbol-crunching system, it is not out of the question that someday there will be one that acts more like the human brain, especially as we continue to learn more about its neural implementation.

 

I think that the definition of intelligence is fundamentally flawed. A physical-symbol system is, essentially, a static function. Thus, saying F(x,y) = x + y, the function F(x,y) will always be x + y. This function is static, but by definition intelligent. It will follow its predefined rules, and cannot change itself.
The Chinese Room Argument is broken at its core due to this. Yes, it can read in a message and respond to it. But let's say the message is 'My name is Bob. What is my name?' The only intelligent response would be Bob. Since the interpreter is static, you would need an entry in the book for this message. To cover all of the possibilities like this you would need a book infinite in size. In turn you would never be able to respond, since you could be searching forever. Thus the Chinese room is not intelligent.
I think intelligence, more or less, is the ability to dynamically alter oneself. Like the understanding of context. Context is just a minute layer, but a good example nonetheless. Even if you just implement context, you are still looking at an infinite number of entries that require context when related to the Chinese Room. -- Matt Sidinger
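(A sketch of the static interpreter described above, with a made-up rule book: it handles exactly the messages someone anticipated and nothing else, which is the explosion problem being raised.)

```python
# The "book" is a fixed lookup table, and the interpreter cannot revise it.
# Any message whose answer depends on earlier context needs its own entry,
# so covering every possible conversation would need an infinite book.

BOOK = {
    "how are you?": "I am fine.",
    "what color is the sky?": "Blue.",
}

def chinese_room(message):
    return BOOK.get(message.lower(), "...")    # no entry means no answer

print(chinese_room("How are you?"))                       # -> "I am fine."
print(chinese_room("My name is Bob. What is my name?"))   # -> "..."
```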

 

William Moore

 

I like the idea of the human mind as a “bag of tricks” for a few reasons. First off, it seems feasible that we evolved in such a way as to have specialized “modules” in the brain that deal with specific functions. Secondly, it gets us away from the more controversial issue of whether or not machines can be conscious. Are there specific “tricks” (e.g. spatial reasoning) that humans have, which may correspond to some “module,” and which cannot be reproduced effectively in a machine (or in software, if you prefer), divorced from issues of consciousness?

 

 

Whit Musick

 

Now that we have concluded that thought uses mental representations, or symbols (see my comment 1) to operate and produce 'Y' following an 'X' input, the question of what these inner mental symbols contain arises. That is to say, what is the precise definition of _King Lear_? How does _King Lear_ come to exist in a human being? How does one go about determining the whole of information contained within and composing _King Lear_?

 

A reasonable place to begin is to hypothesize that the mind is not structured in a way to receive one kind of input, but rather tolerate a variety of input – in the same way a newly discovered organic alien pocket calculator can process 2+2=4 but also chooses strong moves in a chess game. But how is it that the content of this alien calculator came to be? Well, there are two main possibilities:

 

 

Content is itself fixed by the local properties of the system, that is to say, they are intrinsic.

or

 

Content varies depending on broader properties such as the history of the system and its relations within itself and with the external world.

 

 

 

JEFF PAUCK

In chapter 2, we are introduced to Newell and Simon's definition of a physical-symbol system. This definition says that all PSSs "have the necessary and sufficient means of general intelligent action." But this type of definition can have many weird results, including the Chinese Room example. Another unique outcome arrives when we consider the SOAR machine. SOAR attempts to replicate general intelligence by preserving a large amount of symbols, facts, knowledge, etc., and remembering their functions/uses. When SOAR arrives at a new or different situation, it examines every possible solution that it has stored and then chooses the most appropriate decision. By doing this SOAR can complete both short-term and long-term goals and can do so relatively well, depending on how much is stored in its memory. So, all in all, SOAR is a physical-symbol system that can work at or close to a level of deliberative thought. But the question is: does SOAR really understand what it is doing, or is it just a more advanced form of mimicry? Is this even how we process our own facts and memories (neuroscientists would say no)? Can the single type of long-term memory that SOAR possesses ever work at the same level of efficiency, and with the apparent randomness or grab-bagness, of the human mind? And although there are many shortcomings, what can we learn and use from SOAR?

 

Lex Pulos

 

Symbol system A.I. has been attempting to recreate “the pattern of relationships between all its partial representations” (pg. 41). Each complex artificial approach has fallen short in the attempt to accomplish this. While each PSS has its benefits, such as recalling data, like SOAR, or having a “bag of tricks” that the machine can draw different inferences from, one item that appears to have been overlooked is the idea of rationality. Each A.I. must be given different ways of computing information, just as each child is given different cultural ways of learning. Culturally we are able to process symbols differently, and often this goes against the hegemony of society; can this irrationality of thought then be seen as rational, and if so, can the irregular patterns of A.I. be seen as acting in accordance with different cultural interactions? I am not talking about the simple rationality of getting out of the rain when it is raining, because often humans choose to stay/play/walk in the rain.

 

Leilani Howard

 

Proponents of the Physical Symbol System Hypothesis consider devices capable of the interpretation and manipulation of symbols to meet both the necessary and sufficient conditions for intelligence. Current physical symbol systems boast stores of facts, possibilities for action, learning mechanisms, and the ability to make preference-based ‘decisions’. Those who push the PSS hypothesis stress that such a system, instantiated on a broad enough scale and endowed with specific operational time constraints, would exemplify deliberate thought.

 

 

 

The text describes the PSS decision-making procedure as based on ‘retrieved information concerning relative desirability’. However, it seems that many decisions that humans make are not only influenced by past experience but by our constant perception of linear time and anticipation of the future. Time perception seems to stem from sensory observation of motion and the incessant stream of chatter that fills the human mind. Have we tried to program a computer to perceive the passage of time, and if we create a computer capable of continuous deliberate thought would it be able to perceive time? It just seems that something is missing…

 

Haha: "Leilani and time do not have a respectable relationship."

 

I can relate to this feeling that there must be something to our phenomenological experience--that 'something behind the eyes'--without which sentience loses meaning. I have three points/questions to raise that hopefully might focus the inquiry:

 

1) What about amnesics and those who don't perceive motion (and those with other kinds of dementia and whose qualia likely resemble some degree of booming buzzing confusion), who don't have a clear and distinct sense of time passing? What are we comfortable in denying them, in terms of intelligence and consciousness?

 

2) The answer to your question seems to turn on what you mean by "perceive the passage of time"; would it count to have a computer, hooked up to a camera, that 'observes', 'notices', and 'records' occurrences of visual events based on image recognition and coding?

 

3) If not, what would it take to meaningfully attribute to a system that it is witness to the passage of time? How similar would it have to be to our own, and how could we even compare? (Food for thought: A fly's perception of time is probably quite different from our own, or at least it would be if they weren't acting purely on reflex.)

Jared Prunty

 

 

 

Jared Prunty

Mindware

On p. 41, Arbib says that “no single, central, logical representation of the world need link perception and action—the representation of the world is the pattern of relationships between all its partial representations.” This holistic view paints cognition as an emergent property, whether co-opted or accidental, that seems parasitic on more straightforward modules distributed throughout the neural network. While it resists the temptation to look for a location of consciousness or a simple answer to its origin, it seems nonetheless to stop short of agreeing with Searle’s contention: that conscious intelligence is more than a sum of its parts, regardless of the organization. Still, I feel that I need a more robust understanding of Searle’s point. Granted, understanding is not the same as symbol manipulation, but the latter may nonetheless be sufficient to bring about the former.

 

I’m beginning to wonder whether it may be the case that we find artificial intelligence so difficult to conceive largely because the vast majority of the labyrinthine and baroque connections and operations taking place in our brain are hidden from our highest level of consciousness—precisely the rational part asserting its own uniqueness and intractability. The myriad things continuously going on in our neural ‘basement’—in the amygdala, in our inference systems, etc.—influence what it feels like to be conscious. And it may be the case that whatever is going on underneath the tip of the iceberg—the ‘stuff’ that counts, for Searle—may in fact be very straightforwardly deterministic and unmysterious. And this leads me back to my suspicions about the Turing Test and Searle’s Chinese Room argument: they both rely on our subjective evaluations. Neither puts forth any formal criteria for its own validity; only that it persuade the arbiter. So all A.I. has to do is seem intelligent/conscious, and this may in the end be reducible to its behavior escaping immediate explanation in terms of the mechanisms that produced it. I doubt it would be that hard to satisfy the criteria in the friendly challenge to provide “agent-SOAR” on p. 38. Ironically, however, I suspect that “showing us how” these complex operations are carried out might be precisely what would lead us to declare it primitive and unintelligent. (We can’t show ourselves how we do those things!)

 

 

 

 

 

Shawn Brady

Physical Symbol Systems (PSS) are an example of a “sense-think-act cycle” (p. 88). That is, PSS function via sensing (input, a.k.a. perceiving), thinking (computing, usually according to algorithms), and acting (output, a.k.a. implementing). However, Churchland, Ramachandran, and Sejnowski offer another account of how we might go about the computation and implementation process – that of “interactive” perception and reason. According to proposition (4) of this account (p.88), our perception of or reasoning about worldly events could be less like the “passive data structure” of PSS “and more like a recipe for action” (p. 93). Essentially, the claim is that we don’t function on a sense-think-act cycle, but instead on a mere sense-act cycle. Maja Mataric’s robot is intended as an example of this “interactive vision paradigm”.

 

However, I can’t help but wonder if this notion, of our inputs being “recipes for action”, is somewhat misleading. It seems to me that Churchland et al. have just made the sensing step of the process more robust. For example, Mataric’s robot still seems to entail a thinking step. While it may register landmarks according to sensory input and current motion, creating a stored “map” of its surroundings, it still is required to retrieve this map in order to return to a previous location. (This seems analogous to a person who is in an unfamiliar city and recalls the visual image of a map of the city in order to determine where they are and which way they need to go.) Transforming the recipe into action still seems like a type of computation. Is the interactive perception and reason account taking something for granted? Is there more going on behind the scenes of Mataric’s robot than is being acknowledged by Churchland et al.?
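(A toy sketch of the worry above; the robot and its map structure are invented for illustration. Even a "sense-act" landmark follower seems to contain a step that looks a lot like retrieving and computing over a stored map.)

```python
# Hypothetical landmark-following robot: it records landmarks while it acts,
# and to "go home" it replays the stored map in reverse. Whether that replay
# counts as mere acting or as a thinking step is the question at issue.

class LandmarkBot:
    def __init__(self):
        self.map = []                          # stored (landmark, heading) pairs

    def sense_and_move(self, landmark, heading):
        self.map.append((landmark, heading))   # map built up in the course of acting

    def go_home(self):
        # retrieval plus transformation of the stored map: a computation?
        return [(lm, (heading + 180) % 360) for lm, heading in reversed(self.map)]

bot = LandmarkBot()
bot.sense_and_move("doorway", 0)
bot.sense_and_move("corridor corner", 90)
print(bot.go_home())   # [('corridor corner', 270), ('doorway', 180)]
```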

 

 

Andy Tucker

One common theme that keeps recurring in the discussion of finding a system which exemplifies human cognitive capabilities is the problem of the complexity that the system must have. In all of the models discussed so far, the increased complexity that has been attained has come at the expense of the deliberately systematic structure of cognitive function. Even with the importance of regaining this systematic structure in future models, there is also a great deal of work to be done on the complexity of the system. This complexity is found mostly in the problem of the biological reality of the current systems. The systems so far have generally been small networks dealing with solving a specifically defined and discrete problem. This is very uncharacteristic of the system of the human brain, which tackles ambiguous, complex, convoluted problems with an enormous network to work them through.

 

It is being said that the solution to this is to expand and tune our system to incorporate a wider range of features and dynamics as well as recognizing the role of external and nonbiological resources in the success of cognition. How will we go about making a system to exhibit real world biological cognition and what would such a system look like?

 

Can't we just say that the examples so far are uncharacteristic of the brain of a human because they are too limited? Programs which are programmed to do some specific task or another can be used to understand cognition if you think that such a program may only exist as a part of the human mind. The real mind may simply be a system of these "programs" working together, like a network of networks, to create what we know as cognition. - Mike Sterrett

 

 

David Murphy

Systematicity was an argument brought up by the text that counters the connectionist model. The text states this argument by saying, “Thought is systematic; So internal representations are structured; Connectionist models lack structured internal representations; So connectionist models are not good models of human thought.”

 

I did not quite understand this, so my question is how do connectionist models lack structured internal representations? Maybe I didn’t get the reading, but the connectionist models sounded very structured to me. I thought the connectionist model was about getting input, processing that input and then connecting it with other inputs that had been processed similarly. This seemed like a very structured way of connecting lots of small things into one big idea. If someone would respond to this to help me understand I would appreciate it.

 

I think that structured internal representations are supposed to be opposed to distributed representations (p. 66), which are employed by most connectionist models. A distributed representation makes use of superpositional storage, which I think can also be understood as the overlapping use of distributed resources (such as the weights used in the inter-unit connections of NETtalk). In contrast, structured internal representations are to be understood as something similar to what goes on in Physical Symbol Systems. Here, there is no "inner mush" caused by overlapping storage (p. 74). Instead, certain inputs lead to certain outputs through a certain type of computation. It is more direct. The claim that thought uses structured internal representations is meant to mean that thought does not entail the use of overlapping storage.
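(A small numerical illustration of what I take "superpositional storage" to mean; Hopfield-style outer-product storage is my own choice of example, not anything from the text. Two patterns share the very same weight matrix, instead of each sitting in its own slot as in a symbol system.)

```python
# Two patterns stored on top of each other in one weight matrix (Hebbian
# outer products, diagonal zeroed). Either pattern can still be read back,
# but there is no separate "place" where each one lives.

patterns = [[1, -1, 1, -1], [1, 1, -1, -1]]
n = len(patterns[0])

W = [[0.0] * n for _ in range(n)]
for p in patterns:
    for i in range(n):
        for j in range(n):
            if i != j:
                W[i][j] += p[i] * p[j]     # both patterns add into the same weights

def recall(cue):
    sign = lambda v: 1 if v >= 0 else -1
    return [sign(sum(W[i][j] * cue[j] for j in range(n))) for i in range(n)]

print(recall(patterns[0]) == patterns[0])   # True: pattern 1 recoverable
print(recall(patterns[1]) == patterns[1])   # True: pattern 2 recoverable
```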

 

 

Mark Gerken

The text mentions biological reality based problems. One of the claims is that the manner in which the network is set up to learn may lead to non realistic artifactual solutions. The proposed solution would be to use more biologically realistic input/output devices (cameras or a robot arm).

 

Why would this “biologically realistic” setup be more accurate? The learning process of the network can be independent from the implementation of its task. The text brings up an example of balancing building blocks on a beam that pivots on a movable fulcrum. Why is it not just as valid to have someone determine the validity of the system's output and then give feedback, as it is to let the system actually perform the action and discern its result? The act of placing the blocks on the beam is not so tightly tied to knowing how to place the blocks. It seems to me that keeping the two separate until you know they work independently is in everyone's best interest anyway. This way you clearly know which piece of the puzzle isn't working.

 

 

Jenessa Strickland

Connectionist models of cognition have been criticized for being biologically/ecologically unrealistic. In first-generation connectionist models, problems, inputs, and outputs were highly abstract and artificial (p. 79-80). Attention to biological realities seems critical for understanding cognition, considering that all of the genuine cases of cognition we know of are biological. What lessons can evolutionary theory and ecology teach us about our models? One of the oft-ignored (by at least old-school connectionist as well as PSS models) biological features of mind/cognition is neuroplasticity: a key feature of brains is that despite specialization, the roles of individual parts are not fixed; when one part is damaged, another part takes over that function. Similarly, in the case of humans at least, learning seems potentially infinite, or at least practically so. The brain seems able to adjust its “program” indefinitely, solving problems in new ways. What role do plasticity and learning play in biological cognition? Are they necessary for cognition? How is this plasticity accomplished? (It seems to happen "on its own", without directions from some executive set of instructions.) Can artificial models incorporate the high level of flexibility we see in the human brain?

 

- I was curious about the question of plasticity here. Does it only work in certain situations? Some of the neural disorders caused by brain damage that Jordan taught about in class seem to show that it does not work in all cases. I am just wondering if there is a limit, or if there are parts of the brain whose functions cannot be adopted by another part when they are damaged. If this is the case, I think it could be possible for artificial models to have similar or even better flexibility than the human brain in the future. I think it would be fun to talk about this in class. - David Murphy

 

Leilani Howard

Fodor and Pylyshyn reject connectionist models of human thought on the grounds that thought is systematic. According to them, thought stems from a structured base, and individual thoughts are various compositions of representations native to this base. Classical Artificial Intelligence, as a model of human thought, is compatible with their argument. Here, combinations of programmed stores of data comprise a system's responses.

 

However, once a system based on a connectionist model has run its learning algorithm, the resultant changes made to the system are difficult to trace back to their origins. In fact, to reconstruct possible motivations for such a system's unique behaviors, researchers employ methods strikingly similar to those used to study the human mind: they hinder the functioning of isolated areas to see how it affects the system as a whole (p. 68).

 

My question here is whether or not AI systems exhibit this property of rich complexity which makes it difficult to understand their internal operations. Though the commonality of this feature between the human mind and the connectionist model is not strong evidence that the connectionist model is a superior model of human thought, it is reasonable to suggest that a superior model of human thought would share this property with the human mind.

 

JEFF PAUCK

Chapter 4 of Mindware delves into the topic of computers and robots that attempt to replicate human deliberation and action through an approach called connectionism. Connectionist models use a complex setup of input strengths and weightings and then, if the result is strong enough to fire, an output. My main problem with connectionism is discussed in the “biological realities?” section. The objection is that these models are usually designed for very specific and singular tasks, and most, if not all, would fail to accomplish the integrated workings and processes of human beings. I agree with this statement, and my question would be: even if we were able to build a connectionist model that can deal with this plurality of processes, would it really resemble human deliberation? Could any connectionist model?

 

Alex Hyatt

 

In chapter four, two different approaches to artificial intelligence are put forth: connectionism and systematicity. Connectionism seems to work, as I understand it, by using a system in which a bunch of different groups overlap each other to try to target a certain response to a problem. The coding schemes seem to contain learning algorithms that help the program appear as if it were learning and solving problems.

 

My question is: a) am I close to understanding connectionism, and if not could someone explain it to me, and b) what exactly is the difference between connectionism and systematicity?

 

I believe that you are getting at the correct idea of connectionism. There are more layers to the theory, as the chapter indicates (waves 1, 2, and 3). There are more subtle notions to language than just the words, and this theory seeks to understand those through a learning process, which is complicated by temporal relations. Systematicity, on the other hand, holds that there is a structure that language follows, and this sets up our linear thinking process, while connectionism is more of a trial-and-error process. I hope this is close to correct and that it helps. - Lex

 

Ashley Murphy

Chapter 5, Genetic Algorithms

This idea was first introduced by John Holland (1975) as a computational version of biological evolution. He took "chromosomes" (bit strings) and subjected them to variation, selective retention, and reproduction. These bit strings encode possible solutions to a pre-set problem. The ones with the highest level of fitness then become the breeding stock for the next generation. This sets in motion crossover and random mutation, and the new generations are again tested for their levels of fitness. Through this process, the generations improve and end up solving the aforementioned pre-set problem. This technique has been used successfully many times for practical and theoretical purposes, including in robotic devices. How does this work if genetics are only found in living things? The bit strings act as artificial chromosomes, but how can one get genetic variation and higher fitness levels when robots cannot reproduce?

 

Not robots, programs. Take one example: thousands of little 'ants', each with a randomly assigned 'genotype', in a computer program, that have to make their way through a 'maze'. In each round, each ant makes a move, sits still, or runs into a wall. It's all haphazard and random, at first. This goes on for a set number of rounds: a generation. But the ones that produce results closest to the predetermined target output (i.e. getting farthest through the maze) get a higher score. Only the best, say, 10% of ant algorithms are allowed to reproduce and pass on the 'genes' that made them take that route. The rest of the duds are deleted and the next generation goes through the same thing. Hence, most of each surviving genetic profile is passed on with minor variations, some good, some bad. This happens over and over, thousands of times. From a broad view, you will inevitably see the average 'fitness' level (suitability to the task) of the survivors increase, because every generation 'tinkers' with the foundation while preserving only the best of the best. Eventually, one makes it all the way through, and then more do. (The rat simulation in the book is like this, only capable of being conditioned by landmarks. So the algorithms eventually evolve to respond to the environment, which is essentially like producing descendants that 'instinctually', say, turn left after passing two rights.) The larger point is that no one has to come up with the program that's best for the job, and the evolutionary 'winners' are likely to be programmed differently from one another, and with semantic opacity. It doesn't have to make sense to us, it just has to be efficient. ~Jared Prunty
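For anyone who wants to see the bare bones of what Jared is describing, here is a minimal Python sketch of a Holland-style genetic algorithm (a toy of my own, not the book's rat or ant simulations): the fitness function is just a stand-in (count the 1s in the bit string), but the loop of evaluation, selection, crossover, and mutation has the same shape.

```python
import random

# Toy Holland-style genetic algorithm. The task is a stand-in
# (score = number of 1s in the bit string); the point is the loop of
# evaluation, selection of the best, crossover, and random mutation.

def fitness(bits):
    return sum(bits)

def crossover(a, b):
    cut = random.randrange(1, len(a))      # splice two parents together
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.01):
    return [1 - b if random.random() < rate else b for b in bits]

def evolve(pop_size=100, length=32, generations=50, elite=0.1):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        breeders = pop[:int(pop_size * elite)]          # only the top scorers reproduce
        pop = [mutate(crossover(random.choice(breeders), random.choice(breeders)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "out of 32")
```

No one tells the program which bit string to build; the selection pressure does the work, which is Jared's larger point about semantic opacity.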

 

 

 

 

Kate Connelly

One of the most thought-provoking things about connectionism is the idea that connectionist processes can discover solutions to problems we did not think possible. I understand that humans have a tendency to act in accordance with the correlation between their beliefs and the propositions they encounter. I also believe that in some ways humans have patterns of acting encoded by their culture and society, which gives them a systematic way of working. These patterns make processing data more efficient because once we are used to understanding things in a particular way, we process them much faster. This, to me, is why connectionism is biologically unrealistic: it would be inefficient. However, despite the fact that connectionist processes are biologically unrealistic, I wonder how such processes could affect the way we learn things, eventually increasing our ability to solve problems. I am reminded of the Einstein quote "No problem can be solved from the same level of consciousness that created it" and wonder if connectionist methods could affect our level of consciousness.

 

Mike Sterrett

The whole concept of a system which uses patterns and even patterns of patterns to do work and produce output is great, because it seems to be getting closer to what it is that might make our own mind tick. The artificial neural network NETtalk does just this. By programming in algorithms, the designers of the program saved themselves the work of actually programming in the program's response for every single particular case. Instead they programmed an algorithm which causes the machine to actually learn from experience.

If this is to represent the way that a person's mind works, then how is it that we acquire the learning algorithm within our own minds? Is it “programmed” into us while we are babies? Is it biologically inherent from birth? Will we ever find out the truth?
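As a rough illustration of what "programming a learning algorithm instead of the answers" means, here is a much-simplified Python sketch. NETtalk itself was a multi-layer network trained with backpropagation; this single-layer delta rule on made-up data is only meant to show a weight-adjustment rule at work.

```python
import numpy as np

# Simplified sketch of learning by weight adjustment. The inputs and the
# target task here are invented; the point is that the designer writes
# the update rule, not the response for every particular case.

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 8)).astype(float)   # made-up binary inputs
targets = (X[:, 0] + X[:, 3] > 1).astype(float)       # made-up task to learn

w = np.zeros(8)
lr = 0.1
for epoch in range(20):
    for x, t in zip(X, targets):
        y = 1.0 if w @ x > 0.5 else 0.0    # unit fires if the weighted sum passes threshold
        w += lr * (t - y) * x              # nudge weights toward the correct response

print("learned weights:", np.round(w, 2))
```

Where the weights end up is discovered by the training loop rather than written in by hand, which is why the question of where our own learning rule comes from is such a good one.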

 

Matt Rockwell

 

 

Connectionism, explained in Chapter 4, gives an alternative to physical symbol systems that seems to overcome the problem of biological plausibility, of matching what neuroscience actually finds. It explains the ability to co-opt different areas of the brain that use seemingly similar types of data, without needing individual physical symbols to represent each input. Using the function of a whole network to represent a single input (distributed superpositional coding) not only matches the form of the brain, but also makes it possible to have complex and intrinsically rich sets of data without the need for individual symbols.

 

 

 

I wonder whether analyzing the brain's functioning as “numerical spaghetti” misses the complexity of the brain (chemistry, synapses, networking, etc.) by trying to explain encoding as universal throughout the brain.

 

 

Zach Farrell Chapter 4: Connectionism

 

 

 

Connectionism is based on the idea of neural networks and parallel distributed processing. This argues for materialism, because we are able to produce results through nothing more than our brain's physical machinery. The idea is that neurons are connected through networks of axons and synapses which weigh neuron responses when interpreting inputs. When a threshold is reached in a neuron, it fires, sending a signal onward through the parallel network via synapses and axons. NETtalk is an example of a connectionist system. Through a learning algorithm it was able to learn to produce appropriate text-to-speech output. This example also highlights how complex neural networks can become: the 29-unit input groups of NETtalk gave rise to a neural network of over 18 thousand weighted connections. This shows how limited inputs with a simple learning algorithm can lead to complex, wide-ranging systems that produce an appropriate external response. One problem with this type of input-response system based on weightings is the difficulty of interpreting the weights as a form of knowledge or understanding; another is the limits of language for describing each system's function. A further approach to interpretation comes from cluster analysis, the study of which units a neural network utilizes and how their activations group together.
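Here is a tiny Python sketch of the forward pass being described (hypothetical layer sizes, not NETtalk's actual architecture, which as I recall used 7 groups of 29 input units, 80 hidden units, and 26 output units, roughly where the 18,000 weights come from): each unit takes a weighted sum of its inputs and "fires" when a threshold is reached, and even a modest layered network already contains thousands of weighted connections.

```python
import numpy as np

# Hypothetical layer sizes, chosen only to echo the point about how
# quickly weighted connections multiply; not NETtalk's real architecture.

rng = np.random.default_rng(1)
sizes = [29, 80, 26]                                   # input, hidden, output units
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes, sizes[1:])]

def fire(v, threshold=0.0):
    return (v > threshold).astype(float)               # a unit fires past its threshold

def forward(x):
    for W in weights:
        x = fire(W @ x)                                # weighted sum, then threshold
    return x

print("weighted connections:", sum(W.size for W in weights))   # 29*80 + 80*26 = 4400
x = rng.integers(0, 2, size=29).astype(float)
print("output pattern:", forward(x))
```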

 

From this section it seems we are able to predict all human behavior through neural networks, or through study of the chain of neural activation, such as cluster analysis. How do neural networks in some people come up with responses which differ from those of other people, if the body of each person is essentially the same?

 

 

Armand Duncan

 

I think that the connectionist picture of cognition has certain decisive advantages over the folk-psychological picture.

 

On the purely empirical level, it seems to fit more precisely with our knowledge of how the human brain is actually structured, that is, as it is built upon networks of neurons. Additionally, it allows for much more complex explanations of much more complex forms of human behavior than folk psychology can reasonably account for, and it seems to me that connectionist accounts would also be capable of absorbing additional information provided by physiological science.

I also think that the objection to connectionism from the systematicity of human thinking is somewhat erroneous, as human thinking seems to me to be much closer to the heuristic process of learning and adapting which connectionism has shown itself capable of modeling. Overall, connectionism appears to explain the subtlety and complexity of human cognition better than folk psychology does. Would it be possible, however, for connectionism to provide the structural foundations upon which a symbolic system of cognition could be built?

 

Jared Prunty

 

Does systematicity assume propositionality of thought? How much of our thinking is truly propositional in the first place? For example, most of the things I believe are provisional in nature. There is not a correspondent truth-table whereby each node of content is either the case or not. So how is this reflected in the neural structure? Via gradational activation of pathways, I would assume. Which, again, tempts me to compare thought to emotion--less robotic and more nuanced.

 

Lex Pulos

Connectionism forces information into a small space by exploiting a highly structured syntactic vehicle. This allows for generalization, as an old pattern of response will be placed alongside a new pattern. When this occurs an overlap is created, and thus learning can occur. As the waves of this theory continue, more temporal elements are added until we finally reach dynamic connectionism. While this theory has shown some improvement over the earlier PSS approach, I question its ability here. Since processes are overlapped, it would appear that in different contexts a completely wrong association may be used. The simple action of looking into someone’s eyes has different meanings depending on the persons involved, the environment, the culture, and so on. It would then appear that even when we have learned the proper encoding of an eye look, there are other influences that would change that one instance, such as a guilty feeling. Is it then even possible to learn the “sensible” output?

 

 

While many physical actions can have multiple meanings based on the emotions of the parties involved, those meanings may not be necessary. After all, two real people can have a misunderstanding at this level as well. If I stare into someone's eyes with feelings of love, who's to say they are not looking back into mine with disgust? What I mean to say is that the meaning associated with an action depends on the two people involved and how they see the situation individually; it's completely subjective. Because of this, why does it matter if the connectionist approach has the wrong reason, so long as it reaches the correct state? - Mark Gerken

 

 

 

Week 5

 


Mark Gerken

 

According to the text, the "sense-think-act" cycle is not an accurate representation of how the brain works. Shortly after this, a "bag of tricks" explanation is given and held to be a possible solution. My question is, when they mention the "sense-think-act" cycle are they referring only to the specific implementations in robotics, or to the more general, literal interpretation? I fail to see how a "bag of tricks" doesn't still operate under the "sense-think-act" paradigm. You need some input to do anything, so you need to "sense". After taking in the input you have to decide what to do with it, most likely by finding the most suitable trick in your bag; this requires some computation and thus you "think". After this happens the system will make some movement, give a response, or just "act". The current implementation of what has been dubbed "sense-think-act" may be flawed, but that doesn't mean the underlying idea is incorrect.

 

 

The impression I got about the “sense-think-act” idea and the “bag of tricks” idea was that the “sense-think-act” is more restrictive in a way. The “sense-think-act” system seems to mean that whatever input you get is converted into a single stream of information that your brain can understand. The process of thinking about this stream leads to a single action based on all the little nuances that are contained in the stream of information. In other words, for any input that someone receives, there can be only one action that the person does for that situation. With the “bag of tricks” idea, there is more freedom in the possible actions. This idea is that we get input and convert it into information to think about, just like the “sense-think-act” idea, but the way the input is converted to information is broader. Instead of a single coding that leads to the best result for every situation that is exactly the same, there are a variety of ways that the input could be processed, all of which can lead the creature to a positive action, even if it’s not the best. Basically, there is an allowance for multiple types of reaction to the same input based on looser guidelines to be fulfilled. I am not sure if I am right, but this is how I understood the “bag of tricks” idea. -David Murphy
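Here is a rough sketch of the contrast as I read it (purely illustrative, nothing from the text): the same percept handled by one central sense-think-act pipeline versus a bag of cheap special-purpose routines where the first trick that applies wins.

```python
# Purely illustrative contrast, not from the text: one deliberative
# pipeline vs. a bag of cheap special-purpose tricks.

def sense_think_act(percept):
    # one unified stream of input, one central deliberation, one "best" action
    if percept["obstacle_ahead"] and percept["food_left"]:
        return "turn_left"
    if percept["obstacle_ahead"]:
        return "turn_right"
    return "go_forward"

def avoid_obstacle(p):
    return "turn_right" if p["obstacle_ahead"] else None

def seek_food(p):
    return "turn_left" if p["food_left"] else None

def wander(p):
    return "go_forward"

TRICKS = [avoid_obstacle, seek_food, wander]   # no central planner

def bag_of_tricks(percept):
    for trick in TRICKS:                # the first trick that applies wins;
        action = trick(percept)         # "good enough", not guaranteed best
        if action is not None:
            return action

p = {"obstacle_ahead": True, "food_left": True}
print(sense_think_act(p), "vs", bag_of_tricks(p))   # the two can disagree
```

On this toy reading, the pipeline funnels everything through one decision, while the bag of tricks tolerates several acceptable responses to the same input.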

 

I completely agree with what you are saying. I believe that the sense-think-act outline is the base on which the brain's function is built. When an action is taken by an individual, it is easy to pinpoint the basic sense-think-act outline from which the act comes. This is why we see the concept pop up in so many people's interpretations of how things work. The reason I think people are so quick to dismiss the concept is that it seems too simple. It feels like there is something else that is happening when we act. I think that if we take the basic foundation of the sense-think-act concept and stack some other functions on top of it, then we will be much closer to finding an accurate representation of how things work. We just can't be too swift to dismiss the simple models. Andy Tucker

 

David Murphy

I like the idea at the end of the chapter where it brings up the possibility that programs like the virtual ecosystem Tierra have life. I thought the argument that the digital organisms could be alive because they exhibit the capacity to do the sorts of things that simple organisms actually do is actually pretty good. These organisms evolve and reproduce in order to best maintain their position in the computer. There have been certain varieties of these digital organisms, and if they are a good variety, they "survive" and pass on their coding, while the bad ones are taken out of the competition. This seems to be very similar to how life probably started and diversified on this planet. I have just never thought about simulations of this type being "alive." We have always seen life as being purely biological, but why can't it be artificially created like this? The idea of gene therapy that we see in sci-fi media is getting closer to being possible, and this is just a form of programming, but in living beings rather than computers. At the beginning of the course I came to believe that it would be possible to create a cognitive being in a computer, and if a mind can be created in a computer, why can't life? My question is simply whether it is possible to have life in a computer. I didn't get a very specific definition of life from the reading, and I was wondering if anyone knew of a definition of life that could either support or refute the idea that these programs could be considered alive?

 

 

 

Andy Tucker

 

Clark presents the question of whether life could “be actually instantiated in artificial media such as robots or computer-based ecosystems.” He uses the example of the virtual ecosystem called Tierra, in which digital organisms compete for CPU time. In this system the organisms interact in the typical ways in which any ecosystem interacts. The organisms compete, change, and evolve with one another. Some of the organisms even became parasites, and their hosts came to exploit the parasites as parasites themselves. Systems such as these exhibit characteristics of what we attribute to life, including evolution, self-replication, and flocking. This could be a drastic blow to our view of what it means to be alive if systems such as these could be considered living.
For something to be considered alive is it necessary for it to be organic or is it only necessary for it to encompass the behaviors normally associated with life?

 

The question seems to be "for a thing to be considered alive, does it have to be alive?" This is a strange question because it seems that there is no real necessity for a thing to be "considered alive" if it is in fact not. If the question is "should these virtual beings have the same rights as living beings?" the answer is no, and I prescribe killing them immediately.

-Will Moore

 

It seems to me that the real question is: what counts as life? What the examples of artificial life demonstrate, in my opinion, is that we are pushing the limits of our concept of life, and there seem to be no non-arbitrary criteria for categorizing some system (for lack of better word) as alive or not. At most, I think we can have a working, pragmatic definition. For example, we might use one definition of "life" for making ethical assessments and another (but preferably not inconsistent) definition for other kinds of questions. --Jenessa Strickland

 

 

Zach Farrell

Chapter 5: Perception, Action, and the Brain

This chapter begins by examining programming which is designed to resemble neural organization and structure. David Marr suggests that we may analyze and understand information processing in a way that is completely independent of the particular mechanisms. One AI programming approach is “sense-think-act”. Here a simple machine resembles the human brain by using sensors to identify different inputs: when it encounters something relevant to a particular purpose, e.g. picking up an aluminum can, it associates the object with a code and carries out the corresponding mechanical movements. This seems like a very logical approach to programming. One major problem that comes from this is single-object orientation and the ability to account for foreign objects. Biological evolution has led to adaptiveness and the ability to handle new situations without having a preconceived mental state (or associated program); this could be why we are able to handle new situations with simplicity. Although complex sets and subsets of information can be assembled to create an AI response, our neural mental states are affected by our conception of value and our basic human instincts for things such as survival or what is best for ourselves, so a similar organization of sensor inputs may create different processing results and thus different mechanical results.

 

 

If a machine does not have sensors to detect water, such as sonar, would it be able to determine in some other way that water would make it malfunction, or would it take direct interaction?

 

 

Armand Duncan

 

I think that emergence offers a viable model of cognitive behavior. Clark describes several different kinds of emergent behavior, including collective self-organization, unprogrammed functionality, and interactive complexity. Each of these properties of emergent systems could be used to describe different aspects of cognitive life. Interactive complexity seems like it could provide a model of how an organism receives input from its environment, which in turn stimulates internal processes. These processes then provide feedback which stimulates further interaction with the environment.

The processes which are stimulated by the interactive complexity could possibly be described using a combination of both unprogrammed functionality and collective self-organization. The causal relationship between these aspects of an emergent cognitive system is an admittedly difficult problem to solve, but it seems as if a connectionist network could be used to describe such a relationship. I also think that an emergent interpretation of a cognitive system would speak against a folk psychological interpretation of cognition. Does emerging neuroscience reinforce this holistic picture of mental life? And if so, could the complexities of such a picture be worked and simulated in order to provide the predictive power necessary for the development of artificial intelligence?

 

Matt Rockwell

 

 

The “bag of tricks” seems pretty hard to argue against insofar as the current understanding of the form and function of the brain in neuroscience goes. The problem Clark describes, the integration of the different parts, is answered in three different ways. The first answer, signal processing, comes in two forms: the first says the physical form functions by using a general all-purpose code, and the second processes the signals by inhibiting or encouraging the use of other brain structures. The second answer, global dissipative effects, appeals to the convergence of structures and the overall control or integration of the different “tricks” by the chemical release of neuromodulators that dissipate throughout the brain until the normal state is regained. The third, external influence, places the balance and control of the different “tricks” on the external world, which imposes the necessity of using a specific “trick” (e.g., you are sitting next to the train tracks when a train blows its whistle; the decibel level is what forces the use of the ears, which ultimately determines the action of covering and protecting them).

 

 

 

All of these possible solutions to the integration of the different “tricks” seem very plausible from a logical standpoint and could be instantiated on a computer (maybe the chemical global dissipative effect could be translated into heat, by analogy with a PC). However, I think that trying to explain the brain in terms of a computer without fully defining and understanding the phenomenon of consciousness and its biology is putting the cart before the horse. How can we try to explain what causes a certain phenomenon of consciousness (i.e. wanting, sensing, thinking) if we don’t even understand the phenomenon of consciousness? It is like trying to talk about how a computer works without a GUI, and hence with no way of really determining what is actually being done.

 

 

Alex Hyatt

 

In chapter 5, I focused on tinkering versus engineering. Tinkering is to take something that already exists and to improve on it, while engineering a solution to a problem would be to build it completely from scratch. As a result, the engineered solutions look very efficient and more practical. The tinkering is seen in the human body, such as the lung, which according to Andy Clark, “is built via the process of tinkering with the swim bladder of the fish.” He goes on to say that if an engineer were to build the lung from scratch it would probably be a better design.

 

After analyzing this, I cannot help but think that we could have the ability to build robots that are actually better designed than the human person. But at the same time do we want to? Or is it more like one of those things where we do it because we can?

 

JEFF PAUCK

In chapter 5 of Mindware, Clark discusses the development of how we have come to study and understand the mind: at first giving no consideration to the biological mind at all, mostly due to Marr’s three levels, and now looking at the brain from a biologically evolutionary perspective. Progressing to understanding the brain without evolutionary blinders has helped us to understand functions of the brain and neurons, such as the monkey’s fingers example, and has also helped us to see perception as essentially linked with action. My question concerns the genetic algorithms for robotic “evolution.” If given the evolutionary history as well as an external pressure that may encourage evolution, could a genetic algorithm predict the next evolution of a species with any favorable probability?

 

Shawn Brady

 

On page 118, Clark mentions Godfrey-Smith and his thesis of strong continuity - that "mind is literally life-like". In other words, mind and life itself have similar organizational and functional features. Furthermore, a genuine understanding of the organizational principles upon which the latter is structured will lead to a genuine understanding of structural principles of the former. So, for example, the mind would be life-like if we were to discover that the "basic concepts needed to understand the organization of life turned out to be self-organization, collective dynamics, circular causal processes, autopoiesis, etc., and if those very same concepts and constructs turned out to be central to a proper scientific understanding of mind."

 

I am a bit confused as to what is meant by "life itself". What is it that we are considering when we consider "life itself"? The ways in which the world is organized? The principles of organization of particles? The principles that give rise to possibilities of particles forming life? Is a rock considered life itself? Can our minds actually be compared to rocks?

 

 

I don't think that it is that kind of organization that Clark is talking about in his writing. I think that he is talking more about the types of life-like properties exhibited in the computer-based ecosystem, the kind of evolving and self-sustaining properties that we normally attribute to life. I don't think that he would say that a rock could be considered life itself or that our minds could be compared to rocks. It doesn't seem like rocks actually exhibit any of the processes or systemic organization that Clark would attribute to life itself. Andy Tucker

 

 

Jared Prunty

 

Understanding the human mind requires looking through the lens of environmentally situated and evolutionarily derived capacities. Because human intelligence came about as a gradual solution to the greater problem of perpetuating a form of life, it is thus helpful to conceive its features in terms of the selection pressures that may have formed them. Some examples from cognitive and evolutionary psych. include emphasis on the functions that made our ancestors competitive and well-adapted: hunting and gathering favored the ability to obtain, retain, share, and elaborate on information about the environment. But what probably set us apart the most (and drove our accelerated cognitive divergence from other animals) was our exploitation of cooperation: hypertrophied social intelligence, understanding others' intentions, a taste for gossip, adaptations for social exchange, evaluation of trust, and coalitional dynamics. These are also clever strategies for maximizing benefits from iterated Prisoner's Dilemma-type interactions, and our capacity for language enhances the effectiveness of each. However, as with the crickets, functioning in these ways does not presuppose rich inner models or conscious awareness. A great deal of the inferences we are wired to make in interacting with one another and the environment occur below that level. Our capacity for decoupled representations (i.e. processing information beyond what is currently the immediate object of sensation) also enhanced the other abilities, and consciousness might be an emergent epiphenomenon parasitic on fulfilling such functions. Dennett elsewhere has offered an interpretation of supernatural thought along these lines: possessing hyperactive agent detection apparatus was likely beneficial from the beginning in avoiding predators and noticing prey, along with the innate tendency to take an intentional stance towards phenomena that did not lend themselves to intuitive physicalistic explanation. Therefore, if we really want to understand our minds, we need to frame our questions in terms of what their precursors' functions may have originally evolved as a response to, and how their modules may have been co-opted from more primitive ones. This will also require that we be prepared to acknowledge that vestigial aspects are likely retained over the millennia of tinkering with the process. (What patterns of acting and behaving might be analogous to the appendix: defunct and maladaptive?)

 

 

 

 

Jenessa Strickland

(Sorry this is a little long and perhaps not well-articulated.)

Continuing the theme from my comment last week, it seems that the closer our models mimic nature, the better they are. PSS models seemed too simplistic and too structured. Chapter 4 shows that connectionist models and neural networks improved on PSS’s by allowing for nonlinear functioning and showing how extremely simple components, following extremely simple rules, can yield surprisingly complex behavior. Connectionist models were also an improvement because they took context into account: biological cognitive systems are situated in an environment and faced with evolutionary pressures. Genetic algorithms (discussed in class and on pgs 97-100) take the next step by actually mimicking the processes by which biological systems solve problems. These genetic algorithms build as little as possible into the system from the beginning, basically just giving it a goal/task but allowing it to generate its own potential solutions. The system then tests its solution-programs, selects those that are most successful, and generates a new “generation” of solution-programs based on these successful ones. These systems “learn” their own efficient solutions, but(!) it is often nothing like what we would predict. In nature, cognitive systems evolve. They don’t engineer solutions to problems from scratch; they build on their available resources, including their genetics and their environments. Evolution is a generally unpredictable process, so we should expect our models to reflect that.

 

 

 

My question is this. The course taken by evolution is always unique. As in these genetic algorithms, a genetic population starts out at random (more or less); the starting points are different, and the solutions to problems are different. As a result, evolution (at the level of populations, at least) never takes the same path twice. What implications does this have for cognition, in general, and for our models of it? If we want to model human cognition, building a system that is similar to us at the beginning and allowing it to “evolve” may not generate anything like human cognition. Across the phylogenetic scale, species that are closely related genetically and that have evolved under similar survival pressures may have a lot in common. But two species, even if they begin as one species, will diverge radically over time if their environments and survival pressures are different. So how much do we need to build into our model from the beginning, before we let it “evolve,” in order for it to result in anything like human cognition? Do our models in effect evolve their own kind of cognition? What criteria can we use to compare evolved robot cognition to human cognition? If we want to build a model of human cognition, is there a point at which we just have to create an exact replica, including replicated neurotransmitters and synapses, replicated sensory experiences and interactions with the environment? If that is the case, what would we even learn from building such a model (other than that we are very capable of replicating minute details)?

 

This chapter discusses the interaction of an artificial entity and its environment, most notably interaction by internal signaling, global effects, and external influences (p. 100). If an artificial entity is going to interact with its environment, then it would have to understand that there is an actual environment to interact with. I vaguely remember the theory of an evil demon: in an attempt to understand what is out there, a philosopher entertains the idea that an evil demon has created everything he knows as a lie, and all he can be sure of is that he can think (I hope I am getting this correct). If an artificial entity is created, the demon would become reality, as its world has been created and it would have no thought outside of that demon's. How is it then possible to create an entity that knows its world is comprised of a lie?

 

- Lex Pulos

 

 

 

 

Matt Sidinger

 

My question is a combination of what we have been learning in class and beating a dead horse. We’ve been learning about neurons and how each one activates and triggers another (in very simple terms). This is all done in parallel. Somehow this manages not only to allow us to function, learn, and evolve, but also to give us self-consciousness. Now I’m a little confused as to how we discounted the Chinese Nation as not being able to form a conscious entity. Isn’t it along the same lines as how the brain actually works?

 

 

 

 

Mike Sterrett

 

I believe that the cycle of sense, think, act is not quite right as presented here. While the mind does seem to do each of these actions, it is not always in this cyclical order. It seems more correct to me to talk about the mind doing each simultaneously. A person's mind is constantly doing these actions. At the very least it is always sensing (as long as it is conscious) and it tends to be thinking quite a bit as well. The results / data from each of these parts is always influencing the other parts and their actions. For example, the sensations that a mind gets while taking action influence the way that the mind is thinking. I believe that the relationship is quite dynamic, as opposed to cyclical.

 

 

 

 

 

Week 6

 

Zach Farrell Chapter 6: Robots and Artificial life

 

Chapter 6 begins with an examination of phonotaxis by a cricket. Webb argues that we are able to mimic the behavior of a cricket (recognizing the sound, localizing the source, and proceeding to find the source) in a robot. Through a series of inputs which take into account how quickly the sound reaches each sensor on the robot, along with measurements of the amplitude of the sound, the robot is able to identify where the sound is coming from and how close it is to the source. This example shows how external inputs do not need to be interpreted by an “inner state”; instead they only need to be connected to various motor outputs. I think this is much like touching your hand to a stove: without really thinking “Wow, I wonder if this stove is burning my hand,” we immediately react, pulling our hand away from the hot stovetop. Craig Reynolds has done considerable work on the migratory flocking of birds. With simple goals, such as maintaining constant velocity and staying with the group, the computer-simulated birds followed the patterns of wild birds. The idea that multiple systems constrained by simple rules are able to create lifelike simulations is astonishing. We see this idea again in the termite nest. Termites lack many identification systems such as sight, yet they are able to construct elaborate nests which can support populations in the millions. The “air conditioning” process of the nest, which lets them farm ample food, is a prime example of complex structures built with smaller neural networks and a lesser ability to interpret physical systems. Scientists were able to replicate termite behavior (to an extent) in a simulation of stacking woodchips. All of these simple rules can run into problems when faced with the unexpected in the external environment. The ability of a termite to determine appropriate behavior when something unexpected occurs, and to return to appropriate behavior, is thus a major problem when designing robots to mimic life.
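To make the phonotaxis point concrete, here is a minimal Python sketch (toy numbers, not Webb's actual robot controller): the difference between the two ear-sensors is wired more or less straight to the motors, so localizing the sound needs no inner model of the song or its source.

```python
# Toy version of the phonotaxis story (not Webb's actual controller):
# whichever side hears the call more strongly drives the turn directly.

def step_toward_sound(left_level, right_level, dead_zone=0.05):
    diff = left_level - right_level
    if abs(diff) < dead_zone:
        return "forward"                # roughly equal: the source is dead ahead
    return "turn_left" if diff > 0 else "turn_right"

# Simulated sensor readings as the robot closes in on a source off to its right
readings = [(0.20, 0.35), (0.25, 0.33), (0.30, 0.31), (0.42, 0.40)]
for left, right in readings:
    print(step_toward_sound(left, right))
```

The same flavor of simple local rule (stay close to your neighbors, match their heading, don't collide) is what drives Reynolds' flocking simulations mentioned above.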

 

Can we create a robot which is adaptive to new situations? Is there a way of programming a robot to automatically move in an ideal form without a long evolutionary process?

 

I think that it is possible to program a robot which adapts to new situations. One of the ideas in this book is that our cognitive abilities are really only able to be attributed to the physical workings of our brains. If this is the case, then these processes should be able to be replicated in a robot using similar physical tools. I just think we do not have the technology, or maybe even a level of understanding, that allows us to do this yet. As for the second part of the question, I think that there is a way to program a robot to automatically move in an ideal form without a long evolutionary process. According to the book, it actually seems like this is the way we are currently programming, which runs into problems with cognition because these programs cannot develop without our help. For instance, the book talks about the brute force used in chess-playing computers. These computers make their moves based on the brute force of having stored information allowing them to see the consequences of every move. This means they just take the best path in every situation without really knowing why. So, I think the problem is mostly with the fact that these computers have to have a lot of code in order to do this, whereas our mind seems to build on simple processes very quickly to create an answer in a different manner. - David Murphy

 

Why is it necessary to avoid having a long evolutionary process? If by long you are referring to the number of steps taken, I don't see the relevance. Two organisms can solve the same problem in different ways with a different number of steps; who's to say which organism's answer was more right? If, instead, you are talking about the time needed to do the necessary computations, then this is also somewhat irrelevant. The computational speed of any program is related not only to the manner in which it was written (e.g. multi-threading), but also to the hardware it's implemented on. The speed of computer hardware is constantly increasing at a very fast rate; see Moore's Law (http://en.wikipedia.org/wiki/Moore%27s_law). In either case the idea of a long evolutionary process seems to be a mere annoyance and not a true problem. -Mark Gerken

 

 

I agree with Mark on this, and I think that it can be stretched to fit the quest to discover a working theory of mind. It seems like in the quest to discover a working model of how cognition or behavior or life works, we are trying to find an all-inclusive model as quickly as possible. This is a dangerous way to go about trying to make a working model of anything. Think about how long it took for someone to work out a way to control light without having a fire, or how long it took to discover that the earth was in fact not the center of the universe. All of these things take huge amounts of time with an immense evolutionary path. By trying to jump to a conclusion quickly you risk skipping necessary steps in the evolution of the theory. It is important to be patient and not over-eager in discovering how a process works. Andy Tucker

 

 

 

Andy Tucker

In chapter seven they propose that a dynamic system is best fit to describe the complex, real-world, real-time way in which human cognition actually works. They say that in order to better understand cognition we need to look at physical law instead of software law. By looking at the physical law of our actions we can discover the processes that govern our cognitive actions as they interact with the world and environment. This provides the best model because it allows us to look at what processes give us our real-time, real-world cognitive actions. This seems to present a good model for discovering how physical actions work and how we can explain reflexes, but it seems to fall short when it comes to our higher cognitive abilities, which seem unexplainable based on these purely physical actions.
How can you go about developing a purely physical dynamic description of how we produce such vivid, sometimes unfounded, thought?
 
JEFF PAUCK
In chapter 6 of Mindware, the prospect of building robots that can essentially learn and evolve is discussed. Examples such as the robot cricket and programs that emulate the flocking of birds or fish are shown, and it is explained how, from very simple pre-programmed laws and features, they can learn to do such things as recognize a mating call. The section that interested me was the discussion of emergence. The discussion differentiates emergence from collective effects and attempts to create a definition precise enough to be meaningful while keeping it general enough to be useful. But I find it becomes a problem to cover such tasks as termite nest building and the robot cricket without importing very complicated calculation and much complexity. My question would then be: can we create a definition of emergence that contains both cases of self-organization and cases of a few heterogeneous elements, without complex formulation?
 
David Murphy
Again, at the end of chapter six there is a whole section about life, especially in regard to computer simulation. I just came up with another question about it. The ideas in this book and the ones we have talked about in class seem to say that cognition and our "mind" are merely results of the physical processes in our brains. If this is the case, does that mean anything that shows these kinds of processes is cognitive? There has been talk in the book and the class about emergence in groups of animals, but at what point would we say that cognition emerges in living things? I don't know much about biology, but if the physical symbol system and its variations are correct about the mind, wouldn't most living things qualify to some degree? Most living things have a nervous system of some sort (I don't know about plants, or if there is a definite nervous system in things like bacteria), so should we have a gauge of cognition to decide where something gets to have a "mind?"
 
 
This seems like a tricky question to me because sometimes when we talk about "mind" or "cognition" we are actually thinking about consciousness. The distinctions between these three concepts are often vague. One thing we have seen in class is that it is in fact very difficult to develop a working definition of cognition. The PSS definition seems to be a very minimal definition, and it does attribute cognition to a huge range of systems, from humans to thermostats. We have essentially defined cognition behaviorally (i.e. intelligence = intelligent behavior), and in that sense, certainly groups of animals are "cognitive." But keep in mind that this is not to say that an ant colony is conscious (has experiences, qualia, etc.). We haven't yet discussed potential working definitions of "mind", so maybe that would be an excellent thing to discuss in class. Is it a mistake to take a behavioristic stance (like Dennett's intentional stance) to the concept of mind, even if it works for cognition? -- Jenessa Strickland

 

Ashley Murphy
 
In chapter six, in the portion titled Life and Mind, Bedau states that the definition of life is "supple adaptation, the capacity to respond appropriately, in an indefinite variety of ways, to an unpredictable variety of contingencies..." which "allows events and processes subsisting in electronic and other media to count as instances of life properly so-called" (Clark 117). Really? As a biologist, I would have to disagree. Biology is the study of life, of living organisms and how they interact with each other and their environment. Biology examines the structure, function, growth, origin, evolution, and distribution of all living things. Cell Theory says that all living things are made up of at least one cell, which is the foundation and basic unit of all living things. Cells only come from pre-existing cells through a process called cell division. Robots and other electronic media do not have cells... How can they be alive? Which brings me to another concept from Bedau, which states that "life is a so-called cluster concept, involving multiple typical features none of which is individually necessary for a system to count as alive..." (Clark 118). All I have to say to that is bull sh--. Going from Cell Theory to Gene Theory, which states that biological form and function are passed from generation to generation by means of genes: without these fundamentals of life, the thing cannot be truly alive. The DNA, or genes, are passed on through the genotype and expressed in the phenotype, the observable physical or biochemical characteristics of the organism. Only living things have these genes. Another important process of a living organism is homeostasis, the act of maintaining one's internal environment despite one's external environment. Can a machine or robot do any of these things? Do they have cells and genes? No. So can a thing lacking the basic units of life, the very definition of life itself, be really, truly alive?
 
You're begging the question by arbitrarily presupposing that "cells" and "genes" can only be possessed by the things we already agree are "alive". We have to distinguish between the part of biology that is merely descriptive, and the theoretical part that asserts what is essential and necessary. We can even work within the paradigm requiring life to conform to our concepts of Cell Theory and Gene Theory, and still come up with some model or simulation that satisfies those requirements. You want cells? Give the robots cells. You want reproduction and genetic transmission? Give them that too. Make it run off of peanut butter and mayonnaise. It doesn't matter. Granted, computer programs and robots don't tend to satisfy even the most forgiving ideas of "life", but that doesn't mean that they cannot. More importantly, many available models clearly exhibit at least some of the criteria. The point is: what does "life" (or "cells" or "genes" for that matter) essentially mean, and which arbitrary biological dogmas are dispensable in defining it?
~Jared Prunty
 

Hi there, Ashley and Jared. I just read your comments after I'd posted mine and realized that what I wrote was in line with what you two discussed above. I'm glad that you both see, or at least Jared does, the point of my question. However, I sympathize with Ashley. Her intuition that the contemplation of the 'essence' of life is a deviation from her discipline is dead on. As she points out, the difference between living and non-living molecules *is* information according to current scientific understanding. It's likely a far-fetched scenario that scientists will study the 'essence' of life in a laboratory anytime soon. ;-) Further, it's hardly an 'arbitrary' presupposition that cells and genes are the required building blocks for life; instead it's an observation based on vast amounts of data. In other words, it's a very reasonable conclusion given current and available knowledge. The burden of proof seems to lie with the philosopher here... how do we ground the claim that life has an 'essence' or 'theoretical' component?

 

It's not the activity of the scientist to exhaust all possibilities in some kind of random trial and error process, but to explore reasonable avenues of inquiry. Thus, if reason is in the domain of philosophy, and we'd like to marry science and philosophy in this study, then let's offer arguments that give scientists reason to consider the exploration of new and interesting territory. It is probably more prudent for the philosopher to continue to forge together the findings of biology and computer science through carefully crafted analogy, until the defining lines of each are softened enough that information regarding the one translates to the other, than to jump headlong into metaphysical topics that science is not equipped to tackle. If philosophers wish to offer guidance to scientists in this study then they should make sure that the content of the arguments they build is relevant to the science they desire to appeal to.

Leilani -

 

I think many (most?) contemporary philosophers and scientists would agree that no one is going to study the 'essence of life', or even that there is no 'essence of life' (I don't really think that's what Jared was suggesting). Nonetheless, I think we're at a point where we can design robots (or something) to DO all of the things living organisms do, so how important is the material/substrate? I think we're all pretty prepared to agree that all of the examples of systems we want to say are alive are systems composed of cells, have DNA, etc. It's not a matter of whether or not these comprise some kind of essence, but whether we can give reasons for delimiting our concept of life the way we (philosophers, scientists, so-called 'ordinary' people) do. To say that all examples that we intuitively say are alive have these traits is not enough; why do we want to say these systems are alive? Why do cells, DNA, etc. mean something is alive? It seems we have no answer to this besides intuition. Or our definitions do seem to become circular: How do we know x, y, z are living systems? Because they have cells, etc. How do we know having cells = being alive? Because a, b, c have cells, and they are alive. How do we know a, b, c are alive? Because a, b, c have cells, etc., OR because intuition tells us that a, b, c are alive. If our answer is intuition about particular cases, we ought to ask ourselves why our intuitions exclude artificial 'life' simply because they lack cells, etc.? Can we revise our intuitions when faced with problem cases that seem to blur the lines of our concepts?

 

As for the relationship between philosophy and science, science must take certain concepts (like life) for granted, and there is nothing wrong with that. One contribution philosophy can make to scientific research is the examination of these concepts, because philosophers don't need to take them for granted. We can ask, Do we have reasons for framing our concepts this way? Are these good, relevant, or at least good-enough reasons (since slam-dunk conceptual definitions are pretty rare)? Can we refine them in ways that open avenues for interesting scientific research? In this particular case of life, I have no objections to the current standard definition of life, mentioned by Ashley, above. It obviously serves us well in scientific research. But it is worth asking (in philosophy) if it is the material (cells, etc.) on which this definition is based is really of primary importance, or does the functionality matter? If the answer is that the material is primary, studies in artificial life have little to contribute to biology. But if the answer is that functionality is important, then we have a problem with our working definition, and rethinking this concept in light of research in artificial life has plenty to contribute to biology, and vice versa. This seems like an important implication for science. --Jenessa

 

 

 

 

Hey Jenessa- Tone is difficult to convey in writing sometimes, and that said, my comment about the 'essence' of life being studied in a laboratory was a (failed?!) attempt at humor… that's what the winking, smiley face was supposed to signify.

 

Thanks for the response though. It’s a great one!

 

Instead of stating that material that is composed of cells and has DNA is alive, but really it’s only intuition that tells us this, I think most scientists perceive that they began with an intuition of life and after much hard work, dedication, and careful analysis found that, on a molecular level, matter intuited as living is observably different from that intuited as non-living. Now we have a substantial distinguishing feature of life, as it is known, to replace crude intuition. Also, this is a beautiful scientific definition. While it fortifies the scientific assumption of uniformity in nature it doesn’t require common behaviors to exist between life forms and thus accommodates enormous behavioral variety amongst living things. Here, every plant and animal DNA that is viewed acts as another reason why the current scientific conception of life is framed the way it is. Gorgeous!

 

Jared said that the scientific argument for life ‘begs the question’ and you point out that it’s circular as well... I guess I don’t view their argument to operate the same way that you two do. I believe that it leaves room for correction/growth/improvement in precision, as it must for science to continue to claim that it is a self-correcting discipline. From my vantage point it goes: Is x alive? If x has DNA ( and something like: that is being utilized) then x is alive. X does have DNA (") therefore x is alive. However, scientists are logically restricted from making any absolute claims that run the other direction. Thus, they can’t claim from the same premise that x is alive therefore it has DNA. (Though the fact that all of Earth’s flora and fauna exhibit this feature does tempt one to want to claim this… it’s a pretty inductively strong argument.) It’s because of this that philosophers can use the emerging inventions of computer science and robotics to give science a run for their money regarding their definition.

 

Of course- this is where it gets tricky because in the absence of DNA, our sole indicator of life, how can we know something is alive? Now the majority of what you beautifully state in your comment comes into play!!! However, I don’t think that we can go back to intuition because that’s square one for the biologist. Honestly, I don’t think that they’ll seriously query life again until computer science has created something so damn close to DNA that it may as well be DNA… and the philosopher will have the privilege of building an argument for identity between the two.

 

 

 

 

 

 

 

 

What are your thoughts?

Leilani-

 

 

 

 

 

 

 

 

 
Leilani M. Howard
Sorry... fell behind a touch, but here's my question for six:

At the conclusion of chapter six, Clark visits the question of how to demarcate between an actual instance and a mere simulation of life. The question is prompted by the production of computer ‘organisms’, within a virtual ecosystem, that reproduce, adapt, evolve, and even share a type of heritable information. Does such a program qualify as an example of life?

Unless life is reducible to a bundle of qualities, this question takes on a numinous flavor. An example of unembodied life presupposes the notion that life can exist without a body. Thus, instead of answering what life is, where life is part or product of a specially organized system, the text seems to ask indirectly what life is, where it is an essence. Though these questions are cogent from a philosophical perspective, they appear to approach territory that is outside the domain of scientific inquiry. Does anyone else see this?

 

 

Mike Sterrett

 

The emergence of “consciousness” from many small parts coming together is incredibly fascinating. Bees living in a community, termites building incredibly wonderful mounds, schools of fish, and flocking birds are all good examples of this. In terms of the human mind, this is quite interesting. Billions of neurons all come together in such a way that they are greater than the sum of their parts! This seems to me to be a good model of how the consciousness of the mind works. The neurons all work together in such a way as to be able to organize thoughts, plan things out, and learn new information. Just as some termites are specifically builders and others are fighters, etc., some clusters of neurons compute visual data, others audio data, and others think logically. The parallel is intriguing and not without merit, I believe.
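
For anyone who wants to watch the "greater than the sum of its parts" idea actually run, here is a tiny Python sketch of Conway's Game of Life (my own toy code, not anything from Mindware): each cell obeys one dumb local rule, yet at the level of the whole grid a "glider" pattern coherently walks across the board.

```python
import itertools

def step(live):
    """Advance one generation; `live` is a set of (x, y) coordinates of live cells."""
    neighbours = {}
    for (x, y) in live:
        for dx, dy in itertools.product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                neighbours[cell] = neighbours.get(cell, 0) + 1
    # Local rule only: a cell lives next turn with exactly 3 live neighbours,
    # or with 2 live neighbours if it is already alive.
    return {c for c, n in neighbours.items()
            if n == 3 or (n == 2 and c in live)}

# A "glider": five cells whose collective pattern migrates across the grid,
# even though no single cell knows anything about migrating.
grid = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    grid = step(grid)
print(sorted(grid))  # same shape as the start, shifted one step diagonally
```

Nothing in the rule mentions gliders; the pattern is purely a group-level phenomenon, which is roughly the analogy with termites or neurons.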

 

 

 

 

 

Andy Tucker

 

In chapter 8 Clark focuses on the strong role which the environment and our interaction with it play in the problem-solving and cognitive abilities that humans possess. In his section on the bounds of self he states that when we use a crane to help us lift something very heavy, we do not attribute the crane's strength to an increase in our own strength as individuals. He uses a similar analogy when referring to pens, paper, and diagrams helping with the problem solving of an argument. Yet I don’t see any reason why we should not attribute these increases in performance, which are gained by using such augmenting items, to the individual who uses them. These items are all things that are constructed by humans and therefore are directly connected to our abilities. There is no difference between saying that I can lift a steel beam despite needing the use of a crane and saying that I can solve a math problem despite my need to write it out on paper to do so.
What are the reasons that we do not want to extend our concept of self to include things outside of our body that contribute to our abilities?
 

 

David Murphy

In chapter 8, Clark talks about Alzheimer's patients and how they use cognitive scaffolding to help manage their disease. They, or their families, make their environment more accessible and organized by labeling and scheduling a lot, which helps to counter the degenerative disease to a small extent. My question is: is it possible to use cognitive scaffolding in this fashion to help other people in other ways? For instance, what if we found a way to make the classroom environment easier to learn in through more of these scaffolding techniques than we already use? Even if the scaffolding only has a small impact on someone's cognitive ability, maybe we should try to further organize or construct our environments, where possible, to help us mentally.

 

Would you characterize the brainstorming exercises (from some other class) that decorate (litter?) the walls in our own classroom as a useful example of pedagogical environmental offloading?

-Jared

 

Jared Prunty

 

1) In Ch. 8 (p. 142), Clark contrasts our automatic offline inference systems with our online cognition under the heading "Good at Frisbee, Bad at Logic." He proceeds to describe part of the difference as lying in our capacity to use environmental offloading and to manipulate chunked symbols representing conceptual complexities that needn't be hashed out in order to be implemented. This is essentially a description of what language does for us, and it is so fundamentally linked to our intelligence that it represents a unique class of cognitive technology (or even a unique "perceptual modality"). The extent to which our mental life is both embodied and inherently linguistic is even more fundamentally illustrated by the finding that we possess two distinct ways of thinking about numbers: "exact calculation is language dependent, whereas approximation relies on nonverbal visuo-spatial cerebral networks." Cognition, then, seems in large part to be a matter of compounded nesting functions. Meanwhile, intelligence has something to do with operating at increasingly nested levels, i.e. meta-cognition, while also being capable of freely moving between levels, i.e. self-evaluation. Conversely, I think the mark of unintelligence is the lack of introspective examination and plasticity. The same heuristics that so enrich our mode of living impoverish it when reified. If that's on the right track, it makes me wonder just how much of a disparity the average human exhibits between potential and actuality...and why. So, borrowing Louis Mackey's cynical query, "What is the most universal human vice: fear or laziness?"

 

2) I'm sure I'm not alone in being fascinated by the possibilities of "cyborgs and software agents" (Mindware, p. 155). I have a friend at U of Washington who's working on developing retinal overlays, or basically contact lenses with microscopic circuitry that provide a heads-up display and could eventually enable the wearer to enhance biological vision in all kinds of crazy ways. Basically, it would (could) be like in the Terminator movies, when you see things from the terminator's 'perspective'--not just enhanced perception, but supplementary processing too. This opens up all kinds of possibilities, but the point is that science-fiction is losing the fiction. The phenomenological implications are especially relevant to this Nagel paper ("What Is It Like to Be a Bat?") we're reading-- just imagine how cool it would be to acquire a qualitatively new perceptual modality... But it's already even more than that: I looked up what Kevin Warwick has been working on recently, and his latest paper is "a discussion of neural implant experimentation linking the human nervous system bi-directionally with the internet. With this in place, neural signals were transmitted to various technological devices to directly control them, in some cases via the internet, and feedback to the brain was obtained from, for example, the fingertips of a robot hand, and ultrasonic (extra) sensory input and neural signals directly from another human's nervous system." Um, like telepathy? Obviously, this could have huge implications for the supposed limits of selfhood, too....

 

 

Zach Farrell

Chapter 8: Cognitive Technology

This chapter brings up the complexity with which humans interact with their environment and how that interaction has shaped the brain. We have seen that our development as an organism is biologically complex and efficient. One of the critical features is our real-time interaction with the environment; by this I mean both immediate responses and evaluated responses. We interact with our world quickly and in real time. “We use environment-exploitive strategies” (140), which contribute to both our on-line and off-line reasoning. When awake, we have sensory information coming in constantly, not necessarily reprocessing the entire room but only differences in the room or sudden changes. I do not doubt the ability of machines to process information quickly, but I do doubt their ability to create appropriate responses, let alone environment-exploitive responses that increase their own survival or purpose.

 

 

Can we replicate a conception of self in a machine? If so, would the machine be able to understand its relationship to the environment, or would it require a purpose?

 

 

Mike Sterrett

 

Clark's point that a crane is given some credit for lifting a girder but a pencil is not given credit for an artist's sketch is truly puzzling. When asked, “how did you move that beam?” the construction worker will say, “I used this here crane.” On the other hand, when someone asks an artist, “How did you make this drawing?” the artist will talk about his state of mind at the time he drew it and what inspired him. He'll probably leave out the pencils, since the person can see the pencil marks on the paper. We, as people, operate within the world in a certain way. The things we can interact with on a day to day basis in our environment influence the actions that we take. In an environment without writing utensils, poets and writers will not write. In an environment without cranes, people will not build skyscrapers. In this way, the environment has a huge amount to do with what it is that we do. That means that our consciousness is also influenced by the environment. In what way must we take that into account when thinking about our mental processes?

 

 

Perception, Action and the Brain-Caleb Schmidt

     The brain is a crystal skull of mystery that until recently was clouded in a fog so thick it was seemingly impenetrable. Its true secrets veiled an understanding that has only now been gained through precise observation aided by advances in technology. These improvements in the scientific method, and the reinforced masonry of our levels of knowledge, have led to small hairline fractures in this thick-skinned entity, ultimately holding our hands on this quest for a complete and continuous understanding. As engineers and cognitive scientists have begun the daunting quest to observe and reverse engineer what we call life, there have been bumps in the road, to say the least. One approach to incorporating and engineering perception leading to action begins by attempting to tirelessly program every action and thought into an artificial entity, in the hope that someday we will be afforded the ability to create autonomous, self-learning life forms (artificial or biological, or maybe a mixture of both) and in that way allow control to emerge. This idea is fictional, and in many ways monotonous and muddy. The philosophy I identify with best is that of an interactive 3-dimensional environment, allowing the entity (whatever its title or name may be) to interact in an instantaneous and alive manner. This approach would allow the brain, instead of trying to figure out answers from preprogrammed algorithms, to write and rewrite its own algorithms on the fly while gracefully interacting with the environment around it. This idea may sound complex, but in many ways I see that in the long run this computational model becomes exponentially more streamlined. I believe this ability of constant adaptation to be a major requirement for an entity that is truly alive.
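
To give a rough flavor of the "rewrite its own behavior on the fly" idea, here is a toy Python sketch of an agent that keeps adjusting itself from live feedback instead of consulting a preprogrammed answer; the hidden target value and the simple search rule are my own invented stand-ins for an environment and an adaptive process, not anything from the readings.

```python
import random

# Toy "environment": the agent never sees `target` directly, only an error signal.
target = 4.2

def feedback(action):
    """Error returned by the environment for a given action (smaller is better)."""
    return abs(action - target)

guess, step_size = 0.0, 1.0
for _ in range(200):
    # propose a small change and keep it only if the live feedback improves
    candidate = guess + random.uniform(-step_size, step_size)
    if feedback(candidate) < feedback(guess):
        guess = candidate
    step_size *= 0.98          # the agent also tunes *how boldly* it explores

print(round(guess, 2))         # typically ends up close to 4.2, never having been told it
```

Nothing here is hard-coded about the answer; the behavior is shaped moment to moment by interaction with the environment, which is the contrast with a preprogrammed lookup.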

 

 

Robots and Artificial Life Forms-Caleb Schmidt

     This idea of 3-dimensional environment modeling is, from my viewpoint, the only possible way of creating robots and artificial life forms that can constantly adapt to the complex and constantly changing world of men. Now, I realize that the ultimate goal (at least as I see it) is to create something more than an artificial entity, striving to make something more alive and biological. I believe cognitive scientists and engineers are on the right track, looking to biological evolution as a blueprint for the ways in which we (biological life forms) act and react. Another point to be made is that of learning; I consider this to be another major requirement atop the checklist of cognitive understanding and engineering. If an entity cannot learn from observations, mistakes, and its surroundings, how will it have the ability to create and adapt? All life forms must have this ability to learn, otherwise they will forever be stuck at Go and will never travel back to collect $200.

 

 

Dynamic Systems-Caleb Schmidt

     I like to look at all things from a micro, reductionist viewpoint and then turn around and ascend to a peak to see what a system, or many systems, look like on a more macroscopic level. Affording myself the chance to examine all the parts as well as the whole (the sum of its parts, or more) is a powerful method for understanding and thought manipulation (so far). In dynamic systems this method holds true and breeds understanding. Its basis is a reliance on initial conditions with respect to how a system evolves and changes over time; the self-regulation of a system becomes interesting and a key to understanding. It is at this point that the parameters of the system come into play, governing the way in which patterns emerge; often these patterns are recognizable, and future conditions of the system can be forecast, but only through precise observation and careful hypothesis. In tracking and analyzing dynamic systems, pattern recognition is extremely important. Pattern recognition through mathematics allows precise measurements and projections to become applicable and understandable. The ability to interpret the interaction of entities over time in differential equations displays patterns of chaos and predictability, while in many cases the chaos exhibited becomes predictable and pattern-like. These so-called postulates, again along with mathematics, afford us the ability to interpret the world in extremely interesting ways. If logical patterns are seen and analyzed, the world becomes a phenomenal and wondrous place, and understanding, manipulation, and application become possible.
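
As a small, concrete illustration of initial conditions, parameters, and chaos-with-pattern, here is a minimal Python sketch of the logistic map (my own example, not one from the readings): a single parameter r governs whether the system settles, cycles, or goes chaotic, and in the chaotic regime two nearly identical starting points drift far apart.

```python
# The logistic map: x_{t+1} = r * x_t * (1 - x_t)
def trajectory(r, x0, steps=40):
    """Iterate the map from initial condition x0 for `steps` steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Tame parameter: the system settles onto a fixed point near 0.6.
print(round(trajectory(2.5, 0.2)[-1], 3))

# Chaotic parameter: two almost identical initial conditions diverge widely.
a = trajectory(3.9, 0.200000)
b = trajectory(3.9, 0.200001)
print(round(abs(a[-1] - b[-1]), 3))   # typically a large gap after only 40 steps
```

The same equation produces both the predictable and the chaotic behavior; only the parameter and the initial condition change, which is the point about how much the framing of a dynamic system depends on those two things.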

 

Comments (1)

darkair@... said

at 11:26 am on Oct 14, 2009

The Three Stigmata of Palmer Eldritch has raised some interesting questions and analogies for me, the most important being the implications or possibility of induced evolution. We may infer that encephalization allowed us as a species to prosper, but this doesn't correspond, strictly speaking, to the type of evolution discussed in the text. Merely enlarging areas of the brain would not lead to novel arrangements of neurons possibly conferring a selective advantage. In a way it is not the quantity or size of your brain but rather the quality or arrangement of your brain that matters. Another minor issue was that of the layouts being necessary to enjoying the intended effect of Can-D. If you are truly internalized in a drug, I'm not sure that the physical world matters much to you, just as when you are passed out. Also, how would these individuals be able to communicate and experience together while both under the influence of some drug?
