Weekly Submissions

Page history last edited by Jennifer Holdier 15 years, 3 months ago

I accidentally deleted Evan Brennan's post. I was trying to edit the page and post my commentary, but I guess I'll just add it to the comments section. This is Evan's original post (Sorry, Evan!):

 

 

'I thought it was interesting how the first chapter began by describing the brain as a "meat machine," but then explained that the point is not the material the brain is made of, but the way this "meat machine" collects and organizes information into thoughts and ideas. The chapter also went on to discuss the thoughts produced by the brain as nothing more than computation. For example, someone sees a car crash and immediately runs to a pay phone to dial 911 for help. In this example (x) caused (y), and it was the observer's instinctively computed response to call for help, or as Clark put it, "...interpretations thus glue inner states (of the brain) to sensible real-world behaviors" (15).'

Comments (130)

Dcwalter said

at 10:37 am on Aug 28, 2009

The appendix that we read for this week was relatively informative without making much of an argument. Our author gives us a condensed, nutshell version of dualism, behaviourism, identity theory, and machine functionalism. It does a good job of setting the stage for the class and the readings that follow. I look forward to finishing the last few pages and to the rest of the readings. Without much argument, or even any particularly controversial language, there is very little to say about a summary of a few philosophical positions. The poster about whether computers think (the one on the PBwiki) is a much more interesting bit of information to digest.
(I assume that this system will tell you who submitted what, but just in case:)
Dillon Walter
phil 415

Jennifer Holdier said

at 7:08 pm on Aug 28, 2009

Chapter one is an introduction to the mind as a machine, or a “meat machine.” Clark goes on to explain how the mind has come to be viewed as a machine: first, the power of formal logic systems grew to be more appreciated; then the field of computation grew, starting with Turing; and finally, a computing machine was actually built. The software in a computer is analogous to the mindware in our brains. In the second part of the chapter, Clark discusses four different topics that will recur in the book. The topic I find most interesting is “Mimicking, Modeling, and Behavior.” Clark gives examples of two computers that are built to mimic human behavior, PARRY and Deep Blue. PARRY is an imitation of a paranoid schizophrenic, while Deep Blue is a chess computer. The conclusion is that these are not thoroughly convincing representations of humans, but they are pretty close. The discussion closes by mentioning the Turing Test, a test an interrogator gives to a machine or a human; if the machine can hold an open-ended conversation, it passes as an intelligent creature. Since it seems that machines are getting closer and closer to being able to pass that test, what will happen if and when they do?

prakowski said

at 7:22 pm on Aug 29, 2009

Can anyone help me out? The copy I ordered online hasn't come yet, and I haven't been able to find one at the bookstore or at Barnes and Noble. Does anybody know where I might find one, or does anyone have one I can borrow tomorrow? I appreciate any help.

thanks,
Pete

vincent merrone said

at 11:26 am on Aug 30, 2009

While reading the text, all I can think of is comparing the various approaches to studying phenomena and mindware to the failed science of sociobiology. I do not mean that studying mindware is a fruitless task, but I get a hint of the errors of sociobiology. An example will make my point clear. During the Clinton administration there was a proposed bill that planned to provide serotonin-increasing drugs to aggressive/violent populations in inner cities. The whole reason this bill came about is that there were studies on macaques (I think macaques, but definitely a primate) and their aggression. A primatologist drew the correlation that the macaques were aggressive because of low serotonin levels. Thus, the primatologist figured that his study could be translated into an explanation of human aggressive behavior, which would lead to curbing aggression in our populations. Obviously we can see many problems here, and so did many other thinkers, leading to the bill being rejected. But the point here, relating to mindware, is that I feel certain assumptions can be made about the mind/consciousness/etc. that may not lead the study in a positive direction (or may produce "facts" that are untrue). However, I feel that this is on the tips of the tongues of those studying mindware, as hinted at the end of Appendix II, where a new kind of approach to studying the mind may be needed. At the end of Appendix II the author questions the "tricks" of science that we use (both the ones we use and the ones we take for granted going into our study, since maybe not every tool/trick can tighten every bolt). Here I agree totally that new tools and tricks may be needed to study something as "elusive" as the mind. If that is the case, that a new way of doing science must be constructed to study the mind, then I am very curious as to what ways this could be done and what results will follow from such proceedings.

Alex Edmondson said

at 11:22 pm on Aug 30, 2009

So far it's difficult to say much about the text, since it's just a lot of information and not much argument. I do find it intriguing that our consciousness might be a computer constructed by our culture, as Dennett has proposed. But can we even trust that the external world is not a manifestation of our internal minds? I could see it being possible that our surroundings are what shape our minds/consciousness, but there are a lot of questions left unanswered by this theory (involving cultures, what happens if we lack one of the mind tools, etc.). Would it be that the surroundings are changing the chemicals in our brains that tell us how to perceive, or would it be that the surroundings are changing the way we perceive whatever it is that the chemicals in our brains "want" us to perceive? Perhaps, if we are just computers, the reason we cannot ourselves create a computer that is just like us in every way is simply that we lack the software to create such a thing. Perhaps this is the reason why we still don't understand our consciousness fully, and likely never will, because no one can update us.

Caleb Schmidt said

at 1:24 am on Aug 31, 2009

To question consciousness is to run parallel, in step with the heartbeat of humanity, and delve with the mystics into a world full of unprecedented wonder and phenomena. Consciousness is the fundamental question encountered on this human path we are all blessed to follow, for without consciousness we could not question existence. There are two elemental approaches to this issue; one is access consciousness, a reactionary form, and this is the easy question to answer. From this view, a conscious being can be purely the sum of its parts, programmed to interpret the cause and react: a purely informational model. Though it is not hard to see that consciousness is so much more; consciousness is the perfect operating system, so simple in the ways it deals with overwhelming complexity that calling it purely the sum of its parts is oxymoronic. I know nothing of consciousness; I know what everyone else knows, the experience. The answer will not be found in reductionism, nor will it be achieved in any mechanistic sense. In the future, machines will possess the ability to process pain in order to achieve self-preservation, compute decisions based on the highest or lowest probability, and most likely have the ability for independent thought, but in order to understand consciousness, a being must be able to know “what it’s like”: what it’s like to experience a sunset or make love to a beautiful woman. Artificial Intelligence will never achieve this in a humanistic way. Although the question of consciousness and its novelty is positively stimulating in thought and theory, what kind of place would the world be were the question answered? The wonder lies within the journey.

Josh Egner said

at 8:47 am on Aug 31, 2009

I would like to speak to the general method I see being employed to discover consciousness. From what I have read so far, it seems we like to measure and alter physical stimuli to see how they affect consciousness, but I believe this can only tell us half of the story, the access-consciousness part. I feel that phenomenal consciousness, on the other hand, works in the opposite direction, from consciousness down. It seems to me that this type of consciousness may not lend itself to scientific inquiry, because there can be no controlled consciousness. Even if we grant that consciousness is formed by the experiences and stimuli of normal life, we would have to have a comprehensive account of all such stimuli, and of any internal reflections that may have further altered the consciousness in question. This seems like a more than improbable prospect, and we haven't even started discussing a controlled change of consciousness.

hdmartin@... said

at 8:59 am on Aug 31, 2009

The reading for this time, although interesting, I would have to agree is not very argumentative. The part I found most interesting was towards the end, about the zombie. If indeed our surroundings shape our perceptions and our experiences of the world, then the zombie would be the clear way to see how this actually happens. Consciousness is more than just something that can experience; it has to do with something that does experience. Sure, we can create a computer that can react a certain way, and one can argue that it is going through a certain experience, but as of now we cannot make a computer actually experience an event. This may have something to do with the fact that humans are not fully tapped into their consciousness, nor do we actually control it, and therefore we have not fully understood it. Along with that, we cannot test our own consciousness objectively, since it is our own, nor can we test another person's, because we cannot jump into their mind and see what is actually happening.

Benjamin B. Walker said

at 9:16 am on Aug 31, 2009

The point Clark makes on p. 21 of Mindware about the curiosity of silicon that seems to have intelligence, curious in that it is not similar to our brain, is an important and revealing one. Immediately he follows this pondering on silicon with the familiar curiosity of carbon as an intelligence-harboring material, and how it seems weird to humans if anything other than carbon has intelligence, since that is the “original mode,” so to speak. He then relates the two thoughts with a statement about the unlikelihood not of intelligence in silicon, but of intelligence in any material. He warns against anthropocentrism multiple times, which brings to mind an interesting possible implication: if we were to encounter a conscious, intelligent, thinking alien population that was not carbon-based, and maybe not even matter-based (I know this isn’t a science fiction class, bear with me), and we started studying the philosophy of cognitive science of both of our species, wouldn’t we risk being too human-alien-centric? What I’m getting at here is the question, “Is it possible to study intelligence using examples as evidence? Or will we run into infinite regresses trying to escape example-specific themes, such as those illustrated in the difference between silicon ‘intelligence’ and human ‘intelligence’? Is there a bigger picture to intelligence that is obscured to us by our being subjects of it?”

Benjamin B. Walker said

at 9:20 am on Aug 31, 2009

And well put, Caleb! You just blew my mind!

Daniel_Cronin said

at 11:11 am on Aug 31, 2009

In reading the second appendix, the notion of representationalism caught my attention. Something I have wondered about myself is whether something like pain exists whether or not I know about it. We have all probably had the experience of not knowing about a cut until we see it, at which point it starts hurting. Is the pain there before we know about it, or does it stem from our knowledge of the wound? As Clark puts it, can you have pain without the thought of pain? A few pages later Clark brings up Daniel Dennett's distinction between the human mind and other animals' minds. Dennett argues that species' minds differ in how they organize information. He (in my opinion rightfully) assumes that for a being to be conscious, it must have some method of organizing all of the information it may need to function. Going back to what I was talking about previously, about experiencing pain: we assume that other animals can experience pain as well. But how would an animal, with no concrete idea of "pain," experience it? We could in theory make a robot that mimicked pain, say one that would recoil if you punched it and tend to the area that was hurt, but the machine has no idea what it's doing; it's merely acting on its programming. We, on the other hand, have an understanding of why we take such actions after being hurt. Where then would an animal fit in? It can feel pain, which a robot can't, but it does not understand pain as we can. Would that animal fall somewhere between us and the robot on the consciousness scale? Or would it be the same as us, but simply with less understanding?

Mike Prentice said

at 11:26 am on Aug 31, 2009

I found these readings to be very interesting. Behaviorism, I think, gets a lot less credit than it actually deserves. Granted, there are some major problems with accepting behaviorism completely, but when forced to choose between it and dualism or functionalism, the choice is easy for me to make. Coming to this subject for the first time, I think it’s easy to side with rationalism even though it lacks a lot of information and argument within the description. With that said, I believe that people attach too much weight to consciousness. Many believe that if we prove that consciousness exists, or explain how it works, then we will lose what we consider to be ourselves. The fact remains that we will never lose what makes us us simply by understanding how our consciousness works. Understanding how a car works doesn’t make it any less fun to drive a Ferrari.

Rob Conary said

at 3:54 pm on Aug 31, 2009

Something a bit off topic, but I was reading through some Frege and found this terribly interesting: a statement on what is perhaps an inherent separateness in consciousness, which lends itself to the idea that each unique conscious agent will have a unique set of conceptions and potentially ideas. That is, while we may have an idea of the same objects, or senses of objects, the conception will be a unique component of our particular conscious state.

"It is indeed sometimes possible to establish differences in the conceptions, or even in the sensations, of different men; but an exact
comparison is not possible, because we cannot have both conceptions together in the same consciousness." Sense and Reference, Gottlobe Frege.

Matt Stobber said

at 9:05 pm on Aug 31, 2009

I find the concept of functionality very interesting in this book, specifically in the case of Deep Blue. If you build something that has the same functionality as something else, what exactly does that mean? It is obviously still different, especially if the implementation of that functionality is different. I think this really points to how intelligence arises. Where did our intelligence arise? If we had the technology to replicate evolution using tiny nano-robots instead of cells, would a being eventually evolve that would have our kind of intelligence? Or maybe it can be said that Deep Blue and humans are both equally intelligent (in terms of chess), except that each entity's intelligence is implemented differently, Deep Blue's being a computer program and ours being "cellular" intelligence that arises through evolution. I think that Ghost in the Shell is a very good example of this. If you replaced your brain with a machine, but you still retained all your memories and emotions, would you still be yourself, even though the implementation of "you" is now different?

Megan Holcombe said

at 9:36 pm on Aug 31, 2009

The excerpt from Bisson's sci-fi intrigued me by forcing me to imagine our consciousness from outside our own consciousness. Considering our communication limitations, a comparison of our own consciousness with a foreign one is a difficult thing to reason about.
I agree that these readings didn't really present any argument, but rather gave a background of where we have come with computer systems and their logic. I think we want to believe, as seen in chapter 1, that our awareness is more than information and hardware.

Erin said

at 1:19 pm on Sep 4, 2009

I found it interesting when Dennett was quoted as saying that P-consciousness is simply something which is influenced by a person's society and the entire social traditions of the world into which they are born. In one way I can see how this theory could have some merit. For instance, members of different cultures will have different opinions as to what foods, clothing styles, and music are the most pleasing. I’ve seen a film of a village in South America whose members enjoy roasted tarantulas as a delicacy. Even if people of our culture could overcome their squeamishness at such an idea, I doubt the roasted spiders would taste as delicious to us as they do to the South Americans. However, I can’t help but feel that there are some qualia so powerful and ingrained that even an individual with no social or cultural experience would react the same. For instance, would a person born and raised in a cave, with no contact with other living things, experience the same pain and fear when stabbed? It seems that they would, but Dennett’s response may be that survival traditions run deeper, on an instinctual level, rather than stemming merely from one’s individual experience.

vincent merrone said

at 11:38 am on Sep 6, 2009

Dennett's point of view on the human mind seems awkward and incomplete. I do understand what he was trying to do in showing how the mind is much more complex than what we intuitively deem it to be, and that there are aspects of the mind/brain that we are unable to control; i.e., the pictures changing and recognizing where they changed, color changes, etc. It seems like this is just something that is built into the brain/mind. If my washing machine typically washes colored clothes, I'm happy. But if it washes my whites, I'm not surprised, because that is also a function of what the washing machine does. Same goes for the brain/mind. Why should I be surprised that there are aspects of it that I have limited control over? I'm not mind-boggled (well, I am) by the fact that my heart beats by itself as I veg out on the couch, not even conscious of it doing so. If I could hear some of Dennett's evolutionary views on the brain/mind, then I might be more on board with him. But I have heard a lot about evolutionary views of the brain/mind, and many of them were woven stories, or paradigms/bliks expressing the time period they originated in. There is still more to the class, so we'll see how new arguments unfold throughout the semester.

Dcwalter said

at 8:19 pm on Sep 6, 2009

Chapter one of the text was extremely interesting. One objection that skeptics bring up is the effect of chemicals and such things on the workings of our consciousness. The argument is that these kinds of neuro-chemicals are a variable that separates our meat-machine from other computational devices. Clark straightens out the skeptic by properly defining the question at the root of the problem: "what is the role of all the hormones, chemicals, and organic matter that build normal human brains?" The answer is that while we may not know precisely how these things affect our brain, that does not change the fact that they are included in what proper cognitive philosophers consider the mind. The mind is more than the gray material of the brain, and it is more than all the other sorts of brain chemistry we know of. I think that one way to begin understanding how the mind and brain are truly different is to consider the possibility of using one's mind to "log on to" and manipulate the hardware actions of another system. That is to say, think of the mind as a "user" and the brain as merely hardware. If it turns out that the mind/body problem is really that simple, the future is going to look very wonky.

Jennifer Holdier said

at 9:45 pm on Sep 6, 2009

Chapter two is all about physical-symbol systems. Physical-symbol systems are artificial intelligence systems that are encoded with various symbols, and are able to act in situations according to the symbols. Clark gives the example of a system going to a restaurant. It is encoded with symbols that tell it to walk to the table, read the menu, order, pay the check and tip the waiter, among other things. I think that this is amazing. However, I agree with the objection in 2.2 B, “Everyday Coping.” Clark presents a criticism brought forth by Hubert Dreyfus about how the systems cannot possibly be programmed with the immense knowledge that humans have, and learn everyday. It seems like there are an infinite number of possible situations, and to be able to program a system to cope with every single one would be incredible. There is something about being human and being able to cope with daily life that would be very hard to program into an artificial intelligence system.
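
As a rough illustration of the kind of script-driven symbol system described above (a toy Python sketch with made-up step names, not the actual programs Clark discusses), the "knowledge" can be nothing more than an ordered list of symbols that an interpreter executes blindly:

# Toy sketch of a script-following physical-symbol system.
# The "symbols" are just strings; the agent executes them in order.
RESTAURANT_SCRIPT = [
    "enter restaurant",
    "walk to table",
    "read menu",
    "order food",
    "eat food",
    "pay check",
    "tip waiter",
    "leave restaurant",
]

def run_script(script):
    for step in script:
        # In a real system each symbol would trigger motor routines or
        # further symbolic processing; here we just report the action.
        print("Performing:", step)

run_script(RESTAURANT_SCRIPT)

Dreyfus's worry is visible even in this toy: nothing in the list says what to do if the table is occupied or the menu is in another language.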

prakowski said

at 10:55 pm on Sep 6, 2009

I sensed a pattern in the reading where Clark presents examples of artificial intelligence, then objections that point out why these forms are not really intelligent, then an explanation that while the objection has some validity, the form of AI can be improved so that the objection is less relevant. For example, the Chinese Room objection seems a good one at first, but when Clark suggests that a "finer-grained formal specification" is possible, it is true that our (or at least my) intuitions begin to shift. When you take away the semantic transparency, and consequently some of the shallowness, and replace it with "a much finer-grained formal description" (36), I'm not as sure about the distinction between performing functions and true understanding. Similarly with the everyday coping problem: it seems impossible to equip a machine for this by simply adding explicit codes for every possible situation, but if we can "use powerful inference engines to press maximal effect from what the system already knows" (37), it doesn't seem so impossible (although Clark seems less optimistic than I am). The machine wouldn't have to hold on to an infinite amount of information and experiences; it would need to keep only the general inferences it has drawn, like we do.

hdmartin@... said

at 2:43 pm on Sep 7, 2009

Behavior is a major part of this chapter. The chapter seems to suggest that recognizing patterns is what underlies the human ability to act in different and new situations. Machines can be programmed to learn, but they generally do not perform well in a new situation. This could be one of the major differences between humans and machines. The human mind can stretch and find unlikely patterns across situations, and that is directly linked to behavior. Machines are more limited; a machine can be programmed to predict the future, as Deep Blue does, but only within one topic. To make an artificial intelligence that is like a human mind, it needs to recognize patterns across different topics instead of being limited to separate ones.

Travis Schneider said

at 2:31 pm on Sep 8, 2009

Searle's Chinese Room perfectly demonstrates what intuitively feels wrong about any functionalist claim. I believe his illustration is dismissed too quickly by Clark, who says something like "right but for all the wrong reasons," or to that effect. His room can speak Chinese; it gives the proper output for any input. But it can't know Chinese, Searle would say, and yet it has been claimed that the room does in fact know Chinese. How could that be? Searle's assertion is that to know something entails an intentional perception of it. In other words, there's qualia, as Clark refers to in Appendix II. The room can't have this; it couldn't be so. Society as a whole doesn't have a consciousness, living and thinking. Yes, there's an intricate linking of individuals, but what would it mean to say a society is conscious? Nothing tenable. Nor do I believe any amount of shrinking or "finer-grained formal specification" could provide this; rather, I posit, as Searle does, a separation between the functional/physical workings and the intentional aspects.

John Dunn said

at 6:34 pm on Sep 8, 2009

Clark describes the mind (as the materialist sees it) as “nothing but the fancy combination of ordinary physical states and processes.” If we accept this assumption, that the mind is a complicated organization of lots of fairly simple physical parts, then we might expect the mind to behave like a conglomeration of mental parts. We might assume that we can access different “functions” or processes by altering the input, or possibly the order or format of the input. On the surface this seems to be an acceptable representation of how and why people learn differently, or maybe of how people appreciate music or art. The way the information is presented might interact differently with the similar but distinct “programming” in different people's brains. It seems like all we have to do is figure out what the different parts of the mind are and how to differentiate between them. Once we've done that, we just create a map of a person's mind, and maybe we can do all kinds of fun stuff.

~ John

Alex Edmondson said

at 11:08 pm on Sep 8, 2009

I think one dilemma with viewing our minds as systems of symbols is that it doesn’t account for what happens when something out of pattern occurs. Humans are able to adapt to new situations without necessarily being programmed for them. We’re able to function and stay alive, usually. But perhaps we’ve simply been programmed in a far more advanced way than any machine we have created. It seems that machines lack the adaptability we have, unless that is only because we are used to the machines we have created and are not yet capable of making anything beyond the simple. Regarding recognizing patterns and linking them to behavior: what about when we are on hallucinogenic drugs? We see things that are completely out of pattern, yet our machine doesn’t malfunction (e.g., shut down); it still manages to adapt. Are there a bunch of patterns stored away in our brains for experiences like these, only unlocked with drug activation, that tell us how to act when on them? It seems very unlikely, so what goes on then?

Caleb Schmidt said

at 11:59 pm on Sep 8, 2009

As Clark delves into the murky waters of human consciousness, our boat rides low in the water, crushed by the weight of this undertaking. To put it simply, there is little else in academia that requires such levels of complexity in thought and open-mindedness in approach. I do not feel that a purely scientific answer will ever become evident in the field of cognitive science (I believe the fields of mind, matter, and their interwoven connections encompass a much grander set of philosophies); however, my ears ring a sweet chord of harmony at the notion of the brain as a processor and the mind as the ultimate software (“mindware”). Clark uses the term “meat machine” well, describing humans as nothing more than the sum of their parts, and in some ways his argument holds strong: recognizing patterns among inanimate objects, animals, and humans, and how they problem-solve in various and similar situations (well, maybe not so much in the case of a rock). As astute as this pattern recognition may be, I see it more as fundamental laws of logic and reasoning in the universe, and not so much as self-evolving software preloaded into humans. Reductionism breeds overly scientific views of the world that only allow the microcosm to be seen, without the ability to read and interpret meaning and context in the macrocosm. Like many things in life, such viewpoints exist without balance and must strive to unite at a point of equilibrium; only then can our worldview evolve past the somewhat primitive state in which humanity currently drifts, floating along the river of life.

Daniel_Cronin said

at 10:38 am on Sep 9, 2009

As Clark delves into the world of A.I., the book, in my opinion, becomes harder to read. Much of this, I feel, can be attributed to the fact that he must gloss over many of the A.I. systems he describes. Even so, I do find A.I. systems fascinating. Being a computer science major who is taking a course in A.I., I understand the need for Clark to touch only the tip of the iceberg. Reading chapter four on connectionism, he goes into artificial neural networks. One point he does not bring up (or at least that I couldn't find) is Moore's Law. Moore's Law states that the power of computers doubles roughly every 18 months. He brings up NETtalk early on. NETtalk used a couple hundred neurons, with about 20,000 connections between them, to perform simple tasks. Compared to the massive number of neurons in our brain, about 100 billion, this is nothing. Within the last few years there has been work to simulate insect and mouse brains. Last I read about it, they had been able to create a *brain* of about 1,000 neurons that was capable of basic task learning. The question I see in all of this is: if we were able to simulate a human brain, assuming we had the ability not only to simulate enough neurons but to connect them properly as well, would that simulated brain have any form of consciousness? The question goes back to what we were talking about in class last week about slowly replacing a person's brain with artificial connections. Would this person lose their consciousness, or would it simply transfer to a different form?
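
Purely as a back-of-the-envelope illustration of the scales mentioned above (and assuming, unrealistically, that simulated network size simply doubles every 18 months per Moore's Law), one can count the doublings separating a 1,000-neuron simulation from the roughly 100 billion neurons of a human brain:

import math

simulated_neurons = 1_000           # the basic-task-learning simulation mentioned above
human_neurons = 100_000_000_000     # roughly 100 billion
doubling_period_years = 1.5         # Moore's Law, as stated in the comment

doublings = math.log2(human_neurons / simulated_neurons)
print(f"doublings needed: {doublings:.1f}")
print(f"years at one doubling per 18 months: {doublings * doubling_period_years:.0f}")

Of course, raw neuron count says nothing about wiring the connections properly, which is the harder half of the question raised above.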

Benjamin B. Walker said

at 10:44 am on Sep 9, 2009

Searle’s thought experiment concerning the Chinese Room (p. 34) brings up what I think is the central issue for this class: the distinction between mimicry of intelligence and the actual thing is quite a large one, although subtle. Although the human agent possesses normal intelligence (so we assume of humans), she does not understand the symbols she is manipulating, only the rules for shuffling them, so in a very important sense we can say that she is not actually contributing intelligently. I compare it to using the formal system L. In PHIL 210 we manipulated countless incarnations of P, R, S, Q, all strung together by symbols that we agreed meant so-and-so. Although we rarely had an idea of what ‘P’ or ‘R’ stood for, we pushed them right through all of our operations, according to the rules of L. So, similar to the human in Searle’s room, we had no actual understanding of the context of our actions, but we fully understood the process of arriving at our conclusions, whatever they were. So now, since we have a real-world example of this situation, we must decide whether we were behaving intelligently in that class, or just behaving obediently. Is it intelligence if there is no understanding? And can we ever be sure that our electronic friends and perhaps future leaders don’t understand what they are doing now, this very minute?

Mike Prentice said

at 11:27 am on Sep 9, 2009

I would, like many other students, like to talk about the Chinese Room experiment. The ability to recognize patterns over an extended period of time is what produces intelligence, not the ability to look at a word in a language that you don’t speak and know immediately what that word means. If we continued to feed this agent the same patterns, as well as other information about the patterns, she would eventually learn what the patterns mean and learn how to interact with them by herself. This is in no way different from how a child learns his or her first language. The child is put out in the world not knowing what any of the symbols (words, letters, etc.) mean, but in time the child is able to connect the patterns and realize that “mom” means mom. The computer is like a child: if we continue to feed it patterns, it will eventually be able to recognize these and LEARN what different symbols and patterns mean. Just because a computer doesn’t think like we think (yet) doesn’t mean that it doesn’t have intelligence.

Rob Conary said

at 10:35 pm on Sep 9, 2009

@Mike

I would actually disagree. I would say that the human mind has a very intricate set of very basic logical programming that gives it this sort of learning capacity. That's the challenge, as I view it, in the creation of "artificially" intelligent computers. The goal doesn't seem to be to program a sophisticated machine endowed with the knowledge of the smartest man, but rather to replicate the very basic set of code that allows us to progress. It's sort of like the modeling we saw today in class. You can start from a very specific model where you program every detail of action, but that turns out to be useless. What we need is a sort of very basic, almost primitive level that starts with nothing and then adds onto itself up to, at least what we would consider, a very sophisticated and complex system that mimics the real world.

Jerrod Nelson said

at 10:54 am on Sep 11, 2009


I like the idea Damasio presented on page 177 that explains all subjective notions of good and bad, failure and success in the world as directly tied to “innate goals and biological systems of reward and punishment.” This theory suggests that the body works as a “marker system” that ties visual representations to physical reactions, thus creating memories. Is it possible that all of our memories, or furthermore the very notion of the self, is nothing more than a set system of biological “markers” aimed toward learned traits of successful behavior? Damasio would say yes, that the self is “a mental construct based on present and past body representations, autobiographical memory, semantic knowledge, and current perception.” But by this theory, is consciousness completely dependent on the senses, in that there can be no consciousness without an initial sensory experience?

Megan Holcombe said

at 11:36 am on Sep 11, 2009

I can’t help but relate this to a video I recently watched about the study of Radio-Hypnotic Intra-Cerebral Control (RHIC) for mind "programming," which relied on the idea that: “When a part of your brain receives a tiny electrical impulse from outside sources, such as vision, hearing, etc., an emotion is produced—anger at the sight of a gang of boys beating an old woman, for example. The same emotions of anger can be created by artificial radio signals sent to your brain by a controller. You could instantly feel the same white hot anger without any apparent reason.” Though how factual this is seems quite debatable to me, this possibility of controlling behaviors would fall in line with the idea of the brain as being nothing more than its physical processes.

Dcwalter said

at 7:55 pm on Sep 13, 2009

I tend to agree with the perspective demonstrated by Marvin Minsky. The range of actions we humans are capable of performing leads one to believe that our minds perform a huge amount of computation in any single moment. The example in the book of the person walking through a party, holding a drink, talking to friends, etc., demonstrates how widely the range of our minds' "sub-agencies" extends. While it is impressive to count aloud the processes that our mind must go through in order to carry out seemingly simple tasks, it is less rewarding to count the processes that are merely taken care of without conscious thought. It seems that the standard method on the path towards artificial intelligence, symbol-structure systems, will never be the proper way for machines to grasp what it means to be conscious. Another method must surely prove more effective.

Daniel_Cronin said

at 11:03 am on Sep 14, 2009

On the subject of the crickets/termites/whatever, I see the input-computation-output description of their actions exactly as I would, say, a robot's. If insects like these are merely following simple rules and matching input patterns to output actions, who is to say that there is any difference whatsoever between them and our artificial crickets and other mini robots? Clark mentions the distinction between concrete inputs (the number of windows, a beam of light) and non-concrete inputs (the wrinkled-shirt example). Where do we get our ability to process non-concrete observations from? Is it simply an extrapolation of our brain from the cricket's, in that we simply have more raw computing power? Or is there something altogether different and more abstract that the cricket lacks?

Matt Stobber said

at 11:48 am on Sep 14, 2009

I really find the discussion of the different kinds of intelligence fascinating, especially artificial intelligence vs. human intelligence. Searle’s Chinese Room experiment, I think, really points to this issue. Do you need to understand the process in order to produce the right functional output? Of course not, and that is what is so interesting. The human in the Chinese Room didn't understand the process but still produced "intelligent" output. I think that we don't consider machines intelligent because we understand their "thinking" process so well. Computers think in binary; they know only 1s and 0s. And since this is so simple, we believe it to be non-intelligent. Then we look at our own brains and consider ourselves intelligent because we barely understand the human brain at all. It reminds me of the video we watched about consciousness, when the man said that consciousness, if explained, is no longer "magical," so the explanation must be wrong.
I really think that we tie intelligence too closely to experience, and that's what leads us to believe that artificial intelligence isn't actual intelligence: we know the robot doesn't "experience" intelligence like we do.

Evan Brennan said

at 4:53 pm on Sep 14, 2009

Chapter 4 of Mindware discusses connectionism, which basically models the brain as a neural network composed of many simple processors interconnected by massive wiring. It then goes on to talk about NETtalk, an artificial neural network intended to turn written language into coding for speech. The discussion section of this chapter covers the leading objections to connectionism, and the most interesting of these was whether these neural network models could capture the biological reality of how the actual mind works. It explains that the experiments performed with the original models were tainted by the experimenters' choices of what was to be modeled, choices which were called very artificial. Also, early connectionist models used small numbers of units and connections to take on very specific problems.
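
As a minimal sketch of the sort of architecture being described (a toy three-layer feed-forward network with made-up weights, not the real NETtalk, which mapped windows of text onto phoneme codes), each "simple processor" just sums its weighted inputs and squashes the result:

import math

def sigmoid(x):
    # Squashing function: maps any input to a value between 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # Each unit sums its weighted inputs and squashes the total.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

# Hypothetical weights for a 3-input, 2-hidden-unit, 1-output network.
hidden_weights = [[0.5, -1.0, 0.25],
                  [1.5, 0.75, -0.5]]
output_weights = [[1.0, -2.0]]

inputs = [1.0, 0.0, 1.0]
hidden = layer(inputs, hidden_weights)
output = layer(hidden, output_weights)
print(output)

The biological-realism objection mentioned above is visible even here: what the inputs and outputs are taken to encode is entirely the modeler's choice.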

Jerrod Nelson said

at 8:50 am on Sep 18, 2009

In Mindware the idea of the mind as a meat machine is presented. This led me to think about what would happen if we replicated this machine and its hardware to create a workforce. Is the soul something separate from the mind? And if so, would our replicated hardware believe that it has a soul? And if such a being is in complete belief that it has a soul but in fact does not, is it wrong to force this being to do our bidding?

vincent merrone said

at 4:09 pm on Sep 19, 2009

The idea of one hemisphere of the brain controlling reasoning and the other controlling the story of that reasoning seems like a kind of hermeneutics. In hermeneutics we have the notion of understanding the parts in relation to the whole and the whole in relation to its parts. Is this not almost the same idea as the hemispheres of the brain? Well, yes. I cannot recall a time when I reasoned one way but told a story that did not correlate. It seems that there are certain checks and balances that prohibit such beliefs from occurring outside a vacuum. People concur with a story from a rationalization; I fit pieces from my particular story into the general whole of many stories, and the pieces fit. I understand how this becomes of interest in the lab, where eyes and ears can be covered to test one hemisphere against the other, but there seem to be too many checks and balances outside the lab to allow me to make up my own little fantasy. I'm not too sure, and I could be wrong, but are not opinions, perspectives, points of view, etc. kind of that story? If that is the case, then multiple stories can be thought of by multiple people, and how would we determine which one is correct based on the rationalizations at hand? Are Republicans just fantasizing pervs, or are liberals the ones fantasizing? Don't know.

Jennifer Holdier said

at 9:24 am on Sep 21, 2009

I find the notion that the brain is nature’s version of the computer very interesting. Clark explicates this in Box 1.5, on page 17. He defines basic computation as something whose inputs and outputs are able to be interpreted into something meaningful to us. An example of basic computation is the calculator. A further definition of computation is given by Chalmers, who says that computation is “a physical device implement[ing] an abstract formal description…just in case ‘the causal structure of the system mirrors the formal structure of the computation.’” Thus, the brain is a good example of a computer; it takes inputs and gives us meaningful outputs.

hdmartin@... said

at 9:45 am on Sep 21, 2009

The idea that the brain is like a computer is compelling; however, it seems that a computer can be more efficient. To explain: when a computer has software on it and data is entered, it automatically spits out a response and works the way it should (depending on which computer is used, some may be more efficient than others). The human brain can be less efficient in this way because people forget. A person is being stimulated through all the senses all the time and may be distracted or just not paying attention to details. That is, people can have data inputted into the brain but not remember it, for many reasons. A computer, on the other hand, has to deal with all the information that is poured into it and act accordingly. Computers have evolved from running slowly on simple commands to running instantly on a flow of commands. Along with this, a computer cannot choose what to do with the information; it has to follow the software. Humans, on the other hand, have free will (or we like to believe we have free will) to decide what to do with information. This brings up the fact that when we give other humans information, they may not act as we would like them to, and we cannot be 100 percent certain of what they will do with it. This makes a computer more efficient, because we know exactly what will happen when we input data. Thought of that way, computers and brains are nowhere near each other in information processing, yet at the same time, closer to each other than to anything else.

prakowski said

at 10:52 am on Sep 21, 2009

When Clark writes about "isolating the cognitive band" on p. 39, it is a good reminder to me that most of the forms of artificial intelligence we have heard or read about seem to do that. Just mimicking human cognitive activity is a difficult enough task, without taking into account the way the myriad other factors, like emotion and memory, come into play. So I think what we may view as inefficiency in our own computing (and in a way, it definitely is very inefficient) is the result of our machine being much more complex. It seems foolish to design a machine so complex and inefficient that it often gets in its own way, but maybe that's the true task of artificial intelligence. Although it is a difficult task, it doesn't seem impossible to me, immediately. With a proper understanding of all our machinery, we could be just as predictable as calculators. Clark then points out that outputs might be realizable through multiple paths, but I'm not sure this would actually make the task easier; it might just make it more complicated.

Mike Prentice said

at 11:43 am on Sep 21, 2009

In the movie that we watched during class on Friday, the human police officer asked his cyborg partner how she knew that something was going to happen, and her reply was “my ghost is telling me.” This is kind of interesting because it is more along the lines of what we humans would call an intuition. So what is this intuition/ghost? The best explanation I could come up with was that it’s what our brain/processor is computing for us subconsciously. I say this while assuming that there is a lot we do not know about our own brains, though I don’t know whether a computer could be similarly in the dark about its own processor if it wanted to know. We know what color we enjoy like a processor knows what equation it enjoys. What gives the processor information is the hard drive, so it seems as though this is where the intuition takes place: our memory of past experiences, and the hard drive’s data on things that have happened in the past.

Megan Holcombe said

at 11:23 pm on Sep 22, 2009

I must agree that I am most intrigued by the "Chinese Room" experiment in Chapter 2. If a machine could produce all the correct syntax, who is to say it is not understanding as a human does? We can only really know how we, our individual selves, understand. For all we know, our own understanding may be as foreign to another human as a machine's would be. The information we receive and observe from another person leads us to believe they are human based on intuition, which relies on the fact that their syntax is correct.

Evan Brennan said

at 1:40 pm on Sep 25, 2009

Bogdan Kostic's lecture on the three levels of information retrieval was very interesting. The test which asked people whether they remembered words based on the context the words were presented in reinforced the thought always replaying in my head about how we're all just animals (monkeys), just a bit more advanced. In a case of actual survival, however, many of us would unfortunately be doomed (not counting myself out), due to the high level of dependence we place on our cars and homes and comfy lives... But it was just a reminder to everyone, no matter how cool, or smart, or not cool, or not smart, or fat, or skinny, or ugly, or beautiful... We're all nothing more than somewhat more intelligent primates....

Matt Stobber said

at 1:51 pm on Sep 25, 2009

Bogdan Kostic's lecture on memory was very interesting, and somewhat humbling in a way. I think ignorance is what causes us to believe that we are much more special and intelligent than every other living thing. I also think ignorance causes us to create intricate constructs like "consciousness," etc., to enforce the idea that we are unique and special, when in reality the answer may be that we are nothing more than slightly more advanced primates. As science advances, I believe these illusions will start to fade, just as old superstitions like magic faded with the advent of science.

Jennifer Holdier said

at 11:33 pm on Sep 25, 2009

Chapter three is all about folk psychology, and whether it represents our inner states well. Folk psychology is also called commonsense psychology, and it seeks to explain our behavior. Clark presents three views: Fodor's, Churchland's, and Dennett's. Fodor is of the mindset that folk psychology does represent our inner states well. Churchland, on the other hand, thinks that folk psychology is shallow and meaningless. He criticizes it in three ways: it does not work in every domain, like sleep and creativity; it has questionable beginnings; and it does not fit in with hard science. Dennett views folk psychology as having nothing to do with how our inner states work; it simply acts as a good framework. It works because we act in predictable ways. Dennett seems to have the most coherent view of the three, at least as Clark presents it. I also think that Churchland makes a good point when he questions folk psychology because it has yet to change or evolve, whereas other sciences have become better with time.

vincent merrone said

at 11:42 am on Sep 27, 2009

The Chinese Room thought experiment does seem like an accurate refutation of the Turing Test. The reason for this lies in the possibility that the entire room that is putting together the Chinese messages does not understand what it all means as a whole. Yes, the room may be putting together grammatically meaningful sentences, but they lack comprehension. I may be taking a Turing-style test, and the machine I am interacting with may be responding accordingly, and I may think it is human, but the machine does not comprehend or understand anything outside the schematic of input/output formulation. It reminds me of that A+ student who simply sees the question on a test, digs into their memory, and records what was stored there. However, if you were to dialogue with this person, they would barely be able to explain what the content of the subject was. I know plenty of people like this; they know the input/output of a subject but do not understand the content. The Chinese Room thought experiment can be described the same way: a machine that does input/output with the Chinese calligraphy but has no notion of the content that may be contained in those sentences. I know Chomsky is not a proponent of the Turing machine or finite phrase structure, and with this I agree. Could the Chinese Room described in the readings identify the surface structure of its sentences? Yes, but I do not think it can identify the deep structure of a sentence and what it means, because there is a lack of content understanding on the Chinese Room's part. Please pardon any spelling or grammatical errors; I'm typing from a mobile device.

Caleb Schmidt said

at 6:54 pm on Sep 27, 2009

Traditional logic rests heavily on the foundational axioms of cause and effect, and in that way can be seen to describe human behavior as nothing more than a cyclical pattern of interpreting the effects of the world and, in turn, co-creating causes that follow the circumference, creating further effects. On another level is the concept of need, for I believe “need” is all-powerful when looking at the behavior of life. In this case “need” is used in the broadest terms possible: not only the needs of humans per se, but the instinctual drive toward fulfillment in other systems (biological and artificial alike). The only difference in artificial systems is the preprogrammed loops and evolutionary algorithms for advancement. Biological matter has these same mechanisms, but evolution, instinct, and social learning are the major contributors, as opposed to artificial programming. Artificial systems aim to mirror biological systems in terms of system complexity. Agents (which are sums of complex systems) interpret effects, and through their individuality (loops, algorithms, instinct, programming, etc.) and needs, co-create causes, which have effects of their own, completing the circle or spiral (depending on how time is visualized). I believe these simple axioms can be used to analyze all intelligent behavior, and patterns that arise in individual and even collective behavior can be simplified by looking at the complex web of cause and effect, needs, and individuality.

Caleb Schmidt said

at 6:54 pm on Sep 27, 2009

Within this complexity, life (and on a lower level, intelligence) is born, because of the connections among the parts of a system. To sort through the connections of intelligent life (biological and artificial), it is easiest to use a computer and its interactions as a simplification, for the human brain is a much more complex entity. A computer has many levels of interaction; it runs the gamut from the machine level clear up to the user interface, all the while creating levels of organization in order to function properly. These levels then interact with their peers and often jump from level to level in order to perform the task at hand. An example would be a human working at an abstract, higher-consciousness capacity and needing to revert to a lower level (more animalistic, instinctual) to perform a task.
Thinking through complexity is a difficult task, and reverse-engineering it is even harder, because there are simply too many variables to equate. But the more levels we build and understand, and the more information we gather, the more this arduous task becomes, paradoxically, simpler and unimaginably more complex at the same time.



Dcwalter said

at 8:46 pm on Sep 27, 2009

Towards the end of this chapter we are introduced to a very weak notion of “smart agent.” Almost anything one could imagine could fit into this category with enough argument. But the loose definition is that smart agents respond to the world in non-random ways that coincide with some hard-wired goals. I am not sure why this mode of being is important in Dennett’s discussion, but he quickly moves on to the quote about how human-brain-like thinking may have had to wait until human-language-like talking emerged. That is to say, the way our brain interacts with itself now is very different from how it did before the creation of language. It seems to me that my mind mostly works in a way that I can understand as fairly coherent sentences, with structure. But at other times it is vibrant imagery that cannot even be grasped by any language I am familiar with. Surely, part of our mind was available to us before the creation of language, but I would have no real idea as to what form that took, or is taking currently.

hdmartin@... said

at 10:15 pm on Sep 27, 2009

The Chinese Room thought experiment is my favorite example of what a Turing machine reflects: the idea that input and output can happen without any form of comprehension. This happens not only with computer systems, but with people as well. One good example is math. When I was a math major I received the input and knew what the output should be; however, I had no idea exactly how certain equations worked, only what they should produce. Mathematics is not the only area where people do not comprehend the main idea or concept of a system. I mean, there are words, ideas, sentences, and subjects that people use daily, yet they have no idea what they truly are or mean.

Daniel_Cronin said

at 9:43 am on Sep 28, 2009

In Dennett's paper, he gives a remarkably concise view of his own opinions. It turns out I share a lot of them. The question, at least as I see it, is whether there is something intrinsically "human" about us, or whether we are simply a super-complex system responding by purely physical means. Dennett (and I) hold the latter idea to be true. In part of his paper he talks about "perfect imitators" and how they are indistinguishable from the real thing. If that were the case for A.I. and robotics, would that mean the only "perfect imitator" is one that is indistinguishable from humans, in other words an exact copy of a human? If this were the case, artificial intelligence would be impossible, because once we achieved it, it would be what we consider to be "actual intelligence".

John Dunn said

at 9:57 am on Sep 28, 2009

As we go about our daily routine, our left brain comes up with stories, sometimes true, about why we do what we do. This seems like a translator on the other side of the Chinese Room. If symbol systems accurately describe the processes of an intelligent mind, then we truly don’t know exactly what is going on, just a meaningless stream of data, which we later try to explain as a rational story.
But why do we insist on making human thought processes the pinnacle of intelligent thought? Sure, it’s the best example out there of rational, scientific, reasoning beings. But like undergrads reading Nietzsche, we don’t get to assume that we’re the smartest beings out there. We may be just an evolutionary stepping stone on the way to a more intelligent world.

prakowski said

at 11:45 am on Sep 28, 2009

Clark's presentation of the "triangle" of debate regarding commonsense psychology was helpful for me. At the base, on opposite sides, are Fodor and Churchland. Fodor says that we should be able to identify inner states that closely match the reasonings of our everyday language; Churchland says this is unrealistic. At the top is Dennett, who agrees with Churchland that our inner states are probably very different, but maintains that folk explanations are nonetheless valuable. A strong argument he gives is the pattern recognition argument on p. 53. The formal rules of the underlying system will give you that flasher--that configuration fluctuating back and forth--but they won't deliver the important recognition that it will go on forever unless something outside interferes. I guess a question might be: how important is that, really? Another question could be: couldn't we make our formal rules more sophisticated (without translating them into a folk explanation), so that the system recognizes this pattern?
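
If the p. 53 flasher is the "blinker" from Conway's Game of Life, the usual source of Dennett's pattern examples (an assumption on my part), the contrast is easy to see in code: the formal rules only ever compute the next configuration cell by cell, while "this will flash forever unless disturbed" is a pattern-level observation the rules never state.

from collections import Counter

def life_step(cells):
    # One update of Conway's Game of Life; cells is a set of (row, col) live cells.
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in cells
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in cells)}

blinker = {(1, 0), (1, 1), (1, 2)}   # three live cells in a row
state = blinker
for generation in range(4):
    print(sorted(state))             # alternates between a horizontal and a vertical bar
    state = life_step(state)         # the rules compute only the next step; they never
                                     # assert that the oscillation goes on forever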

Erin said

at 1:30 pm on Sep 28, 2009

The discussion of the Chinese room led me to think a lot about what we mean when we question 'intelligence'. It seems that the term itself needs more definition, at least for me to think about the scenario. There does seem to be some sort of intelligence residing in the room, not only in the seemingly obvious sense of the output processes matching appropriately with the input processes, but rather in the book itself, the device which has been constructed to make the internal processes happen. No, the book does not understand Chinese and therefore does not possess an understanding of language as we think of it; however, this book must have been constructed by someone or something which does have this understanding (otherwise how could the instructions coincidentally provide correct results?). We might say that in this thought experiment it doesn't matter where the book came from; all that matters is what is taking place in the room and whether or not the ROOM is exhibiting intelligence. However, I think it's an important point that the operations inside the room must have originated from some source of ultimate understanding (an understanding of the language), and this alone leads us to believe that the room is intelligent when really it is not. It seems that anything which possesses intelligence must have been constructed by something with understanding, and when we lose sight of the understanding somewhere along the line, what is it we're really losing?

Benjamin B. Walker said

at 9:37 pm on Sep 28, 2009

The comment in class today about the internet gaining consciousness stopped me flat. Then I thought: there are already systems of millions of parts transferring information right now, and they have been around for many years. Telephone poles: do you think AT&T is conscious yet?

Matt Stobber said

at 9:50 pm on Sep 29, 2009

I find Fodor's representational theory of the mind to be very simple and clean, but it is hard for me to conclude that RTM is correct just because of its simplicity. While it is true that it explains the success of folk psychology, this doesn't prove its validity. It is possible that it is mere coincidence that RTM seems to be explanatorily useful. That is why I tend to side with Churchland's view more: until we can truly see the underlying mechanics of the mind and understand them on a deeper level, we will never be able to conclude with certainty how the mind actually works. I also side with Churchland because there is no explanation of how sleep, creativity, and mental illness work from the perspective of folk psychology.

Jennifer Holdier said

at 8:26 am on Oct 2, 2009

Box 4.1 and 4.2 are very interesting. Box 4.1 talks about NETtalk and Box 4.2 talks about how something like NETtalk learns. The author compares it to trying to find the bottom of a large basin. If we were blindfolded inside the basin, we would take small steps and judge whether we went up or down. If we get to the place where taking a step in any direction has us going up, then we have found the bottom. NETtalk works in a similar fashion. It works step by step, and layer by layer, to come up with an output with the lowest error. It would be interesting to learn more about how NETtalk works. The illustration in the book is nice, but it does not help me picture what it actually looks like.
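The blindfolded-in-a-basin picture can be written out directly. This is just generic gradient descent on a made-up one-dimensional error surface, my own sketch rather than NETtalk itself (which adjusts a great many connection weights), but the logic of "take a small step, keep whichever direction goes downhill" is the same.

# Gradient descent as "small steps downhill in a basin."
# error(w) is a stand-in for a network's output error on one knob w.

def error(w):
    return (w - 3.0) ** 2 + 1.0        # a bowl whose bottom sits at w = 3

def slope(w, eps=1e-5):
    # estimate which way is downhill by probing a tiny step either side
    return (error(w + eps) - error(w - eps)) / (2 * eps)

w = 0.0                                 # start somewhere up the basin wall
for _ in range(100):
    w -= 0.1 * slope(w)                 # take a small step in the downhill direction

print(round(w, 3), round(error(w), 3))  # close to the bottom: w ≈ 3, error ≈ 1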

Josh Egner said

at 11:42 am on Oct 2, 2009

I find the connectionist views very interesting. They seem to go beyond analysis of the cognitive band and attempt to show how this cognitive experience is produced through the interaction of simple units in a very complex network. I also appreciate how their systems bear some relation to how our neural systems have been found to work, such as excitation or inhibition of neighboring units, or the completion and recognition of partially presented stimuli. Both of these have been shown to be in effect in our neural processing by psychological studies. I would not go so far as to claim that such supporting evidence is proof that their approach is right, but it sure helps in building its case.

Upon reflection on the connectionist scheme I began to wonder how much this really tells us about consciousness. Sure, it offers a way in which we may be able to perceive the world and think about it, but it doesn't really say how this is brought to consciousness. In its explanation it takes for granted that the collective activation of these different units is synthesized into a conscious experience, but how is that synthesis achieved? What part of us combines and interprets all of these activated units? These are questions that may not be so easy to answer (as if the other ones are!).
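For the pattern-completion point mentioned above, here is a tiny sketch in the style of a Hopfield associative memory, my own toy example and not a model from the book: simple units that excite and inhibit one another settle back into a stored pattern even when the cue is damaged.

# Pattern completion from a partial cue: units that excite or inhibit each
# other settle back into the stored pattern. (A toy illustration only.)
import numpy as np

stored = np.array([1, -1, 1, -1, 1, -1, 1, -1])     # the "memory"
W = np.outer(stored, stored).astype(float)           # Hebbian-style weights
np.fill_diagonal(W, 0)

cue = stored.copy()
cue[[0, 3]] *= -1                                     # damage two of the units

state = cue.copy()
for _ in range(5):                                    # let the units settle
    state = np.sign(W @ state)

print(np.array_equal(state, stored))                  # True: pattern completed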

Megan Holcombe said

at 11:46 am on Oct 2, 2009

In reading about NETtalk, it does seem impractical in connectionism to go about finding a correct solution by a trial-and-error method, or gradient descent learning, but there is a way to imagine that humans do this also. While a computer can go through the computations quickly and have a design in moments, we have done it over the course of millions of years. How did we discover what foods were good to eat, for example? We only know now because for thousands of generations early humans and their ancestors tried plants out and discovered those that were poisonous and/or deadly. It is not something that was already in our system but rather an input given to us by society/evolution. It makes me imagine that the rate of growth a system of computers could achieve, versus that of the human race, could turn out to be remarkable.

prakowski said

at 11:48 am on Oct 2, 2009

Chapter 4 talks about distributed superpositional storage schemes, which as artificial intelligence seem far superior to the physical symbol systems we read about in chapter 2. They are able to generalize and complete patterns, which makes them less susceptible to damage (this reminded me of what we talked about in class Monday, how in some systems damaging one part will make the whole system useless). From the biological perspective, it still has far to go, as Clark points out some major problems, even for the more advanced 2nd and 3rd generation schemes. Still, generalizing, completing patterns, and learning are definitely steps in the right direction.

vincent merrone said

at 3:25 pm on Oct 4, 2009

vincent merrone said

at 3:43 pm on Oct 4, 2009

Sorry about the blank post; I was typing on a mobile device and clicked the wrong button. On the TED talk we viewed in class, I want to ask what psychological state classifies as being happy, and whether I would want to be in such a state long term. Let us ask: does the psych-state of X (as described in the TED talk) count as the most desired and wanted psych-state, or even the optimal one? Say a composer creates her best work when in a solemn psych-state, and the composer takes much pride in her work; but when she is "TED-talk happy" she creates average work she is not proud of, yet she is happy. So which state should she choose? It is a pain to type on my mobile, but you can see the dilemma; it is a shadow of a sort of argument against utilitarianism. Another important point is that psych-states are not static; they are changing (can computational models account for this?). Once again, pardon my typos or grammatical errors; I'm typing on a mobile device. P.S. Giants 4-0

hdmartin@... said

at 9:16 am on Oct 5, 2009

I also want to talk about the TED talk we watched in class. The discussion we had about taking a pill to always be happy doesn't seem far-fetched for people to try, and I'm not just talking about people who have depression. In our culture there are pills for just about anything. Although these pills do not work all the time, there are many people who turn to pills to fix some part of their lives. For example, there are already depression pills (which is just an easy example) that try to balance out the moods of the people who turn to them. However, these pills have consequences and side effects (that seem to add more problems to the initial problem), and a lot of the time they do not work. If there were a pill that actually worked, I'm sure a lot of people would jump on board, even knowing they might become dependent on it. The idea that always being in a state of happiness would cause that state of happiness to not be as good (we would become used to it and it would come to seem like a normal state) seems like a cop-out argument against it. For people who are truly depressed, this might be the only solution available, and the consequences do not seem as bad. That is, for the happiest feeling a person has to turn into their normal state is better than remaining in the low state which they currently experience. The idea of never being in that low state again, and having a cap on how low they can go, may sound like paradise. I know one can argue that this "happiest" state can then become the lowest state the person experiences, but maybe just the memory of how the person used to feel will be a reminder of how happy they are.

Mike Prentice said

at 10:43 am on Oct 5, 2009

So when I actually started studying what intelligence is, I originally thought that it was the recognition of patterns. The problem with this is, as we have seen in class, that computers can do this much better than humans can. So then the question became: what separates us from the computers? The answer is what enabled us to think of and build these computers in the first place, our creativity. So I am standing by the claim that a necessity of intelligence is the recognition of patterns, but there are other factors that play into it as well, such as creativity. Beyond that, I think another important but non-necessary factor would be the ability to communicate and share these creative ideas and to understand other people's creative ideas. I'm certain there must be more, but I figured this would be a good starting point.

Erin said

at 12:14 pm on Oct 8, 2009

I was interested by chapter 3 in Mindware when it discussed the argument between Fodor and Churchland regarding commonsense (or folk) psychology. I find myself continually torn over the notion of folk psychology: on one side it seems only logical to me that all mental behaviors could be reduced to a set of predictable rules, and yet a part of me wants to fight against this notion. Why? To preserve some sort of personal freedom? Churchland claims that folk psychology cannot account for certain functions of the mind such as creativity, mental illness, and infant and animal thought. While this premise makes a lot of sense to me, I can’t help but feel that there is no reason why these phenomena couldn’t potentially be reduced to some sort of very complex functioning code that we simply aren’t able to tap into. Another of Churchland’s objections to folk psychology is that “there is no sign as yet of any systematic translation of the folk talk to hard neuroscience of physics.” This raises a warning sign for me; just because humans have not yet been able to find such a translation is not sufficient to discredit the possibility.

Matt Stobber said

at 12:25 pm on Oct 8, 2009

The video we watched about the neuroscientist was fascinating to me. I really do believe that we need to allow empirical findings to guide us in our thinking. The problem of understanding the brain, and then from that the mind, is akin to trying to understand how a computer works without any schematics, using only an MRI. The problem seems almost impossible, and our current direction seems almost feeble. All we can do is map functionality to parts of the brain. It feels like this approach will never truly get us to where we want to go. It's like we are just waiting for someone to come up with an ingenious solution to this problem. But is such a solution possible with our current technology? It will be amazing if we can see the solution to this seemingly impossible problem sometime in our lifetime.

vincent merrone said

at 1:00 pm on Oct 11, 2009

Once again, I'm still typing from a mobile, so please pardon my spelling and grammatical errors. The TED talk involving the phantom arm and phantom pain is pretty wild (not just because it was on House) but because it opens up even more questions about the nature of the brain, the mind, and the relation between the two. To speak quite colloquially, it seems like the brain is doing its thing completely independent of the mind. Yes, the brain does work in involuntary ways, but not usually in such an impressive way as in the TED talk. The brain is inducing discomfort and stress in the mind via a body part that does not exist. The brain does this when I stub my toe or fall off my bike, but the phantom limb situation would be like feeling the pain of falling off my bike when there is no bike. I get this odd feeling of an even greater separation of mind and brain, and yet they are interrelated--so how do we account for that? Also, for whoever knows the sense-datum argument, think about that in conjunction with the phantom limb talk. Do this and you get a feel for the brain being in a vat where all appearances, sounds, etc. are just epistemically in your head/brain, being sorted out by the mind. Imagine a TV that could think, but whose subjective self no one else could hear or see. OK, that example sucked, but you get the point. It's all in the head (yes, that is an attack on you, Putnam).

Caleb Schmidt said

at 3:54 pm on Oct 11, 2009

The old saying that a picture is worth a thousand words kept cyclically returning to my mind as I watched Ghost in the Shell. That is not to say I find the written word any less appealing and grand; it is merely a comment on a vivid vision detailing what may soon come to pass. It frames the world in a way that only a few have been successful in grasping. Such a story jumpstarts your mind, leaving questions that simultaneously act as cause and effect. Questions like these begin to flood my world like a hurricane: tantalizing questions, questions that before this point never existed for me, and past observations from my time on this planet begin to untangle in their complexities like speaker wire. One topic I find most fascinating is singularity theory. Singularity theory (in mathematics) deals with the study of points and sets, and how they unfold and behave over time. When applied to the context of artificial and human intelligence, there is a theory that projects that at some point in the near future human intelligence will be surpassed by artificial intelligence, creating a new being more intelligent than humans can comprehend. In turn, this point will allow machines to create new forms of intelligence by their own design that surpass (by a huge margin) the constructs and ideas of human engineering. In early October of this year, a conference on this subject was held in New York City. This conference was called the Singularity Summit and brought together in one place many of the brightest minds on this subject, and so the dialogue continues. The way I interpret the singularity is as follows: at this point in time (i.e., the singularity) machines will have the power and knowledge to create machines superior to themselves, laying the spark for a cycle of extreme intelligence; and portrayals such as Ghost in the Shell are not only interpretations but forecasts of things to come.

hdmartin@... said

at 10:16 am on Oct 12, 2009

Once again, I will talk about the TED talk, because I find them entertaining and amazing. The idea that a phantom arm can be cured with a mirror suggests that a person's conscious physical sense of self goes beyond the actual physical self and has a lot to do with the mental image of how a person views himself or herself. In other words, a person does not view himself or herself just as the physical "shell" of a body; there is a sense of self-awareness of what one believes oneself to be. For example, in my self-awareness I believe myself to have two arms. This is also true for most people, even those who have had an arm cut off. Having the physical arm cut off cuts it from the physical body, but does not cut it off from the sense of self, which is a part of the "existence" of a phantom limb. Therefore people are more than just a physical body, and the mind encompasses more than just abstract entities.

Daniel_Cronin said

at 10:46 am on Oct 12, 2009

I agree with what Matt said above me. With our current technology, there really is no way to map the brain. And even if we could, we don't know enough about what is going on to make much sense of it. Taking an MRI of the brain and seeing that part of it lights up given some input is akin to saying the hard drive in a computer makes noise when you open a file. All we can do today is describe what we are seeing and make interpretations from that. I'm a computer science major and I happen to be taking a class in A.I. One of the topics we discuss is neural networks. They were also mentioned in Mindware a bit. I have heard it said that if we had enough computing power we could mimic the human brain in a computer. While this may be true, it misses the point that we would also need a complete understanding of the brain to mimic it. Unfortunately, we are not close to either of these goals. Our fastest computers get bogged down with even the simplest and smallest brain and neuron simulations. And these simulations are, for the most part, poor, simplified representations of the real thing. I feel like both humanity and technology have a long way to go before we are able to comprehend ourselves in that scope.

Mike Prentice said

at 11:58 am on Oct 12, 2009

Yo man, so here’s the deal: if we are aging mentally but not physically, can we call that aging at all? Is this one of those cases that we learned about in logic called the "if and only if," where we are not aging unless we are doing so both mentally and physically? In the book that we just read there seem to be two ways to speed up mental aging with little or no sacrifice of physical aging: through the use of E-therapy, in which the stimulation of some gland forces evolution within minutes instead of centuries, or through the use of the drug Chew-Z, where your mind evolves in a world created by you for as long as you like while your body is trapped in “reality,” not moving forward through time. So if I gave a baby Chew-Z, the baby could theoretically have the intellectual capacity of a 50-year-old within a second while still maintaining the body of a baby. WTF mate? In any case, would this be a proper way to feed a society's drive for intelligence? I think it would, but under the strict regulation of the government, which would of course be abused. So maybe it shouldn’t; maybe it should just be available to me, leaving the rest of you to scurry about with the lack of intelligence that is found in a 21st century human.

Benjamin B. Walker said

at 10:42 am on Oct 13, 2009

Good point Mike. But would the baby develop normally without the influence of others? Or would it speak some weird language, have odd thought processes, or what? (throws hands down, palms facing the ground, and shrugs shoulders). I think there is a question of nature and nurture in that question. Furthermore, I do not think it is aging to learn. Aging, in my opinion, is a purely physical phenomenon. Old age doesn't ruin brainpower in itself; it is diseases of the brain material that interfere with thought in old age. Alzheimer's, dementia, senility, all of these things seem to me to be fleshly, not degeneration of the mind as it is separate from the brain. I think Chew-Z is quite awesome. For now.

vincent merrone said

at 5:24 pm on Oct 18, 2009

Accommodation is the idea that one takes in information and molds it to his or her personal self. Now, with a neural network we can see that it learns after many trials by strengthening the "neural" connections (as percentages). We can also assume that if a preworked neural network were established, it would be able to accommodate new knowledge. But is the artificial neural network's accommodation the same as human accommodation? Animals accommodate, and their accommodation differs from that of humans. Accommodation also differs between individuals. So, how does one draw the line? Or must we concede that accommodation has a different meaning per system? That human actions do not paste perfectly onto another system? Could we not have an avenue where neural network accommodation is defined as Z and human accommodation is defined as G? They may have the same underlying foundation but differ on the surface. Also, how do we incorporate the postmodern idea of agency into A.I.? Dunno, just talk.

Jerrod Nelson said

at 10:43 am on Oct 23, 2009

To think of humans as a complex neural network like that of the fish that are constantly becoming more efficient eaters, we must first discover what it is that humans are becoming better at. In the example of the fish, the top half of the population survived to create the next generation, whereas in humans, though the amount of resources stays the same or even lessens, the population continues to grow. What we seem to become consistently better at is sustained life, an increase in numbers, and an increase in consumption, which seems to be a self-defeating neural network.

Jennifer Holdier said

at 9:37 am on Oct 24, 2009

I think that perceptual adaptation is very interesting. Clark cites studies where participants wear lenses that flip the world upside-down, and after they get used to the lenses, the world appears to be normal again. This is wild. The findings of the studies seem to say that, if we can get used to the world being upside-down, we can get used to any changes whatsoever. However, he goes on to describe other studies where participants wore lenses that shifted the scene off-center, and the participants had to throw darts or throw a baseball at a target. The only hand that adapted to the perceptual shift was the dominant hand, and that only in over-arm throwing. The non-dominant hand did not adapt to the shift, and neither did the dominant hand adapt while under-arm throwing. What accounts for this? If we are right-handed, does a different part of our brain develop than if we were left-handed? And why does over-arm throwing adapt while under-arm throwing does not?

Jennifer Holdier said

at 6:32 pm on Oct 24, 2009

At the beginning of chapter six, Clark describes how crickets hear and how termites nest. Crickets do not have ears like we do; they have two ears, one on the left foreleg and one on the right foreleg. The ears are joined internally to two openings, called spiracles, on the top of its body. So, sound reaches it directly and indirectly. Clark says that the robot cricket emulates the real cricket well. The robot cricket does not have the complex internal system of a real cricket, but it is able to recognize the sound of its own species and move toward it. When termites are constructing their nests, they roll up mud balls and insert their scent into it. They deposit the balls wherever their scent is strongest, thus making a nest. Clark says that robot termites are also capable of doing the same thing. Clark concludes that complex systems found in real animals are not needed to simulate the same activity in robots. This is interesting. Does this mean that the robot cricket or termite is better than the real cricket or termite, because it is simpler?

Jennifer Holdier said

at 7:33 pm on Oct 24, 2009

I think that the most interesting part of chapter seven is where Clark talks about learning to walk. He says that from birth to two months of age, babies make stepping motions if held in the air. However, from two to eight months, that phenomenon goes away, unless the baby is being held in warm water or on a treadmill. The phenomenon reappears at eight to ten months of age, and the baby starts being able to walk on its own at around one year. One explanation for the disappearance of the phenomenon may be that the leg weighs too much for the baby to pick up. An explanation for the treadmill initiating stepping is that the stretched-out leg acts as a spring, and the treadmill helps it to recoil. This suggests that the right mix of factors has to be present for a baby to learn to walk.

Erin said

at 12:37 pm on Oct 25, 2009

In chapter 4 a biological criticism of first-wave connectionism is discussed, and I thought it was a good point that the input and output representations of the experiments are very reliant on what people already know and expect to happen. It could be argued that this is a problem not just with AI but with the scientific field in general; every experiment is subject to the individual state of mind observing it and the scientific paradigm of the time. However, I think that this issue is especially problematic when attempting to prove that an artificial process is learning in the same way that we do. When setting up input data for the process to work through, we are (inevitably) going to draw from data which we have already had to work through, proof to ourselves that we are intelligent entities. We will then compare the output data to what our natural reactions have been. The problem is that this overlooks the long biological process which has led to our current state of intelligence, not just in our individual lives but in the life of our species. It seems naïve to think that the artificial processes will be able to acquire a comparable intelligence to ours simply by working through pre-conceived problems.

Rob Conary said

at 1:42 pm on Oct 25, 2009

Alright, chapter 6. I really appreciated Clark's treatment of the apparently different kinds of problems and the different methods of solving them. It seems to be an interesting feature of biology, as he presents it, that it employs these very computationally different systems across the spectrum of life depending on, it seems to me, the amount of information naturally present in the situation. So while it would make sense for something like a termite to take advantage of a lot of external, world-present information, that wouldn't rule out the possibility that humans need a completely different system to solve problems where we don't have such an abundance of information at hand. The proposal that rationality is a kind of coping mechanism for a real lack of physical data is an interesting possibility to me that seems very likely.

Dcwalter said

at 7:59 pm on Oct 25, 2009

Chapter 5 begins by giving the reader a fairly decent strategy for approaching the notion of any sort of artificial intelligence. The model given begins with an analysis of the task, then moves to naming the potential variables associated with the task and listing the mechanical steps for doing it, and finally proceeds to actually carrying out the given task. This seems fairly detached and abstract from the way our brain seems to work. As Carl Sagan said, "the brain has its own language for testing the structure and consistency of the world." I am not convinced that this sort of task management even comes close to spanning the gap between computational intelligence and whatever type of intelligence is associated with us humans.

Erin said

at 9:14 pm on Oct 25, 2009

In chapter 5 an “interactive vision” is discussed which discredits the simple “sense-think-act” process thought to be sufficient for visual interaction with the world. I particularly liked the second and third claims of the interactive vision account; that motor routines can be called upon to make better sense of visual input, and that real-world actions play an important role in the computational process. For instance, when I see, say, a stage with a curtain drawn across it, I can comprehend what’s in front of me based on my past experiences. A process which merely takes in visual input would register only the 3D data; that of a raised floor and what seems to be a wall of fabric. Given my complex history of experience, I know not only what the raised floor is for but also how the wall of fabric (the curtain) is constructed and what is likely behind it. It could be argued that these are mere guesses on my part fueled by likelihood, but these guesses give me a leg up on a simple processor, because in most instances they will pan out as correct. To give a robot the same understanding of visual input as us, it must be able to reasonably draw from its past experiences.


hdmartin@... said

at 9:37 am on Oct 26, 2009

Chapter 5 begins by focusing on the history and idea of artificial intelligence. David Marr, the main figure in the book's history of artificial intelligence, explains in three tasks how receiving and understanding input/information in the brain works. The first task is a “general analysis”: information is transferred (in a two-dimensional form) into the brain, where it is altered into a three-dimensional form. Task one is the key stage for input. Next, task two is where the information is “represented”: what task will be performed and the way the system should go about performing it. In task three, the information/input and the task it should perform are understood, at some level. The next step toward making artificial intelligence is to find a way to build a machine that would be able to work through these three tasks on its own. The book then brings up the importance of the biological brain. Back in the 1980s the biological brain was hardly focused on, because everything seemed to focus on “the computational and information-processing stages of the brain”. Nowadays, however, it seems that the physical brain plays a huge part in the equation. “Biological brains are the product of biological evolution”. The way in which the brain has evolved biologically holds great importance, especially when trying to create artificial life. To further explain: it is easier to start where evolution left us than to start from nothing. In order to create an artificial brain, it is important to understand how the real brain works. Clark continues by describing other ways in which input works, such as the “sense-think-act” cycle. He then leaves the reader with the feeling that there is still no clear way to see how input works in the biological sense combined with cognitive organization.

prakowski said

at 10:14 am on Oct 26, 2009

If there's one sentence that best sums up the implications of the content of chapter 5, I think it's, "The brain is revealed not as primarily an engine of reason or quiet deliberation but as an organ of environmentally situated control." (p. 95) For philosophers, the ever-present temptation is to think that we can figure it all out with reason or quiet deliberation, but reason is just a small, necessary but not sufficient part of who we are. On another note, I don't feel the question posed on p. 100, "How might large scale coherent behavior arise from the operation of such an internally fragmented system?" is as menacing as Clark makes it out to be. I understand the motivation for the question, but besides the promising answers Clark gives, I think it's also worth considering an existentialist approach that denies that our actions are really coherent in the sense we want to believe them to be.

Daniel_Cronin said

at 10:18 am on Oct 26, 2009

David Marr's idea of a three-level computational system seems to me far too abstract to be a valid representation, even in a general sense, of how biological brains work. The idea that kept coming to me while reading chapter 5 was that of neural networks. We went over them a little during lecture, but the basic premise is that there are nodes, and edges connecting them. Each edge has a weight, and these weights are modified until the network produces the correct result. The startling thing is that even for simple networks, these final weights can seem random. Our brain seems to work in a similar way. Computers are very deterministic, in that even at the very small scale, each transistor, each circuit, has a well-defined purpose. With a neural network, whether artificial or biological, we have no understanding of the connections between nodes. We can't look at a network and say "this node computes this value"; we just know what the final result is. In the same sense, we can't look at our brain and know exactly which neurons control which functions. Clark talks about this on page 96, saying that how the circuits work is "mere implementation detail". Unfortunately, if we wanted to create an artificial representation, much of this implementation would have to be known. And again, the final resulting network would, to us, look no different from a bunch of random weights that happen to do the right thing.
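A concrete illustration of the "random-looking weights that happen to do the right thing" point, a hand-wired toy of my own rather than anything from Clark: a two-layer network that computes XOR. Nothing about the individual numbers announces what the network is for; the function only shows up in the overall behavior.

# A tiny two-layer network that computes XOR. Inspecting the individual
# weights and thresholds tells you almost nothing about what the network
# "does"; the function is visible only in the input-output behavior.

def step(x):
    return 1 if x > 0 else 0

def network(x1, x2):
    h1 = step(x1 + x2 - 0.5)          # hidden unit 1: "at least one input on"
    h2 = step(x1 + x2 - 1.5)          # hidden unit 2: "both inputs on"
    return step(h1 - h2 - 0.5)        # output: h1 but not h2  ->  XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, network(a, b))    # prints the XOR truth table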

prakowski said

at 10:30 am on Oct 26, 2009

Chapter 6 is presented as an exciting alternative take on understanding our minds by looking at some interesting advances in robotics that depart from any representationalist picture. As Dennett posed the question, "Why Not the Whole Iguana?", meaning why stress isolated aspects of advanced cognition when taking a broader view might get us answers to those higher-level questions we sought, plus a whole lot more? The robot examples seem to be making a lot of progress, and Clark talks about "a complex interaction among brain, body, and world, with no single component bearing the brunt of the problem-solving burden" (p. 106). But in the sobering discussion, Clark brings us back down to earth by pointing out that these advances in robotics are nowhere near the advanced cognition we originally departed from. It was a good idea to depart from it and see where we could get, but it is obviously still important to us, and so if we haven't moved back in that direction the departure into robotics hasn't yielded what we wanted. Clark does a good job getting us (or me) excited about what robotics has taught us about nature, but then reminds us of the reality that we are still far from the kind of artificial intelligence we originally wanted.

prakowski said

at 10:57 am on Oct 26, 2009

The most interesting claim in chapter 7 is the Radical Embodied Cognition Thesis, which rejects computational and representational explanations of cognition. Thelen and Smith, some of the strongest supporters of this radical theory, insist that dynamic-systems explanations cannot be reduced to representational computations. Clark does a good job mediating between the old way and this radical new way. He points out that connectionism modified traditional computationalism (the progress we made through chapter 4) in a good way, and that dynamic systems may continue this progress. He summarizes this position in the paragraph about "dynamic computationalism" on p. 135. He returns to what was the big question in chapter 6: how do we get dynamical systems to solve more abstract or higher-level problems? The Radical Embodied Cognition people are committed to the position that "you do indeed get full-blown, human cognition by gradually adding "bells and whistles" to basic (embodied, embedded) strategies of relating to the present at hand." This can work, Clark says, but only if you understand it in a way that allows for the "vision for action versus vision for perception" distinction Clark explains. But this renders the REC (or "cognitive incrementalism", as Clark says) "insufficiently precise and empirically insecure." Like the robotics in chapter 6, cognitive incrementalism has a lot of explaining to do. This is not a reason to reject it off the bat, but I like Clark's strong caution towards embracing it.

Mike Prentice said

at 12:16 pm on Oct 26, 2009

I forgot to post this earlier so I figured, why waste the thought?
I find that one of the biggest philosophical questions is not only how we obtain knowledge, but what exactly this knowledge is. How do we understand what makes up knowledge, and how do we obtain it? Many have tried to answer this by simply saying that we are born with innate knowledge that we build upon in our lives. I, personally, am more of a fan of the idea of adventitious knowledge, in which case we learn from the society we have been born into. With that said, where I'm going with this is that, through the neurons within our brains, maybe it's a mixture of both. Maybe instead of innate knowledge it's more a matter of innate "laws" by which our brains learn to function, as a kid learning which neurons to strengthen and which neurons to let go. If this is the case, then the answer to the question I proposed last week, whether a baby given Chew-Z could have the intellectual capacity of a 50-year-old, would be a simple no. This is the case if neurons continue to strengthen and weaken as our brains learn new things, making our brains subject to physical maturation in order to gain knowledge.
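The "innate laws, learned strengths" idea is roughly what Hebbian learning rules try to capture; here is a toy sketch (my gloss, not the book's). The update rule is fixed from the start, but which connections end up strong depends entirely on the experience the system is fed.

# Hebbian-style learning: the rule ("units that fire together wire together")
# is innate and fixed; the resulting pattern of strong and weak connections
# is entirely a product of experience. A toy sketch, not a claim from Clark.
import numpy as np

rng = np.random.default_rng(0)
weights = np.zeros((4, 4))                    # start with no learned structure
rate = 0.1                                    # the innate "law"

for _ in range(200):
    # units 0 and 1 are often active together; units 2 and 3 rarely are
    activity = (rng.random(4) < [0.9, 0.9, 0.1, 0.1]).astype(float)
    weights += rate * np.outer(activity, activity)   # co-active units strengthen

np.fill_diagonal(weights, 0)
print(weights.round(1))   # the 0-1 connection ends up far stronger than 2-3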

vincent merrone said

at 1:42 pm on Oct 26, 2009

From the chapters that we had to read I got a feel for a desire within cognitive science, robotics, etc. to move away from a framework modeled on how the mind, cognition, and processing work, toward one centered on how the physical interacts with the environment. We can see this in our talk of modeling robots and such on evolutionary processes: before sentience it was not cognition interacting with the environment, but the physical. Take the cricket example in the book: the cricket is not doing anything special (I don't think it really can; it has no higher-processing brain). It is simply materially interacting with the material world. The male cricket makes a sound, the female cricket's auditory receptors pick up on it, and the neurons adjust to find the male. Physical interacting with physical.

vincent merrone said

at 1:42 pm on Oct 26, 2009

This does seem like a very good way to try to build a robot, one that is more about the material world and its interactions. Like Herbert, the robot that picked up cans by scanning the environment and interacting with objects. For if we can model HOW things work and interact with the environment, then we may be able to replicate it. But that seems like a shallow statement; we don't want to replicate a cricket looking to mate, we want to examine how a creature does an activity and try to impose that onto a practical, active robot that can do things accurately and to one's desire. But I could not seem to put my finger on the point in the text where the mind of the human fits in with all of this. OK, we can see the material world of the cricket interacting, we can examine how the body walks using physics, etc., but where is the human mind? From the book and class we can chop away certain activities that we believed were actually being done by the mind (say, where the brain induces phantom pain, not the mind, or how the leg's locomotion may be a product of physics), but what about comprehension? Or thought? Or language? Or the application of the three? It still seems to me that a pure mechanism in motion, like that of the cricket or the legs, won't really allow one to model the mind--perhaps the brain, though I'm being liberal even in stating that.

vincent merrone said

at 1:42 pm on Oct 26, 2009

The section on using the bonobo studies to measure higher-order judgment seemed awkward to me. Most of these kinds of studies do, because one is expanding upon the animal's capacity to do an activity that is outside of its natural environment. Yes, this may be irrelevant, but one should not be misled into thinking that because the bonobo can match shapes, etc., that is what the brain of the bonobo was meant to do. It is more like co-opting, where one can take a knife and use it to screw in a doorknob: one thing serves many different functions. This point about co-opting, I feel, is of much importance to robot building, because we see it all the time in nature. A good example is the human hand: was the human hand naturally selected, after the split from the other higher primates, for grasping, tool making, or sexually stimulating a partner? The answer is that it could be all three--or none of them. These could just be (well, except for grasping, I guess) co-opted features of the hand. By studying this it seems one could produce robots that are multifunctional within their limited resources for interacting with the material world. Example: the cricket. The legs are not JUST devices that permit the ability to hear potential mates; the legs also allow the cricket to proceed to the mate. Yes, this does seem like a complete tautology, but the point is that by examining co-opting one can get a different array of ways an artificial life can exploit its environment more precisely and efficiently.

Matt Stobber said

at 5:51 pm on Oct 26, 2009

I was pondering today's lecture about the cricket and found it fascinating that nature could solve such a complicated problem with such a simple solution, using only two neurons. As a programmer, it is very easy to try to think of solutions to problems from a high-level perspective, and this was a humbling example of how one can learn to solve problems in a simpler way by examining how a naturalistic framework would solve them. That is, how the universe, having no intelligence, comes to an evolutionary solution to a very complicated problem. I wonder if this is how we should look at cognition. Are we trying to solve the problem of consciousness, and other complex cognitive problems, the same way we tried to solve the "cricket" problem, by adding overcomplicated explanations and systems?
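Here is a cartoon of the kind of minimal solution being described, purely my own sketch and not the actual robot-cricket controller: two "ear" signals, two threshold units, and a turn toward whichever side is driven harder. No map of the world, no model of the mate.

# A cartoon of the two-neuron cricket idea: each "ear" drives one unit, and
# the cricket simply turns toward the side whose unit is more strongly driven.
# (A toy sketch only; the real controller is more subtle.)

def phonotaxis_step(left_ear: float, right_ear: float, threshold: float = 0.5) -> str:
    left_neuron = left_ear if left_ear > threshold else 0.0
    right_neuron = right_ear if right_ear > threshold else 0.0
    if left_neuron == right_neuron:
        return "go straight"
    return "turn left" if left_neuron > right_neuron else "turn right"

# Sound source off to the left: the left ear gets the louder signal.
print(phonotaxis_step(0.9, 0.6))   # turn left
print(phonotaxis_step(0.7, 0.7))   # go straight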

Josh Egner said

at 8:12 pm on Oct 26, 2009

I found chapter 5 to be very interesting. The interactions between cognition, perception, and motion are intricate and unintuitive. This makes me think that a haphazard assembly of different systems that perform specific functions may yield more interesting information about potential robotic capabilities than constructing a robot to perform a specific function. It also causes me to consider the possibility of combining genetic algorithms with crude robotic systems to essentially evolve a robot, like how that one robot learned to walk. We could start by training the robot to move using a GA and then, from there, see what the robot could be trained to do using a set of basic functional systems. Such an experiment would be very exciting to me because it could yield results that were not even imagined when the robot was constructed. It is also curious in that the robot learns from interacting with the physical world, and the experimenter learns from the robot's achievements, which are outside of the experimenter's intentions and in this sense come from the real world and not from the experimental laboratory setting. The only challenge would be creating a general enough reward system for the GA to engender unanticipated but advantageous capabilities in the robot. Now that I think of it, a GA with self-programming reward systems, based on learned behaviors and physical environmental rewards, could be a suitable model for the soul and the evolution of our neural networks (I think I want to write my paper on this).

darkair@... said

at 5:28 pm on Oct 27, 2009

Between the lectures and readings I have been really intrigued by biomimicry. It seems like such a great way to engineer or look for solutions: most of the work has been done and rigorously tested for thousands of years. This way of thinking can potentially extend to many fields. I'm a biology major, and recently at my work we were discussing biofuels and how best to break down plant material to better access the abundant cellulose. After reading about the current processes, which rely heavily on chemicals and heat, a coworker asked: how does nature do it? Such an obvious question to ask. We had been trying to reinvent the wheel or reverse engineer the solution. When we looked at how nature accomplishes the analogous task, we found an amazing array of enzymes and microorganisms that synergize to accomplish an integral task in nature.

Josh Egner said

at 9:43 pm on Oct 27, 2009

What I got from chapter 6 was that although it is impressive how much can be achieved with simple sensors and interaction with the environment, we need to appreciate representations and models for how they allow you, in a sense, to interact with what is not present. Planning and anticipation are integral cognitive abilities, and have been shown to influence our perceptions of the world. Our memory is a perfect example of how we use abstract representation to, in a sense, experience a situation without the real world creating it at that moment. It seems that to make artificial life we would need it to be able to create internal representations. How these representations would be created and understood would be a challenge. For example, think of the mix of sensory, emotional, and conceptual/historical information that combines to form a memory of an event. I think that a connectionist approach would best serve such an odd combination of information types, because the weighted system offers a way to generalize the influence of these different information types.

Megan Holcombe said

at 8:45 am on Oct 28, 2009

This is really the first chapter in Clark that has caught my interest, or perhaps made the most sense to me. Reading in Chapter 5 about genetic algorithms was a new idea to me. These bit strings encode solutions and then are chosen by their performance to either be “bred” or let die. This idea allows nature to decide which bit strings will evolve with the highest functioning, while the others are weeded out and become extinct. Allowing a machine or robot to evolve itself based on its surroundings follows a human evolutionary pattern. It now seems obvious how a program could learn to adapt to nature in a way more efficient than we have. It is not necessary to rely on intelligent design, but only to rely on many generations interacting with their environment within the limits of their “bodily” functions.
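A bare-bones version of the bit-string scheme described above, my own toy example rather than anything from Clark: strings are scored, the better half is "bred" (crossover plus a little mutation), and the rest are let die. The toy fitness function just counts 1s; a real application would score real performance.

# A bare-bones genetic algorithm on bit strings: score each string, let the
# better half "breed," let the worse half die.
import random

random.seed(1)
LENGTH, POP, GENERATIONS = 20, 30, 40

def fitness(bits):
    return sum(bits)                                  # toy score: count the 1s

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]                 # the rest are "let die"
    children = []
    while len(survivors) + len(children) < POP:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, LENGTH)
        child = a[:cut] + b[cut:]                     # crossover
        if random.random() < 0.2:
            i = random.randrange(LENGTH)
            child[i] = 1 - child[i]                   # occasional mutation
        children.append(child)
    population = survivors + children

print(max(fitness(s) for s in population))            # approaches the maximum of 20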

Megan Holcombe said

at 8:57 am on Oct 28, 2009

One aspect of artificial life shown in chapter 6 is the work done on flocking. A computer program was written that simulated a group of boids, modeled after birds, that were required to follow only three rules. The rules were based on interaction with the rest of the flock. Amazingly, not only did the boids resemble the sort of behavior we see in a group of flocking birds, but they even parted and re-grouped when faced with an obstacle in their path. This work showed remarkable patterning behavior. Similarly, the robots modeled after termites further show the ability of A.I. to problem-solve not by being designed individually with extreme intelligence, but in a group activity with external factors guiding the response. While these feats are astonishing, it is noted that much human behavior is done without a physical or environmental limitation to guide the action. These A.I. are lacking that cognitive capacity, the capacity to detect and respond to a non-nomic property. These examples are categorized as emergence as collective self-organization. The collective system performs an activity without a "self." This activity appears to be performed by a "self" but is really better explained by the interaction of the system's physical properties with the external environment placed upon it. The "self" works because it is a group that acts in a pattern and continually falls into that pattern as more members of the group influence their neighbors to follow the group direction, until the group becomes a mass collective self.
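For reference, here is a stripped-down sketch of the three boid rules (cohesion, alignment, separation). The particular numbers are arbitrary choices of mine and real simulations are richer, but notice that "flocking" is named nowhere in the code.

# The three boid rules, stripped down: drift toward the local center (cohesion),
# match your neighbors' velocity (alignment), and avoid crowding (separation).
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 50, (30, 2))       # 30 boids scattered in a 2-D world
vel = rng.uniform(-1, 1, (30, 2))

def update(pos, vel):
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbors = (dists > 0) & (dists < 10)
        if not neighbors.any():
            continue
        cohesion = offsets[neighbors].mean(axis=0) * 0.01           # toward local center
        alignment = (vel[neighbors].mean(axis=0) - vel[i]) * 0.05   # match neighbors' heading
        too_close = (dists > 0) & (dists < 3)
        separation = -offsets[too_close].sum(axis=0) * 0.05 if too_close.any() else 0.0
        new_vel[i] = new_vel[i] + cohesion + alignment + separation
    return pos + new_vel, new_vel

before = pos.std(axis=0)
for _ in range(100):
    pos, vel = update(pos, vel)
print(before, pos.std(axis=0))   # a crude before/after measure of how bunched the group is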

Erin said

at 4:14 pm on Oct 28, 2009

I was interested in chapter 6 when the flocking behavior of animals was said to be almost perfectly depicted by boids following a set of basic rules, and the question was raised as to whether or not the boids were truly flocking. Undoubtedly animals that flock are also following a simple set of rules which have been programmed into their minds through the process of evolution, but it seems as though these rules (such as stay near the mass of others, match speed with others, never get too close to or too far from your neighbors) are merely tools used to accomplish the overall goal: to flock for survival. In this way the definition of flocking holds a double meaning: 1) to follow a set of rules (the functional basis of flocking), and 2) to move in a way which meets certain criteria of survival (such as confusing predators). It seems that the boids are missing the second part of this definition of flocking. While they are showing flocking behavior by exhibiting the rules, they are not truly flocking because their movement is not aiding in survival, which is the true essence of natural flocking.

Jerek Justus said

at 6:06 pm on Oct 29, 2009

I think Marr’s computational approach to understanding cognition in chapter 5 is a better representation of the way humans use logic to go about solving problems than it is an effective way to describe how nature has developed systems to solve similar problems. It seems natural that when a human goes about tackling a problem, that person first identifies the task, develops a corresponding algorithm, and lastly implements that algorithm. Just because this is the process we use as cognitive beings to understand nature doesn’t necessitate that it is the same process by which nature solves problems. It seems that we’re imposing the limitations of our knowledge on natural systems. That or we’re assuming that all such systems have this capacity to reason. If we reject this notion, the lines between task, algorithm, and implementation are substantially blurred. Take, for example, your eyes. Would not the computational analysis of information also be the implementation of that system’s function? In this sense, it is not only difficult but impossible to distinguish between what constitutes task, algorithm, and implementation in this model.
In stepping away from Marr’s approach, scientists have begun mimicking biological means of engineering. In order to truly understand this process of incremental tweaking however, we must first expand our means of comprehension. If Marr’s task/algorithm/implementation structure accurately depicts the way we rationalize, then maybe it is our very system of thought that needs to change in order to fully comprehend the process by which cognition functions.

Benjamin B. Walker said

at 9:22 am on Oct 30, 2009

In response to Erin's comment above, concerning flocking qua flocking, I think the statement "they are not truly flocking because their movement is not aiding in survival, which is the true essence of natural flocking" assumes too much on the part of the natural world. We as humans experience a richly sensible world. Geese, ducks, and other flocking animals probably do not have all of what we have. In fact, I assume it would be safe to say that they have a very different idea of what the world is like. From this, I think we can draw the conclusion that the animals, much like the boids, are simply following rules. There is no consideration of whether or not survival will happen. Natural selection rewarded birds with strong tendencies towards flocking with longer lives, and survival just happened. So here is my distinguishing question: if we introduced a predatory boid into the simulation, would it then be flocking?

Benjamin B. Walker said

at 10:09 am on Oct 30, 2009

Vincent had a groundbreaking insight a little while ago that impressed me deeply; he made a comment about the “of courseness” of mechanistic responses to the physical world, and how asinine it was when building robots to attempt to start with cognition. Nature, after all, started very basically with mechanical mobility ruled by nothing more than a few neuron-like cells firing when stimulated. It seems to me as though the cog scientists in charge of building robots in order to further understand cognition should start with this notion of mechanistic interaction with the environment, and once this has been mastered a little further, then we can start introducing more cognition-based interactions.

Mike Prentice said

at 10:58 am on Oct 30, 2009

So, what I am going to start rambling about is the craziness involved in figuring out how we figure out. We started out trying to compare our brains to the (hope I spell this right) Turing machine, where direct causal effects took place to give us the end result of 2+2. As our semester has progressed we have started to see many other options for how our brain might actually work.
Neural networks that adjust the strength (percentage) any given connection has through positive reinforcement are a very good option for conceptualizing how our brain works. This is so considering that we, or at least I, can visually see something like this being able to happen, and the ability of our brains to do something like this is not that far-fetched. Also, when we start to compare this to the creation of things, and we take the idea of creation as an overflow from one sensory input to another, this neural network could support that idea through the cross-connections of axons leading to accidental transmissions of electrical pulses, “thoughts”.
I want to move more attention to what was discussed in chapter six, the idea of emergence. How termites build their homes was explained by the book as deposits of a chemical every time a termite puts down a dirt clod. The other termites “smell” this and then deposit their dirt clods wherever the chemical scent is the strongest.
Anyway, the reason I'm writing about this is that it almost seems like the original thought process that we talked about. So it seems like animals go through simple causal factors that help them determine what they are going to do, while creativity comes from a short in the neurons. So since humans are arguably more creative than a termite, it would follow that humans have more electrical shorts in their brains. Maybe humans are a deformity.

hdmartin@... said

at 7:30 pm on Nov 1, 2009

In chapter six, Clark discusses robots and artificial life. He starts off by talking about how crickets can tell the different species apart by the frequency of the song they produce. Crickets can also tell which direction another cricket is in from the song. When robotic crickets were made, the robots could pick up the songs from other robots; however, they could not tell the difference between the different songs (i.e., the different species), nor could they tell which direction the song was coming from. This shows that we can study nature and see the effects and the ways in which simple tasks are done; at the same time, it shows that we can take the information we have collected and still not come up with the robot we intended to create. There is a good probability that crickets are not self-aware, so there is a good probability that our failures in creating such a robot do not stem from the robots not being self-aware, but from the idea that we may need to take a larger look at the problem. That is, we may need to look into the "brain, body, and world", or look at a larger chunk of the problem. For example, there may be exterior elements working on the interior, causing the interior to work a certain way.

hdmartin@... said

at 10:05 pm on Nov 1, 2009

In chapter seven, Clark talks about dynamics. He starts by naming three cases. These cases show that humans use both the mind and the biological, as one, to work through the tasks provided. In a way, the biological introduces the inputs, or the "problem", into the mind; there the mind can evaluate the input and therefore produce an output, or solution. However, it is unclear how much of the mind is actually conscious of what it is doing. For example, (for the most part) people do not have to consciously think about walking in order to walk; however, one may have to be conscious of the path being walked (especially if it is unfamiliar). At the same time, Clark is not implying that with difficult tasks the mind is fully conscious, or even partially conscious. He ends by stating that real-time response and sensorimotor coordination are key players for the mind; however, their importance is not yet known.

Matt Stobber said

at 5:30 pm on Nov 3, 2009

I have been pondering something for a while. I have noticed that nature solves problems as simply as possible, while we as humans always try to solve problems from a very high level. The question I have been pondering is: is this bad? It is true that we can learn a lot from nature by studying how it solves problems, but do we really need to start at such a low level? Or can we "skip" billions of years of evolution and start building artificial life at a much higher level? I would think it would still be possible to replicate the "algorithms" that implement cognition if we could figure out exactly what those algorithms are and how they are implemented.

Matt Stobber said

at 5:34 pm on Nov 3, 2009

The discussion on Monday about emergence was extremely interesting to me, because as I listened I began to think about life, and the possibility that life could be just an emergent process. This would of course be a naturalistic explanation, and it makes sense in that, of all the universes that exist (assuming the multiverse is real), this is one, or perhaps the only one, that allows life to arise as emergent behavior, something that just occurs through random chemical processes. This doesn't explain why the emergent process can't be reproduced, though; maybe the conditions required for it are just extremely delicate, and we have yet to find the exact quantities and conditions that allow this emergent behavior to arise.

Erin said

at 12:36 pm on Nov 4, 2009

There is a discussion in chapter 7 of how a cat that loses a leg will quickly learn to walk gracefully on the remaining three. Pollock claims that this adaptive ability does not come from an operating system of great intelligence, but rather originates from many different functioning systems working elegantly together inside the biological animal. These systems may include the physical properties of the legs and brain, the history of learning experiences the animal has been through, and even the particular nature of the animal, such as its energy, curiosity, and powerful will to survive. This point brought me back to my earlier discussion of the Chinese room; I wondered about the author of the book in the room, and made the point that this information must have come from some ultimately intelligent source. This shows how different a cat re-learning to use its legs is from the room: the cat has no system of superior knowledge orchestrating it; rather, its adaptive ability originates from sources deep within its own working system.

Dcwalter said

at 9:53 pm on Nov 8, 2009

The most interesting chapter in this book has got to be chapter 8. It seems obvious that "mind" is the title we have given to the cognitive process that spans everything from the workings of our brains to the cognitive tools at our disposal. Further, it does seem likely that human minds have grown and improved on the basis of the cognitive tools we have developed. The idea is that language (arguably the first cognitive tool in our tool belt) is fundamentally tied up with what we call the mind, because in all reality there was no such thing as mind before the capacity to think about thinking came around. When thinking about mind in this way, it seems that the push for some sort of artificial mind is completely within the realm of possibility, as long as it is approached in the right way. If we can really get machines to think and solve problems, even in a rudimentary sort of way, then the next step, getting them to think about how to improve their thinking, would be an explosive leap forward.

vincent merrone said

at 10:26 am on Nov 9, 2009

It becomes interesting to think about mind at various functional levels, e.g., a bacterium with a mind vs. a human with a mind. It is curious because when one thinks of mind, one automatically thinks of a human cognizing. But I do not understand how the conclusion is drawn that bacteria, birds, viruses, etc. have minds. Yes, one could argue that it is a very rudimentary kind of mind, but it seems more like a material functioning of the organism than mind. The bacterium comes under heat and stops reproducing, releases certain enzymes, etc. Is that mind, or is that survival? Or should we put survival and mind into the same box? If so, does the human mind not fail to fit that category at times? There are many things an individual cognizes about that have zero survival value: "Oh yes! The Yankees won the World Series!" That has little survival value and without a doubt differs from the enzyme releases of a bacterium. Also, what about qualia? Are qualia a part of the mind? Do bacteria, birds, and bees experience qualia? Or is what non-human animals experience in terms of qualia just a less advanced version of human qualia? It seems that there must be a distinction between what counts as mind and what does not. Of course, a rock falling down a hill does not have mind, unless one still clings to Aristotelian physics. But the point seems clear: how do we categorize, account for, name, and distinguish the various levels of mind (if we are even privileged to say that)? Turn back the clock to a time before animal life. Could we say that the most advanced unicellular organism had mind? Or are we using the human mind of today as an artificial way of attributing mind to other organisms, in order to click into a paradigm that may be pragmatic for the study of AI?

vincent merrone said

at 12:38 pm on Nov 17, 2009

Notions of identity become tricky. I feel it really comes down to what parameters one uses to speak of identity. Is one an agent produced by culture and society? Is identity something physical or mental? I personally feel that identity, in a Foucault kind of way, is very much a social and cultural construction: hence discourses of power. But if this is true, what is the domain in which one gets to pick and choose one's identity? Well, society forces X, environmental interactions force Y... but is there a Z force, one's own ruminations and mental workings that solve problems, think certain issues over, etc., that creates one's own identity? I personally feel this is minimal, for there is always an interaction that can be viewed as a stimulus that ENABLES one to think and act in certain ways; that is, certain environmental factors allow for identity growth in ways A, B, or C. Like when someone says, "I'm rebelling against the machine, I'm doing what I WANT." Well, is that identity not formulated because of the machine, as an opposition to it (there is no black without white, no up without down, no good without a designated bad)? But what are identity's parameters? Social, mental, etc.?

darkair@... said

at 10:25 am on Nov 20, 2009

A few days ago, I believe it was Ben who said that we can't isolate the part that makes us us, because an essential part of the process of being you requires an exported process. After Tuesday's lecture it seems like this comment has more to it. In trying to answer what my identity is, we place parameters, when it seems obvious that we can't be secluded away on our own island of consciousness. Instead, being me requires a knowledge or understanding of my pattern of interactions and my behavior regarding them. To be "me" is a unique experience unlike all others because of the awareness of temporal and functional continuity. This view seems to account for all experiencing entities of life. I also like what Vincent had to say about using the human mind as the meter stick by which we judge other minds. Definitely anthropocentric.

Matt Stobber said

at 3:15 pm on Nov 23, 2009

I find the concept of solving problems that seem extremely complicated by breaking them down into smaller parts fascinating. Take, for instance, the problem of flocking patterns. Instead of trying to solve the problem with complex solutions, we can solve the complex problem by solving very simple problems, and we then get an emergent outcome/solution: flocking patterns. The example in the book I liked was the boids, which followed only three simple rules: stay near a mass of other boids, match velocity with your neighbors, and avoid getting too close. I find this interesting because it is a beautiful example of emergence, and it really does change the way one looks at how nature "implements" things. I would really like to examine emergence more from the perspective of morality. Maybe we can break morality down into simpler solutions instead of a complex theistic solution.
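As a rough illustration of how little machinery those three rules need, here is a minimal boids sketch in Python; the weights, distances, and flock size are invented for illustration and not taken from the book.

```python
import random

# Minimal boids sketch of the three rules mentioned above:
# cohesion, velocity matching, and separation.
class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids, cohesion=0.01, matching=0.05, separation=0.5):
    for b in boids:
        others = [o for o in boids if o is not b]
        n = len(others)
        # Rule 1: steer toward the center of mass of the flock.
        cx, cy = sum(o.x for o in others) / n, sum(o.y for o in others) / n
        b.vx += (cx - b.x) * cohesion
        b.vy += (cy - b.y) * cohesion
        # Rule 2: match the average velocity of the neighbors.
        avx, avy = sum(o.vx for o in others) / n, sum(o.vy for o in others) / n
        b.vx += (avx - b.vx) * matching
        b.vy += (avy - b.vy) * matching
        # Rule 3: avoid getting too close to any other boid.
        for o in others:
            if abs(o.x - b.x) + abs(o.y - b.y) < 5:
                b.vx -= (o.x - b.x) * separation
                b.vy -= (o.y - b.y) * separation
    for b in boids:
        b.x += b.vx
        b.y += b.vy

# No boid knows anything about "flocking"; the pattern emerges from the rules.
flock = [Boid() for _ in range(30)]
for _ in range(100):
    step(flock)
print("flock center:", sum(b.x for b in flock) / 30, sum(b.y for b in flock) / 30)
```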

Alex Edmondson said

at 12:00 am on Nov 28, 2009

I’m quite behind on posts, so here’s chapter 3’s. Dennett’s belief that speech is especially important for human cognition was quite interesting to me. Maybe it’s just because I’m a poetry major as well, but I strongly agree with this claim. He states that “thinking-our kind of thinking-had to wait for talking to emerge,” and I find this quite intriguing (60). We seem to grow and learn from each other, and without speech I don’t know that we could have gotten where we are today. We must also remember, though, that speech isn’t reality; it’s just the labels we put onto the world, so I don’t know that we couldn’t survive without it. But perhaps we need to cling to language in order to survive as a pack; without it, loneliness would be much lonelier. I think that if talking weren’t around, we would have found something to replace it, because we thrive on communication.

For chapter 4, I found box 4.2 interesting. It discusses the idea of “gradient descent learning,” where you stand blindfolded on the slope of a giant pudding basin and must feel your way to the bottom rather than walking straight to it. You go through a series of steps, literally in this case, to test whether you’re moving up or down the basin. If you move up the basin, you must go back and try stepping in the opposite direction. If you go down, you stay where you are. You continue the process until you reach the bottom of the basin. The error stays low in this case because the slope runs constantly downward, so a solution to the problem is easily attained.
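Here is a tiny sketch of that pudding-basin procedure. It is closer to blindfolded trial and error than to textbook gradient descent, and the bowl shape, starting point, and step size are invented for illustration.

```python
# Blindfolded-in-a-basin sketch: take a trial step, keep it if the ground
# got lower, otherwise step back; if neither direction helps, take smaller
# steps. The bowl, start, and step size below are made up.
def bowl(x):
    return (x - 3.0) ** 2          # lowest point of the basin is at x = 3

x, step = 10.0, 1.0
for _ in range(100):
    for trial in (x + step, x - step):     # try a step in each direction
        if bowl(trial) < bowl(x):          # did we move downhill?
            x = trial                      # yes: stay at the new spot
            break
    else:
        step *= 0.5                        # both directions go up: shrink the step

print(f"settled near the bottom at x = {x:.3f}")   # roughly 3.0
```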

Alex Edmondson said

at 12:14 am on Nov 28, 2009

I liked what box 5.2 had to say about “mirror neurons.” It describes a set of neurons that are action-oriented, context-dependent, and implicated in both self-initiated activity and passive perception. The neurons are active both when a monkey observes an action and when the monkey performs the same action itself. The conclusion is that the action is stored as a kind of coding and not merely through perception, which is quite interesting, because according to this, maybe we have knowledge stored away as children that we simply access later in life.

Alex Edmondson said

at 12:35 am on Nov 28, 2009

I found the discussion of the crickets very intriguing. When the book describes the process it takes for the cricket to hear, it’s amazing. It’s almost like the time-lapse argument, and it makes me wonder how crickets perceive their surroundings through sound, since their sense of sound is delayed and arrives in pieces. It’s remarkable what the cricket goes through to find a mate. It seems much more intimate than basic survival, but it isn’t; it’s about making the best offspring. Perhaps humans should follow the crickets.

Alex Edmondson said

at 12:48 am on Nov 28, 2009

Box 7.2 illustrates the idea of vision designed for action instead of for perception. It discusses an illusion in which two central discs are equal in size, but our eyes always misjudge the size. It’s strange, and somewhat frightening, that our eyes are not that reliable, since sight seems to be the sense we rely on most. The basic problem becomes more complicated when the circles are brought into 3-D, into physical form as poker chips. When the subjects were asked to pick up the poker chips that were the same size, their grip, the finger-thumb aperture, was perfectly suited to the physical chips. Vision was being used for experience and action, not for perception, because it was perception of the 2-D models that caused the eyes to misjudge the size. The box says that the processing underlying visual awareness may work separately from the visual control of action, which is very strange and, in a way, concerning.

Alex Edmondson said

at 12:55 am on Nov 28, 2009

Box 8.1, about “The Talented Tuna,” was very interesting and unusual. It’s so strange that the creature shouldn’t physically be able to do what it actually does. The fish is not physically strong enough to swim as fast as it does, but manages to do so by manipulating its environment. It’s amazing that the fish can do this. So the fish, as a machine, is not only the fish but also the world around the fish. The fish and the world seem to be one in the process of swimming. Yet this process, this machine that the fish creates, is then exploited by the fish itself. The fish is able to create a system and then go about exploiting its own created system, which is quite unusual. I suppose we are like the tuna in our manipulation of technology, but it still seems more miraculous that a tuna can do this for itself than that we can do what we do.

darkair@... said

at 8:16 pm on Dec 1, 2009

Piggybacking off what Alex wrote about the tuna, the paradox of active stupidity makes the process of interacting with the environment an essential part of the equation. The basic principle is that neither the chicken nor the egg came first; they coevolved, shaping each other. In this section Clark talks about tool use and the opposable thumb as seeds for our current cognitive ability. The further actions these seeds allowed produced further innovation, so that a sort of positive feedback system began. I think Isaac Newton's quote is perfect for this line of thinking: "If I have seen farther than others, it is because I was standing upon the shoulders of giants." What more is life than a reward system built to better itself over time?

Matt Stobber said

at 10:02 pm on Dec 1, 2009

After the last class, where we did lots of logic puzzles, I started thinking about how flawed we are when it comes to logic. Some people are more logical than others, of course, but nevertheless, why is it that the majority of people choose an answer that makes sense but is actually wrong? And why do most people choose the same "wrong" answer? I find it interesting that we have to apply such focus and thought to arrive at the correct logical answer. Why doesn't logic come more easily? Is it because there was no evolutionary need for logic to be an easy, almost automatic process? And now that we are at a point where society is becoming more intellectually advanced and using these seldom-used faculties more, will we someday evolve to a point where we can think about problems more deeply and easily, without needing as much mental effort?

Andrew Broestl said

at 2:31 pm on Dec 2, 2009

In chapter one we get an introduction to the mind as a meat machine. It is a machine in the sense that it completes simple computations to answer complex questions. The Turing machine is a good example of this. As seen in box 1.3, the machine is given an input and, through the process of moving along its tape, produces an answer to the question. Our mind seems to work like this as well. 1 + 1 is a simple task for our mind and for the Turing machine to perform. The mind analyzes one as one thing, say one apple, plus another single apple, which constitutes two apples. Take, for example, our two index fingers: they are similar, and as such we see that they are two similar things; therefore we have two index fingers.
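For illustration, here is a small Turing-machine-style program for unary addition. It is not the exact machine in box 1.3, and the state names and rule table are made up, but it shows how an answer falls out of nothing more than moving along a tape and applying simple local rules.

```python
# A small Turing machine (not the machine in box 1.3) that adds two unary
# numbers: the tape "11+111" ends up reading "11111". Each rule maps
# (state, symbol) -> (symbol to write, head move, next state).
RULES = {
    ('find_plus', '1'): ('1', +1, 'find_plus'),
    ('find_plus', '+'): ('1', +1, 'find_end'),   # join the two blocks of 1s
    ('find_end',  '1'): ('1', +1, 'find_end'),
    ('find_end',  '_'): ('_', -1, 'erase_one'),
    ('erase_one', '1'): ('_', +1, 'halt'),       # drop the extra '1'
}

def run(tape_string):
    tape = list(tape_string) + ['_']      # a blank cell marks the end of the tape
    head, state = 0, 'find_plus'
    while state != 'halt':
        symbol, move, state = RULES[(state, tape[head])]
        tape[head] = symbol
        head += move
    return ''.join(tape).strip('_')

print(run('1+1'))     # '11'    (1 + 1 = 2)
print(run('11+111'))  # '11111' (2 + 3 = 5)
```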

Andrew Broestl said

at 2:44 pm on Dec 2, 2009

In Chapter 2 we are introduced to the Physical Symbol System, or PSS. The Chinese Room asks whether an English speaker, given questions in Chinese along with English instructions for how to manipulate the Chinese symbols into intelligible responses, thereby understands Chinese. More goes into understanding Chinese than just being able to answer questions with the English instructions, though. From outside the room we may appear to understand Chinese, but the fact is that we lack the ability to understand Chinese independently of the room. Is the room, however, just a picture of our mind translating English into Chinese? I would say no, because a Chinese speaker who is asked a question does not have to translate the idea into another language to answer it. He thinks about the question without having to translate it. He thinks in Chinese, not in another language, when answering back.

Erin said

at 11:26 pm on Dec 6, 2009

In Chapter 8 of Mindware, much credit is given to the brain's capacity to utilize its environment in order to function successfully. It is argued not only that outside stimuli and sources aid the mind in learning, but that these environmental factors are actually an essential part of the process of consciousness. The tuna is given as a very clear example: the fish would not be able to swim and maneuver (and therefore survive) as it does without using the physical forces of the water to its own advantage. This presents an interesting obstacle for viewing the mind as a computational system complete in itself, for at what point does this system incorporate environmental properties into itself? It seems possible enough to imagine an individual gradually learning to do this, but what about the tuna? It seems as though the fish was born with the instinctual capacity built in; that is, the properties of water were already a part of its brain's neural system. This then leads to the question: how much of the tuna's identity is connected to water? More specifically, if we lose our ability to use the environment to our benefit, do we remain a defective version of ourselves, or do we lose a defining part of our neural network?

Erin said

at 11:46 pm on Dec 6, 2009

One of the final thoughts explored in Mindware is that of the artist, and why he must sketch and plan and re-create even the most abstract piece of artwork. The fact that the artist can’t merely formulate an idea and create it in a single go is strong evidence of the mind’s need to examine and filter through its own contents. I feel this same process taking place when I prepare to write a paper: I will write a line and process it, then decide whether it has adequately represented the idea I’m trying to get across. I often find that I actually discover my own ideas only after I’ve written them down and can then clearly see my own thought processes on paper. Both of these examples present a fascinating scenario of the mind analyzing itself; how is this possible? It seems to be a strong indication of dualism, though not the typical mind-body dualism, but rather a mind-self dualism (where ‘mind’ is not the physical brain, but the calculative processes that go on as a result of the brain’s functions). The self must be separate from these mental calculations if it must rely on tools (sketching, writing) to really understand them.

Erin said

at 10:56 pm on Dec 10, 2009

I enjoyed the presentation given this week about the potential morality issues with advanced AI. It raised the question: at what point does cognition reach a level at which its agent becomes a moral one? This made me think of an interesting article I'd read (shown to me by a friend who is convinced our world will end in an AI apocalypse) about robots that were programmed with a reward-based system for food foraging and producing offspring. The robots not only evolved to learn how to alert each other to danger and food, but some robots in a colony learned to lie about finding food, leading competitor robots in a false direction. The address of the article is at the bottom in case anybody wants to read it. It made me think: at what point does this level of AI interaction grant morality? If the robots are literally developing strategies to ensure the survival of each other and themselves, it seems as though this comes attached to some sort of purpose of continuation. This purpose was not something we wrote into the machine; rather, the machine seems to have realized it on its own.

http://discovermagazine.com/2008/jan/robots-evolve-and-learn-how-to-lie

Mike Prentice said

at 2:37 pm on Dec 14, 2009

So I sit here, writing my paper, wondering: how did I end up here? What was the chain of events that led me to be the person I am today? Naturally, I want to take a deterministic approach and say that I’m here strictly due to luck, or the lack thereof, in the events that have happened within my life. I don’t know if that is the right answer, though. Can’t there be something more than this luck that rules my life? Can’t there be self-determination that outweighs any of the factors that play into my life? Could this not be what I was always destined to be? While the answers to these questions seem distant, I rest assured in the realization that I’m happy being who I am. Me, Mike Prentice, at 2:34 on Monday, writing something ridiculous to get my mind off of . . .

Matt Stobber said

at 3:45 pm on Dec 18, 2009

I found chapter 6 to be very interesting as a programmer. The concept of artificial intelligence and artificial life has always been very interesting to me. I especially liked the discussion of abstract thought and how daunting a task it is to implement artificially. It got me wondering whether this will ever be possible, or whether it is one of those problems so complicated that it cannot be solved through brute-force implementation, the only way a computer solves things. I think it is very possible that highly intelligent artificial intelligence will not be achievable until we understand our own cognition better, along with the "algorithms" our brains have implemented over billions of years of evolution. Just as we have created new AI techniques like neural networks and genetic algorithms by examining human cognition, I don't think there will be many more breakthroughs until we understand our own cognition on a deeper level.

Matt Stobber said

at 3:53 pm on Dec 18, 2009

In chapter 5, Clark talked a little bit about genetic algorithms, and he said something that I found fascinating. He talked about how Thompson and his colleagues used genetic algorithms to try to find new designs for electronic circuits to better control a robot that moves using sonar. This made me think about the point of singularity, that is, the point at which we create robots that can build better robots using designs we would never have thought of. This really does make one think of a Matrix-type outcome where robots keep evolving themselves to be better and better and no longer need humans. But I do find the idea of using the concept of evolution to find solutions to problems brilliant. Just like nature's implementation of the cricket, I believe we can use genetic algorithms to find solutions that we would never have thought of.
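A bare-bones genetic algorithm sketch may help show the idea. The real circuit-evolution work is far richer; the toy fitness function here (count the 1s in a bit string) is just a stand-in, and the population size, mutation rate, and generation count are invented.

```python
import random

# Bare-bones genetic algorithm: evolve a 20-bit string toward all 1s by
# selection, crossover, and mutation. The fitness function is a stand-in
# for "how well the evolved circuit controls the robot."
GENES, POP, GENERATIONS, MUTATION = 20, 50, 100, 0.02

def fitness(bits):
    return sum(bits)                      # toy score: number of 1s

def crossover(a, b):
    cut = random.randrange(1, GENES)      # splice two parent designs together
    return a[:cut] + b[cut:]

def mutate(bits):
    return [1 - g if random.random() < MUTATION else g for g in bits]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]       # keep the fitter half as parents
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print("best design score:", fitness(max(population, key=fitness)), "out of", GENES)
```

Nobody designs the winning bit string directly; selection pressure finds it, which is the sense in which evolved solutions can be ones "we would never have thought of."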

Matt Stobber said

at 4:02 pm on Dec 18, 2009

Clark stated what I believe to be a very profound idea in chapter 8. He talks about how software agents almost become intertwined with our cognition; that is, our cognition becomes dependent on these software agents to operate efficiently. The idea is this. Let's say you start using the web at age 4. There is a software agent that monitors your web activity, your online reading habits, your CD purchases, and so on. Over the next 70 years, you and this software agent co-evolve and influence each other. You make your software agent adapt to you, and your software agent makes you adapt to it by recommending things you may like, back and forth. The software agent is therefore contributing to your psychological profile. Perhaps you are only using the software agent the same way you use your frontal lobe? This really makes one think about how software influences us, and even how it may someday be possible to have implants that influence our cognition.

Matt Stobber said

at 4:12 pm on Dec 18, 2009

I was pondering the Turing machine and read that there are problems and algorithms so computationally expensive to solve through brute force that it would take a computer longer than the existence of the universe, and then some. I found this interesting because some of these same problems can be solved by humans using "shortcuts," logic, and reasoning. It really makes one wonder just what the hell kind of algorithms our brain has implemented that could cut down the "computation" time so much, and whether we will someday be able to implement these algorithms in a computer. Will it someday be possible to combine the computational power of computers with the abstract biological reasoning of humans?
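A toy example of that brute-force-versus-shortcut gap, using a small subset-sum problem with made-up numbers: checking every subset grows as 2 to the power of the number of items, while reusing partial results grows only with the number of items times the target.

```python
from itertools import combinations

# Toy illustration (numbers are made up): does any subset of these values
# sum exactly to the target?
values, target = [3, 34, 4, 12, 5, 2], 9

# Brute force: check every one of the 2**n subsets. Fine for 6 items,
# hopeless for a few hundred, where 2**n steps dwarfs the age of the universe.
brute = any(sum(c) == target
            for r in range(len(values) + 1)
            for c in combinations(values, r))

# "Shortcut": dynamic programming reuses partial sums, so the work grows
# with len(values) * target instead of 2 ** len(values).
reachable = {0}
for v in values:
    reachable |= {s + v for s in reachable if s + v <= target}
smart = target in reachable

print(brute, smart)   # True True: same answer, wildly different cost
```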

Jerek Justus said

at 7:06 pm on Dec 18, 2009

I think this class has really stressed the misalignment between what appears to be the case and what really is. I would say that at the beginning of the semester I was largely an empiricist. I believed that you can only know what you are able to perceive, but after taking this class I've had to seriously question some of my longest-held beliefs. I now find myself tending toward a more objective view of reality that doesn't depend so much on an individual's perspective, largely because of the fallibility of that perspective. It seems that as a race we have a tendency to project how we think things really should be, and then accept those projections as fact. But just because our conception of reality contains something doesn't make it an actual representation of fact, as is the case with Clark's discussion of the cricket. Not until closely studying the structure of the cricket did we discover how simple the system truly is. Before that, our knowledge depended largely upon intuition. I'm finding that this intuition is not nearly as powerful as I had originally thought. I'm curious to see how this affects my perspective from here on out.

Jerek Justus said

at 7:16 pm on Dec 18, 2009

It has been really interesting to take this class in conjunction with an Eastern religions philosophy class, in which Buddhism proposes a compelling case against the concept of identity, while working with Hume has actually led me closer to an objective view of a continuous simple self. In attempting to reconcile these views, I have come to a better understanding of each. I find that the two do not actually even refer to each other. The lack of self in Buddhism is rather a perspective in which one lets go of one's attachment to what one thinks one's personal self should be, a move away from unrealizable idealism; whereas the simple self is more a valuation of the vessel in which your experiences take root. I find that this class has given me an opportunity to grow in both of these perspectives.
