
Weekly Submissions

Page history last edited by Jennifer Holdier 15 years, 1 month ago

I accidentally deleted Evan Brennan's post. I was trying to edit the page and post my commentary, but I guess I'll just add it to the comments section. This is Evan's original post (Sorry, Evan!):

'I thought it was interesting how the first chapter began by describing the brain as a "meat machine," but then explained that the point is not the material the brain is made of, but the way this "meat machine" collects and organizes information into thoughts and ideas. The chapter also went on to discuss the thoughts produced by the brain as being nothing more than computation. For example, someone sees a car crash and immediately runs to a pay phone to dial 911 for help. In this example (x) caused (y), and it was the observer's instinctively computed response to call for help, or as Clark put it, "...interpretations thus glue inner states (of the brain) to sensible real-world behaviors" (15).'

Comments

Matt Stobber said

at 5:51 pm on Oct 26, 2009

I was pondering today's lecture about the cricket and found it fascinating that nature could solve such a complicated problem with a solution as simple as two neurons. As a programmer, it is very easy to approach problems from a high-level perspective, and this was a humbling example of how one can learn to solve them more simply by examining how a naturalistic framework would, that is, how the universe, having no intelligence, arrives at an evolutionary solution to a very complicated problem. I wonder if this is how we should look at cognition. Are we trying to solve the problem of consciousness, and other complex cognitive problems, the same way we first tried to solve the "cricket" problem, by adding overcomplicated explanations and systems?
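To make the lecture example concrete, here is a toy version in Python. This is only a sketch of the general two-neuron idea, not the actual robot-cricket controller; the leaky-accumulator details, constants, and function names are all my own stand-ins.

```python
import math

def ear_drive(angle_deg):
    """Relative drive to each 'ear' neuron for a song at angle_deg
    (0 = dead ahead, positive = off to the cricket's right)."""
    theta = math.radians(angle_deg)
    # The nearer ear hears the song slightly louder (crude directionality).
    left = 1.0 - 0.5 * math.sin(theta)
    right = 1.0 + 0.5 * math.sin(theta)
    return left, right

def phonotaxis(angle_deg, threshold=10.0, dt=0.1, leak=0.05):
    """Race two leaky accumulator neurons; turn toward whichever
    fires first. No map, no plan, just two neurons."""
    v_left = v_right = 0.0
    while v_left < threshold and v_right < threshold:
        left, right = ear_drive(angle_deg)
        v_left += dt * (left - leak * v_left)
        v_right += dt * (right - leak * v_right)
    return "turn left" if v_left > v_right else "turn right"

print(phonotaxis(30))   # song off to the right -> "turn right"
print(phonotaxis(-45))  # song off to the left  -> "turn left"
```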

Josh Egner said

at 8:12 pm on Oct 26, 2009

I found chapter 5 to be very interesting. The interactions between cognition, perception, and motion are intricate and unintuitive. This makes me think that a haphazard assembly of different systems that perform specific functions may yield more interesting information about potential robotic capabilities than constructing a robot to perform a specific function. It also makes me consider the possibility of combining genetic algorithms with crude robotic systems to essentially evolve a robot, like the robot that learned to walk. We could start by training the robot to move using a GA and then see what it could be trained to do using a set of basic functional systems. Such an experiment would be very exciting to me because it could yield results that were not even imagined when the robot was constructed. It is also curious that the robot would learn from interacting with the physical world, and that the experimenter would learn from the robot's achievements, which lie outside the experimenter's intentions and in this sense come from the real world rather than from the laboratory setting. The only challenge would be creating a general enough reward system for the GA to engender unanticipated but advantageous capabilities in the robot. Now that I think of it, a GA with self-programming reward systems, based on learned behaviors and physical environmental rewards, could be a suitable model for the soul and the evolution of our neural networks (I think I want to write my paper on this).

darkair@... said

at 5:28 pm on Oct 27, 2009

Between the lectures and readings I have been really intrigued by bio-mimicry. It seems like such a great way to engineer or look for solutions: most of the work has already been done and rigorously tested over thousands of years. This way of thinking can potentially extend to many fields. I'm a biology major, and recently at my work we were discussing bio-fuels and how best to break down plant material to better access the abundant cellulose. After reading about the current processes, which rely heavily on chemicals and heat, a coworker asked: how does nature do it? Such an obvious question to ask. We had been trying to reinvent the wheel rather than reverse-engineer the solution. When we looked at how nature accomplishes the analogous task, we found an amazing array of enzymes and microorganisms that synergize to accomplish an integral task in nature.

Josh Egner said

at 9:43 pm on Oct 27, 2009

What I got from chapter 6 was that although it is impressive how much can be achieved with simple sensors and interaction with the environment, we need to appreciate representations and models for how they allow you, in a sense, to interact with what is not present. Planning and anticipation are integral cognitive abilities and have been shown to influence our perceptions of the world. Our memory is a perfect example of how we use abstract representation to, in a sense, experience a situation without the real world creating it at that moment. It seems that to make artificial life we would need it to be able to create internal representations. How these representations would be created and understood would be a challenge; think, for example, of the mix of sensory, emotional, and conceptual/historical information that combines to form the memory of an event. I think a connectionist approach would best serve such an odd combination of information types, because the weighted system offers a way to generalize the influence of these different information types.
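As a toy illustration of that weighted blending, here is a minimal connectionist sketch. The feature names, data, and delta-rule training loop are invented purely for illustration; nothing here comes from Clark.

```python
import random

random.seed(0)
FEATURES = ["sensory", "emotional", "conceptual"]
weights = {f: random.uniform(-0.5, 0.5) for f in FEATURES}

def recall_strength(memory):
    """One weighted unit: sum each information type's contribution."""
    return sum(weights[f] * memory[f] for f in FEATURES)

def train(memories, targets, lr=0.1, epochs=200):
    """Delta-rule learning: nudge each weight in proportion to its
    input and the prediction error."""
    for _ in range(epochs):
        for memory, target in zip(memories, targets):
            error = target - recall_strength(memory)
            for f in FEATURES:
                weights[f] += lr * error * memory[f]

# Toy data in which emotionally charged events are recalled strongly.
memories = [{"sensory": 0.9, "emotional": 0.1, "conceptual": 0.5},
            {"sensory": 0.2, "emotional": 0.9, "conceptual": 0.4},
            {"sensory": 0.5, "emotional": 0.8, "conceptual": 0.1}]
targets = [0.3, 0.9, 0.8]
train(memories, targets)
print(weights)  # the "emotional" weight ends up largest on this toy data
```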

Megan Holcombe said

at 8:45 am on Oct 28, 2009

This is really the first chapter in Clark that has caught my interest, or perhaps made the most sense to me. Reading about genetic algorithms in chapter 5 was a new idea to me. These bit strings encode solutions and are then chosen by their performance either to be "bred" or to die. This allows nature to decide which bit strings evolve the highest functioning, while the others are weeded out and become extinct. Allowing a machine or robot to evolve based on its surroundings follows the pattern of human evolution. It now seems obvious how a program could learn to adapt to nature more efficiently than anything we could design by hand. It is not necessary to rely on intelligent design, only on many generations interacting with their environment within the limits of their "bodily" functions.
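A bare-bones version of this breed-or-die scheme can be sketched in a few lines. The fitness function (just counting 1s) and all the parameters below are toy choices of mine, not the book's example.

```python
import random

random.seed(1)
LENGTH, POP, GENERATIONS = 20, 30, 40

def fitness(bits):
    """Toy fitness: how many 1s the bit string contains."""
    return sum(bits)

def breed(mom, dad, mutation_rate=0.02):
    """One-point crossover plus occasional bit-flip mutation."""
    cut = random.randrange(1, LENGTH)
    child = mom[:cut] + dad[cut:]
    return [b ^ 1 if random.random() < mutation_rate else b for b in child]

population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]            # the rest go extinct
    offspring = [breed(random.choice(survivors), random.choice(survivors))
                 for _ in range(POP - len(survivors))]
    population = survivors + offspring

print(max(fitness(p) for p in population), "of", LENGTH)  # near-perfect string
```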

Megan Holcombe said

at 8:57 am on Oct 28, 2009

One aspect of artificial life shown in chapter 6 is the work done on flocking. A computer program simulated a group of "boids," modeled after birds, that were required to follow only three rules, all based on interaction with the rest of the flock. Amazingly, not only did the boids resemble the sort of behavior we see in a group of flocking birds, but they even parted and re-grouped when faced with an obstacle in their path. This work showed remarkable patterning behavior. Similarly, the robots modeled after termites further show the ability of A.I. to problem-solve not by being individually designed with extreme intelligence, but through group activity with external factors guiding the response. While these feats are astonishing, it should be noted that much human behavior is done without a physical or environmental limitation to guide the action; these A.I. lack that cognitive capacity, the capacity to detect and respond to non-nomic properties. These examples are categorized as emergence as collective self-organization: the collective system performs an activity without a "self." The activity appears to be performed by a "self" but is really explained by the interaction of the group's physical properties with the external environment placed upon it. The "self" works because the group acts in a pattern and continually falls back into that pattern as more members influence their neighbors to follow the group direction, until the group becomes a mass collective self.
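For the curious, the three rules can be written down in a few lines. This is only a sketch in the spirit of the boids program; the weights and neighborhood radius are arbitrary stand-ins, not the original parameters.

```python
import random

def step(boids, radius=5.0, w_cohere=0.01, w_align=0.05, w_sep=0.1):
    """One update of a flock of (x, y, vx, vy) tuples."""
    new = []
    for x, y, vx, vy in boids:
        nbrs = [b for b in boids
                if b[:2] != (x, y)
                and (b[0] - x) ** 2 + (b[1] - y) ** 2 < radius ** 2]
        if nbrs:
            cx = sum(b[0] for b in nbrs) / len(nbrs)   # rule 1: stay near the group
            cy = sum(b[1] for b in nbrs) / len(nbrs)
            avx = sum(b[2] for b in nbrs) / len(nbrs)  # rule 2: match velocity
            avy = sum(b[3] for b in nbrs) / len(nbrs)
            vx += w_cohere * (cx - x) + w_align * (avx - vx)
            vy += w_cohere * (cy - y) + w_align * (avy - vy)
            for bx, by, _, _ in nbrs:                  # rule 3: don't crowd
                if (bx - x) ** 2 + (by - y) ** 2 < 1.0:
                    vx -= w_sep * (bx - x)
                    vy -= w_sep * (by - y)
        new.append((x + vx, y + vy, vx, vy))
    return new

random.seed(2)
flock = [(random.uniform(0, 20), random.uniform(0, 20),
          random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]
for _ in range(100):
    flock = step(flock)   # cohesive motion emerges; no boid "intends" to flock
```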

Erin said

at 4:14 pm on Oct 28, 2009

I was interested in chapter 6 when the flocking behavior of animals was said to be almost perfectly depicted by boids following a set of basic rules, and the question was raised as to whether or not the boids were truly flocking. Undoubtedly animals that flock are also following a simple set of rules programmed into their minds through the process of evolution, but it seems as though these rules (such as stay near the mass of others, match speed with others, never get too close to or too far from your neighbors) are merely tools used to accomplish the overall goal: to flock for survival. In this way the definition of flocking holds a double meaning: (1) to follow a set of rules (the functional basis of flocking), and (2) to move in a way that meets certain criteria of survival (such as confusing predators). It seems that the boids are missing the second part of this definition. While they show flocking behavior by following the rules, they are not truly flocking, because their movement does not aid survival, which is the true essence of natural flocking.

Jerek Justus said

at 6:06 pm on Oct 29, 2009

I think Marr's computational approach to understanding cognition in chapter 5 is a better representation of the way humans use logic to solve problems than it is an effective way to describe how nature has developed systems to solve similar problems. It seems natural that when a human tackles a problem, that person first identifies the task, develops a corresponding algorithm, and lastly implements that algorithm. But just because this is the process we use as cognitive beings to understand nature doesn't mean it is the same process by which nature solves problems. It seems that we're imposing the limitations of our knowledge on natural systems, or else assuming that all such systems have this capacity to reason. If we reject this notion, the lines between task, algorithm, and implementation are substantially blurred. Take, for example, your eyes. Would not the computational analysis of information also be the implementation of that system's function? In this sense it becomes not only difficult but impossible to distinguish what constitutes task, algorithm, and implementation in this model.
In stepping away from Marr's approach, scientists have begun mimicking biological means of engineering. In order to truly understand this process of incremental tweaking, however, we must first expand our means of comprehension. If Marr's task/algorithm/implementation structure accurately depicts the way we rationalize, then maybe it is our very system of thought that needs to change in order to fully comprehend the process by which cognition functions.

Benjamin B. Walker said

at 9:22 am on Oct 30, 2009

In response to Erin's comment above, concerning flocking qua flocking, I think the statement "they are not truly flocking because their movement is not aiding in survival, which is the true essence of natural flocking" assumes too much on the part of the natural world. We as humans experience a richly sensible world. Geese, ducks, and other flocking animals probably do not have all of what we have; in fact, I think it safe to say they have a very different idea of what the world is like. From this, I think we can draw the conclusion that the animals, much like the boids, are simply following rules. There is no consideration of whether or not survival will happen. Natural selection rewarded birds with strong tendencies toward flocking with longer lives, and survival just happened. So here is my distinguishing question: if we introduced a predatory boid into the simulation, would the boids then be flocking?

Benjamin B. Walker said

at 10:09 am on Oct 30, 2009

Vincent had a groundbreaking insight a little while ago that impressed me deeply; he made a comment about the "of courseness" of mechanistic responses to the physical world, and how asinine it is, when building robots, to attempt to start with cognition. Nature, after all, started very basically, with mechanical mobility ruled by nothing more than a few neuron-like cells firing when stimulated. It seems to me that the cognitive scientists building robots in order to further understand cognition should start with this notion of mechanistic interaction with the environment, and once that has been mastered a little further, we can start introducing more cognition-based interactions.

Mike Prentice said

at 10:58 am on Oct 30, 2009

So, what I am going to start rambling about is the craziness involved in figuring out how we figure out. We started out trying to compare our brains to the Turing machine, where direct causal effects took place to give us the end result of 2 + 2. As our semester has progressed, we have started to see many other options for how our brain might actually work.
Neural networks that adjust the strength any given connection has through positive reinforcement are a very good option for conceptualizing how our brain works, since I can at least visualize something like this happening, and the ability of our brains to do something like it is not that far-fetched. Also, if we take creativity to be an overflow from one sensory input to another, this kind of neural network could support that idea through cross-connections of axons leading to accidental transmissions of electrical pulses, "thoughts."
I want to turn more attention to what was discussed in chapter six, the idea of emergence. How termites build their homes was explained by the book as deposits of a chemical every time a termite puts down a dirt clod. The other termites "smell" this and then decide to deposit their dirt clods wherever the chemical scent is strongest.
Anyway, the reason I'm writing about this is that it almost seems like the original thought process we talked about. It seems that animals go through simple causal factors that help them determine what they are going to do, while creativity comes from a short in the neurons. So since humans are arguably more creative than termites, it would follow that humans have more electrical shorts in their brains. Maybe humans are a deformity.

hdmartin@... said

at 7:30 pm on Nov 1, 2009

In chapter six, Clark discusses robots and artificial life. He starts off by talking about how crickets can tell different species apart by the frequency of the song they produce. Crickets can also tell which direction another cricket is in by its song. When robotic crickets were made, the robots could pick up the songs from other robots; however, they could not tell the difference between the different songs (i.e., the different species), nor could they tell which direction a song was coming from. This shows that we can study nature and see the effects, and the ways in which simple tasks are done, and yet still take the information we have collected and not come up with the robot we intended to create. There is a good probability that crickets are not self-aware, so there is a good probability that our failure to create such a robot stems not from the robots' lack of self-awareness but from the need to take a larger look at the problem. That is, we may need to look into the "brain, body, and world," or at a larger chunk of the problem; for example, there may be exterior elements working on the interior, causing the interior to work a certain way.

hdmartin@... said

at 10:05 pm on Nov 1, 2009

In chapter seven, Clark talks about dynamics. He starts by naming three cases. These cases show that humans use the mind and the biological body together, as one, to work through certain tasks. In a way, the biological introduces the inputs, or the "problem," into the mind; there the mind can evaluate the input and therefore produce an output, or solution. However, it is unclear how much of the mind is actually conscious of what it is doing. For example, people (for the most part) do not have to consciously think about walking in order to walk, though one may have to be conscious of the path being walked (especially if it is unfamiliar). At the same time, Clark is not implying that with difficult tasks the mind is fully conscious, or even partially conscious. He ends by stating that real-time response and sensorimotor coordination are key players for the mind, though their importance is not yet known.

Matt Stobber said

at 5:30 pm on Nov 3, 2009

I have been pondering something for a while. I have noticed that nature solves problems as simply as possible, while we as humans always try to solve problems from a very high level. The question I have been pondering is: is this bad? It is true that we can learn a lot from nature by studying how it solves problems, but do we really need to start at such a low level? Or can we "skip" billions of years of evolution and start building artificial life at a much higher level? I would think it would still be possible to replicate the "algorithms" that implement cognition if we could figure out exactly what those algorithms are and how they are implemented.

Matt Stobber said

at 5:34 pm on Nov 3, 2009

The discussion on Monday about emergence was extremely interesting to me, because as I listened I began to think about life, and the possibility that life could be just an emergent process. This of course would be a naturalistic explanation, and it makes sense insofar as, of all the universes that exist (assuming the multiverse is real), this is the only one, or just one, that allows life to be an emergent behavior, something that just occurs through random chemical processes. This doesn't explain why the emergent process can't be reproduced, though; maybe the conditions it requires are just extremely delicate, and we have yet to find the exact quantities and conditions that allow this emergent behavior to arise.

Erin said

at 12:36 pm on Nov 4, 2009

There is a discussion in chapter 7 of how a cat that loses a leg will quickly learn to walk gracefully on the remaining three. Pollock claims that this adaptive ability does not come from an operating system of great intelligence, but rather originates from many different functioning systems working elegantly together inside the biological animal. These systems may include the physical properties of the legs and brain, a history of learning experiences the animal has been through, and even the particular nature of the animal, such as energy, curiosity, and a powerful will to survive. This point brought me back to my earlier discussion of the Chinese room; I wondered about the author of the book in the room, and made the point that this information must have come from some ultimately intelligent source. This shows how different a cat re-learning to use its legs is from the room: the cat has no system of superior knowledge orchestrating it; rather, its adaptive ability originates from sources deep within its own working system.

Dcwalter said

at 9:53 pm on Nov 8, 2009

The most interesting chapter in this book has got to be chapter 8. It seems obvious that "mind" is the title we have given the cognitive process spanning everything from the workings of our brains to the cognitive tools at our disposal. Further, it does seem likely that human minds have had the capacity to grow and improve based on the cognitive tools we have developed. The idea is that language (arguably the first cognitive tool in our tool-belt) is fundamentally tied up with what we call the mind, because in all reality there was no such thing as mind before the capacity to think about thinking came around. When thinking about mind in this way, it seems that the push for some sort of artificial mind is completely within the realm of possibility, as long as it is approached in the right way. If we can really get machines to think and solve problems, even in a rudimentary sort of way, then the next step, getting them to think about how to improve their own thinking, would be an explosive leap forward.

vincent merrone said

at 10:26 am on Nov 9, 2009

It becomes interesting to think about mind at various functional levels, e.g., a bacterium with a mind vs. a human with a mind. It is curious because when one thinks of mind one automatically thinks of a human cognizing. But I do not understand how the conclusion is drawn that bacteria, birds, viruses, etc. have minds. Yes, one could argue that it is a very rudimentary kind of mind, but it seems more like a material functioning of the organism than a mind. The bacterium comes under heat and stops reproducing, releases certain enzymes, etc. Is that mind, or is that survival? Or should we put survival and mind into the same box? If so, does the human mind not fit that category at times? There are many things an individual cognizes about that have zero survival value: "Oh yes! The Yankees won the World Series!" That has little survival value and without a doubt differs from the enzyme releases of bacteria. Also, what about qualia? Are qualia a part of the mind? Do bacteria, birds, and bees experience qualia? Or is what non-human animals experience in terms of qualia just a less advanced version of human qualia? It seems that there must be a distinction between what counts as mind and what does not. Of course, a rock falling down a hill does not have mind, unless one still clings to Aristotelian physics. But the point seems clear: how do we categorize, account for, name, and distinguish the various levels of mind (if we are even privileged to say that)? Turn back the clock to a time before animal life. Could we say that the most advanced unicellular organism had mind? Or are we using the human mind of today as an artificial way of attributing mind to other organisms, in order to click on a paradigm that may be pragmatic in the study of AI?

vincent merrone said

at 12:38 pm on Nov 17, 2009

Notions of identity become tricky. I feel it really comes down to what parameters one uses to speak of identity. Is one an agent produced by culture and society? Is identity something physical or mental? I personally feel identity, in a Foucault kind of way, is very much a construction of a social and cultural kind: hence the discourse of power. But if this is true, what is the domain in which one gets to pick and choose one's identity? Well, society forces X, environmental interactions force Y... but is there a Z force, one's own ruminations and mental workings that solve problems, think certain issues over, etc., that creates one's own identity? I personally feel this is minimal, for there is always an interaction that can be viewed as a stimulus that ENABLES one to think and act in certain ways; that is, certain environmental factors allow for identity growth in A, B, or C ways. Like when someone says, "I'm rebelling against the machine, I'm doing what I WANT." Well, is that identity not formulated because of the machine, as an opposition to it (there is no black without white, no up without down, no good without a designated bad)? But what are identity's parameters? Social, mental, etc.?

darkair@... said

at 10:25 am on Nov 20, 2009

A few days ago I believe it was Ben who said that we can't isolate the part that makes us us, because an essential part of the process of being you requires an exported process. After Tuesday's lecture it seems like this comment has more to it. In trying to answer what my identity is, we place parameters, when it seems obvious that we can't be secluded away on our own island of consciousness. Instead, being me requires a knowledge or understanding of my pattern of interactions and my behavior regarding them. To be "me" is a unique experience unlike all others because of the awareness of temporal and functional continuity. This view seems to account for all experiencing entities of life. I also like what Vincent had to say about using the human mind as the meter stick by which we judge other minds. Definitely anthropocentric.

Matt Stobber said

at 3:15 pm on Nov 23, 2009

I find the concept of solving problems that seem extremely complicated by breaking them down into smaller parts fascinating. Take the problem of flocking patterns: instead of trying to solve it with a complex solution, we can solve very simple problems and then get an emergent outcome, flocking patterns. The example in the book I liked was the boids, which followed only three simple rules: stay near a mass of other boids, match velocity with your neighbors, and avoid getting too close. I find this interesting because it is a beautiful example of emergence, and it really does change the way one looks at how nature "implements" things. I would really like to examine emergence more from the perspective of morality. Maybe we can break morality down into simpler solutions instead of one complex theistic solution.

Alex Edmondson said

at 12:00 am on Nov 28, 2009

I'm quite behind on posts, so here's chapter 3's. Dennett's belief that speech is especially important for human cognition was quite interesting to me. Maybe it's just because I'm a poetry major as well, but I strongly agree with this statement. He states that "thinking-our kind of thinking-had to wait for talking to emerge," and I find this quite intriguing (60). We seem to grow and learn from each other, and without speech I don't know that we could have gotten where we are today. We must also remember, though, that speech isn't reality; it's just the labels we put onto the world, so I don't know that we couldn't survive without it. But perhaps we need a clinging to language for us to survive as a pack; without it, loneliness would be much lonelier. Still, I think if talking weren't around, we would have found something to replace it, because we thrive on communication.

For chapter 4, I found box 4.2 interesting. It describes "gradient descent learning" with the image of standing blindfolded on the slope of a giant pudding basin and having to maneuver your way to the bottom. You go through a series of steps, literally in this case, to test whether you're moving up or down the basin. If you move up, you must go back and try stepping in the opposite direction; if you move down, you stay where you are. You continue the process until you reach the bottom of the basin. The error rate stays low in this case because the slope runs constantly downward, so a solution to the problem is easily attained.
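Box 4.2's procedure is easy to sketch in code. The bowl function, starting point, and step size below are my own stand-ins, but the keep-the-downhill-step logic is the same.

```python
def height(x):
    """The 'pudding basin': a simple bowl whose bottom is at x = 3."""
    return (x - 3) ** 2

def descend(x, step=0.5, tries=100):
    """Blindfolded descent: try a step each way, keep it only if it
    goes downhill; if both directions go uphill, take smaller steps."""
    for _ in range(tries):
        for direction in (+1, -1):
            if height(x + direction * step) < height(x):
                x += direction * step     # downhill: stay there
                break
        else:
            step /= 2                     # both ways go up: near the bottom
    return x

print(descend(x=10.0))  # converges to the basin's bottom at x = 3
```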

Alex Edmondson said

at 12:14 am on Nov 28, 2009

I liked what box 5.2 had to say about "mirror neurons." It describes a set of neurons that are action-oriented, context-dependent, and implicated in both self-initiated activity and passive perception. The neurons are active both when a monkey observes an action and when the monkey performs the same action. The conclusion is that the action is stored in terms of a coding and not through perception, which is quite interesting, because according to this, maybe we have our knowledge stored away as children and just access it later in life.

Alex Edmondson said

at 12:35 am on Nov 28, 2009

I found the discussion of the crickets very intriguing. The process it takes for a cricket to hear is amazing. It's almost like the time-lapse argument, and it makes me wonder how crickets perceive their surroundings through sound, since their sense of sound is delayed and arrives in pieces. It's remarkable what the cricket goes through to find a mate. It seems much more intimate than basic survival, but it isn't; it's to make the best offspring. Perhaps humans should follow the crickets.

Alex Edmondson said

at 12:48 am on Nov 28, 2009

Box 7.2 illustrates the idea of vision designed for action instead of for perception. It discusses an illusion in which two central discs are equal in size, but our eyes always misjudge the sizes. It's strange, and somewhat frightening, that our eyes are not that reliable, since sight is the sense we seem to rely on most. The basic problem becomes more complicated when the circles are brought into 3-D, and into physicality, with poker chips. When subjects were asked to pick up the pile of poker chips that were the same size, their fitted grip, the finger-thumb aperture, perfectly suited the physical chips. Vision was used for experience and action, not for perception, because perception of the 2-D models is what caused the eyes to misjudge the size. The box says that the processing underlying visual awareness might work separately from the visual control of action, which is very strange and, in a way, concerning.

Alex Edmondson said

at 12:55 am on Nov 28, 2009

Box 8.1 about "The Talented Tuna" was very interesting and unusual. It's so strange that the creature shouldn't physically be able to do what it actually does: the fish is not physically strong enough to swim as fast as it does, but manages to do so through the manipulation of its environment. So the fish, as a machine, is not only the fish but also the world around the fish; the fish and the world seem to be one in the process of swimming. Yet this system that the fish creates is then exploited by the fish itself. The fish is able to create a system and then go about exploiting its own created system, which is quite unusual. I suppose we are like the tuna in our manipulation of technology, but it still seems quite miraculous that a tuna can do this for itself, more so than what we can do.

darkair@... said

at 8:16 pm on Dec 1, 2009

Piggybacking off what Alex wrote about the tuna, the paradox of active stupidity makes the process of interacting with the environment an essential part of the equation. The basic principle is that neither the chicken nor the egg came first; they coevolved, shaping each other. In this section Clark talks about tool use and the opposable thumb as seeds for our current cognitive ability. The further actions allowed by these seeds produced further innovation, and a sort of positive feedback system began. I think Isaac Newton's quote is perfect for this line of thinking: "If I have seen farther than others, it is because I was standing upon the shoulders of giants." What more is life than a reward system built to better itself over time?

Matt Stobber said

at 10:02 pm on Dec 1, 2009

After the last class, where we did lots of logic puzzles, I started thinking about how flawed we are when it comes to logic. Some people are more logical than others, of course, but nevertheless, why is it that the majority of people choose an answer that makes sense but is actually wrong? And why do most people choose the same "wrong" answer? I find it interesting that we have to apply such focus and thought to arrive at the correct logical answer. Why doesn't logic come easier? Is it because there was no evolutionary need for logic to be an easy, almost automatic process? And now that we are at a point where society is becoming more advanced intellectually and using these seldom-used faculties more, will we someday evolve to a point where we can think deeper about problems with less mental effort?

Andrew Broestl said

at 2:31 pm on Dec 2, 2009

In chapter one we get an introduction to the mind as a meat machine. It is a machine in the sense that it completes simple computations to answer complex questions. The Turing machine is a good example of this. As seen in box 1.3, the machine is given an input and, by moving along a tape, produces an answer to the question. Our mind seems to work like this as well. 1 + 1 is a simple task for both our mind and the Turing machine to perform: it analyzes one as one thing, say one apple, plus another single apple, which constitutes two apples. Take, for example, our two index fingers: they are similar, and as such we see that they are two similar things; therefore we have two index fingers.
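A tiny Turing machine along these lines can be sketched in a few lines of Python. To be clear, this is my own minimal unary-addition machine for illustration, not the exact machine shown in box 1.3.

```python
def run(tape):
    """Add two unary numbers (runs of 1s separated by '+') by
    overwriting the '+' with a '1' and erasing the final '1'."""
    tape, head, state = list(tape), 0, "scan"
    while state != "halt":
        if state == "scan":            # march right, joining the two runs
            if tape[head] == "+":
                tape[head] = "1"
            if head == len(tape) - 1:
                state = "trim"         # at the end: one surplus 1 to erase
            else:
                head += 1
        elif state == "trim":
            tape[head] = "_"
            state = "halt"
    return "".join(tape)

print(run("11+111"))  # '11111_', i.e. 2 + 3 = 5 in unary
```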

Andrew Broestl said

at 2:44 pm on Dec 2, 2009

In chapter 2 we are introduced to the Physical Symbol System, or PSS. The Chinese Room asks whether an English speaker, given questions in Chinese along with instructions in English for how to manipulate the Chinese symbols into intelligible responses, thereby understands Chinese. More goes into understanding Chinese than just being able to answer questions using English instructions, though. We may appear to understand Chinese from outside the room, but the fact is that we lack the ability to understand Chinese independently of the room. But is this room just a picture of our mind translating English into Chinese? I would say no, because a Chinese speaker who is asked a question does not have to translate the idea into another language to answer it. He thinks about the question without having to translate it; he thinks in Chinese, not in another language, when answering.

Erin said

at 11:26 pm on Dec 6, 2009

In chapter 8 of Mindware, much credit is given to the brain's capacity to utilize its environment in order to function successfully. It is argued not only that outside stimuli and sources aid the mind in learning, but that these environmental factors are actually an essential part of the process of consciousness. The tuna is given as a very clear example: the fish would not be able to swim and maneuver (and therefore survive) as it does without turning the physical forces of the water to its own advantage. This presents an interesting obstacle to viewing the mind as a computational system complete within itself, for at what point does this system incorporate environmental properties into itself? It seems possible enough to imagine an individual gradually learning to do this, but what about the tuna? It seems as though the fish was born with the instinctual capacity built in; that is, the properties of water were already a part of its brain's neural system. This leads to the question: how much of the tuna's identity is connected to water? More specifically, if we lose our ability to use the environment to our benefit, do we remain a defective version of ourselves, or do we lose a defining part of our neural network?

Erin said

at 11:46 pm on Dec 6, 2009

One of the final thoughts explored in Mindware is that of the artist, and why he must sketch and plan and re-create even the most abstract piece of artwork. The fact that the artist can't merely formulate an idea and create it in a single go is strong evidence of the mind's need to examine and filter through its own contents. I feel this same process taking place when I prepare to write a paper: I will write a line and process it, then decide whether it has adequately represented the idea I'm trying to get across. I often find that I discover my own ideas only after I've written them down and can then clearly see my own thought processes on paper. Both of these examples present a fascinating scenario of the mind analyzing itself; how is this possible? It seems to be a strong indication of dualism, though not the typical mind-body dualism, but rather a mind-self dualism (where 'mind' is not the physical brain but the calculative processes that go on as a result of the brain's functions). The self must be separate from these mental calculations if it must rely on tools (sketching, writing) to really understand them.

Erin said

at 10:56 pm on Dec 10, 2009

I enjoyed the presentation given this week about the potential moral issues with advanced AI. They raised the question: at what point does cognition reach a level at which its agent becomes a moral one? This made me think of an interesting article I'd read (shown to me by a friend who is convinced our world will end in an AI apocalypse) about robots programmed with a rewards-based system of food foraging and producing offspring. The robots not only evolved to learn how to alert each other to danger and food, but some robots in a colony learned to lie about finding food, leading competitor robots in a false direction. The address of the article is at the bottom in case anybody wants to read it. It made me think: at what point does this level of AI interaction grant morality? If the robots are literally developing strategies to ensure the survival of each other and themselves, it seems as though this comes attached to some sort of purpose of continuation. This purpose was not something we wrote into the machine; rather, the machine seems to have realized it on its own.

http://discovermagazine.com/2008/jan/robots-evolve-and-learn-how-to-lie

Mike Prentice said

at 2:37 pm on Dec 14, 2009

So I sit here, writing my paper, wondering: how did I end up here? What was the chain of events that led me to be the person I am today? Naturally, I want to take a deterministic approach and say that I'm here strictly due to luck, or the lack thereof, in the events that have happened within my life. I don't know if this is the right answer, though. Can't there be something more than this luck that rules my life? Can't there be self-determination that outweighs any of the factors that play into my life? Could this not be what I was always destined to be? While the answers to these questions seem distant, I rest assured in the realization that I'm happy being who I am. Me, Mike Prentice, at 2:34 on a Monday, writing something ridiculous to get my mind off of . . .

Matt Stobber said

at 3:45 pm on Dec 18, 2009

I found chapter 6 to be very interesting as a programmer. The concepts of artificial intelligence and artificial life have always been very interesting to me. I especially liked the discussion about abstract thought, and how daunting a task it is to implement artificially. It got me wondering whether this will even be possible someday, or whether it is one of those problems so complicated that it cannot be solved through brute-force implementation, the only way a computer solves things. I think it is very possible that highly intelligent artificial intelligence will not be achievable until we understand our own cognition better, along with the "algorithms" our brains have implemented over billions of years of evolution. Just as we created new A.I. techniques like neural networks and genetic algorithms by examining human cognition, I don't think there will be many more breakthroughs until we understand our own cognition on a deeper level.

Matt Stobber said

at 3:53 pm on Dec 18, 2009

In chapter 5, Clark talked a little about genetic algorithms, and he said something I found fascinating. He talked about how Thompson and his colleagues used genetic algorithms to search for new electronic circuit designs to better control a robot that moves using sonar. This made me think about the singularity, that is, the point at which we will create robots able to build better robots using designs we would never have thought of. This really does make one think of a Matrix-type outcome, where robots keep evolving themselves to be better and better and no longer need humans. But I do find the idea of using the concept of evolution to find solutions to problems brilliant. Just like nature's implementation of the cricket, I believe we can use genetic algorithms to find solutions we would never have thought of.

Matt Stobber said

at 4:02 pm on Dec 18, 2009

Clark stated what I believe to be a very profound idea in chapter 8. He talks about how software agents almost become intertwined with our cognition; that is, our cognition becomes dependent on these software agents to operate efficiently. The idea is this: let's say you start using the web at age 4. There is a software agent that monitors your web activity, your online reading habits, your CD purchases, etc. Over the next 70 years, you and this software agent co-evolve and influence each other. You make your software agent adapt to you, and your software agent makes you adapt to it by recommending things you may like, back and forth. The software agent is therefore contributing to your psychological profile. Perhaps you are only using the software agent the same way you are using your frontal lobe? This really makes one think about how software influences us, and even how it may someday be possible to have implants that influence our cognition.

Matt Stobber said

at 4:12 pm on Dec 18, 2009

I was pondering the Turing machine and read that there are problems so computationally expensive to solve through brute force that it would take a computer longer than the existence of the universe, and then some. I found this interesting because these same problems can be solved by humans using "shortcuts," logic, and reasoning. It really makes one wonder just what the hell our brain has implemented that could cut the "computation" time down so much, and whether we will someday be able to implement those algorithms in a computer. Will it someday be possible to combine the computational power of computers with the abstract biological reasoning of humans?

Jerek Justus said

at 7:06 pm on Dec 18, 2009

I think this class has really stressed the misalignment between that which appears to be the case and that which really is. I would say that at the beginning of the semester I was largely an empiricist: I believed that you can only know what you are able to perceive. But after taking this class I've had to seriously question some of my closest, longest-held beliefs. I now find myself tending toward a more objective view of reality that doesn't depend so much on an individual's perspective, largely because of the fallibility of that perspective. It seems that as a race we have a tendency to project how we think things really should be, and then accept those projections as fact. But just because our conception of reality contains something doesn't make it an actual representation of fact, as is the case with Clark's discussion of the cricket. Not until closely studying the structure of the cricket did we discover how simple the system truly is; before this, our knowledge depended largely upon intuition. I'm finding that this intuition is not nearly as powerful as I had originally thought. I'm curious to see how this affects my perspective from here on out.

Jerek Justus said

at 7:16 pm on Dec 18, 2009

It has been really interesting to take this class in conjunction with an Eastern religions philosophy class, in which Buddhism proposes a compelling case against the concept of identity, while working with Hume has actually led me closer to an objective view of a continuous simple self. In attempting to reconcile these views, I have come to a better understanding of each. I find that the two do not actually even refer to each other. The lack of self in Buddhism is rather a perspective in which one lets go of one's attachment to what one thinks one's personal self should be, a move away from unrealizable idealism; the simple self, by contrast, is more a valuation of the vessel in which one's experiences take root. I find that this class has given me an opportunity to grow in both of these perspectives.
