See my report on the first day here. Unfortunately, it has been a while since I attended the conference, and too many things have happened in the interim, so this part of the report may not be as crisp as I’d like it to be.
I could only attend the afternoon session on the second day of the conference, and only one talk on the third. I did miss a few (possibly interesting) talks in the process – I have abstracts and contact info for them, so if you are interested, talk to me.
The first talk in the afternoon session on Day 2 was “Mathematics, Computation and Cognition” by Rajesh Kasturirangan of NIAS. His premise was that the study of the cognitive basis of mathematics and computation is best done on experienced mathematicians, as opposed to children or chimpanzees, as is usually the case. He talked about Turing’s classic paper “Computing Machinery and Intelligence” and George Lakoff’s work, which argues that mathematical thought is essentially metaphorical. The question we must ask, Rajesh said, is about the origins of that metaphorical ability. After this point, I lost him, and the discussion of what should be done got mixed up with what has already been done (at least in my mind). He gave some characteristics of mathematical phenomena (like precision), and said that some interesting questions in the area were understanding the cognitive capacity for induction/recursion, and formulating theories of mathematical understanding. In conclusion, he advocated a holistic interpretation of the metaphor-mathematical thinking link, to see if the two mutually interact and benefit each other. Q&A was OK, and there were some theories floating around about natural language and natural numbers, but I didn’t understand most of the discussion. An important reference here is “The Number Sense” by Dehaene.
Next came what I thought was the least interesting talk of the conference, “Marr’s three-level typology for Cognitive Science”, by S. Pannerselvam of the University of Madras. The speaker simply read out the paper he’d authored without once looking at the audience, or pausing to see if people understood what he was trying to say. Anyhow, Pannerselvam started with the vision module that Marr describes, where Marr gives a three-level typology for cognitive science, each level corresponding to one of the three questions – why, what, and how – of implementing such a module. He then appeared to contrast it with a connectionist approach, which he attributed to Jerry Fodor (see “The Elm and the Expert”). In that approach, as opposed to the “implementationist” approach of Marr, there is no central CPU processing information – instead, there is more of a neural net that has parallel “total cognitive states”. A “cognitive transition function” would define state changes, although its characteristics were not explained by the speaker. The most interesting part of the talk was when Pannerselvam mentioned Fodor’s “Language of Thought”, which he said was not a natural language. During Q&A, many in the audience asked him about the characteristics of this language, but he didn’t say much. If you are interested, though, go through the link above – it has quite a bit of information. Dan Dennett and Donald Davidson both support the “no thought without language” thesis.
What does a cognitive agent have to do to develop meaning/understanding, or what is called a “grounded representation”? The next talk, by Nagarjuna G of TIFR, was on this topic. He introduced Taddeo and Floridi’s (TF) criteria for solving the “Symbol Grounding Problem” (paper by Harnad here) and explained that the drawback of their system was that it did not specify a filtering mechanism to select the states that the cognitive entity would process. Other models were discussed, for instance the symbolic model (where the brain is a CPU with sensory I/O) and the connectionist model (that of neural nets and parallel states). The other shortfall, by the way, of the TF criteria was that the entity under observation was assumed to have an abstracting ability that simply arose through evolution! From what I understood, these models assumed something like the following: the eye only lets light through, and the brain and its structures decide what we are seeing and, equally importantly, how important what we are seeing is. The alternative model the speaker proposed used “active perception” and a sensory system where inputs collide with each other (think seeing and hearing at the same time), and where not everything that is perceived gets processed. There was more to this, but I’ll let you read his paper, “Muscularity of the Mind”. The talk makes more sense the more I chew on it, so I’ll try and post an addendum here later.
The final talk of the day was on the “Status and Justification of the Church-Turing Thesis (CTT)”, by Jonathan Yaari of the Hebrew University. The premise of the talk was that CTT is an a posteriori yet necessary thesis, and one that doesn’t require proof (or is unprovable) – similar to the fundamental axioms of geometry. The speaker wanted to use Kripke and Putnam’s theory of the existence of scientific ‘sentences’ that are both a posteriori and necessary to show that CTT is one such sentence. See this article for more on the theory itself. What he failed to give, though, was a proper ‘reduction’ from CTT to the K-P theory. He also spent too long describing CTT, attempted proofs of CTT, and the K-P theory to leave time for a proper explanation of his own ideas. Q&A focused on this and other questions on computability, and while some good points were raised, I don’t remember any of them now :(.
The last talk I attended was “Modularity Revisited” by Pritha Chandra of IIT-D. The question Pritha raised was whether FL(N), the Faculty of Language (Narrow), is modular in Fodor’s sense. I didn’t really get the point of the talk, except for her conclusion that FL(N) is modular. I’ll put some references here if you are interested: Chomsky’s Minimalist Program, Spelke’s work on why FL(N) isn’t modular, a paper discussing FL(N) further by Hauser, Chomsky, and Fitch, and papers defending the modularity of FL(N), by Fodor and Butterfill.
I’ll post my conclusions on the conference in a separate post. :)