Monday, December 15, 2008

Technology Notes Vol 1, Issue 6: Computation and Philosophy (Day 1).

This issue of Technology Notes is dedicated to the "Computation and Philosophy" conference held at the National Institute of Advanced Studies. There were more than 25 talks (of which I attended 13), and although I went in with some degree of trepidation, I must say I wasn't completely lost. I am still chewing on what I heard and on my notes, so what you see here is a half-cooked report on the three-day conference.

A few caveats first: as I have no formal training in philosophy, everything I say here is what I understood from my low perch in computer science, so please take everything in this post with a healthy dose of skepticism. Further, the conclusions I draw here are my own, and they may not be what the speakers intended.

The one big lesson I drew from the conference is that a reductionist approach is grossly insufficient to describe many complex systems, one of them being our own capacity for computation, which includes the ability to learn language.

The first hint of this came from the talk on Templates, Complexity and Autonomous Systems by Paul Humphreys. Prof. Humphreys is an authority on emergent behaviour, and he contrasted reductionism with a constructionist approach to explaining higher-order properties. While reductionists argue that all higher-order properties can be explained by interactions among lower-order properties, constructionists believe that there are levels in the reductionist hierarchy that are not algorithmically accessible from the levels below. That is, there are higher-level properties that cannot be reduced to interactions between properties existing at lower levels (think genes -> proteins -> organs). The question Prof. Humphreys asked was whether this irreducibility is provable. He explained his results using Ising models and Cellular Automata, but I was unable to follow the gist of his discussion, as it assumed one knew these concepts a priori. Overall, though, I think his point was that it can be proven that for sufficiently complex physical systems there is no model in which every property can be computed. I'm still reading up on some of the papers he mentioned; if you are interested, Sorin Istrail's paper at STOC 2000 is one suggested read. Once I've figured out what he said, I'll put it up here for more discussion.
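To give a flavour of the kind of system he used, here is a minimal sketch of an elementary cellular automaton - my own illustration, not anything presented in the talk. Rule 110 updates each cell from purely local information, yet it is known to be Turing-complete, so questions about its eventual behaviour are in general undecidable: a concrete sense in which a higher-level property is not computable from the lower-level description.

```python
# Elementary cellular automaton, Rule 110: each cell's next state depends
# only on its immediate neighbourhood, yet the global behaviour is rich
# enough to be Turing-complete, so long-run properties of the system are
# not, in general, algorithmically predictable from the local rule.

RULE = 110  # the rule number, read as an 8-bit truth table

def step(cells):
    """One synchronous update of all cells; the boundary wraps around."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 63 + [1]  # start from a single live cell
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Even for this toy system, the only general way to find out what the pattern looks like after n steps is to run all n steps.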

The next two talks, "Algorithms in Indian tradition" and "Mathematical, Algorithmic, and Computational thinking in the Indian mind-scape", were rather boring. Both speakers talked about the glorious Indian tradition of science. The first speaker, M D Srinivas, illustrated how Aryabhata and other Indian mathematicians wrote mathematics in poetry and were able to compute functions like the inverse sine, while Veni Madhavan, who gave the second talk, argued that the notions of proof and generality didn't arise in the Indian mathematical tradition because of the poetic way in which mathematics was passed down. The first talk was a repeat of one I'd attended at MSRI, and the second had zero proof and a lot of hand-waving as to why poetry caused Indian mathematicians to miss a glorious opportunity to own, for example, the "Chinese remainder theorem". The speakers tried hard to avoid sounding patriotic, but that feeling somehow snuck in; and beyond the unproven claim that poetry is some sort of enemy of generalization, the talks were largely unrelated to the topic at hand.

Some of the Q&A here was good though, with people debating whether notation frees or constrains thought - the Greeks, for instance, never going beyond the third power simply because their mathematics reduced to geometry and they were unable to visualize a fourth dimension. No conclusions were drawn, though.

These talks also started a trend of poor presentations that ran through the conference. Many speakers came unprepared, many didn't have slides and read from papers or books they'd written, and one person literally recited his paper during his presentation.

Anyhow, the next talk was by N. Raja of TIFR, on the "Philosophy of Software Artifacts". This talk was another disappointment, particularly because I had really been looking forward to what he had to say. N. Raja spent a LOT of time talking about denotational vs. operational semantics, explaining the lambda calculus, and quoting books (for instance, "Meaning and Interpretation" by Charles Travis). I didn't get the point of his talk, however - I guess he was trying to introduce the different approaches to semantics in computer science. Some of his references are interesting reads, though, notably the Scott-Strachey paper "Toward a mathematical semantics for computer languages" and Peter Landin's work on the SECD machine, a lambda-calculus-based abstract machine. Overall, the talk wasn't crisp, and while the speaker had slides, there was a lot of back-and-forth movement and not enough time devoted to making a single, "take-homeable" point. The one thing I remember about the talk was a flat joke about how computer scientists are formalists on the weekday and 'something else' on the weekends, but I forget what.
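Since the talk leaned so heavily on the lambda calculus and on Landin's machine, a small example may help readers who haven't met either. Below is a minimal environment-based evaluator in the spirit of Landin's approach - my own sketch, not anything Raja presented - giving a toy operational semantics for the untyped lambda calculus: abstractions evaluate to closures, and application extends the captured environment.

```python
# A tiny environment-based evaluator for the untyped lambda calculus.
# Terms are tuples: ("var", x) | ("lam", x, body) | ("app", f, a).
# My own illustrative sketch, not code from the talk.

def evaluate(term, env):
    kind = term[0]
    if kind == "var":
        return env[term[1]]
    if kind == "lam":
        return ("closure", term[1], term[2], env)   # capture the environment
    if kind == "app":
        return apply_value(evaluate(term[1], env), evaluate(term[2], env))
    raise ValueError(f"unknown term: {term!r}")

def apply_value(fn, arg):
    if callable(fn):                    # allow native Python functions as values
        return fn(arg)
    _tag, param, body, closure_env = fn
    return evaluate(body, {**closure_env, param: arg})

# Church numeral 2 = \f. \x. f (f x); applying it to a native successor
# function and 0 prints 2.
two = ("lam", "f", ("lam", "x",
       ("app", ("var", "f"), ("app", ("var", "f"), ("var", "x")))))
prog = ("app", ("app", two, ("var", "succ")), ("var", "zero"))
print(evaluate(prog, {"succ": lambda n: n + 1, "zero": 0}))  # -> 2
```

A denotational semantics, by contrast, would assign each term a mathematical object directly rather than describe evaluation steps; that contrast was, as far as I could tell, the axis of the talk.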

The next talk was post-lunch, by Amba Kulkarni on "Panini's Ashtadhyayi", the Sanskrit grammar that is, incidentally, the earliest generative grammar known to us (think of Chomsky's hierarchy for what a generative grammar is). Kulkarni made the point that Panini was aware of the information-coding abilities of language and of the need for brevity when defining a grammar, and she explained how Panini used anuvrutti, or factorization, to achieve brevity in his definitions. (See the section on "IT markers" on the wiki page.) This left a lot of room for ambiguity, and in such cases he took refuge in meta-rules and akanksha (what the listener expected) to resolve it. Essentially, there were catalogues of phonemes, the shivasutras, and rules keyed by marker letters called anubandhas, mimicking the formula of Niklaus Wirth's famous book, Algorithms + Data Structures = Programs: the shivasutras play the role of data, and the rules are the algorithms that define the composition and behaviour of these data items (a minimal sketch of the mechanism follows below). Kulkarni also gave some 'proof' (quotes intentional) of how the system is similar to OO programming, and quoted a paper by a Yale professor on inheritance in the Ashtadhyayi which had dealt with this issue. I only realized later that she meant a completely different kind of inheritance - the paper here says

"What distinguishes Panini's approach is not only temporal priority, but a novel method of interleaving formal and semantic specifications along a single inheritance path to model many-to-many correspondences between the formal and semantic properties of derivational affixes"

leaving me clueless. Maybe a linguistics person can clarify this? Meanwhile, it was time for Q&A, and when there is talk of Sanskrit, how can talk of programming be far behind? Quickly a consensus formed, excluding the two Microsofties present, on the suitability of Sanskrit for computer programming. Never mind that the grammar itself is ambiguous and requires human understanding for interpretation (see the point about akanksha above). Of course, I'm NOT a qualified linguist, and it may indeed be possible to program in Sanskrit. One does wonder, though, why a compiler hasn't been built to translate Sanskrit into, say, English.
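To make the Data + Algorithms analogy concrete, here is a minimal sketch of Panini's pratyahara mechanism - my own illustration, not code from Kulkarni's talk, with a deliberately simplified transliteration. The shivasutras are the data: fourteen groups of sounds, each closed by a marker letter (an anubandha). A tiny algorithm then expands an abbreviation such as "aC" (all vowels) or "haL" (all consonants) by scanning from a starting sound to the sutra closed by the given marker.

```python
# The fourteen shivasutras as pure data: each entry is (sounds, closing marker).
# Simplified ASCII transliteration (duplicate sounds disambiguated with
# digits); a sketch of the mechanism, not a faithful rendering of the Sanskrit.
SHIVASUTRAS = [
    (["a", "i", "u"], "N"),
    (["r", "lr"], "K"),
    (["e", "o"], "NG"),
    (["ai", "au"], "C"),
    (["ha", "ya", "va", "ra"], "T"),
    (["la"], "N2"),
    (["nya", "ma", "nga", "na", "na2"], "M"),
    (["jha", "bha"], "NY"),
    (["gha", "dha", "dha2"], "SS"),
    (["ja", "ba", "ga", "da", "da2"], "SH"),
    (["kha", "pha", "cha", "tha", "tha2", "ca", "ta", "ta2"], "V"),
    (["ka", "pa"], "Y"),
    (["sha", "ssa", "sa"], "R"),
    (["ha"], "L"),
]

def pratyahara(start, marker):
    """The 'algorithm': collect sounds from `start` up to the sutra closed
    by `marker`. The markers themselves (the anubandhas) only delimit the
    span and never appear in the expansion."""
    sounds = []
    for group, m in SHIVASUTRAS:
        for s in group:
            if s == start and not sounds:
                sounds.append(s)       # start collecting here
            elif sounds:
                sounds.append(s)
        if sounds and m == marker:
            return sounds
    raise ValueError(f"no pratyahara {start}{marker}")

print(pratyahara("a", "C"))   # "aC": the nine vowels
print(pratyahara("ha", "L"))  # "haL": all the consonants
```

Two letters thus name an entire phoneme class, which is exactly the kind of brevity-through-factorization Kulkarni was describing.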

Next was a talk on "Computation and the Nature of Writing" by Sundar Sarukkai. This was an interesting talk, in which he first defined computation as the manipulation of symbols leading to a conclusion. He then asked what distinguishes writing from computing. Quoting a reference I didn't record, he mentioned that the essential characteristics of writing were temporality and progressivity, and that it was a representation of speech - itself a representation of thought - making writing a second-order copy of information. He then introduced cognitive models of writing, favouring the "planning, organizing, goal-setting" model, and was emphatic that all three occur in parallel. Writing was also goal-directed, he said, with process and content goals being satisfied through a process of translating thought into symbols. Coming back to computation, he mentioned that semantics was the bugbear of mathematics, and that removing meaning from symbols gave them a certain purity and allowed 'logic' to operate on them; thus 'symbol-centrism' became central to computation. Ultimately, his conclusion was that the two were similar, as both translate thoughts into symbols on a medium. He did leave a few questions open - for instance, the rules for a "metaphysics of computation", or the difference that writing and speaking make to the computability of mathematics. Considering how the talk concluded, I must say that while it was well presented, the conclusion was something anyone with very little knowledge of philosophy could have come to. But I guess that is the challenge of philosophy: to provide theories for understanding even those things we think are obvious.

The next session of three talks was dedicated to biology. It started with a talk by Vijay Chandru of Strand Genomics on a "Systems View of Biology". Vijay started by describing how a reductionist approach to biology has failed to explain higher-order behaviours like organ function (see "The Music of Life" by Denis Noble for more on this), and illustrated the work Strand is doing in the field with the example of hepatotoxicity detection. Strand seems to be doing very well, and they appear to have teams of PhDs and MDs tackling every problem in a coordinated way. The talk, though, didn't have too many insights beyond telling us that computation will change the way biology is done in this decade or the next.

Next in the biology series was Dr. Raghavendra Gadagkar's talk, "Decision making in Animals", by far the best talk of the conference: well presented, with a lot of content, and independent research backed by evidence. The subject was the apparent intelligence of decision-making in bees and other social insects - for instance, bees are known to convey both the distance and direction of food accurately, and the bees that follow are known to correct for the rotation of the earth; ants find what is usually the shortest path to food. His conclusion was that all this is done by following certain simple heuristics. For instance, ants set off in all directions in search of food, leaving pheromones on their path. When a food source is found, the ant that finds it traces its way back to the nest. If two ants find two different routes to the same food, the ant that returned first leaves a stronger pheromone trail (having passed over the route twice). An ant that sets off after the first one returns will simply follow the stronger trail, increasing its strength, and ants that come later continue to follow the same rule, making that path the standard one. The experiments he described were simple and brilliant (at least to a layman). [May I recommend that he be invited to give a talk at MSRI?]
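The pheromone heuristic is easy to simulate. Below is a minimal two-route sketch - my own, not Gadagkar's model - in which one ant departs per step and chooses a route in proportion to its pheromone level. I borrow the ant-colony-optimization convention of depositing pheromone in inverse proportion to route length; real ants achieve a similar bias simply because shorter trips finish sooner and get re-marked more often.

```python
import random

# Two routes to the same food source; lengths are round-trip times in steps.
LENGTHS = {"short": 2, "long": 5}
pheromone = {"short": 1.0, "long": 1.0}  # both trails start equal
EVAPORATION = 0.02
in_transit = []                          # [route, steps_remaining] per ant

random.seed(1)
for _ in range(2000):
    # One ant departs per step, choosing a route in proportion to pheromone.
    total = pheromone["short"] + pheromone["long"]
    route = "short" if random.random() < pheromone["short"] / total else "long"
    in_transit.append([route, LENGTHS[route]])

    # Advance every ant; a completed round trip reinforces that route
    # (ACO convention: deposit inversely proportional to route length).
    still_out = []
    for ant in in_transit:
        ant[1] -= 1
        if ant[1] == 0:
            pheromone[ant[0]] += 1.0 / LENGTHS[ant[0]]
        else:
            still_out.append(ant)
    in_transit = still_out

    # Evaporation: an early advantage must be continually renewed to persist.
    for r in pheromone:
        pheromone[r] *= 1 - EVAPORATION

print(pheromone)  # the short route ends up with by far the stronger trail
```

No individual ant knows which route is shorter; the colony-level preference emerges from the positive feedback between choice and reinforcement, which is exactly the point about simple heuristics.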

The final talk of the day was by M G Narasimhan of NIAS on "The Genetic Code as an Information Entity". The speaker spent a lot of time on Watson, Crick, and the whole history of DNA research, and by the time he came to his conclusions, his time was up - and so was our patience. We left without waiting for the Q&A.

Will write about Days 2 and 3 in the next post.

3 comments:

Unknown said...

"The one thing I remember about the talk was a flat joke about how computer scientists are formalists on the weekday and 'something else' on the weekends, but I forget what." -->

That 'something else' is 'Platonist'.

Unknown said...

Thanks, Gopal, for compiling this. I must say that it is very well written and aptly summarized :)

Looking forward to the other two postings.

Btw, I would also recommend Raghavendra Gadagkar for the Kaleidoscope series.

Indian said...

A very detailed report indeed...