What Is Research?

May 29, 2010


Some things I’ve been reading about:

  • Concept inventories: These are tests designed to determine the extent to which people understand basic concepts in a particular area. The Wikipedia article on concept inventory provides a decent introduction. Concept inventories were introduced in physics, with the Force Concept Inventory (FCI) being used to gauge people’s understanding of mechanics. Here are the Google Scholar results for Force Concept Inventory, including this paper by Hestenes, Wells, and Swackhamer that describes the detailed construction of the inventory. The FCI led Eric Mazur, an experimental physicist at Harvard University, to change his style of teaching introductory physics. Here is a YouTube talk in which Mazur describes that change.

  • Open notebook science: Here is the Wikipedia entry on open notebook science, replete with links to various discussions of the subject. The UsefulChem blog has plenty of discussions and links related to open notebook science. Here is Michael Nielsen’s article/blog post on the future of science, with discussions of open notebook science and related ideas.

  • Moore method: Much of inquiry-based learning (IBL) in college mathematics courses is based on the Moore method and its derivatives. The Moore method was pioneered by topologist Robert L. Moore at the University of Texas, and is often also called the Texas Method. The idea is that the instructor, instead of teaching students, gives them problems to solve on their own and listens to them as they attempt to present their solutions to their peers and the instructor. Here’s the Wikipedia article on the Moore method. Here is the Legacy of R. L. Moore project and here is the University of Texas Discovery Learning Project. There’s a three-part video series (1, 2 and 3) about the Moore method. You can also view this book about the Moore method (limited preview via Google Books). See also this Math Overflow discussion on the Moore method.

  • Cognitive load theory: Here is Sweller and Chandler’s original paper (1991) on the subject and here are the Google Scholar results for the query. Cognitive load theory attempts to analyze learning in terms of the cognitive load imposed on the learner. It identifies three kinds of load: intrinsic load, which arises from the inherent difficulty of the material being learned; extraneous (sometimes called extrinsic) load, which arises from distractions and the manner of presentation and does not help with learning; and germane load, which the learner takes on to gain a deeper understanding and form better connections within the material. The goal of good instruction should be to assess how much intrinsic load a given learning task entails, minimize the extraneous load, and use whatever spare capacity remains for germane load.

March 22, 2009

Making subtle points

Filed under: Teaching and learning: dissemination and assimilation — vipulnaik @ 4:05 pm

This year, as part of my duties, I am a College Fellow for the undergraduate Algebra (Hons) sequence at the University of Chicago. I conduct a weekly problem session for the students, which serves as an opportunity to review class material, go over tricky points in homework problems, and introduce students to more examples.

On occasion, I’ve come across the problem of wanting to make a subtle point. Often, my desire to make that particular subtle point seems to be simply an ego issue. I’m not referring to the kind of subtle points that everyone agrees need to be made — the kind that every book introducing the topic makes explicitly. I’m referring, instead, to subtle points that may be considered pointless both by beginners and by experienced people. The beginners consider them pointless because they don’t comprehend the point, while experienced people may consider them pointless because they understand it and don’t see what the fuss is about.

One example is the two definitions of group — one definition that defines a group as a set with one binary operation satisfying a number of conditions, and the other defining a group in terms of three operations (a binary operation, a unary operation, and a constant function) satisfying some universally quantified equations. The latter definition is termed the universal algebraic definition, and its importance lies in the fact that it shows that groups form a variety of algebras, which allows for a lot of general constructions. The former definition is important because it indicates that essentially just one group operation controls the entire group structure. This kind of differentiation may seem pointless, or “yeah, so what?” to a lot of mathematicians.
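For concreteness, here is a sketch of the two definitions in standard textbook notation (my formulation, not a quotation from any particular source):

```latex
% Definition 1 (single binary operation, with existential conditions):
% a group is a set $G$ with one operation $\cdot : G \times G \to G$ such that
%   (i)   $(x \cdot y) \cdot z = x \cdot (y \cdot z)$ for all $x, y, z \in G$;
%   (ii)  there exists $e \in G$ with $e \cdot x = x \cdot e = x$ for all $x$;
%   (iii) for each $x \in G$ there exists $y \in G$ with $x \cdot y = y \cdot x = e$.
%
% Definition 2 (universal algebraic: three operations, purely equational):
\[
\cdot : G \times G \to G, \qquad {}^{-1} : G \to G, \qquad e \in G \ \text{(a constant)},
\]
\[
(x \cdot y) \cdot z = x \cdot (y \cdot z), \qquad
x \cdot e = e \cdot x = x, \qquad
x \cdot x^{-1} = x^{-1} \cdot x = e.
\]
% In Definition 2 every equation is universally quantified and no existential
% quantifiers appear; that is what exhibits groups as a variety of algebras.
```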

Another example, one that actually occurred in a recent problem session, is that of differentiating between a polynomial and a function. If R is a ring, a polynomial in R[x] gives rise to a function from R to R. However, polynomials carry more information than the functions associated with them: in other words, different polynomials can give rise to the same function (this happens, for instance, over finite rings, though it does not happen over infinite integral domains). I’d made this point a few times in earlier problem sessions, and there was even a homework problem that essentially worked through this.
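To make this concrete, here is a small illustration (my sketch, not from the original post) of two distinct polynomials in (Z/2Z)[x] that induce the same function on Z/2Z:

```python
def eval_poly(coeffs, x, n):
    """Evaluate a polynomial over Z/nZ at x using Horner's rule.
    coeffs lists coefficients from the constant term upward."""
    total = 0
    for c in reversed(coeffs):
        total = (total * x + c) % n
    return total

p = [0, 1, 1]   # the polynomial x + x^2 in (Z/2Z)[x]
zero = [0]      # the zero polynomial

# As polynomials, p and zero are different; as functions on Z/2Z they agree,
# since x^2 = x for both elements of Z/2Z.
print([eval_poly(p, x, 2) for x in range(2)])     # [0, 0]
print([eval_poly(zero, x, 2) for x in range(2)])  # [0, 0]
```

The same phenomenon occurs over any finite ring: there are infinitely many polynomials but only finitely many functions from the ring to itself.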

So, during a problem session review of the notion of characteristic polynomial, I decided to make this subtle point distinguishing the characteristic polynomial from its associated induced function. I made the point that in order to compute the characteristic polynomial, we need to do a formal computation of det(A - \lambda I_n) over a polynomial ring R[\lambda], rather than simply think of a function that sends \lambda to det(A - \lambda I_n). This is a rather subtle and, in many ways, apparently useless point (in fact, I don’t know of many places that make this distinction carefully at an introductory stage). However, I wanted to make it in order to rub in the difference between a polynomial and its associated function.
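As an illustration of what a “formal computation over R[\lambda]” means (again my sketch, not the post’s), one can compute det(A - \lambda I) for a 2x2 matrix entirely with polynomial arithmetic, never evaluating \lambda at a ring element:

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (constant term first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_sub(p, q):
    """Subtract polynomial q from p, padding to equal length."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def charpoly_2x2(A):
    """det(A - lam*I) for a 2x2 matrix, computed formally in R[lam].
    The diagonal entries a - lam and d - lam are the polynomials [a, -1] and [d, -1]."""
    (a, b), (c, d) = A
    return poly_sub(poly_mul([a, -1], [d, -1]), [b * c])

print(charpoly_2x2([[1, 2], [3, 4]]))  # [-2, -5, 1], i.e. lam^2 - 5*lam - 2
```

The output is a list of coefficients, an element of R[\lambda], rather than a rule for evaluating anything; that is exactly the distinction at issue.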

I hovered over this point for quite some time, so I guess a reasonable fraction of the students did get it, but at the beginning, one girl simply didn’t see the distinction between the two things, and was honest enough to admit it. So it took a couple of minutes to spell the distinction out.

In this blog post, I want to explore some of the arguments for making subtle points, and some effective ways of doing so without stealing too much attention away from the not-so-subtle-but-more-important point.

February 23, 2009

Doing it oneself versus spoonfeeding

In previous posts titled knowledge matters and intuition in research, I argued that building good intuition and skill for research requires a strong knowledge and experience base. In this post, I’m going to talk about a related theme, which is also one of my pet themes: my rant against the misconception that doing things on one’s own is important for success.

The belief that I’m attacking

It is believed in certain circles, particularly among academics, that doing things by oneself, working out details on one’s own, rather than looking them up or asking others, is a necessary step towards developing proper understanding and skills.

One guise that this belief takes is a slew of learning paradigms that go under names such as “experiential learning”, “inquiry-based learning”, “exploratory learning”, and the like. Of course, each of these learning paradigms is complex, and the paradigms differ from each other. Further, each paradigm is implemented in a variety of different ways. My limited experience with these paradigms indicates that there is a core belief common to them (I may be wrong here): that it is important for people to do things by themselves rather than have these things told to them by others. An extreme believer of this kind may regard with disdain the idea of simply following or reading what others have done, while a more moderate and mainstream stance might be that working things out for oneself, rather than following what others have done, is generally preferable, and that following others is an imperfect substitute that we nonetheless often need to accept because of constraints of time.

Another closely related theme is that exploratory and inquiry-based methods focus more on skills and approaches than on knowledge. This might be related to the general view of knowledge as something inferior, or less important, than skill, attitude, and approach. This is why, in certain circles, the person who merely “knows a lot” is considered inferior to the person who “is smart” and “thinks sharply”. This page, for instance, talks about how inquiry-based learning differs from the traditional knowledge-based approach to learning because it focuses more on “information-processing skills” and “problem-solving skills”. (Note: I discovered the page via a Google search a few months back, and am not certain how mainstream its descriptions are.) (Also note: I’ve discussed this further later in the post, where I point out other sides of this issue.)

Closely related to the theme of exploration and skills-more-than-knowledge is the theme of minimal guidance. In this view, guidance from others should be minimal, and students should discover things their own way. There are many who argue both for and against such positions. For instance, a paper (Kirschner, Sweller, and Clark) that I discovered via Wikipedia argues why minimally guided instruction does not work. Nonetheless, there seems to be a general treatment of exploration, self-discovery, and skills-over-knowledge as “feel-good” things.

Partial truth to the importance of exploration

As an in-the-wings researcher (I am currently pursuing a doctoral degree in mathematics), I definitely understand the importance of exploration. I have personally done a lot of exploration, much of it to fill minor knowledge gaps or raise interesting but not-too-deep questions. And some of my exploration has led to interesting and difficult questions. For instance, I came up with a notion of extensible automorphism for groups and made a conjecture that every extensible automorphism is inner. The original motivation behind the conjecture was a direction of exploration that turned out to have little to do with the partial resolution that I have achieved on the problem. (With ideas and guidance from many others including Isaacs, Ramanan, Alperin, and Glauberman, I’ve proved that for finite groups, any finite-extensible automorphism is class-preserving, and any extensible automorphism sends subgroups to conjugate subgroups.) And I’ve also had ideas that have led to other questions (most of which were easy to solve, while some are still unsolved) and others that have led to structures that might just be of use.

In other words, I’m no stranger to exploration in a mathematical context. Nor is my exploratory attitude restricted to group theory. I take a strongly exploratory attitude to many of the things I learn, including things that are probably of little research relevance to me. Nor am I unique in this respect. Most successful researchers and learners that I’ve had the opportunity to interact with are seasoned explorers. While different people have different exploration styles, there are few who resist the very idea of exploration. Frankly, there would be little research or innovation (whether academic or commercial) if people didn’t have an exploratory mindset.

So I’m all for encouraging exploration. So what am I really against? The idea that, in general, people are better off trying to figure things out for themselves rather than refer to existing solutions or existing approaches. Most of the exploration that I’ve talked about here isn’t exploration undertaken because of ignorance of existing methods — it is exploration that builds upon a fairly comprehensive knowledge and understanding of existing approaches. What I’m questioning is the wisdom of the idea that by forcing people to work out and explore solutions to basic problems while depriving them of existing resources that solve those problems, we can impart problem-solving and information-processing skills that would otherwise be hard to come by.

Another partial truth: when deprivation helps

Depriving people of key bits of knowledge can help in certain cases. These are situations where certain mental connections need to be formed, and these connections are best formed when the person works through the problem himself or herself, and makes the key connection. In these cases, simply being told the connection may not provide enough shock value, insight value, richness or depth for the connection to be made firmly.

The typical example is the insight puzzle. By insight puzzle, I mean a puzzle whose solution relies on a novel way of interpreting something that already exists. Here, simply telling the learner to “think out of the box” doesn’t help the learner solve the insight puzzle. However, if a situation where a similar insight is used is presented shortly before administering the puzzle, the learner has a high chance of solving the puzzle.

The research on insight puzzles reveals, however, that in order to maximize the chances of the learner getting it, the similar insight should be presented in a way that forces the learner to have the insight by himself/herself. In other words, the learner should be forced to “think through” the matter before seeing the problem. The classic example of this is a puzzle that involves a second use of the word “marry” — a clergyman or priest marrying a couple. One group of people was presented, before the puzzle, with a passage that involved a clergyman marrying couples. Very few people in this group got the solution. Another group was presented a similar passage, except that this passage changed the order of sentences so that the reader had to pause to confront the two meanings of “marry”. People in this second group did better on the puzzle because they had been made to reflect on the double meaning.

There are a couple of points I’d like to note here. It is true that depriving people of some key ingredients forces them to reflect and helps form better mental connections. But equally important is the fact that they are presented with enough of the other ingredients that the insight represents a small and feasible step. Secondly, such careful stimulation requires a lot of art, thought, and setup, and is a far cry from setting people “free to explore”.

When to think and when to look

Learners generally need to make a trade-off between “looking up” answers and “thinking about them”. How this trade-off is made depends on a number of factors, including the quality of insight that the looked-up answer provides, the quality of insight that learners derive from thinking about problems, the time at the learner’s disposal, the learner’s ultimate goals, and many others. In my experience, seasoned learners of a topic are best able to make these trade-offs themselves and determine when to look and when to struggle. Thus, even if deprivation is helpful, external deprivation (in the sense of not providing information about places where they can look up answers) does not usually make sense. There are two broad exceptions.

The first is for novice learners. Novice learners, when they see a new problem, rarely understand enough about their own level of knowledge to know how long they should try the problem, where (if anywhere) they should look up answers, and what the relative advantages of either approach are. By “novice learner” I do not mean to suggest a general description of a person. Everybody is a novice learner in a topic they pick up for the first time. It is true that some people are better learners in certain broad areas — for instance, I’d be a better learner of mathematical subjects than most people, including mathematical subjects I have never dealt with. However, beyond a slight head start, everybody goes through the “novice learner” phase for a new field.

For novice learners, helpful hints on what things they should try themselves, how long they should try those things, and how to judge and build intuition, are important. As such, I think these hints need to be of much higher quality than they typically are. The hint to a learner should help the learner get an idea of the difficulty level of the problem, the importance of “knowing” the solution at the end, the relative importance of reflecting upon and understanding the problem, and whether there are some insights that can only be obtained by working through the problem (or, conversely, whether there are some insights that can only be obtained by looking at the solution). Here, the role of the problem-provider (who may be an instructor, coach, or a passive agent such as a textbook, monograph, or video lecture series) is to provide input that helps the learner decide rather than to take over the decision-making reins.

A second powerful argument is for learners whose personality and circumstances require “external disciplining” and “external motivation”. The argument here is essentially a “time inconsistency” argument — the learner would ideally like to work through the problem himself or herself, but when it comes to actually doing the problem, the learner feels lazy, and may succumb to simply looking up the solution somewhere. (“Time inconsistency” is a technical term used in decision theory and behavioral economics). Forcing learners to actually do the problems by themselves, and disciplining them by not providing them easy access to solutions, helps them meet their long-term goals and overcome their short-term laziness.

I’m not sure how powerful the time inconsistency argument is. Prima facie evidence for it seems substantial, particularly in schools and colleges, where students often choose to take heavy courseloads and somehow wade through a huge pile of homework, and yet rarely do extra work voluntarily on a smaller scale (such as starred homework problems, or challenging exercises) even when the load on them is low. This fits the theory that, in the long haul, these students want to push themselves, but in the short run, they are lazy.

I think the biggest argument against the time inconsistency justification for depriving people of solutions is the fact that the most clear cases of success (again in my experience) are people who are not time inconsistent. The best explorers are people who explore regardless of whether they’re forced to do so, and who, when presented with a new topic, try to develop a sufficiently strong grasp so that they can make their own decisions of how to balance looking up with trying on their own.

Yet another argument is that laziness works against all kinds of work, including the work of reading and following existing solutions. In general, what laziness does is make people avoid learning things if it takes too much effort. Students who decide not to solve a particular problem by themselves often also don’t “look up” the solution. Thus, on net, they never learn the solution. So even in cases where trying a problem by oneself is superior to looking it up, looking it up may still be superior to the third alternative: never learning the solution.

A more careful look at what can be done

It seems to me that providing people information that helps them decide which problems to work with and how long to try before looking up is good in practically all circumstances. It’s even better if people are provided tools that help them reflect and consolidate insights from existing problems, and if these insights are strengthened through cross-referencing from later problems. Since not every teaching resource does this, and since exploration at the cutting edge is by definition into unknown and poorly understood material, it is also important to teach learners the subject-specific skills that help them make these decisions better.

Of course, the specifics vary from subject to subject, and there is no good general-purpose learner for everything. But simply making learners and teachers aware of the importance of such skills may have a positive impact on how quickly learners pick up such skills.

Another look at exploratory learning

In the beginning, I talked about what seems to be a core premise of exploratory learning — that learners do things best when they explore by themselves. Strictly speaking, this isn’t treated as a canonical rule by pioneers of exploratory learning. In fact, I suspect that the successful executions of exploratory learning succeed precisely because they identify the areas where learners investing their time in exploration yields the most benefit.

For instance, the implementation of inquiry-based learning (IBL) in some undergraduate math classes at the University of Chicago results in a far-from-laissez-faire attitude towards students exploring things. The IBL courses seem, in fact, to be a lot more structured and rigid than non-IBL courses. Students are given a sheet of the theorems, axioms and definitions of the course, and they need to prove all the theorems. This does fit in partly with the “deprivation” idea — that students have to prove the theorems by themselves, even though proofs already exist. On the other hand, it is far from letting students explore freely.

It seems to me that while IBL as implemented in this fashion may be very successful in getting people to understand and critique the nature and structure of mathematical proofs, it is unlikely to offer significant advantages in terms of the ability to do novel exploration. That’s because, as my experience suggests, creative and new exploration usually requires immersion in a huge amount of knowledge, and this particular implementation of IBL trades off a lot of knowledge for a more thorough understanding of less knowledge.

Spoonfeeding, ego, and confidence issues

Yet another argument for letting people solve problems by themselves is that it boosts their “confidence” in the subject, making them more emotionally inclined to learn. On the other hand, spoonfeeding them solutions makes them feel like dumb creatures being force-fed.

In this view, telling solutions to people deprives them of the “pleasure” of working through problems by themselves, a permanent deprivation.

I think there may be some truth to this view, but it is very limited. First, the total number of problems available is so huge that depriving people of the “pleasure” of figuring out a few for themselves has practically no effect on the number of problems left for them to try. Of course, part of the challenge is to make this huge stream of problems readily available to people who want to try them, without overwhelming them. Second, the “anti-spoonfeeding” argument turns an issue of acquiring subject-matter skills into an issue of pleasing learners emotionally.

Most importantly, though, it goes against the grain of teaching people humility. Part of being a good learner is being a humble learner, and part of that involves being able to read and follow what others have done, and to realize that most of it is stuff one couldn’t have done oneself, or that would have taken a long time to do oneself. Such humility is accompanied by pride at the fact that one’s knowledge is built on the efforts of the many who came before. To use a quote attributed to Newton: “If I have seen further, it is by standing on the shoulders of giants.”

Of course, a learner cannot acquire such humility if he or she never attempts to solve a problem alone; but neither can a learner acquire it by only trying to solve problems, never asking others or using references to learn solutions. It’s good for learners to try a lot of simpler problems that they can solve, and thus build confidence in their learning, but it is also important that, for hard problems, learners absorb the solutions of others and make those solutions their own.

February 13, 2009

Knowledge matters

It is fashionable in certain circles to argue that, particularly for subjects such as mathematics that have a strong logical and deductive component, it is not how much you know that counts but how you think. According to this view, cramming huge amounts of knowledge is counterproductive. Instead, mastery is achieved by learning generic methods of reasoning that can deal with a variety of situations.

There are a number of ways in which this view (though considered enlightened by some) is just plain wrong. At a very basic level, the view does serve to counter the (even more common) tendency to believe that in reasoning problems it is sufficient to “memorize” basic cases. At a more advanced level, however, it can get in the way of developing the knowledge and skills needed to achieve mastery.

My first encounters with this belief

During high school, starting mainly in class 11, I started working intensively on preparing for the mathematics Olympiads. Through websites and indirect contacts (some friends, some friends of my parents) I collected a reasonable starting list of books to use. However, there was no systematic preparation route for me to take, and I largely had to find my own way through.

The approach I followed here was practice — lots and lots of problems. But the purpose here wasn’t just practice — it was also to learn the common facts and ideas that could be applied to new problems. Thus, a large part of my time also went to reviewing and reflecting upon problems I had already solved, trying to find common patterns, and seeing whether the same ideas could be expressed in greater generality. Rather than being too worried about performing in an actual examination situation, I tried to build a strong base of knowledge, in terms of facts as well as heuristics.

In addition, I spent a lot of time reading the theoretical parts of number theory, combinatorics, and geometry. The idea here was to develop the fact base as well as vocabulary so that I could identify and “label” phenomena that I saw in specific Olympiad problems.

(For those curious about the end result, I got selected to the International Mathematical Olympiad team from India in 2003 and 2004, and won Silver Medals both years.)

At no stage during my preparation did I feel that I had become “smarter” in the sense of having better methods of general reasoning or approaching problems in the abstract. Rather, my improvements were very narrow and domain-specific. After thinking, reading, and practicing a lot of geometry, I became proportionately faster at solving geometry problems, but improved very little with combinatorics.

Knowledge versus general skill

Recently, I had a chance to re-read Geoff Colvin’s interesting book Talent Is Overrated. This book argues that “native talent” is largely a myth, and that the secret to success is something Colvin calls “deliberate practice”. Among the things that experts do differently, Colvin identifies looking ahead (for instance, fast typists usually look ahead in the document to know what they’ll have to type a little later), identifying subtle and indirect cues (here Colvin gives examples of expert tennis players using the body movements of the server to estimate the speed and direction of the ball), and, among other things, having a large base of knowledge and long-term memory that can be used to size up a situation.

Colvin describes how mathematicians and computer scientists had initially hoped for general-purpose problem solvers that knew little about the rules of a particular problem but would find solutions using the general rules of logic and inference. These attempts largely failed. For instance, Deep Blue, IBM’s chess-playing computer, was defeated by then world champion Garry Kasparov in their first match, in 1996, despite Deep Blue’s ability to evaluate a hundred million positions every second. What Deep Blue lacked, according to Colvin, was the kind of domain-specific knowledge of what works and where to start looking that Kasparov had acquired through years of stored knowledge and memory about games he had played and analyzed.

A large base of knowledge is also useful because it provides long-term memory that can be tapped to complement working memory in high-stress situations. For instance, a mathematician trying to prove a complicated theorem that involves huge expressions may be able to rely on similar expressions that he/she has worked with before to “store” the complexity of the expression in simpler form. Similarly, a chess player may be able to use past games as a way of storing a shorter mental description of the current game situation.

A similar idea is discussed in Gary Klein’s book Sources of Power, where he describes a Recognition-Primed Decision (RPD) model used by people in high-stress, high-stakes situations. Klein says that expert firefighters look at a situation, identify key characteristics, and immediately fit it into a template that tells them what is happening and how to act next. This template need not match a single specific past situation precisely. Rather, it involves features from several past situations, mixed and matched according to the present situation. Klein also gives examples of NICU nurses, in charge of taking care of babies with serious illnesses. The more experienced and expert of these nurses draw on their vast store of knowledge to identify and put together several subtle cues to get a comprehensive picture.

Knowledge versus gestalt

In Group Genius: The Creative Power of Collaboration, Keith Sawyer talks about how people solve insight problems. Sawyer talks about gestalt psychologists, who believed that for “insight” problems — the kind that require a sudden leap of insight — people needed to get beyond the confines of pre-existing knowledge and think fresh, out of the box. The problem with this, Sawyer says, is that study after study showed that simply telling people to think out of the box, or to think differently, rarely yielded results. Rather, it was important to give people specific hints about how to think out of the box. Even those hints needed to be given in such a way that people would themselves make the leap of recognition, thus modifying their internal mental models.

I recently had the opportunity to read an article, Understanding and teaching the nature of mathematical thinking, by Alan Schoenfeld, published in Proceedings of the UCSMP International Conference on Mathematics Education, 1985 (pages 362-379). Schoenfeld talks about how a large knowledge base is crucial to being effective at solving problems. He refers to research by Simon (Problem Solving and Education, 1980) suggesting that domain experts have a vocabulary of approximately 50,000 “chunks” — small word combinations that denote domain-specific concepts. Schoenfeld then goes on to talk about research by Brown and Burton (Diagnostic models for procedural bugs in basic mathematical skills, Cognitive Science 2, 1978) showing that people who make mistakes with arithmetic (addition and subtraction) don’t just make mistakes because they don’t understand the correct rules well enough — they make mistakes because they “know” something wrong. Their algorithms are buggy in a consistent way. This parallels the insight-puzzle findings: people fail to solve insight problems not because they refuse to think “outside the box”, but because they lack the specific knowledge needed to do so.
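To give a flavor of what a consistently buggy algorithm looks like, here is a hypothetical sketch modeled on one bug of the kind Brown and Burton catalogued — the “smaller-from-larger” subtraction bug, where the student subtracts the smaller digit from the larger in every column, never borrowing:

```python
def buggy_subtract(a, b):
    """Column subtraction with the 'smaller-from-larger' bug: in each column,
    subtract the smaller digit from the larger one, never borrowing.
    Assumes a >= b >= 0."""
    da = str(a)
    db = str(b).rjust(len(da), "0")  # pad b with leading zeros to align columns
    return int("".join(str(abs(int(x) - int(y))) for x, y in zip(da, db)))

# The bug is consistent: it surfaces exactly where borrowing would be needed.
print(buggy_subtract(253, 118))  # 145, whereas 253 - 118 = 135
print(buggy_subtract(254, 121))  # 133, which happens to be correct (no borrow needed)
```

The point is that the student is not guessing randomly: the errors are the predictable output of a wrong but internally consistent procedure.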

Schoenfeld then goes on to describe the experiences of people such as himself in implementing George Polya’s problem-solving strategies. Polya enumerated several generic problem-solving strategies in his books How to Solve It, Mathematical Discovery, and Mathematics and Plausible Reasoning. Polya’s heuristics included: exploiting analogies, introducing and exploring auxiliary elements in a problem solution, arguing by contradiction, working forward, decomposing and recombining, examining special cases, exploiting related problems, drawing figures, and working backward. But teaching these “strategies” in classrooms rarely resulted in an across-the-board improvement in students’ problem-solving abilities.

Schoenfeld argues that these strategies failed because they were “underspecified” — just knowing that one should “introduce and explore auxiliary elements”, for instance, is of little help unless one knows how to come up with auxiliary elements in a particular situation. In Euclidean geometry, this may mean extending lines far enough that they meet, or dropping perpendiculars; in topology, it may mean constructing open covers with certain properties. Understanding the general strategy helps a bit, in the sense of putting one on the lookout for auxiliary elements, but it does not provide the skill needed to locate the correct auxiliary element. Such skill can be acquired only through experience, through deliberate practice, and through the creation of a large knowledge base.

In daily life

It is unfortunately true that much of coursework in school and college is based on a learn-test-forget model — students learn something, it is tested, and then they forget it. Insufficient introspection, and insufficient reuse of ideas learned in the past, lead students to forget what they learned quickly. Thus, the knowledge base gets eroded almost as fast as it gets built.

It is important not just to build a knowledge base but to have time to reflect upon what has been built, and to strengthen what was built earlier by referencing it and building upon it. Also, students and researchers who want to become sharper thinkers in the long term need to understand the importance of remembering what they learn, putting it in a more effective framework, and making it easier to recall when it is useful. I see a lot of people who like to solve problems but then make no effort to consolidate their gains by remembering the solution or storing the key ideas in long-term memory in a way that can be tapped later. I believe that this wastes the effort that went into solving the problem.

(See also my post on intuition in research).

February 2, 2009

On new modes of mathematical collaboration

(This blog post builds upon some of the observations I made in an earlier blog post on Google, Wikipedia and the blogosphere, but unlike that post, has a more substantive part dedicated to analysis. It also builds on the previous post, Can the Internet destroy the University?.)

I recently came across Michael Nielsen’s website. Michael Nielsen was a quantum computation researcher — he’s the co-author of Quantum Computation and Quantum Information (ISBN 978-0521632355). Now, Nielsen is working on a book called The Future of Science, which discusses how online collaboration is changing the way scientists solve problems. Here’s Nielsen’s blog post describing the main themes of the book.

Journals — boon to bane?

Here is a quick simplification of Nielsen’s account. In the 17th century, scientists such as Newton and Galileo did not publish their discoveries immediately. Rather, they sent anagrams of these discoveries to friends, and continued to work on their discoveries in secret. Their main fear was that if they circulated an idea widely, other scientists would steal it and take full credit for it. By keeping the idea secret, they could develop it further and release it in a more mature form. In the meantime, the anagram could be used to prove precedence in case somebody else came up with the same idea.

Nielsen argues that the introduction of journals, combined with public funding of science and the recognition of journal publications as a measure of academic achievement, led scientists to publish their work and thus divulge it to the world. However, today, journal publishing competes with an even more vigorous and instantaneous form of sharing: the kind of sharing done in blogs, wikis, and online forums. Nielsen argues that this kind of spontaneous sharing of rough drafts of ideas, of small details that may add up to something big, opens up new possibilities for collaboration.

In this respect, the use of online tools allows for a “scaling up” of the kind of intense, small-scale collaboration that formerly occurred only in face-to-face contact between trusted friends or close colleagues. However, Nielsen argues that academics, eager to get published in reputable journals, may be reluctant to use online forums to ask and answer questions of distant strangers. Two factors are at play here: first, the system of academic credit and tenure does little to reward online activity as opposed to publishing in journals. Second, scientists may fear that other scientists can get a whiff of their idea and beat them in the race to publish.

(Nielsen develops “scaling up” more in his blog post, Doing Science Online).

Nielsen says that this is inefficient. Economists dislike deadweight losses (Market wiki entry, Wikipedia entry) in markets — situations where one person has something to sell to another, and the other person is willing to pay the price, but the deal doesn’t occur. Nielsen says that such deadweight losses occur routinely in academic research. Somebody has a question, and somebody else has an answer. But due to the high search cost (Market wiki entry, English Wikipedia entry), i.e., the cost of finding the right person with the answer, the first person never gets the answer, or has to struggle a lot to get it. This means a lot of time lost.

Online tools can offer a solution to the technical problem of information-seekers meeting information-providers. The problem, though, isn’t just one of technology. It is also a problem of trust. In the absence of enforceable contracts or a system where the people exchanging information can feel secure about not being “cheated” (in this case, by having their ideas stolen), people may hesitate to ask questions to the wider world. Nielsen’s suggestions include developing robust mechanisms to measure and reward online contribution.

Blogging for mathies?

Some prominent mathematical bloggers that I’ve come across: Terence Tao (Fields Medalist and co-prover of the Green-Tao theorem), Richard E. Borcherds (famous for his work on Moonshine), and Timothy Gowers. Tao’s blog is a mixed bag of lecture notes, updates on papers uploaded to the arXiv, and his thoughts on things like the Poincaré conjecture and the Navier-Stokes equations. In fact, in his post on doing science online, Nielsen uses the example of a blog post by Tao explaining the hardness of the Navier-Stokes equation. In Nielsen’s words:

The post is filled to the brim with clever perspective, insightful observations, ideas, and so on. It’s like having a chat with a top-notch mathematician, who has thought deeply about the Navier-Stokes problem, and who is willingly sharing their best thinking with you.

Following the post, there are 89 comments. Many of the comments are from well-known professional mathematicians, people like Greg Kuperberg, Nets Katz, and Gil Kalai. They bat the ideas in Tao’s post backwards and forwards, throwing in new insights and ideas of their own. It spawned posts on other mathematical blogs, where the conversation continued.

Tao and others, notably Gowers, also often float ideas about how to make mathematical research more collaborative. In fact, I discovered Michael Nielsen through a post by Timothy Gowers, Is massively collaborative mathematics possible?, which mentions Nielsen’s post on doing science online. (Nielsen later critiqued Gowers’ post.) Gowers considers alternatives such as a blog, a wiki, and an online forum, and concludes that an online forum best serves the purpose of working collaboratively on mid-range problems: problems that aren’t too easy and aren’t too hard.

My fundamental disagreements

A careful analysis of Nielsen’s thesis will take more time, but off the cuff, I have at least a few points of disagreement about the perspective from which Nielsen and Gowers are looking at the issue. Of course, my difference in perspective stems from my different (and considerably more limited) experience compared to theirs.

I fully agree with Nielsen’s economic analysis with regard to research and collaboration: information-seekers and information-providers not being able to get in contact often leads to squandered opportunities. I’ve expressed similar sentiments myself in previous posts, though not as crisply as Nielsen.

My disagreement is with the emphasis on “community” and “activity”. Community and activity can be very important to researchers, but in my view they can obscure the deeper goal of growing knowledge. And it seems that in the absence of strong clusters, community and activity can result in a system that is almost as inefficient as the one Nielsen criticizes.

In the early days of the Internet, mailing lists were a big thing (they continue to be, but their relative significance on the Internet has probably declined). In those days, Usenet newsgroups and bulletin board systems often got clogged with the same set of questions, asked repeatedly by different newbies. The old hands, who usually took care of answering the questions, got tired of this repetition. Thus was born the “Usenet FAQ”. With the FAQ, the lists stopped getting clogged with the same old questions and people could devote attention to more challenging issues.

Forums (such as Mathlinks, which uses PHPBB) are a little more advanced than mailing lists in terms of the ability to browse by topic. However, they are still fundamentally a collection of questions and answers posted by random people, with no overall organizing framework that aids exploration and learning. In a situation where the alternative to a forum is no knowledge at all, a forum is a good thing. In fact, a forum can be one input among many for building a systematic base of knowledge. But when a forum is built instead of a systematic body of knowledge, the result can be a lot of duplication and inefficiency, and the absence of a bigger picture.

Systematic versus creative? And the irony of Wikipedia

Systematic to some people means “top-down”, and top-down carries negative connotations for many, or at any rate non-positive ones. For instance, the open source movement, which includes Linux and plenty of “free software”, prides itself on being largely a bottom-up movement, with uncoordinated people working of their own volition to contribute small pieces of code to a large project. Top-down direction could not have achieved this. In economic jargon, when each person is left to make his or her own choices, the outcome is typically more efficient, because people have more “private information” about their interests and strengths. (Nielsen uses open source as an example of where science might go by being more open in many of his posts, for instance, this one on connecting scientists to scientists.)

But when I say systematic, I don’t necessarily mean top-down. Rather, I mean that the system should be such that people know where their contributions can go. The idea is to minimize the loss that happens when one person contributes something at one place, but another person doesn’t look for it there. This is very important, particularly in a large project. A forum for solving mathematical questions has an advantage over offline communication: the content is available for all to see. But this advantage is truly meaningful only if everybody who is interested can locate the question easily.

Systematic organization does not always come at the expense of a sense of community and activity, but it usually does. When material is organized by internal and logical considerations, considerations of chronological sequence and community dynamics take a backseat. The ultimate irony is that Wikipedia, often touted as the pinnacle of Web 2.0 achievement, seems to prove exactly the opposite of the community-first thesis: the baldness, anti-contextuality, canonical naming, and lack of a “time” element in Wikipedia’s entries are arguably its greatest strength.

Through choices of canonical naming (the name of an article is precisely its topic), extensive modularization (a large number of individual units, namely the separate articles), a neutral, impersonal, no-credit-to-author-on-the-article style, and strong internal linking, Wikipedia has managed to become an easy reference for all. If I want to read the entry on a topic, I know exactly where to look on Wikipedia. If I want to edit it, I know exactly what entry to edit, and I’m guaranteed that all future people reading the Wikipedia entry looking for that information will benefit from my changes. In this respect, the Wikipedia process is extraordinarily efficient. (It is inefficient in many other ways, notably the difficulty of quality control, measured by the massive number of volunteer hours spent combating obvious and non-obvious spam, as well as the tremendous amount of time spent in inordinate battles over the control and editing of particular entries.)

The power of the Internet is its perennial and reliable availability (for people with reliable access to electricity, machines, and Internet connections). And Wikipedia, through the ease with which one can pinpoint and locate entries, and the efficiency with which it funnels the efforts both of readers and contributors to edit a specific entry, builds on that power. And I suspect that, for a lot of us, a lot of the time we’re using the Internet, we aren’t seeking exciting activity, a sense of community, or personal solidarity. We want something specific, quickly. Systematic organization and good design and architecture that gets us there fast is what we need.

What can online resources offer?

A blog creates a sense of activity, of time flowing, of comments ordered chronologically, of a “conversation”. This is valuable. At the same time, a systematically organized resource, which organizes material not based on a timeline of discovery but on intrinsic characteristics of the underlying knowledge, is usually better for quick lookup and user-directed discovery (where the user is in charge of things).

It seems to me that the number of successful “activity-based online resources” will remain small. There will be few quality blogs that attract high-quality comments, because the effort and investment that goes into writing a good blog entry is high. There may be many mid-range blogs offering random insights, but these will offer little of the daily-adventure feel of a high-traffic, high-comment blog.

On the other hand, the market for quick “pinpoint references” — the kind of resources you can use to quickly look something up — seems huge. A pinpoint reference differs from a forum in an obvious way. In a forum, you ask a question and wait for an answer, or browse through previously asked questions. In a pinpoint reference, you decide you want to know about a topic, go to the page, and BANG, the answer’s already there, along with a lot of stuff you might have thought of asking but never got around to, all neatly organized and explorable.

Fortunately or unfortunately, the notion of “community” and “activity” is more appealing in a naive, human sense than the notion of pinpoint references. “Chatting with a friend” has more charm to it than having electricity. But my experience with the way people actually work seems to suggest that people value self-centered, self-directed exploration quite a bit, and may be willing to sacrifice a sense of solidarity or “being with others in a conversation” for the sake of more such exploration. Pinpoint resources offer exactly that kind of a self-directed model to users.

My experiment in this direction: subject wikis

I started a group theory wiki in December 2006, and have since extended it to a general subject wikis website. The idea is to have a central source, the subject wikis reference guide, from where one can search for terms and get short general definitions, with links to more detailed entries in individual subject wikis. See, for instance, the entry on “normal”.

I’ve also recently started a blog for the subject wikis website, which will describe some of the strategies, approaches, and choices involved in the subject wikis.

It’s not clear to me how this experiment will proceed. At the very least, my work on the group theory wiki is helping me with my research, while my work on the other wikis (which has been very little in comparison) has helped me consolidate the standard knowledge I have in these subjects, along with other tidbits of knowledge and thoughts I’ve entertained. Usage statistics indicate that many people visit the group theory subject wiki and find its entries useful, and there are a few visitors to each of the other subject wikis as well. What isn’t clear to me is whether this can scale to a robust reference where many people contribute and many people come to learn and explore.

January 29, 2009

Can the Internet destroy the University?

Every so often, we hear talk about how computers and the Internet are “changing everything”. In particular, the Internet is believed to have had a great impact on methods of research and academics. In this blog post, I explore the question of whether the Internet really has changed things, and how.

The early and late Internet

It may surprise some that the Internet dates back to the 1960s, when its precursor, the ARPANET, connected a handful of universities. It wasn’t quite the same Internet: it lacked even basic features such as email, let alone the World Wide Web. Email itself began in the early 1970s. The bulk of Internet users were at universities.

In those early days, the Internet was largely a network for transferring files from one computer to another and for sending messages. Just as the telephone helped people communicate over long distances by voice, the Internet offered a computer-based means of communication that transmitted text instead.

In 1989, the World Wide Web was created by Sir Tim Berners-Lee and colleagues. The basic idea of the World Wide Web was a standard for displaying “webpages” — files intended to be viewed over the Internet — and for allowing easy links between webpages (these came to be known as “hyperlinks”). Even after the World Wide Web was created, there was no standard graphical browser for viewing webpages, and tech-savvy web users often used text-based browsers to access web pages. With time came graphical browsers such as Netscape. With Windows 95, Microsoft jumped into the web browser fray by introducing Internet Explorer.

As dial-up Internet started spreading in developed countries and a few places started getting broadband, more and more newspapers, magazines, universities, businesses, governmental organizations, and non-profits started their own websites. Soon, the Internet became a place for banking, booking travel tickets, submitting online applications to jobs and schools, and reading newspapers and magazines. Business-to-business as well as business-to-consumer use of the Internet became more common. This was also the time of the Internet bubble. Entrepreneurs and investors started believing that the old rules of the game no longer applied and that Internet businesses could grow exponentially. The success of companies like Amazon, Yahoo and Microsoft further fed into the investor frenzy. The bubble burst with the turn of the century. While the Internet continued to live on and grow in reach, businesses became wiser.

The original “new new thing” of the Internet was that ordinary business transactions (banking, purchasing goods) as well as consumer activity (reading newspapers and magazines, listening to music, watching video) could be conducted more efficiently over the Internet. The second phase of Internet expansion, termed “Web 2.0” by the Internet “guru” Tim O’Reilly, went in a different direction. It sought to move collective community activity to the Internet, and create new forms of community activity.

Community activity was not entirely unknown on the Internet. In the 1980s, prior to the World Wide Web, communities of users interested in specific topics formed Bulletin Board Systems (BBSes) (Jason Scott’s Textfiles has information on this — Scott’s hobby involves collecting and archiving online activity from the 1980s, including the BBSes). In the 1990s, there was vigorous participation in mailing lists and Internet Relay Chat. However, this participation was limited to “geeks” — people with some comfort with technology and a deep interest in the topic. What Web 2.0 sought to do was “democratize” community activity on the Internet.

Examples included content-sharing sites (such as YouTube (video sharing) and Flickr (photo sharing)), social networking sites (such as Facebook, Myspace, and Orkut), and collaborative content creation sites — most notably Wikipedia. There was also significant growth in blogging (with free blog-hosting services such as Blogger, WordPress, and Typepad, and many people using free software such as WordPress to start blogs on their own websites). Fast-growing companies such as Google now offered comprehensive free suites including mail, online document-creation software, and applications for site developers.

The initial growth of the Internet was thus strongly rooted in academia. Academics began exchanging documents by email long before the ordinary public did. The new spate of growth of the Internet, however, has been much more widespread. With the software created either commercially or by hobbyists, and the use widespread across all kinds of users ranging from young kids to workers to retired people, much of Web 2.0 has happened outside academia.

Adaptation to the old Internet

During the late 1990s and early 2000s, many journals introduced electronic versions. Libraries, in addition to subscribing to print copies of the journals, have subscribed to electronic access. Electronic access allows anybody within a defined range (usually the network of the subscribing university) to have free access to electronic versions of the articles (usually PDFs). In addition, services such as JSTOR provide access to old issues of journals, including journals that have not themselves put their old articles online.

Some journals are moving towards open access policies. These policies allow free access to the electronic versions and, more importantly, release the articles under an open-content license such as a Creative Commons license, which allows other researchers to use the data of the original article freely in their own further research. To further increase the availability of articles, services such as the arXiv (for mathematics and physics) have become popular. These services allow people to upload preprints of articles that are under consideration for publication in a journal. The preprints give other researchers access to cutting-edge research. The arXiv versions, as well as versions that authors may put up on their own websites, enable people who do not have subscriptions to the journals to read a large number of articles.

It would be an understatement to say that this has greatly increased the ease of finding published reference material online. While the profit models for electronic access and open access are still being explored, it is clear that academics have made significant use of such access to learn about recent as well as older research, and this has benefited researchers tremendously.

There are concerns, though. For instance, a study by James Evans, based on a database of 34 million articles, shows that as journals have become more readily available online, and as older issues have become easily available, articles have been citing fewer references, and the references have become more recent. Evans thinks that one of the main advantages of the pre-web indexing system was its inefficiency, which led people on tangents and thus pulled them into reading more, and often more dated, material. Evans concludes that scholarship today engages more with recent scholarship than before. (See also his Britannica blog post.)

The Web 2.0 Internet

For all the impact Web 2.0 is making in the wider world, I believe that its impact on research is limited. Why? Because for research work, communicating or collaborating using a Web 2.0 tool is usually less efficient compared to an “old-fashioned” tool like e-mail.

The growth of e-mail led to a significant increase in the extent of scientific collaboration. This is particularly notable in certain areas of physics, where it is not unusual for papers to have more than five authors. Interestingly, a lot of this collaboration happens within a university; studies have shown that the most efficient uses of e-mail are by people who use it to communicate within their organization. This is good for science because, historically, the biggest collaborators have been the biggest creators. References: chapter seven of The Logic of Life by Tim Harford (personal website and book page), and Group Genius by Keith Sawyer.

The great thing about a tool like e-mail is that it is an added layer of technology that does little to disturb the fundamental process of thinking and research. A couple of collaborating mathematicians can have an intense discussion over tea, collaborate over proofs at the chalkboard, and work out details together. Then one of them can type it out and e-mail it to the other, who sends back typed corrections or has another face-to-face discussion. After some rounds, they can email their work to others for comment or review, and reviewers can easily send their reviews back to both authors.

Now, it is true that new modes of collaborative document creation might be helpful for authors collaborating over large distances. Thus, tools like MediaWiki and Google Docs, which allow for collaborative document creation, might be used in conjunction with email. These definitely offer significant advantages for certain kinds of collaboration, particularly where people are collaborating over longer distances, and might be used by people who lack awareness of, or savviness with, revision control systems such as SVN.

But while these offer advantages for collaborative content creation, they offer little of a substitute for the robust face-to-face or otherwise intense contact needed to do research.

Serendipitous and intense contact

Universities and research institutions manage to bring together in close contact people with knowledge and intuition in a particular area. This close contact fosters a regular and almost unavoidable exchange of ideas. In my high school, there were few people with whom I could discuss my area of interest, mathematics. In my college, where there were others interested in mathematics, I could go to a discussion area and start a conversation if I wanted, but rarely were there animated discussions going on that I could just drop into. Here, at the University of Chicago, where I’m doing graduate studies, there are several places where mathematical discussions are continuously going on. I can pop in, look at what’s going on, and join in if it seems interesting. The tea room, for instance, often has people discussing mathematics effortlessly merged with other topics, and simply sitting there makes me learn a few things here and there, and sometimes introduces me to something I wouldn’t have sought myself. The first-year graduate student office, similarly, is usually abuzz with people trying to solve their homework problems and discussing other related mathematical ideas.

It is this serendipitous contact with new ideas not explicitly sought that makes the university more than just a convenient place to exchange ideas. Face-to-face contact, the ability to make hand gestures and write on chalkboards, and the ability for anybody from outside to drop in, are hard to mimic on the Internet. This doesn’t mean that it is impossible to build on the Internet a system that allows for such serendipity (for instance, it may be possible to live-stream activities in all tea rooms and discussion areas in all universities so that people in one university can tune in to what’s happening in another — though it isn’t clear whether the benefits of such streaming are worth the costs). Rather, existing social networking sites and content creation sites were not designed for this purpose and are ill-suited for it.

The strength of the Internet

The strength of the Internet is its quick and ready availability. For this reason, I think that mathematical reference material, including pinpoint references (such as PlanetMath, MathWorld, Wikipedia, the Springer Online Encyclopedia of Mathematics, and my own subject wikis reference guide), can play an important role. It isn’t infrequent for people having a debate or discussion on a point of mathematics to resolve the matter by checking it online using a handy iPhone, netbook or laptop. More development of pinpoint references, as well as more competition among them, can be good. In addition to pinpoint references, the online accessibility of journal articles is also a great boon, allowing people to clarify points of confusion immediately. Similarly, online course notes, including one-off course notes put up by faculty as well as the systematic OpenCourseWare efforts by institutions such as MIT and Yale, also add to the usefulness of the Internet. Finally, I hope for a system whereby libraries can get access not only to online versions of journal articles but also to online versions of books, so that people in universities can have free access to online books. (In practice, many people download pirated electronic versions, but such a practice is hardly one that should be treated as a model worth sustaining.)

Where the Internet doesn’t do so well is in recording off-the-cuff dialogues and conversations. If I’m talking with somebody and I’m not sure about a particular fact, I can say so, and the other person can probe further and get a related answer. Here, my lack of full knowledge and authority is compensated by my immediate presence. However, posts on Internet forums that give partial or incomplete information, particularly for questions where definite answers exist, have the drawback without the compensation. People have to put up with reading incomplete or possibly incorrect answers, but cannot follow up with questions to clarify matters.

In summary, it seems to me that the Internet is very far from destroying the university. Rather, it can substantially increase the value of living in the university by making more information readily available online.

What about those living outside the University?

Not everybody has the combination of talent and circumstance that lands one inside a university that is a hub of serendipity of the sort I’ve described. The Internet is particularly important in providing these people some of the things that those in a good university take for granted.

Access to online references, for instance, has enabled people across the world to discover new ideas and concepts that they do not find in the particular book they are following. I have discovered several new ideas while surfing Wikipedia, going through newspaper and magazine articles, surfing MathWorld and PlanetMath, or link-traipsing from blogs. Access to online journal articles is a trickier question. The subscriptions charged by journals are usually too hefty for individuals, which means that individuals who are not members of a university or library with a subscription may not be able to get access to journal articles.

This is unfortunate but, of course, these people didn’t have access to the journals prior to the Internet either. Usually, such people can get copies of an article from preprint sites such as the arXiv, from authors’ personal websites, or by requesting the author personally. There is also a movement towards open-access publishing, as mentioned earlier, which would in particular enable free online access for all.

But more importantly, access to full articles is not usually necessary. If online references are good and fairly thorough, users should be able to access the online reference to get an idea of at least the main points, concepts and definitions introduced in a particular journal article even if they are unable to access that particular article. As an undergraduate student, I often faced the problem of being unable to access a basic definition because the only source I could locate was an article in a journal to which my college did not subscribe. Of course, even with the existence of such references, there will be people who want to read the full article to get a deeper understanding.

Finally, open courseware presents a great opportunity for people outside the university system to get a flavor of the way leading researchers and educators think. Unfortunately, open courseware, such as MIT OCW and Yale OYC, is largely limited to lower-level undergraduate course material. It is possible that for advanced graduate course material, the demand is not high enough to justify the costs of preparation. I hope that the movement expands to encompass more universities across different countries and languages, so that eager learners everywhere have more options.

March 21, 2008

The mind’s eye

Recently, when talking to an Olympiad aspirant, Ashwath Rabindranath, about how to prepare effectively for the International Mathematical Olympiad, I came up with a formulation that I realized I’d implicitly been using for some time. After I discussed it with him, he said that he has been trying it a lot and has found it fairly helpful.

The concept is called the Mind’s Eye.

The idea is simple: everything should be in the mind’s eye. In mathematics, it is not enough to know that something can be proved. Truth is there only when you know how it has been proved. But even knowing, in the abstract, how it can be proved, isn’t enough. To really feel that a proof is correct, one should be able to behold it in the mind’s eye. Thus, if I tell you that every nilpotent group is solvable, you shouldn’t be satisfied with knowing that there’s some proof in a dusty book somewhere. You should see why the statement is true, and you should see it immediately, in your mind’s eye. By that I mean you should behold the proof conceptually, or pictorially, in a way that lets you magnify any component of the proof at will. You should be able to tell me what the related facts are, what applications and lemmas it uses, and what the possible generalizations could be.
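For concreteness, here is the kind of picture one might hold in the mind’s eye for that example: a sketch of the standard textbook argument that nilpotent implies solvable (this sketch is my own addition, not part of any course material mentioned here).

```latex
% Sketch: every nilpotent group is solvable.
% If $G$ is nilpotent of class $c$, its lower central series reaches the identity:
\[
G = \gamma_1(G) \supseteq \gamma_2(G) \supseteq \cdots \supseteq \gamma_{c+1}(G) = 1,
\qquad \gamma_{i+1}(G) = [\gamma_i(G), G].
\]
% Since each term contains the commutators of the previous one,
\[
[\gamma_i(G), \gamma_i(G)] \subseteq [\gamma_i(G), G] = \gamma_{i+1}(G),
\]
% every quotient $\gamma_i(G)/\gamma_{i+1}(G)$ is abelian. A normal series
% with abelian quotients is exactly what solvability asks for, so $G$ is solvable.
```

Once this picture is in place, one can magnify any component of it: what the lower central series is, why each term is normal, why the quotients are abelian.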

The mind’s eye is particularly important for Olympiad preparation because of the format of Olympiads: students are expected to solve a few challenging problems in a short time-frame, and they cannot refer to any existing texts. A lot of Olympiad students waste precious in-exam time going down wrong alleys. If the student has in her/his mind’s eye all the possible things that could be done with the problem, what the consequences of each path would be, and what the likelihood of success on each path would be, then the time and effort spent during the exam reduces proportionately.

But the importance of the mind’s eye is not limited to closed-book or time-crunched examinations. It extends to the more general scenario of learning and teaching. I can look up in a book a proof that not every normal subgroup is characteristic, but having the counterexample in my mind’s eye means that I can explore variations more easily, go forward, generalize. Books and online references are useful to supplement the mind’s eye in storing information — they cannot supplant it. The greatest research, insights and breakthroughs come by immersing oneself in a problem, which means one can see it in the mind’s eye.
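To make that counterexample concrete, here is a small check (my own illustrative choice of group, not taken from any book mentioned here): in the Klein four-group, every subgroup is normal, yet the subgroup generated by one coordinate is not characteristic, because the coordinate-swap automorphism moves it.

```python
# The Klein four-group V = Z/2 x Z/2, written additively.
V = [(a, b) for a in (0, 1) for b in (0, 1)]
H = [(0, 0), (1, 0)]  # the subgroup generated by (1, 0)

def add(x, y):
    """Componentwise addition mod 2 -- the group operation of V."""
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def swap(x):
    """The automorphism of V exchanging the two coordinates."""
    return (x[1], x[0])

# H is normal in V: V is abelian, and every element is its own inverse,
# so the conjugate g + h + g equals h for all g.
assert all(add(add(g, h), g) in H for g in V for h in H)

# But H is not characteristic: swap is an automorphism, yet swap(H) != H.
image = {swap(h) for h in H}
print(image == set(H))  # False: swap carries H to {(0,0), (0,1)}
```

Holding this four-element picture in the mind’s eye is exactly what lets one vary it: for instance, asking which subgroups of V the automorphism group does preserve.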

The idea is so breathtakingly simple that it amazes me that people do not use it more often. The mind’s eye can begin right in high school, in fact, when students are studying physics, chemistry, mathematics, history, economics, geography or just about anything. Those more tuned to sound could use the mind’s ear, and those more tuned to touch, the mind’s touch. And it can begin simply. You look at a long-winded text or explanation. It’s too big for the mind’s eye. You look at it again. You break it down, you think about it. You mull over it. You sleep over it, your subconscious reorganizes the ideas, and the next day, it fits into the mind’s eye. Now, anybody can ask you about that idea and you can explain it offhand. More importantly, though, you can see (or hear or feel) it.

The mind’s eye could do well if supplemented by other resources that specifically prod it on. One of the techniques I’ve been exploring is an idea for math-related wikis, which I’ve started implementing. The first mathematics wiki I started is a wiki in group theory. The “one thing, one page” paradigm on this wiki, as well as the diverse ways in which pages are linked together using different relational paradigms, supplements the mind’s eye pretty well. I’ve often got new insights simply by surfing the wiki — and that’s saying something, considering I’ve written almost all of it. The wiki has pages on things that might get short shrift in a textbook; for instance, normality is not transitive.
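As an illustration of that last fact, here is a self-contained check I wrote for this post (not taken from the wiki): in the dihedral group D4, the symmetries of a square, there is a chain N ≤ H ≤ D4 where each step is normal but N is not normal in D4.

```python
from itertools import product

# D4 as permutations of the square's vertices 0..3.
e = (0, 1, 2, 3)
r = (1, 2, 3, 0)   # rotation by 90 degrees
s = (0, 3, 2, 1)   # reflection through the 0-2 diagonal

def mul(p, q):
    """Composition of permutations: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(4))

def inv(p):
    """Inverse permutation."""
    q = [0] * 4
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

# Generate D4 by closing {r, s} under multiplication.
D4 = {e}
frontier = {r, s}
while frontier:
    D4 |= frontier
    frontier = {mul(a, b) for a, b in product(D4, D4)} - D4
assert len(D4) == 8

r2 = mul(r, r)
N = {e, s}                    # order-2 subgroup generated by the reflection
H = {e, r2, s, mul(r2, s)}    # a Klein four-subgroup of D4

def is_normal(sub, grp):
    """True if sub is closed under conjugation by every element of grp."""
    return all(mul(mul(g, h), inv(g)) in sub for g in grp for h in sub)

print(is_normal(N, H), is_normal(H, D4), is_normal(N, D4))
# True True False: N is normal in H, H is normal in D4, yet N is not normal in D4.
```

Once this example is in the mind’s eye, the natural follow-up questions (which conjugate of s falls outside N? why does index 2 force normality for H?) suggest themselves.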

This isn’t the only way to supplement the mind’s eye. I came up with some ideas long ago about the use of a method of properties to organize information. That didn’t take off too well, though some of its features have been incorporated pretty effectively in the group theory wiki. Then, of course, we can learn from the way advertisers work: they tie in the core idea using a number of different paradigms. In his series on the Palace of Possibilities, Gary Craig talks of two tools to reinforce concepts: repetition and emotion. Instructional design texts emphasize the importance of reiterating the same basic point from a number of different perspectives, appealing to the audio, visual and kinesthetic senses of the students over and above their cognitive abilities.

It is important to distinguish between the mind’s eye and rote memorization. In fact, rote is a very special case of the mind’s eye: the case where you just memorize the text as text, in one specific form. The mind’s eye, in its more general form, encourages a complete, immediate grasp of the material, not from a specific angle but from a large number of angles. The mind’s eye works best by building redundancy: by having not just an eye, but several eyes, several ears, and several hands to touch and feel.

Another important point is that the specific methods one uses to build the mind’s eye could vary widely, which is why I’m not listing here how to do this. The core idea is to increase the number of ways ideas are linked together in the mind, and this could be done through random association attempts, by using systematic paradigms, or just by exposing oneself to a lot of material and letting the subconscious do the organizing. The key is to get the mind’s eye in action.

The dissemination of science

This is a somewhat unusual post for this blog. I typically use the What is Research? blog to describe issues related to my day-to-day study and to-be research life, and my own experiments. In this post, I’m going to talk about something broader that has, of late, been concerning me.

Recently, I’ve been reading The Future of Ideas and Code, Version 2, two fantastic books by Lawrence Lessig. Lawrence Lessig is the man who gave birth to the Creative Commons movement. The core idea of the Creative Commons is simple: authors and creators of original work get to specify exactly how they’re okay with their work being reused. For instance, an “attribution-share alike” license means that the author allows others to reuse the work and create derivatives, as long as all derivatives attribute the original work and carry the same or a compatible license. There are no-derivative licenses (which forbid the creation of derivative works) and noncommercial licenses (which forbid commercial use).

Lessig was motivated to start the Creative Commons roughly by concerns he had about big corporations pushing the government to extend the term of copyright. Copyright law in the United States currently protects an author’s work for 70 years after the author’s death. Just a few years ago, this number was 50 years: the increase to 70 was one of the things that raised Lessig’s eyebrows. Lessig points out that extending the term of copyright beyond 50 years after an author’s death is hardly an incentive to create new works, and plays more the role of protecting old works against new challenges.

Lessig isn’t the first of his kind. Richard Stallman probably takes a more extreme stand on the issue: he wants all software to be licensed under the GNU General Public License (GPL), which requires that source code be made available and always remain free to modify and tinker with. Stallman has been responsible for the development of a lot of excellent software, including the text editor Emacs, and he certainly knows the merits of free software, as he calls it.

What’s interesting is that Lessig and Stallman both come from universities. Like Donald Knuth, the man who created TeX, they don’t come from a profit-maximizing corporate perspective. They come with a clear aim and work towards it. In a sense, they’re representative of the best traditions in American universities: some of the most radical ideas stem from universities. Google, currently the world’s biggest search company, also grew out of a student project at a university. Universities give rise to the best ideas, perhaps precisely because they’re not pressured by or responsible to existing corporations that are entrenched in old ways of doing things.

Yet, it is ironic that academics largely remains unaffected by improvements in science and technology, by the new methods of communication and interaction, and by the new paradigms developed by academics themselves. In his book The Future of Ideas, Lessig talks about how old corporations are entrenched in old ways of doing things, and hence are more resistant to change than people who have nothing to lose. I see great evidence of this in academics, as I’m going to explain here.

Academics has arguably evolved some of the best traditions for peer-to-peer sharing, scientific publishing, and knowledge dissemination. Indeed, because publication is necessary to get credit and move higher on the tenure track, scientists are pushed to publish, rather than hoard, their findings. Universities in the United States also have an excellent tradition of sincere teaching, where professors are involved not only with teaching students, but also with setting challenging examinations, giving regular and challenging assignments, and maintaining office hours for students to contact them and discuss specific issues. Academics at a good American university combines professional high-quality service with a culture and ethic of sharing, mixing, and reusing.

Yet, academics hasn’t scaled, or benefited from economies of scale and large-scale participation. The first and most obvious reason for this is that there aren’t that many academicians. What we gain in quality, we lose in quantity. For instance, there’s a significant difference between the scope and nature of Wikipedia articles on Harry Potter topics and the scope of Wikipedia articles in mathematics (and the Wikipedia articles in other sciences are probably in a similar situation). It’s very easy to find loads of online discussion on World of Warcraft or Star Wars, but hard to find quality discussions in group theory (a part of mathematics).

As I mentioned above, part of the reason is that academics doesn’t have that many people. Google Scholar notwithstanding, there isn’t much scope, either in terms of commerce or in terms of numbers, for building the kind of communities around academic topics that exist around a lot of trivia. Entry barriers are high. Also, the general tendency in academics to be careful, to have your facts right before coming to the table, means that there is less quick, rapid and spontaneous participation.

But I think the deeper problem lies with the fact that people in academia do not see the reason to challenge the way things have always been done. True, we now have email, online journal access, and a host of other facilities made possible by modern technology. Lecturers put up freely available lecture notes online. Yet, the language of thinking, at least in mathematics, hasn’t come online. We haven’t exploited the tremendous opportunities that cyberspace can offer us.

“Mathematics Doctoral Programs, Then and Now”, a Letter from the Editor in the Notices, describes some of these. Even today, the standard way of teaching is for the lecturer to stand in front of the board and write from carefully prepared notes, as students struggle to copy, take notes, and ask questions. True, students now have Google and Wikipedia to help in solving assignments, in addition to the large number of books available for the purpose. But the fundamental methodology of looking at and solving problems hasn’t changed. Most alarmingly, mathematicians haven’t come around and said: Wikipedia provides information, and it’s good; but we could use the same technology to provide much better, more reliable, easy-to-locate information. Let’s do it. My impression is that a lot of precious class time, and a lot of the effort of researchers, is wasted simply in resolving trivial questions and doubts of students that should have readily available answers online.

What are the reasons for this? One, of course, is that a mathematician’s job (and probably any academic’s job) is a full-time one. Knowledge dissemination isn’t, in the main, part of the job, so mathematicians have little reason to put in effort for it beyond what is needed to prepare their classes. However, I think this misses the fact that the one-time investments needed to disseminate knowledge on a wider scale have long-lasting repercussions, because they improve the intelligence of the audience at very little additional effort or cost to the mathematician. When you write a book in mathematics, people can read the book while you’re sleeping, and gain from the knowledge. This doesn’t make you useless to them; it means instead that they start off interacting with you from a higher plane. Similarly, if we have more mechanisms for putting academic information and ideas in cyberspace, more people can access and learn from those ideas while we sleep. That means that more people get into the subject, and more people ask us questions that require actual thought, rather than question us about trivialities.

What I fear is that the importance of having reliable, quality information available in a way that a lot of people with different needs can use it is underestimated. True, we have public seminars and colloquia, and a lot of good work has been done, especially by MIT OpenCourseWare, but this, again, remains more the exception than the rule.

I’m also aware that a lot of work has been going on, recently, with the so-called Semantic Web, particularly in the biological sciences. Some good projects have been taken on by the Science Commons, the science branch of Creative Commons. Yet, I see something missing, and strange, in these endeavors. They declare standards and protocols for scientists to follow, suggesting ways to integrate large existing amounts of data. Not surprisingly, the main push for these initiatives is biology, and specifically genomics, where commercial interests are also strong. However, what we do not see that much is entrepreneurial bottom-up spirit.

What we do not see is individual scientists, educators, researchers, graduate students, undergraduate students, and high school students exploring a lot of new ways to make knowledge reach out to more people. The typical explanation offered, it seems, is lack of demand. I can say from my personal experience, with various small-scale initiatives I have tried and continue trying, that demand is low for bottom-up initiatives. Start a discussion on Harry Potter, or on some problem with Mozilla Thunderbird, and you’re likely to get a reasonable number of responses. Start a discussion on some obscure area of mathematics, and responses will be slow.

But I don’t think that the initial low demand is a reason to be greatly worried. It does mean that we need to be careful when porting methods and ideas from the worlds of commerce and fandom to academics, and that we need to blend and modify them to go well with the best traditions of academics. But potentially, there is a lot of demand for a good knowledge dissemination tool for academics. I think Wikipedia proves that point: sloppy source though it is, Wikipedia is used by a large number of students. And if you actually think about it, not that many person-hours have gone into the mathematics part of Wikipedia. If we could leverage the same ideas in other endeavors, it’d be great.

(Wikipedia has had challengers in the past, most of them poorly architected; there is a new and growing threat to Wikipedia called Citizendium. From what I’ve seen, the Citizendium is based on sound principles and is likely to soon be able to offer value that is endemically missing from Wikipedia. But our thirst for knowledge, information and ideas is too large to be quenched by either Wikipedia or Citizendium.)

This is what motivates me, in part, to work on the group theory wiki, topology wiki, commutative algebra wiki, and some other wikis that I am gradually developing. They’re based on a way of making basic mathematical knowledge available in a very structured and easily navigable way, suggesting new insights and ideas. These aren’t the only endeavors I’m experimenting with; there are others nascent in my mind that I’ll blog about when I’ve got enough to say on them. And I’m not sure if these endeavors, specifically, will catch on with the masses within mathematics. They’re not likely to get an exponentially increasing audience in the near future. My hope, rather, is that with a lot of people trying a lot of new things, we’ll be able to understand what dissemination tools work, and how.

March 8, 2008

Tryst with functional analysis

It’s the end of the ninth week of the eleven-week winter quarter, and the next two weeks are probably going to be fairly hectic: we have examinations or final homeworks to submit in all subjects, and I’m guessing that from tomorrow onwards, work on these will begin full-force. So I’m taking a little time off right now to describe my tryst with functional analysis so far.

During the first week or two of the functional analysis course taught by Professor Ryzhik, I was enjoying the course, more or less keeping pace with the material, and also reading ahead on some topics that I thought he might cover. However, from around the third week onwards, the nature of the topics covered changed somewhat and I started getting out of sync with the material. Then came an assignment with problems that I had no idea how to solve. Eventually, solutions to these problems were found in an expository paper by David (one of my batchmates), and the first-years worked out the details of the solution on the chalkboard.

At the time, I was feeling tired, so I didn’t try to keep pace with and understand all the details of these solutions. I wrote down a reasonable bit of them to muster a decent score on the assignment but I didn’t internalize the problem statements (I did have some ideas about the problems but not from the angle that Prof. Ryzhik was targeting).

So, in the next week’s problem set, I wasn’t able to solve any of the problems. This wasn’t because the problems were individually hard (though some of them were) but because even the easy problems needed a kind of tuning in that I hadn’t done. I learned the solutions from others and understood enough of them to submit my assignment, but they hadn’t sunk in. At the same time, I was handling a number of other things and I didn’t have a clear idea of how to proceed with studying analysis.

Some time during this uncomfortable period with the subject, I remembered that the previous quarter, I had overcome my discomfort with noncommutative algebra by writing Flavour of noncommutative algebra part 1 and Flavour of noncommutative algebra part 2. Noncommutative algebra differed from functional analysis: in the former, I was reasonably good at solving individual problems but just hadn’t had the time to look back and get the bigger picture. In functional analysis, I didn’t start off with a good problem-solving ability or an understanding of the bigger picture.

Nonetheless, I knew that trying to prepare a write-up on the subject was probably the best way of utilizing my energies, and probably a way that would also be useful to other students, which could partly be a way of contributing back, considering that I hadn’t solved any of the recent assignment problems. Moreover, it was something I knew I’d enjoy doing and hoped to learn a lot from. So I got started. The first attempt at preparing notes was just around the corner from the mid-term. I got a lot of help from Rita Jimenez Rolland (one of my batchmates), who explained various parts of the course to me as I typed them in. (Here’s the write-up.)

However, after the examination (where I didn’t do too well — notes are more useful if not prepared at the last minute), and as I learned more and more of the subject, I felt it would be good to restart the notes-making process. I brainstormed about what kind of write-up would be most useful. Instead of just trying to cover whatever had been done in the course, I tried to look at the problems from a more basic angle: what are the fundamental objects here? What are the things we’re fundamentally interested in? I also brainstormed with Mike Miller, who provided some more useful suggestions, and I got started with the write-up.

Preparing the analysis write-up hasn’t been plain sailing. The problem isn’t so much lack of time, as it is lack of richness of engagement. When I’m working on my group theory wiki or writing this blog entry, or doing something where I have a very rich and vivid idea of what’s going on, every part of my mind is engaged. There isn’t scope for distraction or going lax, because I’m engaging myself completely. However, when writing functional analysis notes, I faced the problem of my own ignorance and lack of depth and ideas in the subject. So, when I got stuck at something, I didn’t have enough alternate routes to keep myself engaged with the subject. The result? I kept distracting myself by checking email, catching up with other stuff, and what-not.

The contrast was most striking some time about a week ago. Through one hour of interrupted and not-very-focussed work on the functional analysis notes, I was getting somewhat frustrated. On a whim, I decided to switch to working on the group theory wiki. I did that, and was surprised to observe that for the next one hour, I didn’t check my email even once.

The complete concentration on the subject isn’t merely explained by the fact that I like group theory more, or am better at it. It is more the fact that I can see a bigger picture. Even if I’m concentrating on a couple of trees in the forest of group theory, I can see a much larger part of the forest. But when working on a couple of trees in functional analysis, all I can see is those and a bunch of other trees. So distractions find their way more easily.

I consider this illustrative because we often think of concentration as a kind of tool of willpower. True, the exertion of willpower is necessary to concentrate at some times (e.g. to pull myself back from the group theory wiki and back to functional analysis). But more fundamentally, I think it’s the intrinsic ability to see something as very rich and beautiful, and to keep oneself completely engaged, that matters. Do determination and hard work play a role? Yes, they do, but they do so because they help build that internal richness. Which explains why I love writing so much: in a number of areas, writing allows me the most room to explore the inner richness. And I think this is a factor in explaining why, although many different people work hard, not so many of them, at the end of their hard work, find the work enjoyable. That’s because most of us channel only a very small part of the tremendous hard work that we put in into creating an internal richness that can engage us better.

What about functional analysis and me? Do I see the richness in functional analysis yet? Not to the level that’d help me cope very effectively with the course, but yes, I do feel a lot better about the subject. And I think the new notes on function spaces, even though they may seem amateurish right now, do indicate some of the insight and richness that I have gathered over the past few weeks. Let’s hope I can augment these notes in the coming days to a level that really gets me prepared for the examination!

February 4, 2008

Quarterly progress

When I started this quarter, I had determined that it would be more enjoyable than the last: less paranoia about assignments, more fun in the learning process, and a cooler and calmer perspective on life. Things have been going fairly well in all respects.

Probably the first difference is that I’m much calmer about assignments, even when they don’t get done or are left right for the last minute. Providence also seems to have helped me; the assignments are (by and large) shorter, though there are some exceptions and I have to sometimes do a hasty last-minute job. But then, I had to do hasty jobs last quarter too; the difference was that assignments occupied much more mental space so that I couldn’t concentrate on doing the things that I liked.

One thing I’ve been experimenting with is wikiing while I work, and that means that as I’m learning stuff, I’m constantly thinking of how it can be organized on and integrated with the wikis that I’m working on. I’ve been augmenting the Commutative algebra wiki as I go along. This hasn’t been instant magic, because I don’t have the kind of feel for commutative algebra to immediately see how certain facts can be organized, but it means that I’m thinking of the subject in a way that’s not just limited to assignments. The wiki’s also becoming a useful, no-nonsense, reference point for me, and a convenient way to augment my memory and intelligence.

Differential topology is very interesting, and while studying it, I have to keep updating two wikis, the Topology Wiki and the Differential geometry wiki, which often cover similar material from slightly different perspectives. I had worked quite a bit on organizing the topology wiki over the winter, so every new thing I want to add seems to have a natural place, and it doesn’t seem far off that the wiki will start exhibiting the kind of beautiful self-organization that I’m seeing in the group theory wiki.

The fact that there’s a course on local analysis in finite groups keeps me very happy. Although I can’t devote too much time to group theory in the midst of all my compulsory courses, this course at least keeps me on track in the subject. It’s fascinating to see, in formal proof, all the things that I have picked up from textbooks and miscellaneous papers. I hope that I can really work on the Group Properties Wiki soon, though I keep augmenting it from time to time. It’s looking more and more beautiful.

I’ve also been discussing some ideas in group theory with Professor George Glauberman, the instructor for the course on finite groups. Again, I plan to pursue them more later on.

The functional analysis course isn’t going as well as I’d hoped, but I’m still having fun trying to follow the ideas. The problem for me is that the topics and direction are changing rapidly. Assignment-solving had a huge collective component last week (in other words, for many problems, I couldn’t figure out the solutions even after I wrote them up). But at least there are some things in the subject that I’m learning. I am trying to wiki things out on the measure theory wiki, but since I hadn’t set it up and structured it beforehand, what I add are just isolated articles, and no bigger picture is emerging.

There’s also a course by Professor Victor Ginzburg on semisimple groups and geometry that I’m attending. For the first time, I’m seeing proofs (though more along the lines of proof outlines) of statements about algebraic groups. This isn’t my primary focus area, but it is something I’d like to understand well, and Professor Ginzburg’s approach is interesting and his excitement is infectious. Unfortunately, I’m not getting to spend time on this outside class. It does remind me of some things I’ve played around with, like APS theory and the log category, and I hope that with a better understanding of semisimple groups I can come back to these and put more life into them.

How much I’ve learned this quarter remains to be seen, but I’m definitely enjoying it a lot.

Blog at WordPress.com.