What Is Research?

January 14, 2010

Math Overflow: further notes

Filed under: Culture and society of research,Thinking and research — vipulnaik @ 12:17 am
Tags:

I mentioned Math Overflow a while back (also see the general backgrounder on Math Overflow on this blog). At the time, I hadn’t joined Math Overflow or participated in it. I joined a week ago (January 6) and my user profile is here. Below are some of my observations.

Surprising similarity of questions with questions I’ve asked in the past

Looking over the group theory questions, I found that a number of questions I had asked — and taken a long time to get answers to — had been asked on Math Overflow, and had been answered within a few days. The answers given weren’t comprehensive; I added some more information based on my past investigations into the topics, but it was still remarkable that these questions, most of which aren’t well known to most people in the subject, were answered so quickly.

  • The question Are the inner automorphisms the only ones that extend to every overgroup? is a question that I first asked more than five years ago, when still an undergraduate. I struggled with the question and asked a number of group theorists, none of whom were aware of any past work on these problems. I later managed to solve the problem for finite groups, and then my adviser discovered, from Avinoam Mann, that the problem had been tackled in papers by Paul Schupp (1987) and Martin Pettet (1990), along with many of the generalizations that I had come up with (some variants of the problem seem to remain unsolved, and I am working on them). You can see my notes on the problem here and you can also see my blog post about the discovery here.

  • The question When is Aut G abelian? was a question that I had been idly curious about at one point in time. I couldn’t find the answer at the time I raised the question, but stumbled across the papers a few months later by chance (all well before Math Overflow). It’s interesting that the question was so quickly disposed of on Math Overflow. See also my notes on such groups here. (I sketch one easy observation about such groups just after this list.)

  • The question How can we formalize the naturality of certain characteristic subgroups? is a more philosophical question with no real concrete answers, which I’ve considered for a long time too.

  • The question Balancing problem in combinatorics that I had, based on a generalization of an Olympiad question I had seen, turns out to be part of something called rainbow Ramsey theory, as the answer suggests.
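
A quick observation related to the Aut-abelian question (my own remark, not taken from the Math Overflow thread): since $\mathrm{Inn}(G) \cong G/Z(G)$ embeds in $\mathrm{Aut}(G)$, if $\mathrm{Aut}(G)$ is abelian then $G/Z(G)$ is abelian, so $G$ must be nilpotent of class at most $2$. The substance of the question thus lies in pinning down which groups of class at most $2$ actually arise.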

Two things stand out: (i) all these questions have answers that are not well-known (the people I asked didn’t know them offhand) but are questions that many people do ask; (ii) on Math Overflow, they were dealt with quickly.

I think the situation is similar in many other established areas of mathematics — the answers are out there, but they are not well-known, probably because these problems are not “important” in the sense of being parts of bigger results. But they are questions that may naturally occur as minor curiosities to people studying the subject. These curiosities may either go unanswered or may get answered — but the answers do not spread to the level of becoming folk knowledge or standard textbook knowledge, because they aren’t foundational for any (as yet discovered) bigger results.

Math Overflow now provides a place to store such questions and their answers — thus, the next time you have one of these questions, a bit of searching on Math Overflow will turn up the question and the answers, stored for posterity. Apart from the questions I had thought of, consider this one that somebody thought up and that turned out to have been considered in multiple papers: When is A isomorphic to A^3?.

The situation is probably different for areas where new, cutting-edge questions are being asked — i.e., areas where the questions are charting “new” territory and are helping build the understanding of participants. Some people told me that this is likely to be the situation with areas such as topological quantum field theory or higher category theory.

Skills needed and developed

So what skills are needed to participate in Math Overflow, and what skills get developed? To answer questions quickly, a combination of good background knowledge of mathematics and the ability to search the arXiv, MathSciNet, JSTOR, Google Scholar, and other resources seems necessary. To ask questions, what seems to be needed is a combination of extensive background knowledge and a natural curiosity — the bent needed for research.

Pros and cons of posting on Math Overflow

The obvious pro seems to be that a lot of people read your question. The sheer number of readers compensates for the fact that most people, even experts in the area, may not immediately know the answer. Because of the large number of people, it is likely that at least a few will either have come across something similar, or will be able to locate something similar, or will be able to solve the question because they get curious about it. It was pretty exhilarating when a minor question that I didn’t feel equipped or curious enough to struggle with was answered by somebody who came up with a construction in a few hours (see collection of subsets closed under union and intersection), or when a question I had about elementary equivalence of symmetric groups was answered within a few days (see elementary equivalence of infinitary symmetric groups).

The potential con could be that people may be tempted to ask a question on Math Overflow without thinking too much about it. This probably does happen, but I don’t think it is a major problem. First, the general quality of participants is quite high, so even if people ask questions without thinking a lot about them, chances are there is something interesting and nontrivial about the question — because if there weren’t, participants of this profile would have been able to solve it even without a lot of thought. Further, even if a question is a good exercise for a person specializing in that subject — someone who should arguably struggle with it rather than ask others — it may be perfectly reasonable for a specialist in another subject to simply ask.

The voting (up/down) system, the practice of closing questions that are duplicates or otherwise unsuitable for Math Overflow (such as undergraduate homework problems), and a reputation system for users linked to the votes they receive together seem to be a good way of maintaining the quality of the questions.

A revision of some of my earlier thoughts

In a blog post almost a year ago titled On new modes of mathematical collaboration, I expressed some concern regarding the potential conflict between the community and activity needed to sustain frequently updated, regularly visited content, and the idea of a steady, time-independent knowledge base that could be used as a reference. Mathematics blogs, with regular postings and comments, and polymath projects, which involve collaborative mathematical problem-solving, are examples of the former. Mathematics references such as MathWorld, PlanetMath, the mathematics parts of Wikipedia, the Springer Encyclopedia of Mathematics, ncatlab, Tricki, and the Subject Wikis (my idea) are examples of the latter, to varying degrees and in different ways. The former generate more activity; the latter seem to have greater potential for timeless content.

Math Overflow has the features of both, which is what makes it interesting. It has a lot of activity — new questions posted daily, new answers posted to questions, and so on. The activity, combined with the points system for reputation, can be addictive. But at the same time, a good system of classification and search, along with a wide participatory net, makes it a useful reference. I’m inclined to think of its reference value as greater than what I thought of at first, largely because of the significant overlap in questions that different people have, as I anecdotally describe above.

Math Overflow stores both mathematical data — questions and their answers — and metadata: what kinds of questions people like to ask, what kinds of answers are considered good enough for open-ended questions, how quickly people arrive at answers, which kinds of questions are popular, how mathematicians think about problems, and so on. This metadata has its own value, and the reason Math Overflow is able to successfully collate such metadata is that it has managed to attract high-quality participants as well as a number of participants who contribute very regularly. Both the data and the metadata could be useful to future researchers, teachers, and people designing resources, whether of the community-participation type or the reference type.

On the other hand…

On the other hand, Math Overflow is not the answer to all problems, particularly the ones for which it was not built. Currently, answers to questions on Math Overflow are broadly of the following three types (or mixtures thereof): (i) an actual solution or outline of a solution, when it is either a short solution arrived at by the person posting or an otherwise well-known short answer; (ii) a link to one or more pages in a short online reference (such as an entry on Wikipedia or any of the other reference resources mentioned above); (iii) a link or reference to papers that address that or similar questions.

For some questions, the links go to blog posts or other Math Overflow discussions, which can be thought of as somewhere in between (ii) and (iii).

With (i), the answer is clearly and directly presented; the potential downside is that the short answer may not provide a general framework or context of related results and terminology. With (ii), a little hunting and reconstruction may be needed to answer the question as originally posed, but the reference resource (if a nice one) gives a lot of related useful information. (iii) alone (i.e., without being supplemented by (i) or (ii)) is, in some sense, the “worst”, in that reading a paper (particularly an original research paper in a journal) may take a lot of investment for a simple question.

In my ideal world, the answer would be either (i) + (ii), or (ii) (with one of the links in (ii) directly answering the question), plus (iii) for additional reference and in-depth reading. But there is a general paucity of the kind of in-depth material in online reference resources that would let the answers to typical Math Overflow questions be adequately dealt with by pointing to online references. So I do think that an improvement in online reference resources could complement Math Overflow by providing more linkable and quickly readable material in answer to the kinds of questions asked.

August 9, 2009

Collaborative mathematics, etc.

Filed under: polymath — vipulnaik @ 12:51 am

UPDATE: See the polymath project backgrounder for the latest information.

It’s been some time since I last wrote about the polymath project (see this, this for past coverage), and an even longer time since I wrote an extremely lengthy blog post about Michael Nielsen’s ideas about collaborative science.

The first polymath project, polymath1, was about the density Hales-Jewett theorem. It was declared a success, since the original problem was solved within about a month, though the write-up of the paper is still in progress. The problem for the project was proposed by Timothy Gowers.

Terry Tao (WordPress blog) has now started a polymath blog discussing possible open problems for the polymath project, strategies for how to organize the problem-selection and problem-solving process, and other issues related to writing up final solutions and sharing of credit.

In this blog post, Jon Udell reflects on how the introduction of LaTeX typesetting into WordPress was a positive factor in getting talented mathematicians like Terence Tao and Timothy Gowers into the blogosphere, leading to innovative projects such as the polymath project. Udell notes that introducing existing typesetting solutions into new contexts such as Internet blogging software can have profound positive effects.

March 24, 2009

Concluding notes on the polymath project — and a challenge

Filed under: polymath,Wikis — vipulnaik @ 4:27 pm

In this previous blog post, I gave a quick summary of the polymath project, as of February 20, 2009. The project, which began around February 2, 2009, has now been declared successful by Gowers. While the original aim was to find a new proof of a special case of an already proved theorem, the project seems to have managed to find a new proof of the general case. There’s still discussion on how to clean up, prepare the draft of the final paper, and wrap up various loose ends.

In a subsequent blog post, Gowers gave his own summary of the project as well as what he thinks about the future potential of open collaborative mathematics. Michael Nielsen, who hosted the Polymath1 wiki where much of the collaboration occurred, also weighed in with this blog post.

In Gowers’ assessment, the project didn’t have the kind of massive participation he had hoped for. People like Gowers and Terence Tao participated quite a bit, and a number of other people made important contributions (my own estimate, based on the comment threads, is around eight or nine, with an additional three or four who made a few crucial comments but did not participate extensively). But it still wasn’t “massive” in the way Gowers had envisaged. Nielsen felt that, for a project just taking off, it did pretty well. He compared it to the early days of Wikipedia and the early days of Linux, and argued that the polymath project did pretty well compared to these other two projects, even though those projects probably had much broader appeal.

Good start, but does it scale?

Before the polymath project began (or rather, before I became aware of it), I wrote this blog post, where my main point was that while forums, blogs, and “activity” sound very appealing, the greater value creation lies in having reliable online reference material that people can go to.

Does that make me a critic of polymath projects?

Well, yes and no. I had little idea at the time (just like everybody else) about whether the particular polymath project started by Gowers would be a success. Moreover, because Ramsey theory is pretty far from the kind of math I have a strong feel for, I had no idea how hard the problem would be. Nonetheless, a solution within a month for any nontrivial problem does seem very impressive. More important than the success of the project, what Gowers and the many others working on it should be congratulated for is the willingness to invest a huge amount of time in this somewhat new approach to doing math. Only through experimentation with new approaches can we get a feel for whether they work, and Gowers has possibly kickstarted a new mode of collaboration.

The “no” part, though, comes from my strong suspicion that this kind of thing doesn’t scale.

February 23, 2009

Doing it oneself versus spoonfeeding

In previous posts titled knowledge matters and intuition in research, I argued that building good intuition and skill for research requires a strong knowledge and experience base. In this post, I’m going to talk about a related theme, which is also one of my pet themes: my rant against the misconception that doing things entirely on one’s own is important for success.

The belief that I’m attacking

It is believed in certain circles, particularly among academics, that doing things by oneself, working out details on one’s own, rather than looking them up or asking others, is a necessary step towards developing proper understanding and skills.

One guise that this belief takes is a slew of learning paradigms that go under names such as “experiential learning”, “inquiry-based learning”, “exploratory learning”, and the like. Of course, each of these learning paradigms is complex, and the paradigms differ from each other. Further, each paradigm is implemented in a variety of different ways. My limited experience with these paradigms indicates that there is a core belief common to them (I may be wrong here): that it is important for people to do things by themselves rather than have these things told to them by others. An extreme believer of this kind may regard with disdain the very idea of simply following or reading what others have done, while a more moderate and mainstream stance might be that working things out for oneself, rather than following what others have done, is generally preferable, and that following others is an imperfect substitute we nonetheless often need to accept because of constraints of time.

Another closely related theme is that exploratory and inquiry-based methods focus more on skills and approaches than on knowledge. This might be related to the general view of knowledge as something inferior to, or less important than, skill, attitude, and approach — which is why, in certain circles, the person who merely “knows a lot” is considered inferior to the person who “is smart” and “thinks sharply”. This page, for instance, talks about how inquiry-based learning differs from the traditional knowledge-based approach to learning because it focuses more on “information-processing skills” and “problem-solving skills”. (Note: I discovered the page via a Google search a few months back, and am not certain how mainstream its descriptions are.) (Also note: I discuss other sides of this issue later in the post.)

Closely related to the themes of exploration and skills-more-than-knowledge is the theme of minimal guidance. In this view, guidance from others should be minimal, and students should discover things their own way. There are many who argue both for and against such positions. For instance, a paper by Kirschner, Sweller, and Clark, which I discovered via Wikipedia, argues that minimally guided instruction does not work. Nonetheless, there seems to be a general treatment of exploration, self-discovery, and skills-over-knowledge as “feel-good” things.

Partial truth to the importance of exploration

As an in-the-wings researcher (I am currently pursuing a doctoral degree in mathematics), I definitely understand the importance of exploration. I have personally done a lot of exploration, much of it to fill minor knowledge gaps or raise interesting but not-too-deep questions. And some of my exploration has led to interesting and difficult questions. For instance, I came up with a notion of extensible automorphism for groups and made a conjecture that every extensible automorphism is inner. The original motivation behind the conjecture was a direction of exploration that turned out to have little to do with the partial resolution that I have achieved on the problem. (With ideas and guidance from many others, including Isaacs, Ramanan, Alperin, and Glauberman, I’ve proved that for finite groups, any finite-extensible automorphism is class-preserving, and any extensible automorphism sends subgroups to conjugate subgroups.) And I’ve also had ideas that have led to other questions (most of which were easy to solve, while some are still unsolved) and others that have led to structures that might just be of use.
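
For readers who haven’t seen the term, here is the notion in brief (a sketch; my linked notes have the precise statements): an automorphism $\sigma$ of a group $G$ is extensible if for every group $H$ containing $G$ as a subgroup, there is an automorphism $\sigma'$ of $H$ whose restriction to $G$ is $\sigma$. Every inner automorphism is extensible — if $\sigma$ is conjugation by $g \in G$, then conjugation by $g$ inside any overgroup $H$ restricts to $\sigma$ on $G$ — and the conjecture asserts the converse: every extensible automorphism is inner.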

In other words, I’m no stranger to exploration in a mathematical context. Nor is my exploratory attitude restricted to group theory. I take a strongly exploratory attitude to many of the things I learn, including things that are probably of little research relevance to me. Nor am I unique in this respect. Most successful researchers and learners that I’ve had the opportunity to interact with are seasoned explorers. While different people have different exploration styles, there are few who resist the very idea of exploration. Frankly, there would be little research or innovation (whether academic or commercial) if people didn’t have an exploratory mindset.

So I’m all for encouraging exploration. So what am I really against? The idea that, in general, people are better off trying to figure things out for themselves rather than refer to existing solutions or existing approaches. Most of the exploration that I’ve talked about here isn’t exploration undertaken because of ignorance of existing methods — it is exploration that builds upon a fairly comprehensive knowledge and understanding of existing approaches. What I’m questioning is the wisdom of the idea that by forcing people to work out and explore solutions to basic problems while depriving them of existing resources that solve those problems, we can impart problem-solving and information-processing skills that would otherwise be hard to come by.

Another partial truth: when deprivation helps

Depriving people of key bits of knowledge can help in certain cases. These are situations where certain mental connections need to be formed, and these connections are best formed when the person works through the problem himself or herself, and makes the key connection. In these cases, simply being told the connection may not provide enough shock value, insight value, richness or depth for the connection to be made firmly.

The typical example is the insight puzzle. By insight puzzle, I mean a puzzle whose solution relies on a novel way of interpreting something that already exists. Here, simply telling the learner to “think out of the box” doesn’t help the learner solve the insight puzzle. However, if a situation where a similar insight is used is presented shortly before administering the puzzle, the learner has a high chance of solving the puzzle.

The research on insight puzzles reveals, however, that in order to maximize the chances of the learner getting it, the similar insight should be presented in a way that forces the learner to have the insight by himself/herself. In other words, the learner should be forced to “think through” the matter before seeing the problem. The classic example of this is a puzzle that involves a second use of the word “marry” — a clergyman or priest marrying a couple. One group of people were presented, before the puzzle, with a passage that involved a clergyman marrying couples. Very few people in this group got the solution. Another group of people were presented with a similar passage, except that this passage changed the order of sentences so that the reader had to pause to confront the two meanings of “marry”. People in this second group did better on the puzzle because they had been made to reflect on the double meaning.

There are a couple of points I’d like to note here. First, it is true that depriving people of some key ingredients forces them to reflect and helps form better mental connections. But equally important is that they be presented with enough of the other ingredients that the insight represents a small and feasible step. Second, such careful stimulation requires a lot of art, thought, and setup, and is a far cry from setting people “free to explore”.

When to think and when to look

Learners generally need to make a trade-off between “looking up” answers and “thinking about them”. How this trade-off is made depends on a number of factors, including the quality of insight that the looked-up answer provides, the quality of insight that learners derive from thinking about problems, the time at the learner’s disposal, the learner’s ultimate goals, and many others. In my experience, seasoned learners of a topic are best able to make these trade-offs themselves and determine when to look and when to struggle. Thus, even if deprivation is helpful, external deprivation (in the sense of not providing information about places where they can look up answers) does not usually make sense. There are two broad exceptions.

The first is for novice learners. Novice learners, when they see a new problem, rarely understand enough about their own level of knowledge to know how long they should try the problem, what kind of place to look it up in (if any), and what the relative advantages of each approach are. By “novice learner” I do not mean to suggest a general description of a person. Everybody is a novice learner in a topic they pick up for the first time. It is true that some people are better in general as learners in certain broad areas — for instance, I’d be a better learner of mathematical subjects than most people, including mathematical subjects I have never dealt with. However, beyond a slight head start, everybody goes through the “novice learner” phase for a new field.

For novice learners, helpful hints on what things they should try themselves, how long they should try those things, and how to judge and build intuition are important. As such, I think these hints need to be of much better quality than they typically are. The hint to a learner should help the learner get an idea of the difficulty level of the problem, the importance of “knowing” the solution at the end, the relative importance of reflecting upon and understanding the problem, and whether there are some insights that can only be obtained by working through the problem (or, conversely, insights that can only be obtained by looking at the solution). Here, the role of the problem-provider (who may be an instructor, coach, or a passive agent such as a textbook, monograph, or video lecture series) is to provide input that helps the learner decide, rather than to take the decision-making reins.

A second powerful argument applies to learners whose personality and circumstances require “external discipline” and “external motivation”. The argument here is essentially a “time inconsistency” argument — the learner would ideally like to work through the problem himself or herself, but when it comes to actually doing the problem, the learner feels lazy and may succumb to simply looking up the solution somewhere. (“Time inconsistency” is a technical term used in decision theory and behavioral economics.) Forcing learners to actually do the problems by themselves, and disciplining them by not providing easy access to solutions, helps them meet their long-term goals and overcome their short-term laziness.

I’m not sure how powerful the time inconsistency argument is. Prima facie evidence for it seems abundant, particularly in schools and colleges, where students often choose to take heavy courseloads and somehow wade through a huge pile of homework, and yet rarely do extra work voluntarily on a smaller scale (such as starred homework problems, or challenging exercises) even when the load on them is low. This fits the theory that, in the long haul, these students want to push themselves, but in the short run, they are lazy.

I think the biggest argument against the time inconsistency justification for depriving people of solutions is that the clearest cases of success (again, in my experience) are people who are not time inconsistent. The best explorers are people who explore regardless of whether they’re forced to, and who, when presented with a new topic, try to develop a sufficiently strong grasp that they can make their own decisions about how to balance looking up with trying on their own.

Yet another argument is that laziness works against all kinds of work, including the work of reading and following existing solutions. In general, what laziness does is make people avoid learning things if it takes too much effort. Students who decide not to solve a particular problem by themselves often also don’t “look up” the solution. Thus, on net, they never learn the solution. So even in cases where trying a problem by oneself is superior to looking it up, looking it up may still be superior to the third alternative: never learning the solution.

A more careful look at what can be done

It seems to me that providing people information that helps them decide which problems to work on and how long to try before looking up is good in practically all circumstances. It’s even better if people are provided tools that help them reflect on and consolidate insights from existing problems, and if these insights are strengthened through cross-referencing from later problems. Since not every teaching resource does this, and since exploration at the cutting edge by definition ventures into unknown and poorly understood material, it is also important to teach learners the subject-specific skills that help them make these decisions better.

Of course, the specifics vary from subject to subject, and there is no good general-purpose learner for everything. But simply making learners and teachers aware of the importance of such skills may have a positive impact on how quickly learners pick up such skills.

Another look at exploratory learning

In the beginning, I talked about what seems to be a core premise of exploratory learning — that learners do things best when they explore by themselves. Strictly speaking, this isn’t treated as a canonical rule by pioneers of exploratory learning. In fact, I suspect that the successful executions of exploratory learning succeed precisely because they identify the things where learners investing their time through exploration yields the most benefit.

For instance, the implementation of inquiry-based learning (IBL) in some undergraduate math classes at the University of Chicago results in a far-from-laissez-faire attitude towards students exploring things. The IBL courses seem, in fact, to be a lot more structured and rigid than non-IBL courses. Students are given a sheet of the theorems, axioms, and definitions of the course, and they need to prove all the theorems. This does fit in partly with the “deprivation” idea — students have to prove the theorems by themselves, even though proofs already exist. On the other hand, it is far from letting students explore freely.

It seems to me that while IBL as implemented in this fashion may be very successful in getting people to understand and critique the nature and structure of mathematical proofs, it is unlikely to offer significant advantages in terms of the ability to do novel exploration. That’s because, as my experience suggests, creative and new exploration usually requires immersion in a huge amount of knowledge, and this particular implementation of IBL trades off a lot of knowledge for a more thorough understanding of less knowledge.

Spoonfeeding, ego, and confidence issues

Yet another argument for letting people solve problems by themselves is that it boosts their “confidence” in the subject, making them more emotionally inclined to learn. On the other hand, spoonfeeding them solutions makes them feel like dumb creatures being force-fed.

In this view, telling solutions to people deprives them of the “pleasure” of working through problems by themselves, a permanent deprivation.

I think there may be some truth to this view, but it is very limited. First, the total number of problems to try is so huge that depriving people of the “pleasure” of figuring out a few for themselves has practically no effect on the number of problems they can try. Of course, part of the challenge is to make this huge stream of problems readily available to people who want to try them, without overwhelming them. Second, the “anti-spoonfeeding” argument turns an issue of acquiring subject-matter skills into an issue of pleasing learners emotionally.

Most importantly, though, it goes against the grain of teaching people humility. Part of being a good learner is being a humble learner, and part of that involves being able to read and follow what others have done, and to realize that most of that is stuff one couldn’t have done oneself, or that would have taken a long time to do oneself. Such humility is accompanied by pride in the fact that one’s knowledge is built on the efforts of the many who came before. To use a quote attributed to Newton: “If I have seen further, it is by standing on the shoulders of giants.”

Of course, a learner cannot acquire such humility if he or she never attempts to solve a problem alone, but neither can a learner acquire it if he or she only ever tries to solve problems rather than asking others or using references to learn solutions. It’s good for learners to try a lot of simpler problems that they can do, and thus boost confidence in their learning, but it is also important that, for hard problems, learners absorb the solutions of others and make them their own.

February 20, 2009

A quick review of the polymath project

Filed under: polymath — vipulnaik @ 12:15 am

In an earlier blog post on new modes of mathematical collaboration, I offered my critical views on Michael Nielsen’s ideas about making mathematics more collaborative using the Internet. Around that time, Timothy Gowers, a prominent mathematician, was inspired by Michael Nielsen’s post to muse, in this blog post, about whether massively collaborative mathematics is possible. The post was later critiqued by Michael Nielsen.

Since then, Gowers decided to actually experiment with solving a problem using collaborative methods. The project is called the “polymath” project. “Polymath” means a person with extensive knowledge of a wide range of subjects. Gowers was arguably punning on the word, with the idea being that when many people do math together, it is like a “polymath”.

Gowers, who is more of a problem-solver than a theory-builder, naturally chose solving a problem as the testing ground for collaborative mathematics. Further, he chose a combinatorial problem (the density Hales-Jewett theorem) that had already been solved, albeit by methods that were not directly combinatorial, and defined his goal as trying to get to a combinatorial solution of the problem. Gowers wrote a background post about the problem and a post about the procedure, where he incorporated feedback from Michael Nielsen and others. These rules stipulated, among other things, that those participating in the collaborative project must not think too much about the problem, or do any technical calculations, away from the computer. Rather, they should share their insights. The idea was to see whether sharing and pooling insights led to discovery faster than working alone. I may have misunderstood Gowers’ words, so I’ll quote them here:

If you are convinced that you could answer a question but that it would just need a couple of weeks to go away and try a few things out, then still resist the temptation to do that. Instead, explain briefly, but as precisely as you can, why you think it is feasible to answer the question and see if the collective approach gets to the answer more quickly. (The hope is that every big idea can be broken down into a sequence of small ideas. The job of any individual collaborator is to have these small ideas until the big idea becomes obvious — and therefore just a small addition to what has gone before.) Only go off on your own if there is a general consensus that that is what you should do.

In the next post, Gowers listed his ideas broken down into thirty-eight points. He also clarified the circumstances under which the project could be declared finished. In Gowers’ words:

It is not the case that the aim of the project is to find a combinatorial proof of the density Hales-Jewett theorem when k=3. I would love it if that was the result, but the actual aim is more modest: it is either to prove that a certain approach to that theorem (which I shall soon explain) works, or to give a very convincing argument that that approach cannot work. (I shall have a few remarks later about what such a convincing argument might conceivably look like.)

In the next post, Gowers explained the rationale for selecting this particular problem. He explained that, first, he wanted to select a serious problem, the kind whose solution would be considered important by researchers in the field. Second, he didn’t want to select a problem that was parallelizable in an obvious, natural sense — rather, he believed that the solution to every problem parallelizes at some stage, and that working out how this parallelization should occur could itself be part of the collaborative process.

By this time, Gowers’ blog was receiving hundreds of comments, mostly comments by Gowers himself, but also including comments from distinguished mathematicians such as Terence Tao. Tao has his own blog, and he published a post giving the background of the Hales-Jewett theorem and a later post with some of his own ideas about the problem.

A few days later, Gowers announced at the end of this post that there was now a wiki for the enterprise of finding a combinatorial proof of the density Hales-Jewett theorem. In the same post, Gowers also summarized all the proof strategies that had come up in the comments. Since then, there have been no more blog posts about the problem.

A look at the wiki

It is still too early to know the eventual shape that the Polymath1 wiki will take. One thing that is conspicuous by its absence is a copyright notice. This could create problems, particularly considering that this is a collaboratively edited website aimed at solving a problem.

There are some other things that I think need to be decided.

  1. Is the wiki intended only to provide leads or reference points to ideas elaborated elsewhere, or is it intended to provide the structure, substance, and background material as well? If the former, then the wiki can be designed in a problem-centric fashion. However, if the wiki is designed this way (i.e., only to provide leads), its comprehensibility to outsiders is going to be poor. Moreover, the “cross-fertilization” of ideas with other problems is going to be minimal if the organization is centered completely around the density Hales-Jewett theorem. On the other hand, if the wiki provides too much background information, it would be better to organize it according to the background information, which would make it lose its problem-specific focus. I think there is a trade-off here.

  2. Style of pages: The pages currently have a very conversational style. This may be because the pages are, at the moment, adaptations of material put up in blog posts and blog comments. But this conversational style makes it hard to use the pages as a handy reference or lookup point.

  3. Classifying page types: There needs to be some sort of separation between definition pages, pages about known theorems, and pages recording conjectures, speculation, and informal thoughts. As of now, no such separation or classification is available.

  4. Interfacing with other reference sources: If (and this goes back to the first point) it is decided that the wiki will not provide too much background information and will focus on a style suited to the problem focus, then some decisions will need to be made on how to link up to outside reference sources.

  5. Linking mechanisms between pages: A person who reads about one idea, definition, theorem, or conjecture, should have a way of knowing what else is most closely related to that. Robust linking mechanisms need to be decided for this.

To give an illustration of this, consider the current page on Line (permalink to current version). This page introduces definitions for three kinds of “lines” talked about in combinatorics — combinatorial lines, algebraic lines, and geometric lines. (For concreteness, the standard definition of a combinatorial line is sketched just after the list below.) Some of the things I’d recommend for this page are:

  • Create separate pages for combinatorial line, algebraic line, geometric line.

  • In each page, create a “definition” section with a precise definition, and perhaps an “examples” section with examples, as well as links to the other two pages, explaining the differences.

  • For the page on combinatorial line, link to generalizations such as combinatorial subspace.

  • For the page on combinatorial line, provide a reverse link to pages that use this concept, or link to expository articles/blog entries that explain how and why the concept of combinatorial line is important.
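
For concreteness, here is the standard definition of a combinatorial line, sketched from memory (the wiki’s own Line page is the authoritative version for the project): writing $[k] = \{1, \ldots, k\}$, a combinatorial line in $[k]^n$ is specified by a template in $([k] \cup \{*\})^n$ with at least one $*$; the line consists of the $k$ points obtained by substituting each value $x \in [k]$ simultaneously for every $*$. For example, with $k = 3$ and $n = 3$, the template $1{*}{*}$ gives the line $\{111, 122, 133\}$.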

Here are some suggestions on the theorem pages.

  • Create a separate section in the theorem page giving a precise statement of the theorem.

  • For each theorem, have sections of the page devoted to listing/discussing stronger and weaker theorems, generalizations, and special cases. For instance, the coloring Hales-Jewett theorem is “weaker” than both the density Hales-Jewett theorem and the Graham-Rothschild theorem (sketches of the two Hales-Jewett statements follow this list).
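
For reference, here are sketches of the two statements as I understand them (standard formulations, not taken from the wiki): the coloring Hales-Jewett theorem says that for every $k$ and $c$ there is an $n$ such that every $c$-coloring of $[k]^n$ contains a monochromatic combinatorial line; the density Hales-Jewett theorem says that for every $k$ and every $\delta > 0$ there is an $n$ such that every subset $A \subseteq [k]^n$ with $|A| \geq \delta k^n$ contains a combinatorial line. The density version implies the coloring version, since in any $c$-coloring some color class has density at least $1/c$.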

Another suggestion I’d have would be to use the power of tools such as Semantic MediaWiki to store the relationships between theorems in useful ways.
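
To sketch what I have in mind (the property name below is hypothetical, not something the wiki has adopted): Semantic MediaWiki lets a page annotate an ordinary link with a typed property, and such properties can then be queried from any other page. Something like:

    <!-- on the coloring Hales-Jewett page: record the relationship -->
    This result is [[weaker than::Density Hales-Jewett theorem]].

    <!-- on any survey page: list everything recorded as weaker -->
    {{#ask: [[weaker than::Density Hales-Jewett theorem]] | format=ul }}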

I’ll post more comments as things progress.

February 13, 2009

Knowledge matters

It is fashionable in certain circles to argue that, particularly for subjects such as mathematics that have a strong logical and deductive component, it is not how much you know that counts but how you think. According to this view, cramming huge amounts of knowledge is counter-productive. Instead, mastery is achieved by learning generic methods of reasoning to deal with a variety of situations.

There are a number of ways in which this view (though considered enlightened by some) is just plain wrong. At a very basic level, it is useful for countering the (even more common) tendency to believe that in reasoning problems it is sufficient to “memorize” basic cases. However, at a more advanced level, it can get in the way of developing the knowledge and skills needed to achieve mastery.

My first encounters with this belief

During high school, starting mainly in class 11, I started working intensively on preparing for the mathematics Olympiads. Through websites and indirect contacts (some friends, some friends of my parents) I collected a reasonable starting list of books to use. However, there was no systematic preparation route for me to take, and I largely had to find my own way through.

The approach I followed here was practice — lots and lots of problems. But the purpose here wasn’t just practice — it was also to learn the common facts and ideas that could be applied to new problems. Thus, a large part of my time also went to reviewing and reflecting upon problems I had already solved, trying to find common patterns, and seeing whether the same ideas could be expressed in greater generality. Rather than being too worried about performing in an actual examination situation, I tried to build a strong base of knowledge, in terms of facts as well as heuristics.

In addition, I spent a lot of time reading the theoretical parts of number theory, combinatorics, and geometry. The idea here was to develop the fact base as well as vocabulary so that I could identify and “label” phenomena that I saw in specific Olympiad problems.

(For those curious about the end result, I got selected to the International Mathematical Olympiad team from India in 2003 and 2004, and won Silver Medals both years.)

At no stage during my preparation did I feel that I had become “smarter” in the sense of having better methods of general reasoning or approaching problems in the abstract. Rather, my improvements were very narrow and domain-specific. After thinking, reading, and practicing a lot of geometry, I became proportionately faster at solving geometry problems, but improved very little with combinatorics.

Knowledge versus general skill

Recently, I had a chance to re-read Geoff Colvin’s interesting book Talent Is Overrated. This book explains how the notion of “native talent” is largely a myth, and how the secret to success is something that Colvin calls “deliberate practice”. Among the things that experts do differently, Colvin identifies looking ahead (for instance, fast typists usually look ahead in the document to know what they’ll have to type a little later), identifying subtle and indirect cues (here Colvin gives examples of expert tennis players using the body movements of the person serving to estimate the speed and direction of the ball), and, among other things, having a large base of knowledge and long-term memory that can be used to size up a situation.

Colvin describes how mathematicians and computer scientists had initially hoped for general-purpose problem solvers — programs that knew little about the rules of a particular problem but would find solutions using the general rules of logic and inference. These attempts largely failed. For instance, Deep Blue, IBM’s chess-playing computer, was defeated by then world champion Garry Kasparov in their 1996 match, despite Deep Blue’s ability to evaluate around a hundred million positions every second. What Deep Blue lacked, according to Colvin, was the kind of domain-specific knowledge of what works and where to start looking that Kasparov had acquired through years of stored knowledge and memory about games that he had played and analyzed.

A large base of knowledge is also useful because it provides long-term memory that can be tapped to complement working memory in high-stress situations. For instance, a mathematician trying to prove a complicated theorem that involves huge expressions may be able to rely on other similar expressions that he/she has worked with before to “store” the complexity of this expression in a simpler form. Similarly, a chess player may be able to use past games as a way of storing a shorter mental description of the current game situation.

A similar idea is discussed in Gary Klein’s book Sources of Power, where he describes a Recognition-Primed Decision model (RPD model) used by people in high-stress, high-stakes situations. Klein says that expert firefighters look at a situation, identify key characteristics, and immediately fit it into a template that tells them what is happening and how to act next. This template need not precisely match a single specific past situation. Rather, it involves features from several past situations, mixed and matched according to the present situation. Klein also gives examples of NICU nurses, in charge of taking care of babies with serious illnesses. The more experienced and expert of these nurses draw on their vast store of knowledge to identify and put together several subtle cues to get a comprehensive picture.

Knowledge versus gestalt

In Group Genius: The Creative Power of Collaboration, Keith Sawyer talks about how people solve insight problems. Sawyer talks about gestalt psychologists, who believed that for “insight” problems — the kind that require a sudden leap of insight — people needed to get beyond the confines of pre-existing knowledge and think fresh, out of the box. The problem with this, Sawyer says, is that study after study showed that simply telling people to think out of the box, or to think differently, rarely yielded results. Rather, it was important to give people specific hints about how to think out of the box. Even those hints needed to be given in such a way that people would themselves make the leap of recognition, thus modifying their internal mental models.

I recently had the opportunity to read an article, Understanding and teaching the nature of mathematical thinking, by Alan Schoenfeld, published in Proceedings of the UCSMP International Conference on Mathematics Education, 1985 (pages 362-379). Schoenfeld talks about how a large knowledge base is crucial to being effective at solving problems. He refers to research by Simon (Problem Solving and Education, 1980) showing that domain experts have a vocabulary of approximately 50,000 “chunks” — small word combinations that denote domain-specific concepts. Schoenfeld then goes on to discuss research by Brown and Burton (Diagnostic models for procedural bugs in basic mathematical skills, Cognitive Science 2, 1978) showing that people who make mistakes with arithmetic (addition and subtraction) don’t just make mistakes because they don’t understand the correct rules well enough — they make mistakes because they “know” something wrong. Their algorithms are buggy in a consistent way. This is similar to the fact that people are unable to solve insight problems not because they’re refusing to think “outside the box”, but because they do not know the correct algorithms for doing so.
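
As a toy illustration of “buggy in a consistent way” (my own sketch, not from Brown and Burton’s paper, though the “smaller-from-larger” bug is one of the bugs they catalogued): a student who, in each column, always subtracts the smaller digit from the larger one and never borrows produces wrong answers that are perfectly systematic.

    # A sketch of the classic "smaller-from-larger" subtraction bug:
    # in each column, subtract the smaller digit from the larger one,
    # regardless of which number is on top, so borrowing never happens.
    def buggy_subtract(top: int, bottom: int) -> int:
        assert top >= bottom >= 0, "sketch assumes top >= bottom >= 0"
        top_digits = [int(d) for d in str(top)]
        # pad the bottom number with leading zeros to align the columns
        bottom_digits = [int(d) for d in str(bottom).zfill(len(top_digits))]
        # the bug: take the absolute difference in every column
        columns = [abs(t - b) for t, b in zip(top_digits, bottom_digits)]
        return int("".join(str(d) for d in columns))

    print(buggy_subtract(542, 389))  # prints 247; the correct answer is 153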

Schoenfeld then goes on to describe the experiences of people such as himself in implementing George Polya’s problem-solving strategies. Polya enumerated several generic problem-solving strategies in his books How to Solve It, Mathematical Discovery, and Mathematics and Plausible Reasoning. Polya’s heuristics included: exploiting analogies, introducing and exploring auxiliary elements in a problem solution, arguing by contradiction, working forwards, decomposing and recombining, examining special cases, exploiting related problems, drawing figures, and working backward. But teaching these “strategies” in classrooms rarely resulted in an across-the-board improvement in students’ problem-solving abilities.

Schoenfeld argues that the reason these strategies failed was that they were “underspecified” — just knowing that one should “introduce and explore auxiliary elements”, for instance, is of little help unless one knows how to come up with auxiliary elements in a particular situation. In Euclidean geometry, this may mean extending lines far enough that they meet, dropping perpendiculars, or other methods. In problems involving topology, this may involve constructing open covers that have certain properties. Understanding the general strategy helps a bit, in the sense of putting one on the lookout for auxiliary elements, but it does not provide the skill necessary to locate the correct auxiliary element. Such skill can be acquired only through experience, through deliberate practice, through the creation of a large knowledge base.

In daily life

It is unfortunately true that much of coursework in school and college is based on a learn-test-forget model — students learn something, it is tested, and then they forget it. Insufficient introspection and infrequent reuse of ideas learned in the past lead students to forget what they learned quickly. Thus, the knowledge base gets eroded almost as fast as it gets built.

It is important not just to build a knowledge base but to have time to reflect upon what has been built, and to strengthen what was built earlier by referencing it and building upon it. Also, students and researchers who want to become sharper thinkers in the long term need to understand the importance of remembering what they learn, putting it in a more effective framework, and making it easier to recall at times when it is useful. I see a lot of people who like to solve problems but then make no effort to consolidate their gains by remembering the solution or storing the key ideas in long-term memory in a way that can be tapped later. I believe that this is a waste of the effort that went into solving the problem.

(See also my post on intuition in research).

February 2, 2009

On new modes of mathematical collaboration

(This blog post builds upon some of the observations I made in an earlier blog post on Google, Wikipedia and the blogosphere, but unlike that post, has a more substantive part dedicated to analysis. It also builds on the previous post, Can the Internet destroy the University?.)

I recently came across Michael Nielsen’s website. Michael Nielsen was a quantum computation researcher — he’s the co-author of Quantum computation and quantum information (ISBN 978-0521632355). Now, Nielsen is working on a book called The Future of Science, which discusses how online collaboration is changing the way scientists solve problems. Here’s Nielsen’s blog post describing the main themes of the book.

Journals — boon to bane?

Here is a quick simplification of Nielsen’s account. In the 17th century, scientists such as Newton and Galileo did not publish their discoveries immediately. Rather, they sent anagrams of these discoveries to friends, and continued to work on their discoveries in secret. Their main fear was that if they widely circulated their idea, other scientists would steal it and take full credit for it. By keeping the idea secret, they could develop it further and release it in a riper form. In the meantime, the anagram could be used to prove priority in case somebody else also came up with the idea.

Nielsen argues that the introduction of journals, combined with public funding of science and the recognition of journal publications as a measure of academic achievement, led scientists to publish their work and thus divulge it to the world. However, today, journal publishing competes with an even more vigorous and instantaneous form of sharing: the kind of sharing done in blogs, wikis, and online forums. Nielsen argues that this kind of spontaneous sharing of rough drafts of ideas, of small details that may add up to something big, opens up new possibilities for collaboration.

In this respect, the use of online tools allows for a “scaling up” of the kind of intense, small-scale collaboration that formerly occurred only in face-to-face contact between trusted friends or close colleagues. However, Nielsen argues that academics, eager to get published in reputable journals, may be reluctant to use online forums to ask and answer questions of distant strangers. Two factors are at play here: first, the system of academic credit and tenure does little to reward online activity as opposed to publishing in journals. Second, scientists may fear that other scientists can get a whiff of their idea and beat them in the race to publish.

(Nielsen develops “scaling up” more in his blog post, Doing Science Online).

Nielsen says that this is inefficient. Economists do not like deadweight losses (Market wiki entry, Wikipedia entry) in markets — situations where one person has something to sell to another, and the other person is willing to pay the price, but the deal doesn’t occur. Nielsen says that such deadweight losses occur routinely in academic research. Somebody has a question, and somebody else has an answer. But due to the high search cost (Market wiki entry, English Wikipedia entry), i.e., the cost of finding the right person with the answer, the first person never gets the answer, or has to struggle a lot. This means a lot of time lost.

Online tools can offer a solution to the technical problem of information-seekers meeting information-providers. The problem, though, isn’t just one of technology. It is also a problem of trust. In the absence of enforceable contracts or a system where the people exchanging information can feel secure about not being “cheated” (in this case, by having their ideas stolen), people may hesitate to ask questions to the wider world. Nielsen’s suggestions include developing robust mechanisms to measure and reward online contribution.

Blogging for mathies?

Some prominent mathematical bloggers that I’ve come across: Terence Tao (Fields Medalist and co-prover of the Green-Tao theorem), Richard E. Borcherds (famous for his work on Moonshine), and Timothy Gowers. Tao’s blog is a mixed bag of lecture notes, updates on papers uploaded to the arXiv, and his thoughts on things like the Poincaré conjecture and the Navier-Stokes equations. In fact, in his post on doing science online, Nielsen uses the example of a blog post by Tao explaining the hardness of the Navier-Stokes equation. In Nielsen’s words:

The post is filled to the brim with clever perspective, insightful observations, ideas, and so on. It’s like having a chat with a top-notch mathematician, who has thought deeply about the Navier-Stokes problem, and who is willingly sharing their best thinking with you.

Following the post, there are 89 comments. Many of the comments are from well-known professional mathematicians, people like Greg Kuperberg, Nets Katz, and Gil Kalai. They bat the ideas in Tao’s post backwards and forwards, throwing in new insights and ideas of their own. It spawned posts on other mathematical blogs, where the conversation continued.

Tao and others, notably Gowers, also often throw out ideas about how to make mathematical research more collaborative. In fact, I discovered Michael Nielsen through a post by Timothy Gowers, Is massively collaborative mathematics possible?, which mentions Nielsen’s post on doing science online. (Nielsen later critiqued Gowers’ post.) Gowers considers alternatives such as a blog, a wiki, and an online forum, and concludes that an online forum best serves the purpose of working collaboratively on mid-range problems: problems that aren’t too easy and aren’t too hard.

My fundamental disagreements

A careful analysis of Nielsen’s thesis will take more time, but off the cuff, I have at least a few points of disagreement with the perspective from which Nielsen and Gowers are looking at the issue. Of course, my difference in perspective stems from my different (and altogether considerably more limited) experience compared to theirs.

I fully agree with Nielsen’s economic analysis with regard to research and collaboration: information-seekers and information-providers not being able to get in contact often leads to squandered opportunities. I’ve expressed similar sentiments myself in previous posts, though not as crisply as Nielsen.

My disagreement is with the emphasis on “community” and “activity”. Community and activity can be very important to researchers, but in my view they can obscure the deeper goal of growing knowledge. And it seems that, in the absence of strong clusters, community and activity can result in a system that is almost as inefficient as what came before.

In the early days of the Internet, mailing lists were a big thing (they continue to be a big thing, but their relative significance on the Internet has probably declined). In those days, Usenet newsgroups and bulletin board systems often got clogged with the same set of questions, asked repeatedly by different newbies. The old hands, who usually took care of answering the questions, got tired of this repetition. Thus was born the “Usenet FAQ”. With the FAQ, the mailing lists stopped getting clogged with the same old questions, and people could devote attention to more challenging issues.

Forums (such as Mathlinks, which uses phpBB) are a little more advanced than mailing lists in terms of the ability to browse by topic. However, they are still fundamentally a collection of questions and answers posted by random people, with no overall organizing framework that aids exploration and learning. In a situation where the alternative to a forum is no knowledge at all, a forum is a good thing. In fact, a forum can be one input among many for building a systematic base of knowledge. But when a forum is built instead of a systematic body of knowledge, the result can be a lot of duplication and inefficiency, and the absence of a bigger picture.

Systematic versus creative? And the irony of Wikipedia

Systematic to some people means “top-down”, and top-down carries negative connotations for many, or at any rate non-positive ones. For instance, the open source movement, which includes Linux and plenty of “free software”, prides itself on being largely a bottom-up movement, with uncoordinated people working of their own volition to contribute small pieces of code to a large project. Top-down direction could not have achieved this. In economic jargon, when each person is left to make his or her own choices, the outcome is often more efficient, because people have more “private information” about their own interests and strengths. (Nielsen uses open source in many of his posts as an example of where science might go by becoming more open; see, for instance, this one on connecting scientists to scientists.)

But when I say systematic, I don’t necessarily mean top-down. Rather, I mean that the system should be such that people know where their contributions can go. The idea is to minimize the loss that occurs when one person contributes something at one place but another person doesn’t look for it there. This is very important, particularly in a large project. A forum for solving mathematical questions has an advantage over offline communication: the content is available for all to see. But this advantage is truly meaningful only if everybody who is interested can locate the question easily.

Systematic organization does not always mean less of a sense of community and activity, but it usually does. When material is organized according to internal, logical considerations, considerations of chronological sequence and community dynamics take a backseat. The ultimate irony is that Wikipedia, often touted as the pinnacle of Web 2.0 achievement, seems to prove the opposite of the Web 2.0 emphasis on community and activity: the baldness, anti-contextuality, canonical naming, and lack of a “time” element in Wikipedia’s entries are arguably its greatest strengths.

Through its choices of canonical naming (the name of an article is precisely its topic), extensive modularization (a large number of individual units, namely the separate articles), a neutral, impersonal, no-credit-to-author-on-the-article style, and extensive internal linking, Wikipedia has managed to become an easy reference for all. If I want to read the entry on a topic, I know exactly where to look on Wikipedia. If I want to edit it, I know exactly which entry to edit, and I’m guaranteed that all future readers of that Wikipedia entry looking for that information will benefit from my changes. In this respect, the Wikipedia process is extraordinarily efficient. (It is inefficient in many other ways, notably the difficulty of quality control, as measured by the massive number of volunteer-hours spent combating obvious and non-obvious spam, as well as the tremendous amount of time spent in battles over the control and editing of particular entries.)
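Canonical naming is, in effect, an addressing scheme: the topic itself is the address. A minimal sketch, using Wikipedia’s public URL convention (the article title, with spaces replaced by underscores; the helper function is my own invention):

    # Wikipedia's canonical naming in miniature: the topic is the address.
    from urllib.parse import quote

    def wikipedia_url(topic, lang="en"):
        """Map a topic name to the canonical URL of its article."""
        title = topic.strip().replace(" ", "_")
        return "https://%s.wikipedia.org/wiki/%s" % (lang, quote(title))

    print(wikipedia_url("Group theory"))
    # -> https://en.wikipedia.org/wiki/Group_theory

Both a reader looking something up and an editor wanting to improve coverage of a topic compute the same address, which is precisely why attention and contributions get funneled to one place.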

The power of the Internet is its perennial and reliable availability (for people with reliable access to electricity, machines, and Internet connections). And Wikipedia, through the ease with which one can pinpoint and locate entries, and the efficiency with which it funnels the efforts of both readers and contributors toward a specific entry, builds on that power. And I suspect that, for a lot of us, a lot of the time we’re using the Internet, we aren’t seeking exciting activity, a sense of community, or personal solidarity. We want something specific, quickly. Systematic organization, and good design and architecture that get us there fast, are what we need.

What can online resources offer?

A blog creates a sense of activity, of time flowing, of comments ordered chronologically, of a “conversation”. This is valuable. At the same time, a systematically organized resource, one that organizes material not according to a timeline of discovery but according to intrinsic characteristics of the underlying knowledge, is usually better for quick lookup and for user-directed exploration (where the user is in charge).

It seems to me that the number of successful “activity-based online resources” will remain small. There will be few quality blogs that attract high-quality comments, because the effort and investment that go into writing a good blog entry are high. There may be many mid-range blogs offering occasional insights, but these will offer little of the daily-adventure feel of a high-traffic, high-comment blog.

On the other hand, the market for quick “pinpoint references” (the kind of resource you can use to quickly look something up) seems huge. A pinpoint reference differs from a forum in an obvious way. In a forum, you ask a question and wait for an answer, or you browse through previously asked questions. In a pinpoint reference, you decide you want to know about a topic, go to the page, and BANG, the answer’s already there, along with a lot of stuff you might have thought of asking but never got around to, all neatly organized and explorable.

Fortunately or unfortunately, the notions of “community” and “activity” are more appealing, in a naive, human sense, than the notion of a pinpoint reference. “Chatting with a friend” has more charm to it than having electricity. But my experience of the way people actually work suggests that people value self-centered, self-directed exploration quite a bit, and may be willing to sacrifice a sense of solidarity, of “being with others in a conversation”, for the sake of more such exploration. Pinpoint resources offer users exactly that kind of self-directed model.

My experiment in this direction: subject wikis

I started a group theory wiki in December 2006, and have since extended it to a general subject wikis website. The idea is to have a central source, the subject wikis reference guide, from which one can search for terms and get short general definitions, with links to more detailed entries in the individual subject wikis. See, for instance, the entry on “normal”.

I’ve also recently started a blog for the subject wikis website, which will describe some of the strategies, approaches, and choices involved in the subject wikis.

It’s not clear to me how this experiment will proceed. At the very least, my work on the group theory wiki is helping me with my research, while my work on the other wikis (which has been very little in comparison) has helped me consolidate my standard knowledge of those subjects, along with other tidbits of knowledge and thoughts I’ve entertained. Usage statistics seem to indicate that many people are visiting the group theory subject wiki and finding its entries useful, and there are a few visitors to each of the other subject wikis as well. What isn’t clear to me is whether this can scale to a robust reference where many people contribute and many people come to learn and explore.

January 29, 2009

Can the Internet destroy the University?

Every so often, we hear talk about how computers and the Internet are “changing everything”. In particular, the Internet is believed to have had a great impact on methods of research and academics. In this blog post, I explore the question of whether the Internet really has changed things, and how.

The early and late Internet

It may surprise some that the Internet existed as early as the 1960s. No, it wasn’t quite the same Internet. Rather, the Internet of the time lacked features now considered basic, such as the World Wide Web and email. Email itself began in the late 1960s and 1970s. The bulk of Internet users were at universities.

In those early days, the Internet was largely a network used for transferring files from one computer to another and for sending messages. Just as the telephone helped people communicate over long distances, the Internet offered a computer-based means of communication that transmitted text instead of voice.

In 1989, the World Wide Web was created by Sir Tim Berners-Lee and a couple of colleagues. The basic idea of the World Wide Web was a standard for displaying “webpages”: files intended to be viewed over the Internet, with easy links between webpages (these came to be known as “hyperlinks”). Even after the World Wide Web was created, there was no standard graphical browser for viewing webpages, and tech-savvy web users often used text-based browsers to access web pages. With time came graphical browsers such as Netscape. With Windows 95, Microsoft jumped into the web browser fray by introducing Internet Explorer.

As dial-up Internet started spreading in developed countries and a few places started getting broadband, more and more newspapers, magazines, universities, businesses, governmental organizations, and non-profits started their own websites. Soon, the Internet became a place for banking, booking travel tickets, submitting online applications to jobs and schools, and reading newspapers and magazines. Business-to-business as well as business-to-consumer use of the Internet became more common. This was also the time of the Internet bubble. Entrepreneurs and investors started believing that the old rules of the game no longer applied and that Internet businesses could grow exponentially. The success of companies like Amazon, Yahoo and Microsoft further fed into the investor frenzy. The bubble burst with the turn of the century. While the Internet continued to live on and grow in reach, businesses became wiser.

The original “new new thing” of the Internet was that ordinary business transactions (banking, purchasing goods) as well as consumer activity (reading newspapers and magazines, listening to music, watching video) could be conducted more efficiently over the Internet. The second phase of Internet expansion, termed “Web 2.0” by the Internet “guru” Tim O’Reilly, went in a different direction. It sought to move collective community activity to the Internet, and create new forms of community activity.

Community activity was not entirely unknown on the Internet. In the 1980s, prior to the World Wide Web, communities of users interested in specific topics formed Bulletin Board Systems (BBSes); Jason Scott’s Textfiles has information on this (Scott’s hobby involves collecting and archiving online activity from the 1980s, including the BBSes). In the 1990s, there was vigorous participation in mailing lists and Internet Relay Chat. However, this participation was limited to “geeks”: people with some comfort with technology and a deep interest in the topic. What Web 2.0 sought to do was “democratize” community activity on the Internet.

Examples included content-sharing sites (such as Youtube for video and Flickr for photos), social networking sites (such as Facebook, Myspace, and Orkut), and collaborative content creation sites, most notably Wikipedia. There was also significant growth in blogging, with free blog-hosting services such as Blogger, WordPress, and Typepad, and many people using free software such as the WordPress package to start blogs on their own websites. Fast-growing companies such as Google now offered comprehensive free suites including mail, online document-creation software, and applications for site developers.

The initial growth of the Internet was thus strongly rooted in academia. Academics began exchanging documents by email long before the general public did. The new spate of Internet growth, however, has been much more widespread. With the software created either commercially or by hobbyists, and usage widespread across all kinds of users, from young kids to workers to retirees, much of Web 2.0 has happened outside academia.

Adaptation to the old Internet

During the late 1990s and early 2000s, many journals introduced electronic versions. Libraries, in addition to subscribing to print copies of the journals, have subscribed to electronic access. Electronic access allows anybody within a defined range (usually, anybody within the university that holds the subscription) free access to electronic versions of the articles (usually PDFs). In addition, services such as JSTOR allow access to old issues of journals, some of which have not themselves put their older articles online.

Some journals are moving towards open access policies. These policies allow free access to the electronic versions and, more importantly, release the articles under an open-content license, such as a Creative Commons license, that allows other researchers to use the data of the original article freely in their own further research. To further increase the availability of articles, services such as the arXiv (for mathematics and physics) have become popular. These services allow people to upload preprints of articles that are under consideration for publication in a journal. The preprints give other researchers access to cutting-edge research. The arXiv versions, as well as versions that authors may put up on their own websites, enable people who do not have subscriptions to the journals to read a large number of articles anyway.

It would be an understatement to say that this has greatly increased the ease of finding published reference material online. While the profit models for electronic access and open access are still being worked out, it is clear that academics have made significant use of such access to learn about recent as well as older research, and that it has benefited researchers tremendously.

There are concerns, though. For instance, a study by James Evans, based on a database of 34 million articles, shows that as journals have become more readily available online, and as older issues have become easily accessible, articles have been citing fewer references, and the references have become more recent. Evans thinks that one of the main advantages of the pre-web indexing system was its inefficiency, which led people on tangents and thus pulled them into reading more, and often more dated, material. Evans concludes that scholarship today engages more with recent scholarship than before. (See also his Britannica blog post.)

The Web 2.0 Internet

For all the impact Web 2.0 is making in the wider world, I believe that its impact on research is limited. Why? Because for research work, communicating or collaborating using a Web 2.0 tool is usually less efficient than doing so with an “old-fashioned” tool like e-mail.

The growth of e-mail led to a significant increase in the extent of scientific collaboration. This is particularly notable in certain areas of physics, where it is not unusual for papers to have more than five authors. Interestingly, a lot of this collaboration happens within a university: studies have shown that the most efficient users of e-mail are people who use it to communicate within their own organization. This is good for science, because historically the biggest collaborators have been the biggest creators. References: chapter seven of The Logic of Life by Tim Harford (personal website and book page), and Group Genius by Keith Sawyer.

The great thing about a tool like e-mail is that it is an added layer of technology that does little to disturb the fundamental process of thinking and research. A couple of collaborating mathematicians can have an intense discussion over tea, work on proofs together at the chalkboard, and hash out details together. Then one of them can type up the work and e-mail it to the other, who sends back typed corrections or has another face-to-face discussion. After some rounds, they can email their work to others for comment or review, and reviewers can easily send their reviews back to both authors.

Now, it is true that new modes of collaborative document creation can be helpful for authors collaborating over large distances. Thus, tools like MediaWiki and Google Docs, which allow for collaborative document creation, might be used in conjunction with email. These definitely offer significant advantages for certain kinds of collaboration, particularly when people are collaborating over longer distances, and might be used by people who lack awareness of, or savviness with, revision control systems such as SVN.

But while these offer advantages for collaborative content creation, they offer little substitute for the robust face-to-face or otherwise intense contact needed to do research.

Serendipitous and intense contact

Universities and research institutions manage to bring together, in close contact, people with knowledge and intuition in a particular area. This close contact fosters a regular and almost unavoidable exchange of ideas. In my high school, there were few people with whom I could discuss my area of interest, mathematics. In my college, where there were others interested in mathematics, I could go to a discussion area and start a conversation if I wanted, but rarely were there animated discussions going on that I could just drop into. Here at the University of Chicago, where I’m doing graduate studies, there are several places where mathematical discussions are continuously going on. I can pop in, look at what’s going on, and join in if it seems interesting. The tea room, for instance, often has people discussing mathematics, effortlessly merged with other topics, and simply sitting there lets me learn a few things here and there, and sometimes introduces me to something I wouldn’t have sought out myself. The first-year graduate student office, similarly, is usually abuzz with people trying to solve their homework problems and discussing other related mathematical ideas.

It is this serendipitous contact with new ideas not explicitly sought that makes the university more than just a convenient place to exchange ideas. Face-to-face contact, the ability to make hand gestures and write on chalkboards, and the ability of anybody from outside to drop in are hard to mimic on the Internet. This doesn’t mean that it is impossible to build on the Internet a system that allows for such serendipity (for instance, it may be possible to live-stream the activities in all the tea rooms and discussion areas of all universities, so that people at one university can tune in to what’s happening at another; it isn’t clear, though, whether the benefits of such streaming would be worth the costs). Rather, the point is that existing social networking and content creation sites were not designed for this purpose and are ill-suited to it.

The strength of the Internet

The strength of the Internet is its quick and ready availability. For this reason, I think that mathematical reference material, including pinpoint references (such as Planetmath, Mathworld, Wikipedia, the Springer Online Encyclopedia of Mathematics, and my own subject wikis reference guide), can play an important role. It isn’t infrequent for people having a debate or discussion on a point of mathematics to resolve the matter by checking it online using a handy iPhone, netbook, or laptop. More development of pinpoint references, as well as more competition among them, can be good. In addition to pinpoint references, the presence and online accessibility of journal articles is also a great boon, allowing people to clarify points of confusion immediately. Similarly, online course notes, including one-off course notes put up by faculty as well as the systematic open-courseware efforts of institutions such as MIT and Yale, add to the usefulness of the Internet. Finally, I hope for a system whereby libraries can get access not only to online versions of journal articles but also to online versions of books, so that people in universities can have free access to online books. (In practice, many people download pirated electronic versions, but that practice is hardly a model worth sustaining.)

Where the Internet doesn’t do so well is in recording off-the-cuff dialogues and conversations. If I’m talking with somebody and I’m not sure about a particular fact, I can say so, and the other person can press me further and get another, related answer. Here, my lack of full knowledge and authority is compensated for by my immediate presence. However, posts on Internet forums that give partial or incomplete information, particularly for questions where definite answers exist, have the drawback without the compensation: people have to put up with reading incomplete or possibly incorrect answers, but cannot follow up with questions to clarify matters.

In summary, it seems to me that the Internet is very far from destroying the university. Rather, it can substantially increase the value of living in the university by making more information readily available online.

What about those living outside the University?

Not everybody has the combination of talent and circumstance that lands one inside a university that is a hub of serendipity of the sort I’ve described. The Internet is particularly important in providing these people some of the things that those in a good university take for granted.

Access to online references, for instance, has enabled people across the world to discover new ideas and concepts that they do not find in whatever book they happen to be following. I have discovered several new ideas while surfing Wikipedia, going through newspaper and magazine articles, surfing Mathworld and Planetmath, or link-traipsing from blogs. Access to online journal articles is a trickier question. The subscription fees charged by journals are usually too hefty for individuals, which means that individuals who are not members of a university or library with a subscription may not be able to get access to journal articles.

This is unfortunate, but of course, these people didn’t have access to the journals prior to the Internet either. Usually, such people can get copies of an article from preprint sites such as the arXiv or from authors’ personal websites, or by requesting a copy from the author directly. There is also the movement towards open-access publishing mentioned earlier, which would in particular enable free online access for all.

But more importantly, access to full articles is not usually necessary. If online references are good and fairly thorough, users should be able to get an idea of at least the main points, concepts, and definitions introduced in a particular journal article even if they are unable to access the article itself. As an undergraduate student, I often faced the problem of being unable to find a basic definition because the only source I could locate was an article in a journal to which my college did not subscribe. Of course, even with the existence of such references, there will be people who want to read the full article to get a deeper understanding.

Finally, open courseware presents a great opportunity for people outside the university system to get a flavor of the way leading researchers and educators think. Unfortunately, open courseware, such as MIT OCW and Yale OYC, is largely limited to lower-level undergraduate course material. It is possible that, for advanced graduate course material, the demand is not high enough to justify the costs of preparation. I hope that the movement expands to encompass more universities across different countries and languages, so that eager learners everywhere have more options.

November 16, 2008

Wikipedia — side-effects

In a recent blog post, Nicholas Carr talked about the “centripetal web” — the increasing concentration and dominance of a few sites that seem to suck in links, attention and traffic. Carr says something interesting:

Wikipedia provides a great example of the formative power of the web’s centripetal force. The popular online encyclopedia is less the “sum” of human knowledge (a ridiculous idea to begin with) than the black hole of human knowledge. At heart a vast exercise in cut-and-paste paraphrasing (it explicitly bans original thinking), Wikipedia first sucks in content from other sites, then it sucks in links, then it sucks in search results, then it sucks in readers. One of the untold stories of Wikipedia is the way it has siphoned traffic from small, specialist sites, even though those sites often have better information about the topics they cover. Wikipedia articles have become the default external link for many creators of web content, not because Wikipedia is the best source but because it’s the best-known source and, generally, it’s “good enough.” Wikipedia is the lazy man’s link, and we’re all lazy men, except for those of us who are lazy women.

This is an important and oft-overlooked point: when saying whether something is good or bad, we need to look not just at the benefit it provides, but also at the opportunity cost. In the case of Wikipedia, there is at least some opportunity cost: people seeking those answers may well have gone to the “specialist sites” instead of to Wikipedia.

Of course, it’s possible to argue that specialist sites of the required quality do not exist, but it can be argued in counter-response that specialist sites would have existed in greater numbers and at higher quality if Wikipedia didn’t exist, or at any rate, if Wikipedia weren’t so much of a default. It might be argued, for instance, that of all the free labor donated to Wikipedia, at least a fraction could have gone into developing and improving existing “specialist sites”. As I described in another blog post, the very structure of Wikipedia creates strong disincentives for competition.

Wikipedia, Mathworld and Planetmath

In 2003, at a time when I was in high school and used a dial-up connection to access the Internet, I was delighted to find a wonderful resource called Mathworld. I devoured Mathworld for the hundreds of triangle centers it contained information on, and I eagerly awaited its expansion into areas where it didn’t yet have much content. Being on a dial-up connection, I saved many of the pages for offline reference.

Later, in 2004, I discovered Planetmath. It wasn’t as beautifully done as Mathworld (Planetmath relies on a large contributor pool with little editorial control, as opposed to Mathworld, which has a small central team, headed by Eric Weisstein, that vets every entry before publication). But, perhaps because of the lighter vetting and fewer editing restrictions, Planetmath had entries on many of the topics where Mathworld lacked them. I found myself using both resources, and came to appreciate the strengths and weaknesses of both models.

A little later in the year, I discovered Wikipedia. At the time, Wikipedia was fresh and young: some of its policies, such as notability and verifiability, had not been formulated in their current form, and many of the issues Wikipedia currently faces were non-existent. Wikipedia’s model was even more skewed towards ease of editing. It didn’t have the production-quality looks of Mathworld or the friendly fontfaces of Planetmath, but the page structure and category structure were pretty nice. Yet another addition to my repository, I thought.

Today, Wikipedia stands as one of the most dominant websites (it is ranked 8 in the Alexa rankings, for instance). More importantly, Wikipedia enjoyed steady growth in both contributions and usage until 2007 (contributions dropped a little in 2008). Planetmath and Mathworld, which fit Nicholas Carr’s description of “specialist sites”, on the other hand, haven’t grown as visibly. They haven’t floundered either: they continue to be at least as good as they were four years ago, and they continue to attract similar amounts of traffic. But there’s this nagging feeling I get that Wikipedia really did steal their thunder: in the absence of Wikipedia, there would have been more contributions to these sites, and more usage of them.

The relation between Wikipedia and Planetmath is of particular note. In 2004, Wikipedia wasn’t great when it came to math articles; a lot of expansion was needed to make it competitive. Planetmath released all of its articles under the GNU Free Documentation License, the same license as Wikipedia. Basically, this meant that Wikipedia could copy Planetmath articles as long as the Wikipedia article acknowledged the Planetmath article as its source. Not surprisingly, many of the Planetmath articles on topics Wikipedia didn’t cover were copied over. Of course, the Planetmath pages were linked to, but we know where the subsequent action of “developing” the articles happened: on Wikipedia.

Interestingly, Wikipedia acknowledged its debt to Planetmath. At some point, the Wikipedia donations page suggested donating to Planetmath, a resource Wikipedia credited with helping it get started with its mathematics articles (I cannot locate this page now, but it is possibly still lying around somewhere). Planetmath, for its part, introduced unobtrusive Google ads in the top left column, an indicator that it is perhaps not receiving enough donations.

Now, most of the mathematics students I meet are aware of Mathworld and Planetmath and look these up when using the Internet; they haven’t given up these resources in favor of Wikipedia. But they, like me, started using the Internet at a time when Wikipedia was not in a position of dominance. Will new generations of Internet users be totally unaware of the existence of specialist sites for mathematics? Will there be no interest in developing and improving such sites, for fear that the existence of an all-encompassing behemoth of an “encyclopedia” renders such efforts irrelevant? It is hard to say.

(Note: I, for one, am exploring the possibility of new kinds of mathematics reference resources, using the same underlying software that powers Wikipedia (the MediaWiki software). For instance, I’ve started the Group properties wiki).

The link-juice to Wikipedia

As Nick Carr pointed out in his post:

Wikipedia articles have become the default external link for many creators of web content, not because Wikipedia is the best source but because it’s the best-known source and, generally, it’s “good enough.” Wikipedia is the lazy man’s link, and we’re all lazy men, except for those of us who are lazy women.

In other words, Wikipedia isn’t winning its link-juice through the merit of its entries; it is winning links through its prominence and dominance, and through people’s laziness or inability to find alternative resources. Link-juice has two consequences. The direct consequence is that the more people link to something, the more it gets found by human surfers. The indirect consequence is that Google’s PageRank and other search engine ranking algorithms make intensive use of the link structure of the web, so a large number of incoming links increases the rank of a page. This is a self-reinforcing loop: the more people link to Wikipedia, the higher Wikipedia pages rank in searches, and the higher Wikipedia pages rank in searches, the more likely it is that people using web searches to find linkable resources will link to the Wikipedia article.

To add to this, external links from Wikipedia articles are marked, via Wikipedia’s settings, so that search engines ignore them (the rel="nofollow" attribute). This is ostensibly a move to avoid spam links, but it makes Wikipedia a sucker of link-juice as far as search engine ranking is concerned.
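To make the self-reinforcing loop concrete, here is a minimal sketch of PageRank-style power iteration on a toy link graph. This is my own illustration, not Google’s algorithm as actually deployed (which uses many more signals, and whose details are not public): three invented “specialist” sites link to an “encyclopedia” whose own outbound links have been dropped from the graph, mimicking links that search engines are told to ignore.

    # PageRank-style power iteration on a toy link graph (illustrative only).
    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if not outlinks:
                    # A page with no counted outbound links spreads its
                    # rank evenly over the whole graph.
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                else:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share
            rank = new_rank
        return rank

    # The encyclopedia's outbound links are ignored, so rank flows in
    # but never back out to the specialist sites.
    links = {
        "encyclopedia": [],
        "specialist_a": ["encyclopedia", "specialist_b"],
        "specialist_b": ["encyclopedia"],
        "specialist_c": ["encyclopedia", "specialist_a"],
    }
    print(pagerank(links))  # "encyclopedia" gets by far the largest share

Each round, every incoming link hands the encyclopedia a share of the linker’s rank, while its own rank leaks back only through the uniform redistribution: exactly the one-way flow described above.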

In addition, the way people link to Wikipedia is also interesting. Often, links to Wikipedia articles do not include, in the anchor text, any indication that the link goes to a Wikipedia article. Rather, the anchor text simply gives the article name. This sends readers the message that the article on Wikipedia is the first place to look something up.

Even experienced and respected bloggers do this. For instance, Terence Tao, a former medalist at the International Mathematical Olympiad and a mathematician famous for having settled a conjecture regarding primes in arithmetic progressions, links copiously to Wikipedia in his blog posts. To be fair, he also links to articles on Planetmath, and to papers on the arXiv, in cases where these resources offer better information than the Wikipedia article. Nonetheless, the copious linking suggests that not every link to a Wikipedia article is there because the Wikipedia article is genuinely the best resource on the web for that content.

What can we do about it?

A strong centripetal influence, such as an all-encompassing knowledge source, does not lose its pull just because we ignore it. There is a strong temptation to use Wikipedia as a “first source” for information. To counter this pull, it is important both to understand the causes behind it and to question its inevitability.

The success of a quick reference resource like Wikipedia stems from many factors, but two noteworthy ones are the desire to learn and grow, and laziness. Our curiosity leads us to look for new information, and our laziness prevents us from exerting undue effort in that search. Wikipedia capitalizes on both in its readers (quick-and-dirty access to lots of material, immediately), its contributors (the easy edit-this-page button), and its linkers (satisfying reader curiosity by providing web links, but choosing Wikipedia over alternatives out of laziness). Wikipedia is what I call a “pinpoint resource”: something that provides one-stop access to very specific queries, over a large range of possibilities, very quickly.

For something to compete with Wikipedia, it must cater to these fundamental attributes. It must be quick to use, provide quality information, and encourage exploration without making things too hard. It must be modular and easily pinpointable. This doesn’t mean that everything should be modular and easily pinpointable; there are other niches that don’t compete with Wikipedia. But to compete for the “quick-and-dirty” vote, a site has to offer at least some of what Wikipedia offers.

Of course, one question arises naturally at this point: isn’t Wikipedia “good enough” to satisfy passing curiosities? I agree that there is usually no harm in using Wikipedia, compared with ignoring one’s curiosity. But I emphatically disagree with the idea that the best we can do with people’s passing curiosities, and with their desire to learn new things and teach others, is to funnel them through Wikipedia. Passing curiosities can form the basis of enduring and useful investigations, and the kind of resource people turn to first can determine how the initial curiosity develops. For this reason, if Wikipedia is siphoning off attention from specialist sites that do a better job, not just of providing the facts, but of fostering curiosity and inviting exploration, then there is a loss at some level.

November 15, 2008

Intuition in research

Filed under: Thinking and research — vipulnaik @ 10:06 pm

I recently read a couple of books by Gary Klein: Sources of Power and The Power of Intuition. Klein is a decision researcher who started work in the 1980s studying high-stakes, high-pressure decision-making. His research team began by studying how firefighters make high-stakes decisions in the face of severe time constraints, incomplete information, high pressure, and unclear goals. The team found that the firefighters rarely compared multiple options. Rather, when faced with a particular situation, the experienced firefighters typically found an immediate first response, simulated it mentally, and executed it if the simulation seemed to work fine.

While there were certain situations where the firefighters rejected one course of action and selected another, Klein found that the courses of action were usually considered sequentially: the first course of action was simulated mentally, and if the firefighter sensed it to be good, he or she executed it. If the course of action didn’t seem good, another course of action was simulated. The hallmark of experienced firefighters was their ability to pick out a good first option and execute it.

According to Klein’s book, his findings contradicted the formal decision-making strategies considered good by decision researchers at the time. Many decision researchers warned people against using their intuitions, which were prone to being misleading, and to instead consider multiple options and compare them across multiple dimensions. In recent years, there has been greater acknowledgement of the infeasibility of comparing multiple options as well as the advantages of strengthening intuition in order to pick good first options.

Through studies of firefighters, marines, NICU nurses, and many other decision makers, Klein came up with a model he called the Recognition-Primed Decision (RPD) model. The core idea is that the repository of experience a worker builds up creates certain templates, and when the worker is thrust into a new situation, the new situation is matched against these templates. If a good match is obtained, the course of action suitable for that template is tried. The rich repository of experience helps with the initial recognition of the appropriate template, with the mental simulation that follows, and with collecting feedback once a course of action has been taken.

For instance, a firefighter, over the years, becomes sensitive to different cues such as the smell, the floor temperature, the room temperature, the way the fire is spreading, and numerous other small indicators. By gauging these cues, the firefighter subconsciously develops a “story” around how the fire developed and what the priority should be (rescue people, douse the flames, call for more help). Similarly, nurses in intensive care units for children (NICU nurses) develop a repository of experience of the subtle cues, and combinations of cues, that ill children provide. An experienced nurse can thus size up a situation based on the many cues he or she (usually, she) sees, and develop a story that immediately suggests a next course of action.

The emphasis is thus not on analysis but on building a story, and judging the story by how well it fits the facts, where the extent of the fit is determined by feedback from past experience.
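For concreteness, the sequential character of the model can be caricatured in a few lines of code. This is purely an illustrative analogy on my part (Klein’s model describes human cognition, not software, and every name in the sketch is invented): options are tried one at a time, in order of recognized familiarity, and the first one that survives simulation is executed.

    # A toy caricature of Klein's Recognition-Primed Decision (RPD) model.
    # Illustrative analogy only; all names here are invented for the sketch.
    from dataclasses import dataclass

    @dataclass
    class Template:
        """A remembered pattern: the cues it matches and the action it primes."""
        cues: set
        action: str
        workable: callable  # stands in for mental simulation of the action

    def rpd_decide(situation, experience):
        # 1. Recognition: order templates by how many cues they share
        #    with the current situation (best match first).
        matches = sorted(experience,
                         key=lambda t: len(t.cues & situation),
                         reverse=True)
        # 2. Consider options sequentially, not side by side: simulate
        #    the primed action and take the first one that holds up.
        for template in matches:
            if template.workable(situation):
                return template.action
        return None  # nothing fits: fall back to slow, deliberate analysis

    # Toy usage: a "firefighter" with two templates.
    experience = [
        Template({"smoke", "hot_floor"}, "evacuate_and_vent",
                 workable=lambda cues: "trapped_people" not in cues),
        Template({"smoke"}, "douse_from_doorway",
                 workable=lambda cues: True),
    ]
    print(rpd_decide({"smoke", "hot_floor"}, experience))  # evacuate_and_vent

The contrast with classical decision analysis would be a function that scores every available action against every other on multiple dimensions before choosing; Klein’s observation is that experts under time pressure rarely do that.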

Do similar principles apply to research?

In terms of speed, research is the opposite of firefighting. For a firefighter, the situation is live and demands immediate action, with high stakes and usually very immediate feedback about success (either the flames get doused or they don’t). Research, on the other hand, is a slow process, with very little riding on decisions made on the spur of the moment and very rare opportunities for instantaneous feedback. If the research problem I pick is too hard for me, I don’t get to know the consequences and feel the pain for quite some time. I might labor under the delusion that I am making steady progress and figure out that the problem is too hard only after several years of trying.

Given the obvious differences, it is natural to be suspicious of any assertion that models that help with high-stakes decision-making are prima facie suitable for researchers. However, I make the case here that intuition is important in research, albeit in a different way.

In general, intuition is important in situations where either the information explicitly and clearly available is inadequate, or the effort needed to process all this information is infeasibly high. In a firefighting situation, the information available is inadequate at the time the decision needs to be made, even though the story usually becomes clearer shortly afterwards. When working on a research problem, we again have a gap: the information available (as to whether or not I should work on the problem) is inadequate at the time I start work, though it is likely to become clearer once I have worked on the problem. The difference is the time scale. But in both cases, there is inadequate information at the time of decision-making.

Strengthening one’s intuitions

Klein’s book on the power of intuition offers many concrete suggestions on strengthening intuitions. Klein begins with the (obvious) observation that practice improves intuition. More importantly, he identifies two aspects: frequency of exposure to situations, and feedback that helps in correct model-building. Frequency alone is not enough.

Of course, there are obvious problems with providing practice for emergency situations, the kind that shouldn’t occur anyway. How do novice doctors get practice performing critical surgical procedures and making critical diagnoses, without risking the lives of patients more than necessary? Atul Gawande, a Massachusetts surgeon, discusses these issues in his bestselling book Complications, where he points out that doctors have a learning curve, and this necessitates that some patients receive substandard care because they are treated by residents and new doctors rather than by more experienced ones.

Similarly, how do firefighters get experience fighting fires? Again, this experience is provided through apprenticeship: the less experienced firefighter accompanies a more experienced one, offering support and watching the critical decisions get made.

Apprenticeship is one approach, but it can be expensive, and it is best complemented with less expensive approaches. Other approaches (some of which are mentioned in Klein’s book) include simulation and training exercises, where some features of the real experience are captured through simulation; flight simulators are an example. In addition, there is the crucial practice of experienced people documenting their experiences and sharing them with others, so that the many little nuggets of wisdom get passed on.

Now let’s move from high-stakes, instantaneous decision-making to the world of research. The same ideas seem to apply: newer researchers generally have less experience, and they learn through apprenticeship and through interaction with more experienced people. The timescales are, of course, very different. A Ph.D., which is like an apprenticeship under a guide (the thesis advisor), may take anywhere between three and eight years. Even after completing a Ph.D., researchers generally work under the guidance and tutelage of more experienced people before striking out totally on their own.

So the same questions seem to apply: what are the skills that experienced researchers have, that their less experienced counterparts may lack? And how can new researchers pick up these skills faster?

The importance of metacognition

Klein claims that people with experience and expertise in a topic not only have a richer repository of experience to draw upon; they are also more capable of analyzing their own thinking on the subject. In other words, they not only know more of the territory they are exploring, they also know more about their own tendencies in exploring it. For instance, an experienced athlete not only knows how to run long distances, but is also aware of how a long run will affect his or her mood and energy levels, and can plan ahead accordingly. Awareness of their own proclivities thus helps experts plan around, as well as exploit, their human strengths and weaknesses.

Thus, a person who has just learned a subject, say group theory, and doesn’t have much experience with it may not be able to distinguish between two problems in group theory in terms of their level of difficulty without attempting both. A more experienced person may be able to look at the two problems and get a feel for which one is likely to be harder, without attempting either. This comes from a stronger intuitive grasp of the territory, including its highs and lows.

Of course, there are times when expertise of this sort can be misleading, because it may lead the more experienced person to be less adventurous in exploring some things based on false preconceptions. This needs to be watched out for.

The upshot is that there are certain identifiable skills: a clearer understanding of the specific territory of the subject being researched, as well as a broader intuitive feel for the subject that helps one decide which direction to go in. The question is: can these skills be developed faster? Are there any obvious ways of doing this?

Some concrete suggestions

The goal is for new researchers to get a sense of how to think about specific problems, to become better at metacognition (understanding their own thought processes), and to get a broader intuitive idea of where certain approaches will lead. This should not be done at the cost of diversity in thinking: new researchers should not be handed down the right way of thinking about something. I discuss here some interesting suggestions to help improve the intuitions of new researchers.

  • The two-problem faceoff: This is a game of sorts, where two problems are presented to the novice researcher. Neither problem is trivial, but one of them is easy (i.e., it can be solved by the researcher) and the other is hard (i.e., it requires either a lot of ingenuity or some new machinery). The researcher has to decide which of the problems to try, then try it and succeed.

    This faceoff game has some interesting aspects. First, researchers are forced to develop their intuitions not just about how to solve a problem, but also about how to pick which of two problems to solve. Thus, researchers are forced to consider metacognitive questions (which path should I tread?) and to look ahead and predict what will happen. Second, it may turn out that the purportedly harder problem is the one the researcher picks and actually solves (perhaps by coming up with an easier solution). Perhaps this indicates that the new researcher is particularly good with problems of that kind.

  • What came first?: Here, a researcher is presented with two proofs of the same theorem, which arose at different historical points, and is asked to compare them: which proof came first? Which one is more useful? Which one is the kind of proof you’d have come up with?
  • Spotting relations, thinking creatively: New researchers should constantly confront reflective questions regarding different aspects of the work they are doing or learning about. For instance: what does the statement of this result tell me? What does the structure of the proof tell me? Are there corollaries of the statement? Are there other related statements? Are there other statements whose proofs follow the same structure? Can the proof idea be transferred to a totally new subject? Can I come up with similar-sounding statements that are false?

For some time, I have been exploring some of these possibilities for spotting relations and encouraging the kind of reflective thinking that builds intuition. I’ve implemented some of these ideas in the structure I’m using for the Group properties wiki. For instance, see this page about a property of normal subgroups, or this page about a cute fact regarding unions of two subgroups of a group.

