# What Is Research?

## March 29, 2008

### Frequently (?) asked questions

Filed under: Advice and information,Places and events — vipulnaik @ 2:37 pm

Yesterday, I went and attended a “panel” held at the Chemistry Department about science students who’ve gone on to unconventional careers related to science, outside both academia and industry. A few weeks ago, there was a panel discussion between us grad students at the math department and four post-doctoral students. And I’ve been watching some panel discussions on videos on the web. Based on this, I’ve started getting a feel for what the Frequently Asked Questions are in panel discussions. So I’m just imagining that I’ve been invited to a panel discussion, the audience is now asking questions, and I’m trying to figure out what answers I’d probably give.

By the way, if any of the readers has a question that isn’t answered here, put a comment stating your question (or send it to me) and I’ll add my answer to that to this post.

How does life as a student in the United States differ from life as a student in India?

I’d have to confess to very limited experience on both counts: I completed undergraduate studies at Chennai Mathematical Institute (CMI), and am currently doing graduate studies at the University of Chicago. CMI isn’t the most typical undergraduate institution in India, and I haven’t been at the University of Chicago long enough to understand exactly how the system works.

From what I’ve seen till now, one key difference is the emphasis on assignments and homework. In CMI, we didn’t have a whole lot of regular assignments. Most subjects had at most 3-4 assignments a semester, and there wasn’t a very strict rule about when to submit assignments. There certainly weren’t any graders or teaching assistants: all grading was done by the instructor herself or himself, so there was less motivation to give assignments.

In the University of Chicago, there is a lot of emphasis on regular assignments and regular testing. Apart from the teaching sessions, there are also separate sessions where the teaching assistant helps sharpen the students’ problem-solving skills. In short, there’s more activity. Most of the assignments (as far as I can see) aren’t deep or creativity-inspiring. They’re largely mundane, with a few gems. The goal, from what I can make out, is to give students a lot of practice and a lot of experience in writing solutions.

Moreover, all assignments are graded by the teaching assistant, rather than the instructor. So there is a full-time involvement of two people in teaching a course, rather than of one person. Both teaching and learning are more hectic and intense activities.

Another difference between life in CMI and life in the University of Chicago is the size and diversity of the place. CMI was largely an “only-math” institute. There were only three subjects of study: mathematics, physics and computer science. Even within these subjects, the number of people doing research and the amount of active research was fairly low. In the University of Chicago, on the other hand, there are researchers of all stripes. There are a whole lot of Ph.D. students. And there are also researchers in other disciplines, and there are a whole lot of facilities apart from those geared at research (including an athletics center, a bookstore, some huge libraries, an office of international affairs, and many others). There are more festivities and occasions and activities outside academics (just as there’s more activity within academics).

How does graduate life differ from undergraduate life?

Again, I have limited experience in this area. Broadly, I’d say that in undergraduate life, the focus is on learning a lot of different things, on scoring well, on getting a good idea of mathematics, and on developing credentials that’ll impress graduate schools. There’s also a strong element of competition with others in the same year, because there are far more undergraduates than the number of openings at graduate schools. In other words, just as much as it matters how good you are on absolute terms, it also matters (at least, to some extent) how you compare with your immediate peers.

In graduate school, this element of competition is probably less. Of course, the number of post-doctoral openings is significantly less than the number of people who do a Ph.D., so there is still strong competition. However, the competition now is not so much against your immediate peers in your specific institution. Applications beyond graduate studies are fairly specialized, so people with different mathematical interests are likely to be seeking different jobs at different places under different scholarship schemes. Thus, the element of cooperation is probably more.

Graduate school also brings with it a host of teaching responsibilities. Undergraduates are largely responsible or answerable only to themselves. Graduates, who are also involved in teaching duties, have to balance their research with their teaching requirements.

Finally, graduate studies, at the end of the day, requires a Ph.D. You don’t get a Ph.D. for general knowledge in mathematics. Rather, a Ph.D. is contingent on solving one or two specific problems, usually problems that extend things that have already been studied by the research community or problems that would open the way for new ideas. So graduate studies isn’t just more of the same undergraduate studies. It requires going deep into some area and struggling hard to solve problems, often giving up and returning again.

In what ways did your undergraduate education prepare you for graduate school, and in what ways do you think you were underprepared?

As far as basic mathematical knowledge is concerned, I’d say that my undergraduate education at CMI, combined with whatever I had read on my own, prepared me fairly well for graduate studies at the University of Chicago. Of course, one can look back and say: I wish I had done a course in algebraic topology, or taken a course in functional analysis. But many of my friends from leading American universities also hadn’t taken such courses in their undergraduate years, so I do not think I was significantly disadvantaged in a uniform way, as far as coursework was concerned.

That said, I’d say that considering that the CMI course is three years long, while that in most American universities is four years long, there are elements of detail that aren’t covered in CMI courses. Secondly, because we didn’t have to do too many assignments, I didn’t brush up my calculus skills at the undergraduate level. Thirdly, my impression is that CMI’s focus on analysis is low, and this was one of the factors that made me find analysis disproportionately harder.

However, what I value most about my undergraduate life is that it gave me the freedom to explore in mathematics, and it was during those years that I started forming my interests. It is possible that if I were subject to a hectic system of assignments, I would not have been able to explore so much and develop the specific interests and viewpoints within mathematics that I did. On the other hand, it is also possible that a more hectic and challenging course structure could have brought out more from inside me. I don’t know.

Undergraduate program(me)s in the United States are four years long, while the typical duration for a B.Sc. in India is three years. Is this a disadvantage?

It really depends on the specific nature of the undergraduate course. An honors course that covers most of the prerequisites needed for graduate school is good enough to satisfy most US graduate schools, though some of them have a strict four-year requirement.

However, it is true that B.Sc. courses offered at most institutes in India do not cover enough material to start a Ph.D. with. In that case, doing an M.Sc. before applying for a Ph.D. is advisable.

Why did you apply for graduate studies to the United States? How do options in the United States compare with those in India?

I was specifically interested in the University of Chicago, though I did apply to many other graduate schools in the United States (some of which turned me down as well). My interest in the University of Chicago arose because of two factors: the overall reputation of the University, and the presence of two group theorists: Professors Alperin and Glauberman. It didn’t hurt that some of my seniors and other acquaintances were already at the University and had given me positive reviews of the place.

On the whole, I’d say that if you can do it and are keen on pursuing doctoral studies, then universities in the United States are a good option. Being in the United States isn’t a magic cure. But there are certain factors about United States universities that set them apart. First, a reasonably large university in the United States will have people doing research in diverse topics, so there is enough scope to interact with a lot of different styles of doing mathematics. Second, there is a culture of hard work and commitment, and the opportunity to acquire skills in teaching, presentation, research, and forming mathematical communities. Third, there are also good opportunities to interact with other departments.

In India, Tata Institute of Fundamental Research is a place where high-quality research is done in mathematics and a number of other science subjects. It’s small compared to a place like the University of Chicago, but may be among the best choices in India. The Indian Institute of Science has a greater diversity of subjects, but its mathematics department is relatively smaller. From what I’ve heard, the mathematics department is expanding. The Institute of Mathematical Sciences is a good place to pursue doctoral studies in mathematics. The other subjects here include physics and computer science, and all three departments are fairly good.

Nonetheless, an opportunity to study in a top-range or mid-range US university for a Ph.D. can prepare one better for the challenges of research (in my opinion).

What are the pros and cons of pursuing undergraduate studies in the United States as against in India?

The pros:

• Easier to get admission to top-quality graduate schools
• More rigorous and challenging coursework
• Greater interaction with other disciplines, the option of doing a double-major, the option of studying a lot of subjects in diverse areas
• Greater opportunities for undergraduate research, and the opportunity to interact with cutting-edge researchers during day-to-day coursework

Cons:

• Making a move to a new country at a younger age could be harder. There is less interaction with the family, and it could be harder to make friends.
• If you ultimately plan to return to India, it’s also important to understand the structure of research and education in India. My stint as a student in CMI gave me a fair idea of what goes on in India. So I’m better equipped to return to India at a later stage.
• The coursework could get really hectic, and making a transition between the exam-based style of learning in India and the assignment-intensive style of learning in the United States could be hard.

None of the cons, however, is unavoidable. In other words, by being aware of the possible cons, one can plan to minimize their effect. Thus, if you plan to go abroad, start preparing for it psychologically, physically and mentally a year in advance. It’s not the quantity of preparation that matters so much as the mental transition.

Similarly, if you want to go abroad for undergraduate studies, but want to stay in touch with the research opportunities in India, this too is not hard, if you’re aware of it.

The most important thing to remember is that just after high school, we could be pretty impressionable, and may get swept up in the assignment-intensive style of doing things, losing some of our skills and approaches. So it is important to keep in mind that there are a wide variety of different ways of learning and approaching a subject. While joining a university, one must abide by and work with the rules of the system. But that doesn’t necessarily mean that one needs to embrace that system in every way. One can try to keep the best from all the different systems of learning one has seen and learn in a way that is most suited to one’s personality.

Is the money you receive sufficient to meet your expenses in the United States?

The University of Chicago offers a fairly generous stipend, which is considerably more than enough to meet day-to-day expenses. Though stipends vary widely across Universities, mathematics departments by and large offer stipends that can cover living expenses reasonably well, and probably allow you to save money.

For undergraduate studies, the situation is somewhat different. Students from India usually get tuition waivers, but scholarships that more than cover the expenses are relatively rare. I don’t know much about this, though.

How does the United States compare with Europe?

Europe and the United States differ in various respects, but again I don’t have much firsthand experience of studying in a European institution. One difference is language. A second difference is probably culture. The core American system is based on hard work, and the testing pattern is usually assignment-based. Students are actively involved with teaching and learning. In Europe, the system of scholarships, stipends and other things works differently.

If there was one thing you wish you’d known before beginning graduate studies in the University of Chicago, what would it be?

It would probably be that even though the system of evaluation and the structure and setup is different here compared to what I’m used to, the fundamental values remain the same. These are the fundamental values of being sincere, hardworking, creative, and cooperating with others. The fundamental principle is to be honest to oneself, to have faith in one’s abilities, and to have fun and find one’s equilibrium in a new climate.

I cannot really say I suffered here; I did pretty well in the first two quarters. But during the first quarter, I was somewhat stressed because of the expectations of the new system, with three assignments a week. It took me some time to come to terms with this and to find joy and fun in my day-to-day activities. I’d encourage everybody who goes to a new learning environment not to be thrown off by the superficial differences and to know that in the end, it is good fundamental values that triumph.

## March 28, 2008

### Glauberman conference

Filed under: Places and events — vipulnaik @ 11:44 pm

Today was the last day of a five-day long conference on group theory. This was the Glauberman conference, held right here at the Mathematics Department of the University of Chicago. The conference was in honor of Professor George Glauberman, a leading group theorist who’s proved results like the ZJ-theorem and the Z*-theorem. Prior to this conference, I’d heard of mathematical conferences and read books with proceedings of these conferences, but I didn’t have any experience of attending a conference. So I was very eager to attend this one.

“Conference” can have many meanings. The Glauberman conference was primarily a series of lectures by different mathematicians on different topics. In fact, most of the lectures were on very specific topics, they were short (about 30-35 minutes), and there wasn’t a unifying theme to the talks.

I didn’t follow too much of the content of the talks, primarily because of the fast pace and the large number of talks. But it was a great experience to meet people from across the world (group theorists from Europe and Japan had also come). I’d read books written by some of these people, and had also corresponded with some of them, so it was nice to see them in person (though the schedule was too hectic to interact more with them). I also got an idea of the notational conventions followed in group theory. I learned that the default convention in group theory is to make elements act on the right (especially when writing down cumbersome commutators and expressions to simplify) rather than on the left.

More importantly, I got to understand some of the important research themes in the subject. One research theme is around a collection of conjectures intended to better understand the relation between the representation theory of a huge group, and the representation theory of “local” subgroups (small subgroups that occur as normalizers of subgroups of prime power order). The first conjecture in this regard was by McKay.
McKay conjectured that the number of irreducible representations of $p'$-degree (degree coprime to $p$) of a group equals the number of irreducible representations of $p'$-degree of the normalizer of any $p$-Sylow subgroup: in symbols, $|\mathrm{Irr}_{p'}(G)| = |\mathrm{Irr}_{p'}(N_G(P))|$, where $P$ is a $p$-Sylow subgroup of $G$. Many modifications of this conjecture have been proposed by Alperin, Isaacs and others. In a similar but somewhat different vein, there’s the Glauberman correspondence, which gives an explicit bijection between the representations of a huge group and those of a smaller subgroup. This, too, has spawned a number of related thoughts. There were some talks at the Glauberman conference that focused on some of the applications and results these inspired. Professor Bhama Srinivasan, who gave a lecture about some correspondences involving linear groups, told us that the whole spectrum of conjectures had been summarized as “I AM DRUNK”, where the letters stood for the initials of the people who had come up with variations of the McKay conjecture. (I, A, M stand for Isaacs, Alperin and McKay; I forget all the other letters right now.)

Another important theme was the theory of “replacement”: replacing a subgroup satisfying certain weaker properties with a subgroup satisfying certain stronger properties. Thompson was the first person to come up with replacement theorems, and Professor Glauberman has published a number of recent results in that regard, making good use of the ideas behind the Lazard correspondence. One of the interesting results was mentioned by Professor Khukhro, who was inspired by Professor Glauberman’s replacement theorem to prove a very general result that works for all groups: a normal subgroup of finite index can be replaced by a characteristic subgroup of finite index satisfying the same multilinear commutator identity (so, for instance, a normal nilpotent subgroup of finite index can be replaced by a characteristic nilpotent subgroup of finite index).

Group theory has also recently been winning the attention of people in topology and category theory.
During the classification of finite simple groups, there were some “candidates” for finite simple groups that never materialized into actual groups. However, there was a lot of data in these cases to suggest that a group exists. Later, it was discovered that one could define an abstract notion, called a fusion system, and that every group gives rise to a fusion system, but there are fusion systems that don’t come from groups. Fusion systems are something like a piece of consistent data that could have come from a group, but on the other hand, may not. Some recent work has gone into finding out which fusion systems do not come from groups, and how one can judge whether a given fusion system arises from a group. The talks at the Glauberman conference weren’t directly on these basic concerns, but on some related research. This included talks by Chermak, Bob Oliver, and Radha Kessar.

There were also talks related to classifying and making sense of $p$-groups (groups whose order is a power of a prime). Classifying $p$-groups is a tricky proposition: it makes sense only if we decide what it means to “classify”. Professor Leedham-Green gave a talk on classifying $p$-groups by coclass.

All the talks were 30-35 minutes long. Some of them used the chalkboard, others used laptop-based presentations, and yet others used transparencies. In fact, the ones using transparencies used two projectors, with one projector used to show the “previous” transparency for reference. It was good fun.

### Weird questions, fundamental correspondences, and random computation

Filed under: Thinking and research — vipulnaik @ 10:16 pm

Around 3.5 years ago, I asked a question about extensible automorphisms. The question was motivated by this simple and weird consideration. I was looking at the proof that if H is a normal subgroup of G, then H is also normal in any intermediate subgroup K. This is really something obvious if you know the definitions, but I wasn’t satisfied. The core explanation, I felt, was the fact that any inner automorphism of the intermediate subgroup K extends to an inner automorphism of G.
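To see this concretely, one can brute-force a small case; the sketch below (a minimal illustration, not part of any formal proof) checks that the Klein four-group of double transpositions is normal in S4 and in every intermediate subgroup obtained by adjoining a single element, which together with S4 itself covers all the intermediate subgroups in this example:

```python
from itertools import permutations

def compose(p, q):
    # composition of permutations written as tuples: (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

def closure(gens):
    # subgroup generated by gens (product-closure suffices in a finite group)
    elems = set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return frozenset(elems)
        elems |= new

def is_normal(H, G):
    # H is normal in G iff it is stable under conjugation by every g in G
    return all(compose(compose(g, h), inverse(g)) in H
               for g in G for h in H)

S4 = set(permutations(range(4)))
# Klein four-group: identity and the three double transpositions
V4 = {(0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)}

assert is_normal(V4, S4)
# intermediate subgroups V4 <= K <= S4 obtained by adjoining one element
intermediates = {closure(V4 | {g}) for g in S4}
assert all(is_normal(V4, K) for K in intermediates)
```

Conjugation by each element of an intermediate subgroup K is just the restriction of conjugation in S4, which is exactly the extension-of-inner-automorphisms observation above.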

This led me to the question: what automorphisms of a group have the property that they can be extended to automorphisms of any bigger group? I strongly suspected that the only such automorphisms are the inner automorphisms, but didn’t have the tools to prove this. The more I thought about it, the deeper the question seemed. In fact, it had interpretations and implications that could be couched in the language of model theory, category theory, and universal algebra. It wasn’t a terribly important thing to prove, because its proof wouldn’t have important consequences, but it seemed, in a way, a fairly fundamental problem.

The interesting thing about the way I came up with this problem is that, in general, it’s pretty different from the way a lot of research is done. Research is usually done incrementally and collaboratively: based on new results, on attempted ideas, on programs, on correspondences that need to be established. For instance, some of the big research in group theory these days involves getting correspondences of various sorts between the representations of big groups and those of small, local subgroups (subgroups that arise as normalizers of p-subgroups). This is a big theme, and new results are typically generated by looking at old results and saying: okay, here’s a bit more in that direction. Similarly, proofs that attempt to get a better correspondence between Lie groups and Lie algebras again proceed incrementally.

Even though the majority of mathematical research is of this kind, I believe that there is a lot of potential for just simple, stunning, and stupid ideas that can be raised by looking at the simplest and dumbest of results. And the beauty is that a lot of this can happen even without an in-depth knowledge of examples and advanced machinery. Alas, the questions cannot usually be answered without advanced machinery, but they can be asked.

Another important point is that people seasoned in a field usually have fairly strong views on the correct way to develop intuition in that field. Much of this is guided by “example-oriented” thinking: to understand something, you need to look at, and work with, a lot of examples. Definitions on their own are worthless and misleading, we are often told. Working on the examples tells us what is really going on.

I strongly disagree with thinking of “examples” as a kind of oracle. Working with examples gives one kind of intuition: a very necessary and important intuition. But fiddling around with definitions gives another. Fiddling around using the ideas of logic gives yet another. Drawing pictures gives one intuition, pushing symbols gives another, and fiddling with words gives yet another. Definitions should not be degraded or relegated to second-order position, because the most interesting questions can often be asked just by staring at a definition.

The extensible automorphisms conjecture is an example of all these things. When I asked the question, I had practically zilch knowledge of representation theory, though most of the progress I’ve made on the problem (with inputs from many older, and more experienced, individuals) has been using ideas from linear representations and permutation representations. But at the time I asked the question, I wasn’t even motivated by a single example of a group. What motivated me was the sheer structure of simple proofs involving groups and normal subgroups.

So how did progress happen on the conjecture? The conjecture didn’t emerge from a solid understanding of groups, but efforts at solving it required a good understanding of groups. Based on ideas of Professor Ramanan, and based on long correspondence with Professor Isaacs and some email exchanges with Professor Alperin, I was able to come up with a proof that for finite groups, any such automorphism must send every element to a conjugate of itself. This used the fundamental theorems of representation theory, in a fairly elementary way, along with some simple arguments about semidirect products. Independently, I proved that for a finite group, it must also send every subgroup to a conjugate subgroup. This was achieved by looking at permutation representations.

This highlights yet another interesting principle. If a problem is conceived independently of a discipline, and yet can be solved (partly or completely) using fundamental results from the discipline, that proves the utility of the discipline. When Professor Ramanan first suggested fiddling with characters, I was stunned that something like linear representations could attack a basic group-theoretic problem. But after successfully implementing the idea in part, I was able to achieve a greater understanding and appreciation of the fundamental theorems of representation theory, from a perspective that didn’t require any knowledge of linear algebra.

Later, I learnt that linear representation theory also makes an unexpected appearance for proving results related to the hidden subgroup problem, and that it is the only way to prove some basic results in group theory including Burnside’s theorem stating that a group whose order has only two prime factors is solvable. In other words, representation theory comes up spontaneously for purely group-theoretic problems, and that happens for many reasons.

What I want to stress here is that for us to really understand how useful a discipline is, or what novel uses it can be put to, it’s important for people to keep asking questions that are apparently unrelated to anything at all, and then see what hammers need to be used to answer those questions.

The extensible automorphisms problem isn’t the only problem I’ve come up with. One, very closely related, problem is this. As mentioned earlier, if a subgroup is normal in the whole group, it’s also normal in any intermediate subgroup. But the same isn’t true for the notion of a characteristic subgroup. A subgroup that is characteristic in the whole group need not be characteristic in the intermediate subgroups. So the question: what can we say about subgroups that we know can be made characteristic by passing to a bigger group? Clearly, they’re normal (because characteristic implies normal, and normality is preserved on going to intermediate subgroups). But can we say something stronger? If H is normal in G, can we always find a K containing G such that H is characteristic in K?
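The gap between “normal” and “characteristic” shows up already in the smallest example: in the Klein four-group, every subgroup is normal (the group is Abelian), but an order-2 subgroup is not characteristic, since automorphisms permute the three involutions. A quick brute-force check, as a minimal sketch:

```python
from itertools import permutations

# the Klein four-group V = Z/2 x Z/2 under componentwise addition mod 2
V = [(0, 0), (0, 1), (1, 0), (1, 1)]

def add(a, b):
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

# automorphisms: bijections of V that preserve the group operation
autos = []
for img in permutations(V):
    f = dict(zip(V, img))
    if all(f[add(a, b)] == add(f[a], f[b]) for a in V for b in V):
        autos.append(f)

H = {(0, 0), (0, 1)}   # an order-2 subgroup; normal, since V is Abelian
moved = [f for f in autos if {f[h] for h in H} != H]
assert moved           # some automorphism moves H, so H is not characteristic
```

Here the automorphism group of V is GL(2, 2), of order 6, acting as all permutations of the three involutions, so no order-2 subgroup survives every automorphism.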

Once again, this seems a fairly hard problem, and one on which I’ve made hardly any progress. I don’t have too many ideas on where to start. I do know of some strong relations with the problem of extensible automorphisms, but nothing that proves anything conclusively. Again, this is the kind of problem that doesn’t yet fit into a grand scheme of the subject of group theory. It’s an isolated problem that’s probably hard but isn’t getting a lot of attention, because there’s no immediate payoff to solving it, either in terms of the machinery developed to solve it, or the consequences of its being true. But to me, it is important because it’ll help me understand exactly what the meanings of the words “normal” and “characteristic” are.

The problem with exploring both these questions, apart from the fact that they do not live in a broad scheme, is that exploring them basically requires looking at all the overgroups (or supergroups) of a given group. This is a challenging task, because there are infinitely many groups containing a given group, and there isn’t even a nice way to go about it. But this problem also presents an opportunity, because it allows us to invert our thinking about a group. We usually think of a group in terms of what lives inside it. But now we’re concentrating on a group in terms of what groups it lives inside. My gut feeling is that perhaps trying to solve these problems will lead us to new tools to understand how to tackle problems that “quantify over all overgroups”. With such tools at hand, people might be able to formulate and solve a lot more problems in group theory that currently seem beyond the possibility of stating.

To this end, let me mention a third cluster of ideas, one that is probably more achievable, and which I have been helping out with some experimentation using GAP, a computational package for group theory. Again, it stems from looking back at a simple proof, though this time, not an obvious or easy one. This is the proof that, for a finite group, the Frattini subgroup is nilpotent. I looked at the proof, and then said: okay, what’s going on here? Can we replace Sylow by something weaker? I did exactly that: replaced it by the condition of being an automorph-conjugate subgroup (a subgroup whose image under every automorphism is a conjugate subgroup), an idea that I’d been playing with for some time. And it turned out that the proof actually showed that Frattini subgroups of finite groups (and of a more general class of groups) satisfy a property in terms of these subgroups: any automorph-conjugate subgroup is characteristic.
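For tiny groups, the Frattini subgroup (the intersection of all maximal subgroups) is easy to compute by brute force. Here is a minimal Python sketch for the dihedral group of order 8, a small stand-in for the kind of thing one would normally do in GAP:

```python
from itertools import combinations

def compose(p, q):
    # composition of permutations written as tuples: (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def closure(gens):
    # subgroup generated by gens (product-closure suffices in a finite group)
    elems = set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return frozenset(elems)
        elems |= new

r, s = (1, 2, 3, 0), (0, 3, 2, 1)   # rotation and reflection of a square
D4 = closure({r, s})                # dihedral group of order 8

# all subgroups (every subgroup of D4 is generated by at most two elements)
subs = {closure(set(g)) for k in (1, 2) for g in combinations(sorted(D4), k)}
proper = [H for H in subs if H != D4]
maximal = [H for H in proper if not any(H < K for K in proper)]

# the Frattini subgroup: intersection of all maximal subgroups
frattini = frozenset.intersection(*maximal)
print(sorted(frattini))   # the center: [(0, 1, 2, 3), (2, 3, 0, 1)]
```

The three maximal subgroups (all of index 2) intersect in the center, which is Abelian and hence nilpotent, consistent with the theorem about Frattini subgroups quoted above.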

This is again an overgroup search problem. If I give you a group G, and ask you: does G occur as a Frattini subgroup? Overgroup search would suggest that you need to look at all possible groups containing G. But that’d only be helpful if the answer were actually yes. What if the answer were no? In that case, we need to say something like if G were a Frattini subgroup, it would satisfy some property, which in fact it doesn’t. So, the fact that Frattini subgroups are nilpotent, tells us that we can reject G right away if it isn’t nilpotent. What I’d obtained was a more sophisticated condition that could even reject some nilpotent groups: a condition that was purely in terms of the subgroups, which is essentially a finite condition. I called groups satisfying this condition ACIC-groups.
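The ACIC condition itself can be tested by brute force on a small group. The sketch below checks it for the dihedral group of order 8 (an illustrative computation, not one of the GAP experiments mentioned above); here a subgroup is automorph-conjugate if its image under every automorphism is conjugate to it, and characteristic if every automorphism maps it onto itself:

```python
from itertools import combinations, permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

def closure(gens):
    # subgroup generated by gens (product-closure suffices in a finite group)
    elems = set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return frozenset(elems)
        elems |= new

r, s = (1, 2, 3, 0), (0, 3, 2, 1)   # rotation and reflection generating D4
D4 = sorted(closure({r, s}))

# every subgroup of D4 is generated by at most two elements
subgroups = {closure(set(g)) for k in (1, 2) for g in combinations(D4, k)}

# brute-force Aut(D4): bijections of D4 that preserve the product
autos = []
for img in permutations(D4):
    f = dict(zip(D4, img))
    if all(f[compose(a, b)] == compose(f[a], f[b]) for a in D4 for b in D4):
        autos.append(f)

def conj(g, H):
    return frozenset(compose(compose(g, h), inverse(g)) for h in H)

def image(f, H):
    return frozenset(f[h] for h in H)

def is_autoconj(H):   # automorph-conjugate: every automorphic image is conjugate
    return all(any(image(f, H) == conj(g, H) for g in D4) for f in autos)

def is_char(H):       # characteristic: every automorphism maps H onto itself
    return all(image(f, H) == H for f in autos)

# the ACIC condition: every automorph-conjugate subgroup is characteristic
print(all(is_char(H) for H in subgroups if is_autoconj(H)))
```

For this group, the automorph-conjugate subgroups turn out to be the trivial subgroup, the center, the cyclic subgroup of order 4, and the whole group, each of which is characteristic, so the ACIC test passes; the subgroups generated by reflections are moved to non-conjugate subgroups by outer automorphisms, so they never enter the test.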

Next, I started asking some questions: what are ACIC-groups? For finite groups, they live somewhere between Abelian and nilpotent. Any Abelian group is ACIC, any ACIC group is nilpotent. But where exactly do they live? Is a subgroup of an ACIC-group an ACIC-group? Apparently, no. Even a normal subgroup of an ACIC-group need not be ACIC. But a characteristic subgroup of an ACIC-group is ACIC: again, a purely formal argument that requires practically zero knowledge about the structure of groups.

What about quotients? A quotient of an ACIC-group need not be ACIC. But a quotient by a characteristic subgroup is ACIC. This suggests interesting things: starting with ACIC-groups, as long as we restrict ourselves to subgroup-defining functions and their respective quotients (like the center, commutator subgroup, etc.) we’ll remain in the ACIC-world. But as soon as we take an arbitrary subgroup, not “canonically” defined, then we could exit the world. So the ACIC-world enjoys a somewhat different kind of closure properties from the typical world.

Finally, what about direct factors? A direct product of ACIC-groups need not be ACIC, and direct factors of an ACIC-group need not be ACIC. However, if the groups have relatively prime orders, then both conclusions hold. That’s because the subgroups, as well as automorphisms, can be analyzed component-wise.

Finally, is the ACIC condition tight with respect to being realizable as a Frattini subgroup? The jury’s still out on that, but my strong suspicion, based on preliminary analysis in GAP, is that the answer is: far from it. My guess is that most ACIC-groups do not occur as Frattini subgroups, but I don’t have a stronger condition that would narrow the gap.

The ACIC-problem is interesting as it combines the “overgroup” search that I was alluding to in earlier problems, with things that can be tested more tangibly. What it lacks is something to make it important enough for people to work on.

To summarize, I’ve mentioned three problems that I’ve come up with. All of them are noteworthy in that they arise from looking at the structure of simple proofs and manipulating a few assumptions; they don’t use any of the deeper intuition about finite groups. They are also noteworthy in the sense that, as of now, they don’t hold much promise for group theory: solving any of them will not change the group theory world. And they’re noteworthy because the tools needed to settle them are likely to be completely different from the tools that were used to raise the questions. But they all involve a common challenge: “overgroup search”. Thus, solving these problems, or developing approaches to similar problems, might allow people to start looking more aggressively “over” a group, at bigger groups, rather than following the current trend of focusing on the structure of subgroups and automorphisms.

At the core: diversity matters. Diverse ways of coming up with problems. People looking at a subject from a purely formal angle. People looking at specific examples that they love. People looking at group theory as a special kind of universal algebra. People looking at group theory as a beautiful particular language in model theory. We need that diversity, and we need to know that the most interesting questions can have answers from diametrically opposite fields. Much of research is, and should remain, incremental and in the fashionable directions. But we also need that stream of new and freaky questions, coming from totally new perspectives, to keep testing the relevance and sturdiness of the stuff that’s already around.

## March 21, 2008

### The mind’s eye

Recently, when talking to an Olympiad aspirant, Ashwath Rabindranath, about how to prepare effectively for the International Mathematical Olympiad, I came up with a formulation that I realized I’d implicitly been using for some time. After I discussed it with him, he said that he’d been trying it a lot and that it had been fairly helpful to him.

The concept is called the Mind’s Eye.

The idea is simple: everything should be in the mind’s eye. In mathematics, it is not enough to know that something can be proved. Truth is there only when you know how it has been proved. But even knowing, in the abstract, how it can be proved isn’t enough. To really feel that a proof is correct, one should be able to behold it in the mind’s eye. Thus, if I tell you that every nilpotent group is solvable, you shouldn’t be satisfied with knowing that there’s some proof in a dusty book somewhere. You should see why the statement is true, and you should see it immediately, in your mind’s eye. By that I mean you should behold the proof conceptually, or pictorially, in a way that lets you magnify any component of the proof at will. You should be able to tell me what the related facts are, what applications and lemmas are used, and what the possible generalizations could be.
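For that particular example, the proof one might hold in the mind’s eye is short; a sketch in standard notation (my summary, not from the original post): the derived series descends at least as fast as the lower central series.

```latex
% Lower central series: \gamma_1(G) = G,\ \gamma_{i+1}(G) = [\gamma_i(G), G].
% Derived series: G^{(0)} = G,\ G^{(i+1)} = [G^{(i)}, G^{(i)}].
% By induction, G^{(i)} \le \gamma_{i+1}(G), since
\[
G^{(i+1)} = \bigl[G^{(i)}, G^{(i)}\bigr]
          \le \bigl[\gamma_{i+1}(G), G\bigr] = \gamma_{i+2}(G).
\]
% So if G is nilpotent of class c, then \gamma_{c+1}(G) = 1,
% hence G^{(c)} = 1, i.e., G is solvable.
```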

The mind’s eye is particularly important for Olympiad preparation because of the format of Olympiads: students are expected to solve a few challenging problems in a short time-frame, and they cannot refer to any existing texts. A lot of Olympiad students waste precious in-exam time going down wrong alleys. If the student has in her/his mind’s eye all the possible things that could be done with the problem, what the consequences of each path would be, and what the likelihood of success on each path would be, then the time and effort spent online (during the exam) reduce proportionately.

But the importance of the mind’s eye is not limited to closed-book or time-crunched examinations. It extends to the more general scenario of learning and teaching. I can look up in a book a proof that not every normal subgroup is characteristic, but having the counterexample in my mind’s eye means that I can explore variations more easily, go forward, generalize. Books and online references are useful to supplement the mind’s eye in storing information — they cannot supplant the mind’s eye. The greatest research, insights and breakthroughs come from immersing oneself in a problem, which means one can see it in the mind’s eye.
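That counterexample happens to fit in a few lines; a small sketch (the Klein four-group example is standard, the code is my illustration): in V = Z/2 x Z/2 every subgroup is normal, since V is Abelian, yet the coordinate-swap automorphism moves the subgroup H = {(0,0), (1,0)}, so H is not characteristic.

```python
# Klein four-group V = Z/2 x Z/2 under componentwise addition mod 2.
V = [(a, b) for a in (0, 1) for b in (0, 1)]
H = {(0, 0), (1, 0)}   # subgroup of order 2; normal, since V is Abelian

def swap(x):
    """Coordinate swap (a, b) -> (b, a): an automorphism of V."""
    return (x[1], x[0])

# Sanity check that swap is a homomorphism: swap(x + y) = swap(x) + swap(y).
add = lambda x, y: ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)
assert all(swap(add(x, y)) == add(swap(x), swap(y)) for x in V for y in V)

image = {swap(x) for x in H}
print(image == H)  # False: swap moves H, so H is not characteristic
```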

The idea is so breathtakingly simple that it amazes me that people do not use it more often. The mind’s eye can begin right in high school, in fact, when students are studying physics, chemistry, mathematics, history, economics, geography or just about anything. Those more tuned to sound could use the mind’s ear, and those more tuned to touch, the mind’s touch. And it can begin simply. You look at a long-winded text or explanation. It’s too big for the mind’s eye. You look at it again. You break it down, you think about it. You mull over it. You sleep over it, and your subconscious reorganizes the ideas, and the next day, it fits into the mind’s eye. Now, anybody can ask you about that idea and you can explain it offhand. More importantly, though, you can see (or hear or feel) it.

The mind’s eye does well when supplemented by other resources that specifically prod it on. One technique I’ve been exploring is an idea for math-related wikis, which I’ve started implementing. The first mathematics wiki I started is a wiki in group theory. The “one thing, one page” paradigm on this wiki, as well as the diverse ways in which pages are linked together using different relational paradigms, supplements the mind’s eye pretty well. I’ve often got new insights simply by surfing the wiki — and that’s saying something, considering I’ve written almost all of it. The wiki has pages on things that might get short shrift in a textbook; for instance, normality is not transitive.
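That last fact also rests on a counterexample one can check mechanically; a sketch (the dihedral example is standard, the code is my illustration): in D4, the symmetries of a square, {e, s} is normal in the Klein subgroup {e, r², s, r²s}, which is normal in D4, yet {e, s} itself is not normal in D4.

```python
# Standard counterexample to transitivity of normality, in code: in the
# dihedral group D4 on corners {0,1,2,3}, take K = {e, s} inside the
# Klein subgroup H = {e, r^2, s, r^2*s}. K is normal in H (H is Abelian)
# and H is normal in D4 (index 2), yet K is not normal in D4.

def compose(p, q):
    """(p o q)(i) = p(q(i)); permutations as tuples of images."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

e = (0, 1, 2, 3)
r = (1, 2, 3, 0)   # quarter-turn rotation
s = (0, 3, 2, 1)   # reflection fixing corners 0 and 2
K = {e, s}

# Conjugating s by r lands outside K, so K is not normal in D4.
conj = compose(compose(r, s), inverse(r))
print(conj in K)  # False
```

The conjugate turns out to be r²s, which lies in H but not in K; that is exactly the failure of transitivity.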

This isn’t the only way to supplement the mind’s eye. I came up with some ideas long ago about the use of a method of properties to organize information. That didn’t take off too well, though some of its features have been incorporated pretty effectively in the group theory wiki. Then, of course, we can learn from the way advertisers work: they tie in the core idea using a number of different paradigms. In his series on the Palace of Possibilities, Gary Craig talks of two tools to reinforce concepts: repetition and emotion. Instructional design texts emphasize the importance of reiterating the same basic point from a number of different perspectives, appealing to the audio, visual and kinesthetic senses of the students over and above their cognitive abilities.

It is important to distinguish between the mind’s eye and rote memorization. In fact, rote is a very special case of the mind’s eye: the case where you just memorize the text as text, in one specific form. The mind’s eye, in its more general form, encourages a complete, immediate grasp of the material, not from a specific angle but from a large number of angles. The mind’s eye works best by building redundancy: by having not just an eye, but several eyes, several ears, and several hands to touch and feel.

Another important point is that the specific methods one uses to build the mind’s eye can vary widely, which is why I’m not listing here how to do it. The core idea is to increase the number of ways ideas are linked together in the mind, and this can be done through random association attempts, by using systematic paradigms, or just by exposing oneself to a lot of material and letting the subconscious do the organizing. The key is to get the mind’s eye in action.

### The dissemination of science

This is a somewhat unusual post for this blog. I typically use the What is Research? blog to describe issues related to my day-to-day study and to-be research life, and my own experiments. In this post, I’m going to talk about something broader that has, of late, been concerning me.

Recently, I’ve been reading The Future of Ideas and Code, Version 2, two fantastic books by Lawrence Lessig. Lawrence Lessig is the man who gave birth to the Creative Commons movement. The core idea of Creative Commons is simple: authors and creators of original work get to specify exactly how they’re okay with their work being reused. For instance, an “attribution-share alike” license means that the author allows others to reuse the work and create derivatives, as long as all derivatives attribute the original work and carry the same or a compatible license. There are no-derivative licenses (which forbid the creation of derivative works) and noncommercial licenses (which forbid commercial use).

Lessig was motivated to start Creative Commons roughly by concerns he had about big corporations pushing the government to extend the term of copyright. Copyright law in the United States currently gives the author’s descendants the copyright on the author’s work for 70 years after the author’s death. Just a few years ago, this number was 50 years: the increase to 70 was one of the things that raised Lessig’s eyebrows. Lessig points out that extending the term of copyright beyond 50 years after an author’s death is hardly an incentive to create new works, and plays more the role of protecting old works against new challenges.

Lessig isn’t the first of his kind. Richard Stallman probably takes a more extreme stand on the issue: he wants all software to be licensed under the GNU General Public License (GPL), which forces software to reveal its source code and requires that the source code always be free to modify and tinker with. Stallman has been responsible for the development of a lot of excellent software, including the text editor Emacs, and he certainly knows the merits of free software, as he calls it.

What’s interesting is that Lessig and Stallman both come from universities. Like Donald Knuth, the man who created TeX, they don’t come from a profit-maximizing corporate perspective. They come with a clear aim and work towards it. In a sense, they’re representative of the best traditions in American universities: some of the most radical ideas stem from universities. Google, currently the world’s biggest search company, also grew out of a student project at a university. Universities give rise to the best ideas, perhaps precisely because they’re not pressured by or responsible to existing corporations that are entrenched in old ways of doing things.

Yet, it is ironic that academics largely remains unaffected by improvements in science and technology, by the new methods of communication and interaction, and by the new paradigms developed by academics themselves. In his book The Future of Ideas, Lessig talks about how old corporations are entrenched in old ways of doing things, and hence are more resistant to change than people who have nothing to lose. I see great evidence of this in academics, as I’m going to explain here.

Academics has arguably evolved some of the best traditions for peer-to-peer sharing, scientific publishing, and knowledge dissemination. Indeed, by making publication necessary to get credit and move higher on the tenure track, it forces scientists to publish, rather than hoard, their findings. Universities in the United States also have an excellent tradition of sincere teaching, where professors are involved not only with teaching students, but also with setting challenging examinations, giving regular and challenging assignments, and maintaining office hours for students to contact them and discuss specific issues. Academics at the good American university combines professional high-quality service with a culture and ethic of sharing, mixing, and reusing.

Yet, academics hasn’t scaled, or benefited from economies of scale and large-scale participation. The first and most obvious reason for this is that there aren’t that many academicians. What we gain in quality, we lose in quantity. For instance, there’s a significant difference between the scope and nature of Wikipedia articles on Harry Potter topics and that of Wikipedia articles in mathematics (and the Wikipedia articles in other sciences are probably in a similar situation). It’s very easy to find loads of online discussion on World of Warcraft or Star Wars, but hard to find quality discussions in group theory (a part of mathematics).

As I mentioned above, part of the reason is that academics doesn’t have that many people. Google Scholar notwithstanding, there isn’t much scope, either in terms of commerce or in terms of numbers, for building the kind of communities around academic topics that exist around a lot of trivia. Entry barriers are high. Also, the general tendency in academics to be careful, to have your facts right before coming to the table, means that there is less quick and spontaneous participation.

But I think the deeper problem lies with the fact that people in academia do not see the reason to challenge the way things have always been done. True, we now have email, online journal access, and a host of other facilities made possible by modern technology. Lecturers put up freely available lecture notes online. Yet, the language of thinking, at least in mathematics, hasn’t come online. We haven’t exploited the tremendous opportunities that cyberspace can offer us.

Mathematics Doctoral Programs, Then and Now, a Notices Letter from the Editor, describes some of these. Even today, the standard way of teaching is for the lecturer to stand in front of the board and write stuff on it from carefully prepared notes, as students struggle to copy, take notes, and ask questions. True, students now have Google and Wikipedia to help in solving assignments, in addition to the large number of books available for the purpose. But the fundamental methodology of looking at and solving problems hasn’t changed. Most alarmingly, mathematicians haven’t come around and said: “Wikipedia provides information, and it’s good; but we could use the same technology to provide much better, more reliable, easy-to-locate information. Let’s do it.” My impression is that a lot of precious class time, and a lot of the effort of researchers, is wasted simply in resolving trivial questions and doubts of students that should have readily available answers online.

What are the reasons for this? One, of course, is that a mathematician’s job (and probably any academic’s job) is a full-time one. Knowledge dissemination isn’t, in the main, a part of the job, so mathematicians have little reason to put in effort for it beyond what is needed to prepare their classes. What I think this misses, however, is that the one-time investments needed to disseminate knowledge on a wider scale have long-lasting repercussions, because they improve the intelligence of the audience at very little additional effort or cost to the mathematician. When you write a book in mathematics, people can read the book while you’re sleeping, and gain from the knowledge. This doesn’t make you useless to them; it means instead that they start off interacting with you from a higher plane. Similarly, if we have more mechanisms for putting academic information and ideas in cyberspace, more people can access and learn from those ideas while we sleep. That means more people get into the subject, and more people ask us questions that require actual thought, rather than questioning us about trivialities.

What I fear is that we underestimate the importance of having reliable, quality information available in a form that a lot of people with different needs can use. True, we have public seminars and colloquia, and a lot of good work has been done, especially by MIT OpenCourseWare, but this, again, remains more the exception than the rule.

I’m also aware that a lot of work has been going on, recently, with the so-called Semantic Web, particularly in the biological sciences. Some good projects have been taken on by the Science Commons, the science branch of Creative Commons. Yet, I see something missing, and strange, in these endeavors. They declare standards and protocols for scientists to follow, suggesting ways to integrate large existing amounts of data. Not surprisingly, the main push for these initiatives is biology, and specifically genomics, where commercial interests are also strong. However, what we do not see that much is entrepreneurial bottom-up spirit.

What we do not see is individual scientists, educators, researchers, graduate students, undergraduate students, and high school students exploring a lot of new ways to make knowledge reach more people. The typical explanation, it seems, is lack of demand. I can say from my personal experience, with various small-scale initiatives I have tried and continue to try, that demand for bottom-up initiatives is low. Start a discussion on Harry Potter, or on some problem with Mozilla Thunderbird, and you’re likely to get a reasonable number of responses. Start a discussion on some obscure area of mathematics, and responses will be slow.

But I don’t think that the initial low demand is a reason to be greatly worried. It does mean that we need to be careful when porting methods and ideas from the worlds of commerce and fandom to academics, and that we need to blend and modify them to go well with the best traditions of academics. But potentially, there is a lot of demand for a good knowledge dissemination tool for academics. I think Wikipedia proves that point: sloppy source though it is, Wikipedia is used by a large number of students. And if you actually think of it, not that many person-hours have gone into the mathematics part of Wikipedia. If we could leverage the same ideas in other endeavors, it’d be great.

(Wikipedia has had challengers in the past, most of them poorly architected; a new and growing one is Citizendium. From what I’ve seen, Citizendium is based on sound principles and is likely to soon be able to offer value that is endemically missing from Wikipedia. But our thirst for knowledge, information and ideas is too large to be quenched by either Wikipedia or Citizendium.)

This is what motivates me, in part, to work on the group theory wiki, topology wiki, commutative algebra wiki, and some other wikis that I am gradually developing. They’re based on making basic mathematical knowledge available in a very structured and easily navigable way, suggesting new insights and ideas. These aren’t the only endeavors I’m experimenting with; there are others nascent in my mind that I’ll blog about when I’ve got enough to say on them. And I’m not sure if these endeavors, specifically, will catch on with the masses within mathematics; they’re not likely to get an exponentially increasing audience in the near future. My hope, rather, is that with a lot of people trying a lot of new things, we’ll be able to understand what dissemination tools work, and how.

## March 8, 2008

### Tryst with functional analysis

It’s the end of the ninth week of the eleven-week winter quarter, and the next two weeks are probably going to be fairly hectic: we have examinations or final homeworks to submit in all subjects, and I’m guessing that from tomorrow onwards, work on these will begin full-force. So I’m taking a little time off right now to describe my tryst with functional analysis so far.

During the first week or two of the functional analysis course taught by Professor Ryzhik, I was enjoying the material, more or less keeping pace with it, and also reading ahead on some topics that I thought he might cover. However, from around the third week onwards, the nature of the topics covered in the course changed somewhat, and I started getting out of sync with the material. Then came an assignment with problems that I had no idea how to solve. Eventually, solutions to these problems were found in an expository paper by David (one of my batchmates), and the first-years worked out the details of the solutions on the chalkboard.

At the time, I was feeling tired, so I didn’t try to keep pace with and understand all the details of these solutions. I wrote down a reasonable bit of them to muster a decent score on the assignment but I didn’t internalize the problem statements (I did have some ideas about the problems but not from the angle that Prof. Ryzhik was targeting).

So, in the next week’s problem set, I wasn’t able to solve any of the problems. This wasn’t because the problems were individually hard (though some of them were) but because even the easy problems needed a kind of tuning in that I hadn’t done. I learned of the solutions from others and understood enough of them to submit my assignment, but they hadn’t sunk in. At the same time, I was handling a number of other things and I didn’t have a clear idea of how to proceed with studying analysis.

Some time during this uncomfortable period with the subject, I remembered that the previous quarter, I had overcome my discomfort with noncommutative algebra by writing Flavour of noncommutative algebra part 1 and Flavour of noncommutative algebra part 2. Noncommutative algebra differed from functional analysis: in the former, I was reasonably good at solving individual problems but just hadn’t had the time to look back and get the bigger picture. In functional analysis, I didn’t start off with a good problem-solving ability or an understanding of the bigger picture.

Nonetheless, I knew that trying to prepare a write-up on the subject was probably the best way of utilizing my energies, and probably a way that would also be useful to other students, which could partly be a way of contributing back, considering that I hadn’t solved any of the recent assignment problems. Moreover, it was something I knew I’d enjoy doing and hoped to learn a lot from. So I got started. The first attempt at preparing notes was just around the corner from the mid-term. I got a lot of help from Rita Jimenez Rolland (one of my batchmates), who explained various parts of the course to me as I typed them in. (Here’s the write-up).

However, after the examination (where I didn’t do too well — notes are more useful if not prepared at the last minute), and as I learned more and more of the subject, I felt it would be good to restart the notes-making process. I brainstormed about what kind of write-up would be most useful. Instead of just trying to cover whatever had been done in the course, I tried to look at the problems from a more basic angle: what are the fundamental objects here? What are the things we’re fundamentally interested in? I also brainstormed with Mike Miller, who provided some more useful suggestions, and I got started with the write-up.

Preparing the analysis write-up hasn’t been plain sailing. The problem isn’t so much lack of time, as it is lack of richness of engagement. When I’m working on my group theory wiki or writing this blog entry, or doing something where I have a very rich and vivid idea of what’s going on, every part of my mind is engaged. There isn’t scope for distraction or going lax, because I’m engaging myself completely. However, when writing functional analysis notes, I faced the problem of my own ignorance and lack of depth and ideas in the subject. So, when I got stuck at something, I didn’t have enough alternate routes to keep myself engaged with the subject. The result? I kept distracting myself by checking email, catching up with other stuff, and what-not.

The contrast was most striking some time about a week ago. After one hour of interrupted and not-very-focussed work on the functional analysis notes, I was getting somewhat frustrated. On a whim, I decided to switch to working on the group theory wiki. I did that, and was surprised to observe that for the next hour, I didn’t check my email even once.

The complete concentration on the subject isn’t merely explained by the fact that I like group theory more, or am better at it. It is more the fact that I can see a bigger picture. Even if I’m concentrating on a couple of trees in the forest of group theory, I can see a much larger part of the forest. But when working on a couple of trees in functional analysis, all I can see is those and a bunch of other trees. So distractions find their way more easily.

I consider this illustrative because we often think of concentration as a tool of willpower. True, the exertion of willpower is necessary to concentrate at times (e.g. to pull myself back from the group theory wiki to functional analysis). But more fundamentally, I think it’s the intrinsic ability to see something as very rich and beautiful, and to keep oneself completely engaged, that matters. Do determination and hard work play a role? Yes, they do, but they do so because they help build that internal richness. Which explains why I love writing so much: in a number of areas, writing allows me the most room to explore the inner richness. And I think this is a factor in explaining why, although many different people work hard, not so many find the work enjoyable at the end of their hard work. That’s because most of us use only a very small part of the tremendous hard work that we put in to create an internal richness that can engage us better.

What about functional analysis and me? Do I see the richness in functional analysis yet? Not to the level that’d help me cope very effectively with the course, but yes, I do feel a lot better about the subject. And I think the new notes on function spaces, even though they may seem amateurish right now, do indicate some of the insight and richness that I have gathered over the past few weeks. Let’s hope I can augment these notes in the coming days to a level that really gets me prepared for the examination!
