What Is Research?

October 31, 2009

Math websites falling into disuse?

Filed under: Uncategorized — vipulnaik @ 3:06 pm

Since I recently blogged about the Math Overflow website, I’ve been wondering what happened to various other math websites that once looked promising, and how they’re faring. Some of them seem to be going strong, but none seems to be exploding in popularity.

Tricki

I blogged twice about Tricki, the Tricks Wiki, which went live in April 2009 (see the announcement by Tim Gowers). Tricki held a lot of promise. Of late, the enthusiasm seems to have slowed down, though this might be a temporary phenomenon. The most recently created article and the most recent comments appear to be two weeks old as of today (October 27, 2009). According to Alexa data, the site has a rank of 1,200,000+ worldwide and about 550,000-600,000 in the United States. For comparison, subwiki.org, which I run, has Alexa data showing a site rank of 500,000-550,000 in the world and 150,000-200,000 in the United States, while Math Overflow has Alexa data showing a rank of 350,000 worldwide and about 60,000 in the United States (the numbers you see clicking on the links may be different if you don’t view this post within a few hours of my writing it).

Tricki also hasn’t been mentioned on Gowers’ blog since June 25, 2009 and on Terence Tao’s blog since August 2009.

Is the Tricki falling into disuse? Clearly, the initial spate of interest has subsided, but it might well regain a slower and steadier momentum over time.

PlanetMath

I remember a time when Wikipedia had much less mathematical content than PlanetMath, which was one of the first places to look for mathematics on the Internet. PlanetMath appears to be going strong, though not as strong as before. While their message forum seems reasonably active, their latest addition was about a week ago, and they seem to be getting somewhere between 0 and 2 new articles a day, and around the same number of revisions a day. Not exactly dead, but not bubbling with life. Their Alexa data indicates fairly steady performance, with a traffic rank of around 130,000 over the last six months, but a decline over a longer timeframe: setting the drop-down parameter to “max” below the chart shows that their traffic rank and daily pageviews have been falling over the longer run. Why? A decline in quality? Probably not; it’s more likely that people are increasingly using Wikipedia.


October 27, 2009

Are textbooks getting too expensive?

Filed under: Uncategorized — vipulnaik @ 11:26 pm

I recently came across a post by John Baez on the n-category cafe titled Cheaper Online Textbooks?. Baez’s post has a number of interesting links: a piece on “Affordable Higher Education” by CALPIRG, a piece by Capital Campus News on the legislation based on this report, an article in the Christian Science Monitor on the rising cost of textbooks, and a blog post in the Chronicle of Higher Education on an e-textbook program. Reading all these pieces, I began to wonder: are textbooks getting too expensive? And should anything be “done” about it?

Are textbook prices soaring?

So I decided to look at the range of calculus books. The general impression from the things I read was that it would be hard to get a decent textbook for under $100. So I typed calculus into Amazon and looked at the first page of search results. Among these search results were Calculus for Dummies ($12.99), Forgotten Calculus ($11.53), Calculus Made Easy ($26.95), Schaum’s Outline of Calculus ($12.89), and The Complete Idiot’s Guide to Calculus ($12.89). Most of these books would seem reasonable for a low-level introductory semester or two quarters of calculus. Admittedly, they may not be suitable for all calculus courses, but if price really is a primary consideration, it isn’t as if there are no options. There is also a wikibook on Calculus and an old public domain book on calculus. If you want somewhat more advanced stuff for free, you can try MIT’s OpenCourseWare course on single variable calculus, which includes video lectures, their course on calculus with applications, and their course on multivariable calculus.

Okay, so perhaps calculus is a bad example? Well, I decided to pick point set topology. The standard book for this is the second edition of Munkres’ book, which I think is one of the best, and it costs $107.73 on Amazon. But searching for topology on Amazon gives a number of other considerably cheaper books, such as Mendelson ($7.88), Gamelin and Greene ($10.17), Springer Undergraduate Math Series book by Crossley ($23.40), Schaum’s Outline of General Topology ($12.89), among many others. None of them seem as good as Munkres, but they all cover the basic material — and reasonably well, it seems.

Of course, I have picked on calculus and topology, both topics that are more than fifty years old, and where most of the material that should be included in an elementary textbook is widely known. In other words, the field for writing books is wide open. No publisher or author has significant scarcity power. When we are looking at exotic topics such as the theory of locally finite groups, then yes, you probably wouldn’t find cheap textbooks. But most undergraduate-level textbooks would likely be of the level of calculus or topology texts, and not exotic texts on locally finite groups.

Why do instructors choose expensive textbooks when cheaper alternatives exist?

Why do instructors choose $100+ calculus textbooks or Munkres’ topology textbook when there are so many cheaper books available on the market? One explanation, pointed out in a comment to Baez’s post, is the “moral hazard” explanation. This states that instructors do not need to bear the costs of buying the textbooks, so they just prescribe the “best” textbook based on their personal criteria rather than taking the price into consideration.

October 26, 2009

Math overflow

Filed under: Uncategorized — vipulnaik @ 10:26 pm

In recent times, the Math Overflow website has been getting a lot of “press”, which is to say, it has been mentioned in some highly prominent math blogs. It was reviewed in Secret Blogging Seminar by Scott Morrison, who is also involved with Math Overflow, and it was mentioned by quomodocumque, Timothy Gowers, Terence Tao, the n-category cafe and others.

Math Overflow is a website where people can ask math-related questions (the questions should be of interest to people at the level of Ph.D. student or higher), answer the questions, and rate the answers. It uses the Stack Exchange software, which is used for many other websites, such as Stack Overflow. Funding for the website is being provided by Ravi Vakil of Stanford University, and it has a bunch of moderators — but anybody who earns enough points through participation can rise to the status of moderator. For more information, see the Math Overflow FAQ.

Participation on the website has been increasing rapidly since the first post (September 28). Here’s the Alexa data, which seem to indicate that usage has been growing (Alexa is not very reliable for low-volume sites, since it uses a small sample of users and most Math Overflow users may not be using Alexa’s toolbar).

The software and site layout seem well-designed to encourage participation. The long-term performance seems unclear, since a lot depends on how effectively the site is able to allow users to fruitfully explore past questions and answers and discover things similar to what interests them. But, as of now, it has a bunch of interesting questions, and seems to have reached the ears of a lot of people who’re interested in asking good questions and giving good answers.

July 29, 2009

Information costs and open access

Filed under: Uncategorized — vipulnaik @ 4:59 pm

In recent years, there has been a growing trend towards “open access” among librarians and academics. For instance, the University of Michigan recently held an Open Access Week, where they describe open access as:

free, permanent, full-text, online access to peer-reviewed scientific and scholarly material.

In an earlier blog post, I discussed some issues related to open access. Here, I attempt to look at the matter more comprehensively.

Rationales for open access

There are many rationales for open access. The simplest rationale is that open access means reduced cost of access to information, which allows more people to use the same research. Since the marginal cost of making research available via the Internet to more people is near zero, it makes sense from the point of view of efficiency to price access by yet another person to the research at zero.

Another rationale is a more romantic one: making scientific and scholarly publishing openly available allows for a free flow of ideas and a grander “conversation”. Supporters of this rationale typically hold that open access should be more than just free (in the sense of zero-cost) access to materials; it should also include a license that permits liberal reuse of research materials in new contexts. Academia already has strong traditions of quoting from, linking to, and building upon past work, but this form of open access seeks to provide a legal framework that explicitly specifies reuse rights going beyond the traditional copyright framework of countries such as the United States. An example of such permissive licensing is the family of Creative Commons licenses.

As shorthand for these two rationales, I shall use cost rationale and conversation rationale.

Open access policies/mandates

One of the major problems the open access movement has faced so far is getting people to publish papers in open access journals. As long as the best papers continue to be published in closed-access journals, academics who want to read these journals will pressure their university libraries to subscribe to these journals, even when the journals overcharge. Thus, librarians are unable to push open-access terms on publishers.

November 15, 2008

Small fry or big fish?

Filed under: Uncategorized — vipulnaik @ 11:17 pm

How do we acquire the practice necessary to become perfect? This is a very general question, and I’m considering the question with regard to mathematical skill. Suppose my aim is to become a mathematics researcher. How do I acquire the practice necessary to do mathematical research?

In this blog post, I consider a specific trade-off: is it better to develop practice and intuition by tackling a large number of small, simple problems, or is it better to go after some of the big fish?

I’m personally in the “small fry” camp, and I’ll explain my reasons here.

Building a balanced repository of experience

As I mentioned in a previous post, the main advantage of experience is the presence of a large repository of knowledge that allows for more efficient pattern identification. I’ve been studying group theory for many years now, and thus, when confronted with a question in group theory, I am likely to either have seen the question before or at least have some meaningful closely related past experiences. Within a few years, by which time I should hopefully have explored more of the subject, I should be even better at tackling new questions in the subject.

A large repository of experience depends on knowing a lot of small facts here and there. These facts are connected in different ways. By tackling the small fry, either randomly or systematically, I am likely to cover many of these small facts. If I concentrate on the big fish, I may get to know very well all the small stuff that leads to that big fish, but many other things may have poor foundations.

Here’s an analogy. Suppose I want to explore the city of Chennai (Chennai is an Indian city, formerly known as Madras). One approach (the big fish approach) may be to identify a particularly difficult-to-locate spot in the city, and decide to reach that spot, with no help whatsoever. So I start walking around the streets of Chennai, going into some blind alleys and getting stuck at times, but I soon find my way and reach my destination. I go through a lot of parts of the city but my eyes are always seeking the destination point. Another approach would be to explore a new street each day. I might do this with an explicit ordering of the streets to explore, or I might do it in a pseudo-random way: each time I pick a new street that is slightly beyond the area I am currently familiar with. In the big fish approach, I might get to know the streets that lead to my destination very well, and I may also get to know very well the streets that misled me for a long time. In the small fry approach, I have a little knowledge of a much larger number of streets, but there is no overarching organizing framework to my knowledge, and no single goal.

The thrust of my argument is that the big fish approach leads to a less balanced and comprehensive repository of experience, as opposed to the small fry approach, leading to less preparedness for later research life. This is particularly important keeping in mind that most of us aren’t great at predicting what research problems we will work on a few years from now — so having a broader base makes more sense.

An argument for big fish: a more authentic research experience

There are at least a few ways in which the big fish approach seems more appealing. Because there are bigger fruits and bigger fish at the end, it can be more motivating and inspiring than simply doing a random collection of things on different days. I don’t disagree that big fish can be more exciting to fish for, and juicier and larger fruits can be more exciting to reach for. In some cases, the greater excitement of something bigger can make up for the lack of breadth that may result from chasing it too hard.

But it is a mistake to look down upon, or sneer at, the tackling of small problems that aren’t aligned towards a specific big goal. In a sense, tackling a host of small problems without an overarching agenda is harder and more challenging than going out after a clearly defined problem. This is analogous to the fact that it may be a greater indication of inner strength to wander aimlessly rather than stride briskly and purposefully. At the same time, tackling small problems can be more rewarding, because it reduces the extent of commitment to a particular big problem and increases the amount of serendipity.

My final argument is that it is more efficient and less risky to consider and tackle a large number of small problems, or even settle wrinkles in many little definitions, than to try to prove big things. Just as we’re taught to diversify monetary investments in order to get a better average rate of return and be less prone to extreme risks, diversifying the problems being worked on is a good strategy against ruin. Some might view this as a “thinking small” attitude, citing people such as Andrew Wiles and John Nash who tackled and successfully solved hard problems. But there are a lot of people who tackled hard problems and did not solve them — and when you start out, you don’t really have an idea which camp you’re in (if you’re really really sure you can get the big fish, reading this blog post isn’t going to change your mind).

How do small fry and big fish compare with the theory versus practice divide?

There is a dichotomy between the theory builders and problem solvers in mathematics (something I alluded to earlier). Theory-building, a la Grothendieck, involves building general theories, while problem solving tackles specific problems.

The dichotomy between small and big is, as far as I understand, largely independent of the dichotomy between theory-building and problem-solving. Both theory-building and problem-solving can be done in minor incremental steps as well as in major, directed steps. Andrew Wiles, for instance, wanted to solve a problem (the so-called Fermat’s last theorem) and spent years doing that; his intention wasn’t to build a theory. On the other hand, most problem-solvers are tackling separate isolated problems without the aim of making it to the national newspapers. Similarly, some theory-builders like Grothendieck seek to alter the foundations of geometry and mathematics. Others add in a few definitions here and there, introduce new symbol calculi or formalisms, and adapt past ideas to increase the strength of existing theories.

The difference between theory-building and problem-solving possibly lies with the inherent risks associated. With reasonable levels of rigor having entered mathematics, few published mathematical results have errors. Theory-builders, who are working incrementally based on what is known, are less likely to develop wrong theories, but run greater risks of being irrelevant. Problem-solvers, who are working on problems that others have identified as important, are more likely to do relevant work, but they are also more likely to not get anywhere or not succeed at all.

Can small fry lead to big fish, and vice versa?

Can a person chasing small fry end up netting the big fish? Can people chasing the big fish end up getting good at all the small stuff?

Paradoxically, it seems that the less efficient one is at chasing the big fish, the more one may learn about the small stuff. This follows from the I learn more when I do it wrong phenomenon, and is conditional on having a continued (and misplaced) sense of optimism about getting it right the next time. Chasing big fish, especially those totally out of reach, may therefore be an appealing strategy for learning more small stuff through self-deception.

Can a person chasing small stuff land a big fish? This is unlikely, and at any rate, a person chasing small stuff is unlikely to have the multiple insights needed to land the big fish. Nonetheless, the person may, without aiming to do so, develop some incremental insights that make the big fish look a little smaller for other people. Thus, even while a single individual who decides not to try for the big stuff foregoes the opportunity to hit it big, the mathematical community as a whole may not be adversely impacted in terms of the number of big problems it gets solved.

Big fish — later or earlier in life?

It would be folly for me to argue that people who spend many years tackling big problems are doing a disservice to mathematics by spending their time inefficiently. Tackling the big fish has positive externalities beyond the mathematical value it creates. First, it generates buzz about mathematics outside the mathematics community, and provides meat to popular math writers who can help entice more people to the subject. It is hard to entice kids into math by telling them that they can do a little more stuff every day and become cogs in the mathematical wheel. Big conjectures carry the romance of lottery jackpots.

Second, it makes the mathematical community bolder and braver and more confident of its abilities when a long-standing conjecture is resolved. Apart from the specific techniques developed to solve the conjecture, the idea that conjectures that have withstood assault for so long have yielded to perseverance and hard work speaks to that ideal we so often want to believe in and yet keep doubting: “There is nothing that fails to yield to intelligence, hard work, and sheer perseverance.”

Third, and perhaps most importantly, it saves other less talented people the agony of trying to prove the conjecture. With Wiles having settled Fermat’s last theorem, there are fewer people spending hours trying to settle it in the hope of winning fame.

Nonetheless, the question remains: when trying to build one’s research skills and abilities, is it a good idea to tackle relatively bigger fish? Here, I think the answer is no. Bigger fish may be incorporated as further inputs for random exploration, but a systematic attempt to go after a big fish is likely to lead nowhere.

June 14, 2008

Google, Wikipedia and blogosphere

Filed under: Uncategorized — vipulnaik @ 10:58 pm

(This blog post is a collection of links and random observations. No central point here.)

We’ve often been accused of being a generation with attention deficit, a generation spoiled by Google, Wikipedia, and the general ease of availability of information. Here are a few interesting articles to get started with this:

How the Internet is changing what we think we know: In this article, Larry Sanger, co-founder and initially the chief organizer of Wikipedia, says that “Information is easy, knowledge is difficult”. His argument is that as information becomes easier and easier to find, knowledge, with the attendant hard work and thought it entails, seems less and less lucrative. In an age where search engines answer our queries almost instantly, we may be all the less motivated to do the hard work needed to figure things out.

It’s important to note that Sanger isn’t an anti-Internet reactionary in any sense; Sanger has been working on Internet-based projects with varying degrees of success (including Wikipedia, and a new encyclopedia project called Citizendium). Nor does he paint a rosy picture of a past where neither information nor knowledge was easy to find fast. Sanger, however, urges people to take seriously the responsibility that comes with gathering knowledge, to develop critical faculties and thinking, and to apply these critical faculties to the consumption of online information.

Nicholas Carr’s essay “Is Google making us stupid?” is in a somewhat different vein. Here, Carr laments the fact that as people do more and more of their reading online, they lose the attention and concentration needed to read longer, more involved books and arguments. Carr frames his argument more as a possibility to be warned against, than as a certainty that has come to pass. Carr is not quite an anti-Internet reactionary either, though he might be considered somewhat closer in description to one. Needless to say, there have been many thoughtful and thoughtless critiques of this, including this one by net evangelist Kevin Kelly. (Have you already left the site in an effort to keep up with the links?)

What do mathematicians and other academics have to say about the easy information that Google and other tools offer? A common refrain among academics and librarians is that Google and Wikipedia are fine starting points, but one should always go ahead and read primary sources. In fact, Wikipedia itself has a number of pages on how to do “research” with Wikipedia, for instance, this one. For the most part, mathematicians seem to be ignoring the effects of Google and Wikipedia on the structure and nature of mathematical knowledge. In my graduate year at the University of Chicago, I’ve so far caught three mentions of Wikipedia by professors. One professor, in an assignment, warned us that a certain page on Wikipedia had a subtle error in a definition. Another professor, while writing a good reference for material he taught, winked at us saying we could anyway find it all on Wikipedia. In a third instance, a professor pointed out, during a talk, that a Wikipedia entry on a topic had a subtle but grave error.

Google, too, has received a number of side mentions. In one notable instance, Professor Alperin said that, out of curiosity as well as professional need, he once asked Google how to classify all cyclic subgroups of an Abelian group, and Google churned out a paper written in the 1930s that answered the question. Another professor pointed out that Google was a very effective calculator. On other occasions, professors who do not remember URLs or websites simply tell us to Google them.

These mentions notwithstanding, there does not seem to, in general, be any cognizance of a fundamental shift in knowledge acquisition being brought about by sources like Google and Wikipedia. However, there are some mathematicians who’re moving into the new web era, and providing short chunky stuff that can be served in web-sized spoons (i.e., that can fit the attention span of surfers). Notable in this regard are the large number of blogs and wikis started by mathematicians. For instance, there is the Noncommutative geometry blog, where some noncommutative geometers post quick information about conferences, seminars, and ideas in the subject. There’s the Dispersive Wiki, which is an attempt to put together some stuff on PDEs related to dispersion. And then there are the large number of mathematicians who’ve got into blogging, including Fields Medalists like Terence Tao and Richard Borcherds. Their blog posts range from “today, in class we did this” to “hey, I have an idea” to the more well-thought-out articles discussing pros and cons of something or how to go about doing something.

Terence Tao, a great proponent of letting the public at large get an idea of what goes on inside mathematics, has experimented with a number of ventures, ranging from a blog book (a book in blog form) to making a contribution to Scholarpedia, a site that aims to aggregate scholarly articles on a wiki. However, enthusiasm such as Tao’s is still largely unshared by the mathematical community.

The mathematical community has also made efforts to recognize the new challenges and opportunities provided by tools like Google Scholar. For instance, this AMS report talks of the problem of searching a vast database of content using Google Scholar, which has no way of responding to questions like “find an expository article on this topic suitable for a first-year graduate student”. Certain solutions and approaches have been suggested.

On the whole, however, it seems to me that the mathematical community (and the academic community at large) has not fully registered the implications of the changing dynamic of knowledge. That’s because mathematicians, like all other human beings, are trapped in things as they stand now, rather than things as they could be. This is probably best exemplified by the passive way in which mathematicians have accepted the growing role that Google and Wikipedia now play, without pausing to ask, “Okay, what’s going on?” Some have transformed this passive acceptance into jumping into the fray. In the biological sciences, where funding is plentiful, attempts to create impressive online databases and concept collections have received more attention; for instance, there’s Wiki Professional.

Abstract versus concrete

Filed under: Uncategorized — vipulnaik @ 10:01 pm

[Image: “Abstract versus Concrete”, contrasting concrete and abstract examples at different stages of schooling]

The notions of abstract and concrete change with time, and with one’s level of experience. As the picture above indicates, what seems to be abstract to people at one stage of their experience, is very concrete at another.

What does it really mean for one thing to be more “abstract” than another? As the examples above illustrate, the abstract thing deals with something more generic, more unknown, and more flexible. Let’s look at the examples shown here.

Kindergarten time:

Concrete: A picture of five people

Abstract: The notion of five

A picture of five people, after all, is just that: a picture of five people. But the number five carries with it a much greater richness of possible interpretation. Five could refer to five people, five boats, five senses, five birds. It could refer to the five fingers of the hand (including the thumb). It could refer to five as a quantitative measure (for instance, the volume of the jug is five times the volume of the cup). It could refer to five as an ordinal: I finished fifth in the horse race.

In middle school:

Concrete: 3 + 7 = 10

Abstract: 3 + x = x^3 - 17

In the middle school example, the difference between the concrete and the abstract is less pronounced. Here, the abstract represents not so much a leap in generality as a leap in ignorance. While the concrete equation has only known quantities figuring in it (3, 7, 10), the abstract equation involves an unknown quantity, that we’ve denoted by x. Abstraction (which, at this stage, is introduced with the word algebra) is the tool which allows us to talk of the unknown, without fearing it.
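The power of naming the unknown can be made concrete in code. Here is a minimal sketch of my own (not from any textbook discussed here): once the sample equation is written with an unknown x, we can hunt for x systematically, for instance by bisection. The function name and search interval are my choices for illustration.

```python
# Solve 3 + x = x^3 - 17, i.e. find a root of g(x) = x^3 - x - 20, by bisection.
# Hypothetical illustration; any root-finding method would make the same point.

def bisect_root(g, lo, hi, tol=1e-9):
    """Find a root of g in [lo, hi], assuming g(lo) and g(hi) have opposite signs."""
    assert g(lo) * g(hi) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid  # root lies in the left half
        else:
            lo = mid  # root lies in the right half
    return (lo + hi) / 2

x = bisect_root(lambda x: x**3 - x - 20, 0, 10)
print(x)  # the (unique real) solution of 3 + x = x^3 - 17
```

The point is that the abstraction lets us reason about x before we know its value; the algorithm only ever evaluates the equation at candidate values.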

In high school:

Concrete: \frac{\sin x}{x^2 + 1}

Abstract: f''(x) = f'(x)f(x)

This level of abstraction is akin to that in middle school. At the middle school stage, the idea of using variables for unknown, or arbitrary, numbers is already well established. But the idea of having functions as unknown quantities to be solved for is still new, and somewhat puzzling.
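The high-school-level leap, where the unknown is a whole function, can also be illustrated numerically. This is a sketch of my own, with made-up initial conditions f(0) = 0, f'(0) = 1 and Euler's method as the solver; it is not tied to any particular course mentioned above.

```python
# "Solve for a function": approximate the f satisfying f''(x) = f'(x) f(x)
# with hypothetical initial conditions f(0) = 0, f'(0) = 1, by Euler's method.

def euler_solve(f0, df0, h=1e-4, x_max=1.0):
    f, df = f0, df0   # current values of f and f'
    x = 0.0
    while x < x_max:
        # advance both f and f' one step; f'' is given by f' * f
        f, df = f + h * df, df + h * (df * f)
        x += h
    return f

print(euler_solve(0.0, 1.0))  # approximate value of f(1)
```

Just as algebra lets us manipulate an unknown number, the differential equation lets us pin down an unknown function by the conditions it satisfies.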

What do these examples show?

Abstraction is often introduced to unify existing concrete ideas, and to allow for the possibility of dealing with existing concrete ideas in a general fashion. This has the advantage of allowing us to solve concrete problems. Without the abstract notion of “five”, it is hard to systematically count a collection of objects and confirm that they are five in number. Without the abstract idea of an arbitrary unknown number, and the abstract study of how to manipulate equations involving unknowns, it would be hard to create systematic and general procedures for finding numbers satisfying certain equations. Similarly, without a general theory of functions, and an abstract study of how to manipulate general conditions on a function (as the theory of differential equations provides), it is hard to compute a specific function arising in a concrete situation, based on general conditions.

But there is something more to abstraction than simply putting a label on a general idea. That “something more” is figuring out general rules and laws of manipulation. Without such general laws, there’s little advantage in giving a generic all-encompassing name to everything. Algebra isn’t just about the use of the symbol x for a variable: it is about the fact that there are general rules that hold for any x. These general rules include commutativity and associativity of addition and multiplication, the distributive laws, the properties of zero and one with respect to addition and multiplication, and so on.
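The claim that these laws hold for any x can be spot-checked mechanically. The following is my own illustration (not from the post): the assertions below pass for every randomly chosen triple, which is precisely what makes the general rules worth naming.

```python
import random

# Spot-check that the general laws of algebra hold for arbitrary integers,
# not just for particular ones. A proof covers all cases; this merely samples.
for _ in range(1000):
    x, y, z = (random.randint(-100, 100) for _ in range(3))
    assert x + y == y + x                  # commutativity of addition
    assert (x + y) + z == x + (y + z)      # associativity of addition
    assert x * (y + z) == x * y + x * z    # distributivity
    assert x + 0 == x and x * 1 == x       # identity elements
print("all checks passed")
```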

The importance of this point is often overlooked in general discussions of abstract versus concrete. Abstractness is often confused with the overuse of symbols, the absence of examples, or being in general hard to comprehend. “Abstract” is often confused with “abstruse” and people often indicate their difficulty in understanding something by saying It’s just too abstract.

In fact, symbol use, level of difficulty, and absence of examples have little to do with abstractness. Abstraction can be viewed as the art of identifying general and common patterns across similar objects (for instance, across all numbers, across all nice functions, across all groups, across all measure spaces). Sometimes, these common patterns are useful in solving concrete problems (for instance, solving an equation, solving a differential equation, finding a group or ring subject to certain constraints). At other times, it may give a feel for how objects in general behave, which may in itself be useful.

Symbol use has little to do with this. Saying that the total number of words in a document is N isn’t abstraction. The abstraction lies in the act of identifying the “total number of words in the document” as a number, despite the number being unknown, and possibly subject to change. Using a single letter for this is merely a matter of notational convenience. In some contexts, this notational convenience is useful — for instance, if a person plans to solve equations or manipulate expressions involving the total number of words in the document, then using the letter N may be more convenient than writing “total number of words in the document”.

In fact, there are contexts where the use of single letters as variables is discouraged. One such context is computer programming, where variable names need to be chosen to reflect what they represent, so that different programmers can understand what a given variable is for. So the total number of words in the document may be called “WordCount” or “NumWordsInDoc” rather than the uninformative N.
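As a small, hypothetical illustration of the naming point (the string and variable names are my own inventions):

```python
# The descriptive name documents itself; the single letter forces the
# reader to hunt for context.
document = "abstraction is the art of identifying general patterns"
num_words_in_doc = len(document.split())  # clear at a glance
n = len(document.split())                 # opaque without surrounding context
print(num_words_in_doc)
```

Both variables hold the same value; only the descriptive one tells the next programmer what that value means.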

The fact that, in a diagram of supply and demand curves, price is denoted by P and quantity by Q is completely incidental and irrelevant. The real abstraction lies in converting the way people respond to economic incentives into a property of the graph: namely, demand drops as price increases, while supply increases as price increases. The specific formula that relates price and demand (if such a formula exists) is also not as relevant. In fact, a specific and artificially imposed formula (for instance, saying that demand is inversely proportional to price) goes against the grain of abstraction, because it puts numbers where they didn’t exist.

Mathematics does rely a lot on symbols, and sometimes, good symbols can aid in abstraction, as they allow for more compact and revealing expressions of mathematical truths. However, a “symbol-free” way of expressing an idea carries its own power. Symbol-free expressions use natural language, and the connecting techniques of natural language, to convey the same idea in a more memorable fashion. For instance, saying that a “product of Hausdorff spaces is Hausdorff” is a more compact and expressive statement than “if A, B are Hausdorff, so is their product A \times B“.

Next, does abstraction necessarily conflict with the ideal of “examples”? To consider this, we should consider what it means to give an “example”. An example should be illustrative of an idea, but it should not have distracting features that make one think it is about another idea. Thus, for example, if one needs to describe what an American person is, then a list like Bill Gates, Steve Jobs, and Steve Ballmer is poor on the example front. These people aren’t just American; they share a number of other similarities: they are rich, and they own and run huge multibillion-dollar technology-based enterprises.

Examples are not magic pills that can cure a boring definition or abstraction. Ironically, a number of educators seem to champion the use of examples, occasionally to the point of absurdity. For instance, in middle school, we were urged to give at least two examples whenever asked to define a term in an examination. But as pointed out above, the real power of examples emerges when they are used to illustrate the central idea, and when a sufficiently broad range of representative examples is chosen, examples that differ in other important respects.

This suggests that, rather than think of examples, we should think in terms of highlighting the common and crucial features behind the idea; features that might be brought out through a combination of existing examples, hypothetical examples, non-examples, and “abstract” definitions. At its core, an abstract definition is something that, once properly read and understood, gives every example and non-example.

Here’s an example. Suppose I gave you the sequence 2, 6, 20, 70, 252, … and asked you to fill it in. Unless you’ve had some experience with combinatorics, it is unlikely that you’ll guess where this sequence is headed. The “example” terms of the sequence don’t describe where it goes. But now here’s the abstract rule:

The n^{th} term of the sequence is the number of ways of writing a string of length 2n with an equal number of 0s and 1s.

Now, if you haven’t studied combinatorics, this definition may still not give you an insight into how the numbers grow. At least, though, you can in principle compute any term. If you’ve seen some combinatorics, you’ll identify the formula for this as \binom{2n}{n}. The formula then tells more: the sequence grows exponentially, roughly like 4^n. Looking at the sequence without knowing the formula may also have led you to a suspicion that it grows exponentially, but the formula gives a precise number — something that is simultaneously more abstract and more concrete.
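
As a sanity check, the abstract rule and the closed-form formula can both be computed directly. Here is a small Python sketch (my own illustration, not part of the original post): `term_by_counting` implements the counting definition by brute force, and `term` uses the formula \binom{2n}{n} via `math.comb` (available in Python 3.8+):

```python
from itertools import product
from math import comb

def term_by_counting(n):
    # the abstract rule: count binary strings of length 2n
    # that have an equal number of 0s and 1s
    return sum(1 for s in product("01", repeat=2 * n) if s.count("0") == n)

def term(n):
    # the closed-form formula: the central binomial coefficient C(2n, n)
    return comb(2 * n, n)

print([term(n) for n in range(1, 6)])  # [2, 6, 20, 70, 252]
print(all(term(n) == term_by_counting(n) for n in range(1, 6)))  # True
```

The ratio term(n+1)/term(n) approaches 4, matching the rough 4^n growth mentioned above.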

The art of abstracting, of “laying bare the essential features”, is closely allied with the art of choosing and selecting good and representative examples. Some educators prefer to first lay out an abstract definition, then guide students through representative examples, or let students figure out the representative examples themselves. Other educators prefer to guide students through representative examples, and let the students abstract out the definitions. Both approaches have their power and their place. Educators who want to be bad and boring can succeed equally well by giving abstractions and by giving examples.

Finally, is abstract stuff inherently more difficult to understand than concrete stuff? This is probably true, but it’s equally true that one can never get to grips with certain aspects of the concrete stuff unless one has understood some of the abstract stuff behind it. Many of the mathematical giants before the twentieth century (such as Poincare, trying to develop topology) did great work, but were hampered in their concrete investigations because the abstract frameworks underlying them hadn’t yet been developed.

But the biggest and hardest abstractions we need to make are the ones we made in kindergarten: identifying the notion of a number. The leap in abstraction from collections of size five to the notion of five is considerably bigger than the leap from a known number to an unknown variable. That said, the leap from arithmetic to algebra is probably much bigger than most of the leaps people are expected to take in high school, college and graduate school. Learning how to count is like learning how to walk: hard when one first tries, but simple and natural by the time one reaches adulthood.

Next time you catch yourself calling something too “abstract”, or wishing there were more “concrete” stuff in mathematics, stop and ask yourself: okay, what is the real problem with this material? Does it really use abstraction in a bad way? Does the problem lie with your perceptions, or expectations? Or does the problem with the material really lie in a totally different direction, one that you’ve confusedly lumped under “abstractness”?

March 28, 2007

Talks: how important are they?

Filed under: Uncategorized — vipulnaik @ 4:32 pm

Last semester, I took the initiative to formally start a system of Student Talks at CMI. The student talks started off well, but due to some technical glitches with transportation (this was before CMI got its hostel), the frequency of talks went down, and there were in fact only four talks in the second half of the August-November semester.

In January, CMI shifted to its hostel, and we restarted the Student Talks with a fresh slate, scheduling them in the evening when all students were free. Soon the logistical issues with the talks got settled, and it was only a question of finding a steady stream of people who could give talks. This proved a challenge for a variety of reasons.

For one, a large number of people who were in principle interested in giving talks either didn’t have a topic they wanted to talk about or didn’t feel they knew a topic well enough to give a talk on it. Some people who actually came up with talk plans realized that they didn’t have enough time to prepare a talk in proper depth.

By the fourth month, a few other people have at last started planning to give student talks. However, most of the talks so far were delivered by me (mathematics), Ramprasad (computer science), and Anirbit (mathematics and physics). Sometimes I wonder whether my overenthusiasm to give student talks myself is leading to a misuse of my power as student talk coordinator, but since hardly anyone competes with me for student talk slots, I don’t see any reason to feel guilty about giving too many talks myself.

This whole business of student talks really makes me wonder: what really is the goal/achievement of a student talk/student presentation? How does it help that particular student? And how does it help students to listen to talks from other students when they already have so many courses to attend and so many seminars by far more qualified people to go to?

I explore some answers based on my own experiences with student talks.

First, how the process of giving a student talk helps me. I have so far given ten student talks, some of them on subjects that I already knew or that are “pet” topics of mine, while others were on topics I didn’t really know but wanted to learn.

The very act of trying to express what I have to say in slides helps me sharpen my thoughts. For one, the total space available in a slide is limited, so I have to break all my ideas into chunks small enough that each chunk carries a central theme that can fit on a slide. I often don’t do a very good job of this (by the criterion of each slide having a central theme), but even keeping it in mind helps me better understand the subject.

Also, while giving a talk, it is important to create a story, a build-up to the subject, something I may skip entirely when learning on my own. For instance, when I read up on Fourier series myself, I just picked up a collection of separate, isolated facts: Fourier series, Fourier transforms, dual groups, Fourier transforms on the reals, Fourier transforms on finite fields. While giving a student talk, I had to organize all these ideas into a single thread so that they flowed naturally. Similarly, my talk on approximating solutions to equations was based on many tidbits that I already knew, but to give the talk, I had to formally understand each part, and I also ended up experimenting by writing code in Haskell.

Another aspect is that the very act of delivery of a talk often makes one feel good, particularly when one is able to share certain insights (however trivial) with others and feel the joy of their understanding the stuff. For me at least, student talks have been an important way of getting me to feel a greater sense of enjoyment in studying the subject. This is a bit like the cook for whom having the food eaten by others gives an altogether different quality of satisfaction from simply eating the food oneself.

I also think that being able to talk on a technical topic to an audience composed of peers (rather than junior people) is an exercise in developing the confidence to express and present oneself. Many people are (rightly, perhaps) reluctant to give student talks on account of not having much to say, or not being sure whether what they will say will be useful to others. However, those who do end up giving talks realize that once you come up on the stage to say something, you can usually say it. And I suppose that since mathematics involves a lot of teaching and learning, student talks are a genuinely good preparation for a later life of mathematics.

Coming to the other half of the question: what are the advantages of attending a talk by a fellow student?

I have observed that by and large, I tend to feel less sleepy while attending talks by other students as compared to attending regular lectures or seminars or expository lectures by more senior people. One reason for this possibly is that in the student talks, I tend to feel more involved with what is being discussed, and feel more of a commitment to try to follow what is being said. This is because, firstly, students usually make a sincere effort to ensure that what they are saying makes sense to other students, and secondly, they may be a better judge of how their fellow students are understanding what they are saying.

Another interesting thing about the student talks is that they are generally more relaxed. This is probably because student speakers don’t carry the same baggage of expectations in terms of rigour, speed and correctness that more senior speakers might. For instance, I myself use prepared slides while giving my talks, but I don’t try to rush through any slide or finish each slide in a particular allotted time; I also avoid “leaving things as exercises”, and I feel free to go into digressions and tangents if they help my cause.

In more formal talks, the speaker has to keep in mind that the audience comprises people who are already well versed in the basics, and that if he/she spends too much time on basic material, the others may lose interest. Also, many formal talks are a way for the speaker to present some original work or ideas, and the contents may be used by others to evaluate the rigour of those ideas. Hence, there is more pressure on the speaker to go fast, to be rigorous even at times when rigour compromises understanding, and not to go into tangents.

Student talks do fill an important niche, but despite this, participation in them is rather low. This is where a possible disadvantage of an event that involves only students could lie: students do not feel any compulsion or moral obligation to attend student talks, and therefore may choose not to attend if they are feeling tired, etc. Personally, I don’t think this is a serious problem, because, after all, if a person doesn’t feel that he/she will get much from the talk, then it doesn’t make sense for the person to come all the way to attend the talk and fatigue himself/herself. Nonetheless, it is important for the phenomenon of student talks to create enough of a reputation for itself that people feel more enthused to attend a student talk and more confident of the irreplaceable value/enjoyment they’ll get out of it.

I’ve also been wondering what will happen to the student talks initiative once I leave CMI (this is my final semester at CMI). More than 50% of the talks are being given by me and Ramprasad, and once we both graduate, it may be difficult to sustain the momentum of student talks. This raises the more important question: how important is an individual to a community initiative, and what steps should and can be taken to make sure that community activities continue even when the individuals change?

I’ll explore these issues in a later post.

October 11, 2006

A new theory of mine

Filed under: Uncategorized — vipulnaik @ 2:56 pm

I have this new theory for sequences of objects of various kinds, and I’m trying to figure out what to do with it. I have prepared lots of write-ups on the theory, and fanned out my ideas in many directions. But as yet, I haven’t been able to share my ideas with others, or to bring my write-ups into a cogent and consistent form.

In this blog post, I plan to give a basic outline of the theory, along with references to more detailed write-ups (which I will put on my homepage). Thus begins a rather loose introduction:

Consider the matrix groups GL_n(k), where k is a field. For any fixed n, this is the group of invertible matrices of order n. The question I wanted to ask was: what is the relationship between the matrix groups of different orders? There is a nice relationship given by block concatenation. Given a matrix in GL_m(k) and a matrix in GL_n(k), we can obtain a matrix in GL_{m+n}(k) by putting the matrix of order m in the top left corner, the matrix of order n in the bottom right corner, and zeros in the remaining entries.

This is a homomorphism GL_m(k) X GL_n(k) to GL_{m+n}(k). If we call this homomorphism Phi_{m,n}, then the mappings Phi satisfy some associativity rules, they are all injective, and there are also some interesting refinement conditions.

This led me to consider the abstract situation: a sequence of groups G_m, with m varying over the nonnegative integers, along with block concatenation maps Phi_{m,n}: G_m X G_n to G_{m+n}. I assumed conditions of associativity and refinability, and christened the resulting general structure an Addition to Product Sequence (APS). If all the block concatenation maps are injective, it is termed an Injective Addition to Product Sequence (IAPS).

From the above discussion, the general linear groups over a field (and more generally, over a commutative ring with identity) form an APS.
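
The block concatenation map is easy to play with concretely. Below is a minimal Python sketch (the function names are my own, purely for illustration): `block_concat` builds the (m+n) x (m+n) matrix from an m x m and an n x n matrix, and a naive cofactor determinant confirms that concatenating invertible matrices yields an invertible matrix, since the determinant of the block matrix is the product of the determinants:

```python
def block_concat(A, B):
    """Phi_{m,n}: A (m x m) in the top left, B (n x n) in the bottom
    right, zeros elsewhere -- giving an (m+n) x (m+n) matrix."""
    m, n = len(A), len(B)
    top = [row + [0] * n for row in A]
    bottom = [[0] * m + row for row in B]
    return top + bottom

def det(M):
    # cofactor expansion along the first row (fine for tiny matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

A = [[1, 2], [3, 5]]   # det -1, so invertible
B = [[2]]              # det 2, so invertible
C = block_concat(A, B)
print(det(C) == det(A) * det(B))  # True: the block matrix is again invertible
```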

Question: what are the general properties of APSes? What are the examples of APSes?

A lot of what we do with individual groups can be done with IAPSes of groups. We can define the concept of a sub-IAPS, and of a normal sub-IAPS. The quotient of an IAPS by a normal sub-IAPS is again an APS, but the quotient APS may not be injective. The quotient of an IAPS by a general sub-IAPS is, in general, an APS of sets only (not of groups). The quotient is injective if and only if the sub-IAPS satisfies a certain condition called being saturated.

Some examples of IAPSes of groups within the matrix algebra setting:

  • The orthogonal groups form a sub-IAPS of the IAPS of general linear groups. That’s because the block concatenation of two orthogonal matrices is an orthogonal matrix. This sub-IAPS is saturated in the following sense: given an orthogonal matrix obtained as the block concatenation of two invertible matrices, both invertible matrices are themselves orthogonal. The quotient space of the general linear IAPS by the orthogonal IAPS forms an IAPS of sets: this can be thought of as the IAPS of symmetric positive definite bilinear forms.
  • The symplectic groups form a sub-IAPS of the IAPS of general linear groups. That’s again because the block concatenation of two symplectic matrices is a symplectic matrix. This is again saturated, and the quotient space is the space of nondegenerate alternating forms.
  • The special linear groups form a sub-IAPS of the IAPS of general linear groups. In fact, this is a normal sub-IAPS. The quotient APS is a constant Abelian group with block concatenation simply being the multiplication map within the Abelian group. The sub-IAPS is not saturated, because there can be invertible matrices that are not unimodular, but whose block concatenation is unimodular.
  • Given a homomorphism of rings, there is an induced homomorphism of the corresponding general linear IAPSes; the kernel of this induced homomorphism is termed an IAPS of congruence subgroups. Here’s the typical example: take the ring of integers and the quotient map to the ring of integers modulo an integer m. The kernel of the induced homomorphism is the IAPS of congruence subgroups.
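
The non-saturation of the special linear sub-IAPS can be seen in the smallest possible case. Here is a Python sketch (the helper names are my own, for illustration only), using exact rational arithmetic: a 1 x 1 matrix of determinant 2 and one of determinant 1/2 are each invertible but not unimodular, yet their block concatenation is unimodular:

```python
from fractions import Fraction as F

def block_concat(A, B):
    # Phi_{m,n}: A in the top left, B in the bottom right, zeros elsewhere
    m, n = len(A), len(B)
    return ([row + [0] * n for row in A] +
            [[0] * m + row for row in B])

def det(M):
    # cofactor expansion along the first row (fine for tiny matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

A = [[F(2)]]       # determinant 2: invertible but not unimodular
B = [[F(1, 2)]]    # determinant 1/2: invertible but not unimodular
C = block_concat(A, B)
print(det(C) == 1)  # True: the concatenation is unimodular
```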

Once I started looking for APSes, I didn’t cease finding them. Roughly, the raison d’etre for IAPSes is as follows: take an object and take the sequence of its powers (direct powers or free powers, in some suitable sense). Then the automorphism groups of these powers form an IAPS of groups. How? Roughly, under the block concatenation map Phi_{m,n}, the automorphism of the m-th power acts on the first m coordinates and the automorphism of the n-th power acts on the last n coordinates.

Here are specific situations:

  • The general linear IAPS over a ring R assigns to each n the automorphism group of the free module R^n.
  • The permutation IAPS assigns to each n the symmetric group on n elements. Note that the permutation IAPS can be embedded inside the orthogonal IAPS over any ring.
  • The general affine IAPS over a ring R assigns to each n the affine group of order n over the ring, which is the semidirect product of R^n by GL_n(R) under the usual action.
  • The polynomial automorphism IAPS. Fix a base ring (or base field). Then, consider the IAPS whose nth member is the automorphism group of the polynomial ring in n variables over that base ring or field. These form an IAPS. And this IAPS clearly contains the general affine IAPS.
  • The function field automorphism IAPS. Fix a base field. Consider the IAPS whose nth member is the automorphism group of the pure transcendental extension of the base field of transcendence degree n. This IAPS contains the polynomial automorphism IAPS.
  • The free group automorphism IAPS. This is the IAPS that assigns to each n the automorphism group of the free group on n letters.
  • The tensor algebra automorphism IAPS over a base ring or base field. This assigns to each n the automorphism group of the free tensor algebra in n variables over the base ring (or base field).
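
For the permutation IAPS in the list above, both the block concatenation map and the embedding into the orthogonal IAPS are easy to write down. A short Python sketch (my own naming; permutations of {0, ..., n-1} are represented as lists mapping i to p[i]):

```python
def perm_concat(p, q):
    """Phi_{m,n} in the permutation IAPS: p permutes {0..m-1}, q permutes
    {0..n-1}; the result permutes {0..m+n-1}, with q shifted up by m."""
    m = len(p)
    return p + [m + x for x in q]

def perm_matrix(p):
    # the embedding of the permutation IAPS into the orthogonal IAPS:
    # each permutation becomes a 0/1 matrix with one 1 per row and column
    n = len(p)
    return [[1 if p[i] == j else 0 for j in range(n)] for i in range(n)]

p = [1, 0]      # a transposition in S_2
q = [2, 0, 1]   # a 3-cycle in S_3
r = perm_concat(p, q)
print(r)  # [1, 0, 4, 2, 3]

# check orthogonality of the permutation matrix: M^T M = identity
M = perm_matrix(r)
n = len(M)
MtM = [[sum(M[k][i] * M[k][j] for k in range(n)) for j in range(n)]
       for i in range(n)]
print(MtM == [[1 if i == j else 0 for j in range(n)] for i in range(n)])  # True
```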

There are other IAPSes that don’t quite fit into the above framework but arise naturally: for instance, the mapping class groups form an IAPS, and the braid groups form an IAPS. And then, various sub-IAPSes of these can be defined.

What I’m interested in getting out of IAPS theory is the following:

  • See under what circumstances we can come up with suitable notions of determinant, transpose, parabolic structure, unipotent structure and so on.
  • Analyze the conjugacy classes and see under what circumstances we can get a canonical form. For instance, the permutation IAPS has a canonical form for conjugacy classes through the cycle decomposition, while the general linear IAPS over a field has a canonical form for conjugacy classes through the rational canonical form.
  • Try to understand the generating sets and see whether we can get certain special generating sets that are present in members of small index.
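
The canonical form for conjugacy classes in the permutation IAPS, mentioned in the second bullet, is just the cycle type. A quick Python sketch (my own illustration): two permutations of {0, ..., n-1} are conjugate in the symmetric group if and only if they have the same multiset of cycle lengths:

```python
def cycle_type(p):
    """Canonical data for a conjugacy class in S_n: the sorted list of
    cycle lengths of the permutation p (given as a list mapping i to p[i])."""
    seen, lengths = set(), []
    for start in range(len(p)):
        if start in seen:
            continue
        length, x = 0, start
        while x not in seen:   # walk the cycle containing `start`
            seen.add(x)
            x = p[x]
            length += 1
        lengths.append(length)
    return sorted(lengths)

print(cycle_type([1, 0, 3, 4, 2]))  # [2, 3]: a transposition and a 3-cycle
```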

Another interesting observation I made is that, just as we do representation theory in the general linear IAPS, we can do representation theory of a group in an arbitrary IAPS. Concepts such as direct sum decomposition of representations can be formulated in the IAPS language. Concepts such as irreducible representation and complete reducibility can be formulated in the language of IAPSes with an additional parabolic structure.

And reversing roles, we can try representing the members of one IAPS inside another IAPS. For instance, we can study the representations of SL_n(F_p) in the general linear IAPS over the complex numbers. Here, the IAPS theory of both these IAPSes comes into play.

By looking at what happens in the case of the representation theory of the permutation IAPS and of the linear IAPS over finite fields, I have tried to see what we can say in general about the representation theory of one IAPS inside the other. I have got some promising frameworks into which the permutation and linear case both fit.

A quick summary:

  • An APS is a sequence of groups indexed by the natural numbers along with block concatenation maps which are homomorphisms from the direct product of two members to the member whose index is the sum of their indices. The block concatenation maps are required to satisfy some conditions, most notably associativity.
  • When the block concatenation maps are injective, I call the APS an injective APS or IAPS.
  • Though I defined an APS of groups, one can also define APS of rings, APS of sets, APS of monoids etc.
  • There are notions of sub-IAPSes and quotient IAPSes. The quotient of an IAPS by a sub-IAPS is again an APS of groups if and only if the sub-IAPS is normal at every member. It is an IAPS if the sub-IAPS is saturated. These are terms I introduced myself.
  • The general linear IAPS has interesting sub-IAPSes: the special linear IAPS (normal but not saturated, the quotient is a constant Abelian group IAPS), the orthogonal IAPS (saturated but not normal), the symplectic IAPS.
  • IAPSes arise as automorphisms of power sequences: the automorphisms of free modules give the general linear IAPS, the automorphisms of free groups give another IAPS, the automorphisms of polynomial rings give the polynomial automorphism IAPS, the automorphisms of the function field give the function field IAPS. Other IAPSes: the braid group, the affine group, the mapping class group.
  • There is a notion of a parabolic structure on a general IAPS, and such a structure comes naturally for IAPSes that arise as automorphisms of power sequences.
  • I am keen on figuring out when additional structure such as determinant, transpose and parabolic structure can be imposed on the IAPS.
  • I am keen on studying representation theory in an arbitrary IAPS, or possibly in an IAPS with a parabolic structure.
  • I am keen on looking at canonical forms for conjugacy classes for arbitrary IAPSes.
  • I am keen on looking at representation theory of the individual members of arbitrary IAPSes, to tie in the similarities between the representation theory of the symmetric groups and of the linear groups over finite fields.

Please do post your comments on the following:

  • Is the general idea of IAPS clear?
  • Does the notion of IAPS seem a useful abstraction (at a conceptual level)?
  • Do IAPSes hold a promise of providing uniform tools for studying the very diverse range of IAPSes?
  • Do IAPSes hold a promise of providing a better language for discussing representation theory?
  • What are the aspects that seem to interest you, and on which you would like clearer exposition/elaboration?
  • Do you think I should put in the effort of presenting the theory formally or should I wait for something more from it? If so, what kind of thing should I wait for?

August 29, 2006

What others have to say…

Filed under: Uncategorized — vipulnaik @ 4:14 am

Indraneel gave me a nice link:

A Graduate School Survival Guide.

This piece is written by Ronald T. Azuma, a postdoctoral student in Computer Science who did his Ph.D. in 6.5 years. He tells us that while many of the lessons of graduate school are best learnt by experiencing it, there are some that one might as well know right at the outset. He details some of these lessons:

  • Be very clear about why you want a Ph.D. A Ph.D. is not lucrative on the surface: longer hours, delayed entry into the workplace, frugal living, and many other demotivators. On the plus side, there are the qualification and the preparation to do cutting-edge research and expand the frontiers of knowledge. The author says he himself chose research because he wanted to contribute to knowledge, and because he did not want to find himself, five years later, in a job or position that did not satisfy him.
  • Understand that academics is a business and a full-time job: Academia is a peculiar type of business but a business nonetheless. The research guides and professors need to prove themselves to funders, and the students need to prove themselves to the professors. Resources are few and competition is intense.
  • Graduate school is about what you pick up and not what you are taught: Much of the learning in graduate school, especially for Ph.D., happens not in the formal courses, but outside the classroom, from books, from conferences, from discussions.
  • Many skills are needed: initiative, tenacity, flexibility, interpersonal skills, organizational skills and communication skills. The author details how each one is critical to performing well in graduate school.
  • Choose the advisor and committee carefully: The author lists advantages of choosing a non-tenured advisor: greater availability, greater personal involvement in the student’s research, a disposition for working hard (in order to get tenure). Advantages of a tenured advisor: greater experience, more resources and influence. The author balanced both by choosing a non-tenured advisor and a committee including some tenured persons. Other factors he says are important for choosing one’s advisor: (a) does the advisor push you to work? (b) is the advisor approachable? (c) is the advisor knowledgeable in all areas you want to work on?
  • Maintain balance and perspective: Getting the Ph.D. and churning out great research work is indeed the top priority, but too narrow a focus on it can be damaging. A Ph.D. is like a marathon: sprinting unnecessarily may tire one out early. The author alludes to Repetitive Stress Injury (RSI) as one possible consequence of an unbalanced focus on the Ph.D. goal.

A very well-written and illuminating piece. I think I find answers to many of my earlier questions in this piece, particularly my question on primary and secondary responsibilities.

The primary responsibilities of a researcher are indeed working on problems; helping others work on problems; reading, learning and attending seminars and conferences; and guiding younger students. But for a Ph.D. student, the relative priorities differ. From what I understand, the primary responsibilities of a Ph.D. student lean more towards acquiring basic capabilities and establishing credentials. So, the list for a Ph.D. student runs as:

(i) Reading, learning and attending seminars/conferences, both to get a working knowledge of all fields and to decide on a topic of study.
(ii) Interacting with people and building good connections, to be able to choose a research advisor and committee.
(iii) Developing skills and competencies related to working on specific problems that can realistically be completed within the framework of a Ph.D.

Thus, guiding younger students and working on improving the theory or working towards a magnum opus are not responsibilities of the typical Ph.D. student. The Ph.D. student should focus on demonstrating his/her potential by doing something in a short span of time that sets the stage for later magna opera.

I have heard a few stories about people with highly ambitious projects for their Ph.D. who ended up taking 12 years to finish.

The research student also has secondary responsibilities, one of which is continually securing resources, time, money, etc. for the primary responsibilities. This is extremely challenging, and Azuma discusses it quite a bit.

Hope you have a nice time reading Azuma’s piece!

Looking forward to your comments.
