Tuesday, September 27, 2005

To Hell with Intelligent Design

Today I read a news article about a court case over whether Intelligent Design (ID) theory should be taught in schools. To place everything in context, let me mention that ID theory is an alternative to the standard "Theory of Evolution by Natural Selection". The basic tenet of ID theory is that the complexity of life seen on earth is the result of some intelligent designer (which can be called GOD), and it is in sharp contrast with the theory of evolution. The theory of evolution maintains that the entire diversity and complexity of life is brought about by the process of natural selection, which does not have any plan or purpose. To use Dawkins's language, the watchmaker is blind.

The theory of evolution has been vindicated many times over the past decades, but in the US there seems to be a strong aversion towards it. People there are so desperate to believe in God and his supernatural powers that many schools have started teaching ID theory side by side with evolution. This is pretty disgusting because ID theory is no theory at all (there being no testable predictions). I wonder how a developed country like the USA can be irrational enough to allow such stuff in schools. This is in sharp contrast to their engineering and technological achievements, and it bothers me a lot.

The theory of evolution does have some gaps (many theories have this feature; for example, Newtonian mechanics was not perfect), but that does not give ID theory any claim of being superior to it. To me this whole ID thing looks like another kind of creationism popularized by the Bible and other similar literature.

I think that ideas propounded in the Bible and other similar literature should be a part of some subject like "Moral Science" or history or pure literature, but in no way should they be included in the curriculum of science. Science is supposed to be based on testable theories and not some arbitrary ideas propounded by religious leaders.

I don't know what the outcome of the case in Pennsylvania will be, but I wish good luck to the parents who have filed the case against the teaching of ID theory. To the ID supporters (as there seem to be many in the US) I just say, "Go to hell with your ID".

Sunday, September 25, 2005

My First Flight

Two months back I went to Bangalore to meet one of my friends from college. Another friend had come back from the US for around a month, and the two of us had plans to meet our common friend in Bangalore. Since we did not have much time, we had to plan for a flight. I was damn excited about this because it was my first flight (till now it's my only flight). I had told my friend, while in the plane, that someday I would write about my flight experience. After a long period I finally managed to get some time for this undertaking.

We left home at around 5.30 in the morning to catch the 7.20 am flight from Delhi to Bangalore, and reached an hour before departure. We had done this to ensure that we could get window seats. And we did get those precious window seats.

We boarded the plane at around 7:30 am. The interior was nice except for the fact that the seats were somewhat cramped (not much to expect in economy class). When the plane was on the runway, they started giving us some instructions which I did not care to listen to (I was busy with my iPod). Contrary to our expectations, we were greeted by an air host (not hostess). Anyway this did not matter much, as I was overwhelmed by the sheer excitement of flying for the first time.

When the plane took off, I felt the acceleration, and the plane seemed to be tilted at some angle. The noise of the engines was not very audible, but I was thinking that it would have been thunderous outside. I looked through the window. That was the real thing. At once the scenes shown on AXN and Discovery channels became real. In minutes I was on top of the world. The land beneath looked something like the pictures in Google Earth.

Then I moved through the clouds and crossed them. The experience of watching clouds from above was exhilarating. The scenes of heaven normally shown in religious serials on TV are nothing compared to what is actually visible when you go over the clouds. Unfortunately we didn't have a camera to record the experience.

I started talking to my friend about the human endeavor. The Wright brothers and a generation of engineers behind them. I was emotionally charged about the engineering profession which made all this possible. In the same instant I remembered John Galt. (Now don't ask me "Who is John Galt?". That immortal query is best answered in Ayn Rand's masterpiece "Atlas Shrugged".) I used to hear in childhood that "Sky is the Limit", but I think people have crossed that limit. These and many other related thoughts about human achievement were coming into my mind.

My friend wasn't that excited because it wasn't his first flight, and besides he used to get bored during long flight hours from the US to India. But I tried to keep him in high spirits through my talk on engineering and stuff like that.

After just two and a half hours, when I looked out of the window, I saw land beneath. It was now time to land in Bangalore. I couldn't imagine that in such a short span of time I had crossed more than 2000 km from Delhi to Bangalore.

The 5000 bucks spent were worth the journey. I felt proud that I am in a position to spend that much and able to witness such a grand engineering feat. Thanks to my employer for that. And thanks to the Sahara people who have invested in this business (engineering alone is not sufficient unless somebody invests in it). And last but not least, thanks to my friend Devashish who was there with me during this journey.

The return journey at night (8.00 pm) had something different to offer. First, we were lucky to have an air hostess. Secondly, when the plane took off from the ground, we could see the lights of the city of Bangalore, and these looked like small candles tearing through the darkness around. The lights moved as the plane moved, and it was great watching this. After some minutes, it was all dark outside. I guess we were above the clouds again, but couldn't see outside. And then I turned to my iPod.

That was all I had to say about my first flight. Those who have had a similar experience will relate to this post much better than those who have not had a chance to fly. To these other readers I would suggest spending some bucks just for the excitement of flight. I assure you that it would be worth your while.

Wednesday, September 21, 2005

Square Roots (Contd...)

After waiting for a week, I found little response from readers. Perhaps you don't find square roots interesting enough to delve into, and maybe you would like to handle bigger challenges. Anyway, since I promised that I would explain the rationale behind the long division process for square roots sometime, in this post I discuss something about that.

It is pretty obvious (and not difficult to prove) that the square of an n-digit number has either (2n - 1) or 2n digits. Reversing the logic, we see that the square root of an n-digit number has [n/2] digits, where the brackets [] represent the ceiling function. This simple fact explains the grouping by two digits while beginning the long division.

So assume that to find the square root of a given number a (take 121 as an example), we have grouped it by two digits (1, 21). Now the number of such groups gives us the number of digits in the final square root. And we find the digits of the square root one by one. First we guess the first digit, which is simply the best guess for the square root of the number formed by the digits in the first group (in our example, the first group is 1, so the first digit of the square root is 1).

Now assume that we have found k digits of the square root (which means k groups of the number a have been consumed in the process). Let the number formed by these k digits be x (in the example x = 1). We have to find the next digit y. Together with this y we have (k + 1) digits, and the value of the square root is (10x + y). Square it and we get:
(10x + y) * (10x + y) = 100xx + 20xy + yy = 100xx + (20x + y)y
= 100xx + (10*(2x) + y)y

Now out of this we have already consumed the 100xx of the dividend, and we have to find y such that (10*(2x) + y)*y is nearest to the remainder left at that stage (without exceeding it). Note this expression (10*(2x) + y) carefully. It is the same as putting a digit y after the number (2x). And we have to multiply this with y again to match the remainder. Thus we have the rule: if the quotient at each stage is x, then the divisor is (2x), and we find a y such that by placing y to the right of the divisor (2x) and multiplying this number by y again, we get nearest to the remainder. If you actually work out the long division process you will see that this is what you do.

Let's work out the example where we want to find the square root of a = 121. We have two groups (1, 21) and so the square root has two digits. The first digit x is 1, and the remainder at this stage is a - 100xx = 121 - 100 = 21. The divisor at this stage becomes 2x i.e. 2, and we need to place a digit y after 2 (which makes it (20 + y)) and multiply it by y so that we reach nearest to the remainder 21. Thus we need (20 + y)*y nearest to 21. Clearly y = 1, and the remainder becomes zero, so that 10x + y = 11 is the answer.

To illustrate with another example, let us take a = 12345; after grouping we have (1, 23, 45), so we need three digits and the first digit is x = 1. The divisor at the next step is 2x i.e. 2, and we need y such that (10 * 2 + y)*y is nearest to 123 - 100*xx = 123 - 100 = 23 without exceeding it. Clearly we need y = 1, and we get remainder 23 - 21 = 2. In the next step, the quotient becomes x = 11 (two digits), the divisor is 2x i.e. 22, and the remainder becomes 245. And we need to guess y such that (10 * 22 + y)*y comes nearest to 245 without exceeding it. Again y = 1. So the integer part of the answer is 111.

The principles involved are as follows:
  1. We get digits two by two from the dividend.
  2. If quotient at any stage is x, the divisor at that stage is 2x.
  3. In the next step we need to place a digit y to the right of the divisor 2x and multiply it with y to get nearest to the remainder without exceeding it, i.e. (10 * 2x + y)*y ≤ remainder, with y as large as possible.
This is what we studied in school. The curious part in the above process is the fact that the divisor is twice the quotient at any stage, and it is not quite obvious why this is so. After this post, I hope it will be obvious.

By the way, this long division method is not fast, and in practice we use Newton's method of finding square roots. This is quite easy. To find the square root of a, we begin with an approximation x0 and improve upon it to get x1, then further x2, x3 and so on. The rule of improvement is:

x(n + 1) = (x(n) + a / x(n))/2

All operations need to be carried out up to the number of decimals required. The beauty of this approach is that you can start from any positive value x0 and in 8-10 steps you will have a fairly good answer. I will leave it to the reader to figure out why this works.

Saturday, September 17, 2005

Dilbert's Universe

Last week I had the experience of being in Dilbert's Universe. Some big shot of our company had arrived here, and we had a big gathering of around 1200 or so employees in a cramped place to listen to his gibberish. In fact people were flown in from various offices in the country to attend this meeting.

To be frank, the entire programme seemed to be a fiasco. Besides the boring speech by the big shot and the useless Q & A session, the food arrangement was damn frustrating. Add to that the hunger and impatience of employees, and there was a hell of a lot of 'hungama' for food. I guess the quality of food was OK, but I kept thinking that it could have been better, especially during such a mega event.

The transportation arrangements were nothing short of spectacular. Putting 70-80 employees (not general staff, but software engineers, managers etc.) in a cramped bus (which has a capacity of not more than 55) was a nice idea. Outside the bus it was raining heavily, so we couldn't open the glass windows to let the air in. Luckily I had a seat, so I did not have that much trouble. To top it all, the traffic of Delhi was worse. The journey which should have taken around 45 minutes took something like an hour and 40 minutes.

I don't know the cost of the entire arrangement, but I guess it would have been hefty, considering that 50 or so employees had been flown from Bangalore to Delhi for the purpose and that the event took place in a big hotel (I don't know how many stars it had, but I am pretty sure it wasn't a 5-star). But the hidden cost of 8000 man hours, which were wasted like anything, is staggering. I don't know how much that cost is (ask some job consultant/analyst for that).

If you think that this was enough to give you a picture of Dilbert's Paradise, then perhaps you have underestimated it. 2-3 days later there was a "sorry-and-thank-you" mail from the HR people (remember the HR in Dilbert strips?) mentioning that they had received some feedback and were sorry for the transportation arrangements. They assured us that such issues would be handled with proper care in future events of this nature. Wow!! That's so nice of you. I will not be there to receive the above mentioned proper care.

Thursday, September 15, 2005

Square Roots

I hope most of you are familiar with the long division process for finding square roots. Have you ever wondered why the method works? If yes, you have probably figured out the answer to that query, and you need not waste your precious time reading this post further.

However, if you haven't pondered over this matter, then it's nice for me, as I have some people to read this post. Well, I learnt this long division process way back in my 5th grade, and at that time the thought did not occur to me as to why the method worked. That's no big deal; very few people at that age are expected to think in that way.

I never inquired into this matter till my 10th grade, but sometime after that I happened to come across a book on "Theory of Equations", in which I studied the numerical solution of equations. Then the thought occurred to me: why not apply the methods to a quadratic equation? The process almost matched the long division process of finding square roots, although it was general enough to handle equations of any degree. Even then I was not satisfied, and I thought that there must be a much simpler explanation for the square root method; after some thought I grasped the idea behind that long division process.

Just to keep you curious, I will not explain the idea in this post, but will rather let you ponder over this simple problem. If you have got any ideas to offer please comment on this post. If not, please wait for a future entry in the blog.

What I find strange is that the idea, although simple in execution and proof, is not explained in any textbook whatsoever, and I stumbled upon it only after getting to "Theory of Equations". Contrary to this, the rationale behind Euclid's algorithm for finding the HCF (highest common factor) of two numbers is explained at length in many textbooks on number theory. It is based on two simple facts:
  1. if b divides a, then HCF (a, b) = b
  2. if b does not divide a, and leaves remainder r, then HCF (a, b) = HCF (b, r)
So after applying the division a finite number of times you will get a zero remainder, and the last divisor will be the required HCF.

The tragedy with books is that this simple rationale behind Euclid's algorithm is explained at such a late stage that by the time you understand it, your quest for knowing it dies. Moreover, number theory texts emphasize using this technique to find numbers m & n such that:

m*a + n*b = HCF (a, b)

rather than using it to find the HCF.

Hope this slight digression does not distract you from square roots. Please give it a thought to find the rationale behind the long division process. (Just to encourage you: the idea is very simple, and surely you don't need "Theory of Equations".)

Musings on 1, 2, 3 ... (contd...)

For those who found my previous two posts a bit abstract, I assure you that this one is the last piece and completes the "Musings" trilogy. And yes, I will keep it short.

We invented the naturals and the integers in the previous posts, and we next proceed to the rationals. Let's assume then that a, b, c, d etc. stand for integers. We know that division is not always possible in the system of integers, and we wish to get rid of that limitation. The strategy is similar to the one used for creating integers out of the naturals. We observe that an integer can be expressed as the ratio of two integers in many different ways. We thus denote an integer as an ordered pair (a, b) where a is divisible by b and b != 0. The integer being represented is a/b. Two ordered pairs are equivalent if they represent the same integer. Thus (a, b) = (c, d) iff a*d = b*c.

We then consider the system of all ordered pairs of integers of the form (a, b) with the only restriction that b != 0. Thus we may have a case when a is not divisible by b. The definition of equivalence is the same, and as before we put all the equivalent pairs in a group. Such a group is called a rational number, and we denote such groups by [a, b] where (a, b) is any pair belonging to the group. The definitions for rationals are now very clear:

[a, b] = [c, d] iff a * d = b * c
[a, b] + [c, d] = [a*d + b*c, b*d]
[a, b] * [c, d] = [a*c, b*d]

The rational number [0, a] with a != 0 is denoted by 0 (zero) and [a, a] is denoted by 1 (one) and they are the additive and multiplicative identities. If [a, b] is such that a != 0 != b, then [b, a] is also rational and then [a, b] * [b, a] = [a*b, b*a] = 1, and so [b, a] is the reciprocal of [a, b] and is denoted by 1/[a, b]. The division is now defined to be:

[a, b]/[c, d] = [a, b]*(1/[c, d])
= [a, b] * [d, c] = [a*d, b*c] provided c != 0 (b and d are nonzero already)

The rationals are adequate for the needs of arithmetic, but the human mind does not seem to be content with them, and it goes on to introduce the irrationals. I won't demystify the irrationals here, because that is already done (in the best possible way) in G. H. Hardy's "A Course of Pure Mathematics" (see Chapter 1). I would have a lot more to say about this classic and influential work, but not now.

I know that there are some people out there who might not be satisfied with such an abrupt ending of the "Musings" and would wish to go on with the "reals" and the "complex numbers", but these are best provided in books rather than blogs. For the others, I hope this post completes the trilogy effectively.

Sunday, September 11, 2005

Musings on 1, 2, 3 ... (contd...)

In the last post I discussed the basic properties of the natural numbers. This approach to natural numbers was first adopted by Peano in 1889, and as such these properties of the natural numbers are known as Peano's Axioms. Since then the natural numbers have been put on a sound and rigorous footing.

Let's continue from where we left off in the last post. We turn our attention from the natural numbers to the integers. You might be aware that integers were invented to solve the problem of subtraction: a number can be subtracted only from a greater number and not vice versa. Incidentally, subtraction gives us a new way of looking at the natural numbers. Any natural number can be expressed as the difference of two natural numbers in many (actually an infinite number of) different ways. For example:

2 = (4 - 2) = (5 - 3) = (100 - 98) = ... (and so on)

We now represent the natural number 2 as an ordered pair of natural numbers (4, 2). This representation is not unique, for we can write 2 as (5, 3) or (100, 98). The idea is that the difference between the numbers in the pair gives the number being represented. Also, for now the first number in the pair is greater than the second to make the subtraction possible. Two such representations (a, b) and (c, d) are said to be equivalent if they represent the same natural number. Thus (a, b) = (c, d) if (a - b) = (c - d), which is the same as the condition (a + d) = (b + c).

From this observation we make our advance. We consider all the pairs of natural numbers (a, b), even allowing the cases a = b and a < b and not only a > b. Two such pairs (a, b) and (c, d) will be said to be equivalent if (a + d) = (b + c). We club all the equivalent pairs in one group. We represent each such group of equivalent pairs by [a, b] where (a, b) is one member of the group. It does not matter which pair is used to represent the group. Such a group is called an integer. The equality of such integers is defined as follows:

[a, b] = [c, d] iff (a + d) = (b + c)

The special integer [a, a] is of fundamental importance and is denoted by a special symbol 0 (zero). The integer [a + 1, a] is also very important and is denoted by the symbol 1 (one). The addition and multiplication of integers are now defined as follows:

[a, b] + [c, d] = [a + c, b + d]
[a, b] * [c, d] = [a*c + b*d, b*c + a*d]

The reader can verify that the usual properties of addition and multiplication hold, and further that the integers 0 and 1 defined above are respectively the additive and multiplicative identities. One curious fact is that [a, b] + [b, a] = [a + b, b + a] = [a + b, a + b] = [a, a] = 0. So [b, a] is the additive inverse of [a, b] and is more properly denoted by -[a, b] (with a minus sign).

The integers of the form [a, b] with a > b behave exactly the same way as the system of natural numbers, with [a + 1, a] as unity and [a + 1, b] as the successor of [a, b]. If we identify such integers with the natural numbers (as we can in view of the Principle of Mathematical Induction), we almost have the integers as an extension of the natural numbers. Subtraction is now possible by defining:

[a, b] - [c, d] = [a, b] + (-[c, d])
= [a, b] + [d, c] = [a + d, b + c]

So we have now got the integers at hand, and the problem of subtraction is solved. I guess you must be asking why the hell we used the machinery of ordered pairs and equivalence groups. Why didn't we simply add stuff like 0, -1, -2, -3 to our repertoire of natural numbers and be done with it? The point is that in mathematics we build up from the given, and do not bring in alien stuff as and when required. The integers had to be created somehow out of the naturals, and not by the use of negative ghosts like -1, -2.

The post needs to be concluded to keep it short, but the journey into number systems will continue in the next post. Till then read someone else's blog. Bye.

Musings on 1, 2, 3 ...

Kronecker once remarked, "God created the integers, all else is the work of man." Being a Bright, I think that integer arithmetic is also the work of man. In the following paragraphs I will explain how this work was done.

Let's begin at the beginning, and assume the concept of unity, i.e. there is a thing called 'one' or 'unity', normally denoted by the symbol '1'. Another fundamental concept which we need to develop the whole of integer arithmetic is that of a successor. So we propose the existence of another entity which is the successor of unity, and denote it by s(1) for the sake of brevity. As human beings, we are not content with 1 and s(1), so we go on and create the successor of s(1), which we denote by s(s(1)). We go on creating successors one after another and obtain the following stuff:

1, s(1), s(s(1)), s(s(s(1))), s(s(s(s(1)))), ...

You would have guessed that this is the same as the sequence 1, 2, 3, 4, ..., but for the time being let us not go into decimal symbols and assume that we will use 1 and s (for successor) as our symbols. What I would like to point out is that these two simple concepts, unity and successor, are enough to explain the whole of arithmetic.

The numbers obtained using 1 and the successor mechanism will be called natural numbers. If we follow the above sequence of natural numbers carefully, we observe that every term in that sequence (except the first one) is the successor of the previous one. Note that unity itself is not the successor of any entity. We note the basic properties of this number system:
  • There is a distinguished entity called unity (which we denote by the symbol 1).
  • Every entity other than unity is a successor of some unique entity.
  • A successor of an entity is not the same as the entity itself (i.e. s(x) != x).
  • Every entity has a unique successor.
  • Unity is not the successor of any entity.
These properties define what we call a system of natural numbers and the individual entities in the system are called natural numbers. The principle that "any two systems of entities with the above properties are essentially equivalent as far as their mathematical use is concerned" is called the Principle of Mathematical Induction.

I guess most of you have never seen the principle stated in this form, but the essence of this statement is the same as the one provided in many textbooks. To see the equivalence of the version given here with the one provided in textbooks, you only need to observe that s(x) (here) is the equivalent of (x + 1) (textbooks).

Now the introduction of operations like addition and multiplication is fairly straightforward. We begin with addition first (because multiplication is defined in terms of addition). The basic idea is: first learn how to add 1 to a natural number. You must have guessed the following definition:
x + 1 = s(x) ... (adding unity is the same as finding successor)

After learning how to add 1 to a number x, we want to add a number y (different from unity) to a given number x. Since y is different from 1, it is a successor of some number say z. We now define:
x + y = x + s(z) = s(x + z)

This completes the inductive definition of addition. To illustrate this with examples, let's bring back the decimal symbols and add 3 to 4. To add 3 (y) to 4 (x), we first find z such that s(z) = 3. Clearly z = 2. So we need to add this 2 to 4 and then find the successor of the result. Thus

4 + 3 = 4 + s(2) = s(4 + 2)
= s(4 + s(1)) = s(s(4 + 1))
= s(s(s(4))) = s(s(5)) = s(6)
= 7.

Thus adding a number to another is like adding a succession of 1's to the number. The reader will now find it easy to grasp the following inductive definition of multiplication:
x * 1 = x
x * y = x * s(z) = x * z + x (where y = s(z), i.e. y != 1)

Notice how addition is used in the definition of multiplication. I would advise the reader to multiply 2 by 3 using this definition. I need not elaborate further on + and *; it is sufficient to remark that using these definitions one can deduce all the properties of addition and multiplication. One can then define subtraction and division as the inverses of addition and multiplication respectively.

The post is getting a little lengthy and so I would stop here. The discussion on integers (positive and negative) will be continued in the next post and till that time the reader can use these definitions of addition and multiplication to prove commutative, associative and distributive laws.

Saturday, September 10, 2005

Design in C++, code in C

When I came fresh out of college, I was a huge fan of the C language. Actually, to be frank, I was in love with C. I had the attitude of discouraging Java programmers in college (this attitude is still there, and I will justify it in some other post). There was not much emphasis on C++ in my college, and nobody bothered to learn it. I just wrote a few programs in C++ because I was learning the Qt GUI toolkit. Apart from that, I never thought seriously about C++. But things have changed since then, and now I feel much better while using C++.

What's the big deal about learning OO stuff and languages like C++? To answer that question I am providing an example from my own experience during the college days. In my 1st year, I used to work on VT100 terminals in a computer lab, and there I was inspired by text based programs like PICO and PINE available on the Unix machines there. There was no GUI, and we were alien to the Windows world.

I felt a striking difference between programs like PINE and the ones I used to write during C assignments. My C assignments used the C standard I/O library, but looking at those PINE programs I felt that they didn't use this C stdio. They had menus, and each key press was followed by some response on the screen. I did not have to press the ENTER key to get a response. I learnt enough of Unix and terminal I/O stuff in the coming years that I could finally manage to write programs which responded to single key presses. But that wasn't enough for me. I still wanted the look and feel of text based menus. I figured out a way to do this.

I reasoned that I should scan for single key presses in a while loop and process each key input. So I used a switch/case mechanism to handle each key input. The idea was to redraw the screen after each key press. Each key press changed some parameters (say, the index of the menu item to be highlighted) and the screen was redrawn based on those parameters. Initially I had only one menu, and the parameter needed for its drawing was the index of the current menu item. Then I tried displaying two menus. This required two indexes and another variable to indicate which of the menus was active. Things got complicated when I added things like OK/Cancel buttons. I had to keep track of which of the menus/buttons were active. Then after a lot of work I added UI elements like a check box, a list box, a line editor, and finally a full-blown multi-line text editor. I also wrote data structures for these beasts (menus, buttons, editor etc.) and implemented functions to draw each of these on the screen. My mechanism was simple: get a key press, search through the myriad of structures to find the one corresponding to the active UI element on the screen, update some field of the structure, and then redraw that UI element. The code, although understandable to me, had become a big mess (luckily, everything looked nice on the screen, and many Unix guys were impressed with my character based UI).

Did you notice something there? This is the way a typical C programmer thinks. Starting from main(), he goes statement by statement, function by function, and accomplishes whatever he wants to do. He tries to see the entire program as a sequence of statements, groups frequently occurring statements into functions, and calls them when required. The problem comes when you have a hell of a lot of statements (code) and data at your disposal. You are engulfed by a myriad of data structures and functions. This is exactly where object oriented design comes to your aid.

Central to OOP is the key idea that the code and data in your program are related. You have some data and some functions to manipulate it. You club these together to make an object (or class; there is a slight difference between class and object, but let's not bother too much about it). So what's the big deal? I am still calling functions one after another. Right! But there is a very different way of thinking which emerges from clubbing code and data into an object. You make the object responsible for all its functions. The object presents a set of functions through which it can be manipulated. The user interacts with the object without any knowledge of the way the object implements its features. The advantage to the programmer is that he is able to think on a higher level, dividing the program into a set of independent and responsible objects, and focuses more on the interaction between the various objects. You no longer modify the object's data directly; you just make a request to the object for a particular action and the object does its job properly.

To top it all, OOP languages introduce the powerful features of inheritance and polymorphism. When you find that some objects behave in a very similar way, you abstract the common features and put them in a base class. The objects are derived from this base class and inherit all its properties. The client accesses the objects through the base class interface only. This has an advantage if in future you want to add another derived class: the client code uses only the base class and need not be changed. This leads to fairly extensible designs.

So let's see how much this OOP helps in my text-based UI example. First I could create classes for each kind of UI element, having only two features for the time being: they know how to draw themselves, and they can process any key press. Since all these UI elements share these two features, I could as well create a base class UIElement and derive them from it. This has an important advantage in UI design. Say I needed a complex UI, say a list box where some entries are plain strings and some are check boxes. This would not be a problem at all, because a list box is a list of UIElements and it does not care what kinds of UI elements are actually in the list. While populating the list box we can then add normal strings as well as check boxes without any pain.

To take our example further, let's say I want to change the color scheme of the UI elements. We could have a class ColorScheme and change the drawing function of UIElement so that it works in coordination with this ColorScheme class. The client code does not change. Notice that since each UI element now knows how to process a key press, I don't have to put a big switch/case in main() to handle user input. That is conveniently delegated to the individual UI elements.

So much for OOP! Now which language should I use in practice? For C lovers the choice is definitely C++, and there is something divine in pointers which I like, so I would not opt for Java. Where does this leave the good old C? Well, we need to make the objects responsible enough, and for that we need the if/else, switch/case, while/for/do-while, library functions, and other C stuff. So it is down at the level of implementation details that we should start thinking in C. Think in terms of objects/classes during the design phase and think in C while writing code. That's what the title of this post says: "Design in C++, Code in C"

Friday, September 09, 2005


Finally, I have started wearing T-shirts, a move which (I hope) will be liked by many of my friends. However, don't hope for jeans as of yet; that might never happen.

To those who are following my blog seriously, I have unloaded one more DLL.

Tuesday, September 06, 2005

Review: Hindi Movie "Dansh"

Yesterday evening I went out with my roommate to watch a movie named "Dansh". There wasn't much advertisement for the movie, but we thought it would be at least somewhat better than the Salman crap "No Entry" and the disgusting "Ramji Londonwaley". Spending 150 bucks on cheap comedy isn't my style.

While we were entering the theatre, the security guard asked during the checkup, "Didn't you guys get tickets for any other movie?" We told him that we had intentionally come to watch this movie, and it's not because we didn't get tickets for "No Entry". Entering the theatre, we found not more than 50 people there for this movie.

The movie was about the settlement between the Mizo National Front (MNF) and the Indian Govt. Well, it seemed to be like that, but in fact it had much more philosophical than political content. The entire plot is about the leader of the MNF, his wife, and the person who raped his wife some years ago.

By some chance, the rapist lands up in the house of this leader (played by K. K. Menon). His wife (played by Sonali Kulkarni) recognizes the guy without fail and tries to torture and kill him. Quite contrary to my expectations, the leader, on the other hand, urges his wife to forget the past and forgive the guy.

The whole film consists of arguments/debate among these three people and the audience is hooked on the movie to see whether the rapist dies or is forgiven. The nature of the debate is philosophical/sentimental and highly serious.

The acting by both K. K. and Sonali is superb. I was impressed with K. K. in Sarkar too. But the performance of Sonali was unexpected (compare this with her roles in DCH and Pyaar Tune Kya Kiya). I think she deserves to be compared with serious actresses like Urmila and Tabu.

The plot is highly unusual compared to other movies based on militancy. It is not for the casual moviegoer, and it is certainly not for entertainment. Watch it if you need some food for thought. (Well, it might be too much thought for some people.)

My roommate and I liked the movie very much, but I guess it will not attract much of an audience and will be gone in a week. So watch it by this Friday.

Synchronization Mechanisms and Yahoo! (contd...)

Hey, we are back to the Yahoo! Messenger stuff. My earlier entry contained details about a Yahoo! Messenger client for Linux. That was done way back in 2001-2002 when I didn't have much programming experience (I mostly wrote console applications, and those were mainly mathematical or network oriented).

Now, after some years of experience working under Windows/WinCE, I look back on the Yahoo! client and find some problems.

One of the problems was that using 3 timers was a bad idea. The functions were called at regular intervals of some milliseconds, and the time between 2 calls was wasted, so the network response would be slow. Ideally they should have been put in while loops, and each loop should have run in a separate thread. I still don't know about threads in Linux, but that should not deter me from my discussion here. Using threads simplifies one more thing. All our networking code was non-blocking (the sockets were opened in non-blocking mode), which wastes CPU cycles on polling. Putting those functions in threads would have allowed the networking code to be blocking in nature.

Another problem was that although we wrote the code in C++, our design of classes was arbitrary. Luckily we had some sense to keep the networking code separate from the GUI code. That's why we made queues. These queues were the link between the networking and GUI code. Let's not go into issues of object-oriented design in C++ here (that might be done in some other post).

Let's now see the design of our client from the perspective of threads. Assuming that those 3 functions are in separate threads, we have another problem. These functions access the same queues for reading and writing, and without any synchro mechanisms the data in the queues will get corrupted. We could use any of the mechanisms mentioned in the previous blog entry to solve this problem. But the approach of using message queues seems to be the best one, and I discuss it next. Ah! Finally, the synchro stuff is linked with Yahoo!

So we replace the Send and Receive queues with OS-provided message queues. Note that reads and writes to a message queue are handled atomically, so they do not interfere with each other; the operating system takes care of the synchronization issues. Now the function SendToNetwork() would simply read from the Send queue and deliver the message to the network. Reads from a message queue are blocking in nature, so they do not waste CPU cycles. Only when there is data in the Send queue will the SendToNetwork() function consume CPU (compare with the previous design: the function was called whether data was available or not). Similar is the case with the function GetMessage().

By the way, did you notice that we could combine the two functions ReceiveFromNetwork() and GetMessage() into one function, which would get data from the network and deliver it to the GUI without putting it in a Receive queue? We had kept that queue only to separate the network and GUI code. Hey, but you cannot remove the Send queue in the same way. It is required because we don't want the user-interaction thread to wait while messages are being sent to the network.

So you see how useful message queues are when used properly. Compared to other mechanisms like events and semaphores they are costly (message queues need memory for the messages), but when you need to pass data along with the synchronization signal, you need message queues in some form or the other.

At last, let me come to the whole point of this entry. Understanding these synchronization mechanisms is not possible without writing some multi-threaded application. Reading an OS book won't help unless you apply the ideas in the real world. The worst part of technical books is the jargon, like Producers and Consumers, which complicates things. So, instead of telling students to write crafted applications (apps that are compelled to use pipes/message queues/events just for the sake of using them), professors should concentrate on genuine ones.

Monday, September 05, 2005

Synchronization Mechanisms and Yahoo!

From the title you must have guessed that this is going to be a damn technical entry. To be fair, if you are not a computer geek you might have some difficulty in getting to the point I am trying to make. But then you can always Google and learn the things you don't know.

Anyway, I assume that the reader has some idea of the synchronization mechanisms used by various operating systems (another bit of comp jargon). I am listing the ones I know and use frequently:
  • Events
  • Semaphores
  • Pipes and Message Queues
  • Critical Sections
  • Mutex
When I was in college, I didn't understand any of this hi-fi stuff except pipes. Thanks to STMicroelectronics, I have come to understand all of them during my job here as a software engineer.

Hey, what about the Yahoo! in the title? Just wait a little longer.

We were taught all these synchro mechanisms in the Operating Systems course during our B. Tech. But the way of teaching there was highly orthodox. For example, we had Producers & Consumers associated with a message queue, and there was the highly infamous Dining Philosophers problem. I understood literally nothing. No practical code samples were shown of the application of these mechanisms. Luckily I knew something about pipes from my own Unix experience there, and I could manage to pass the course with a C grade.

Let's come to the Yahoo! part. During our final year (2001-2002), my friend Dhar and I started designing a Yahoo! Messenger client for Unix systems (actually Linux). There wasn't any good client for Linux at that time. There was GAIM, but it didn't support the HTTP proxy interface with GET and POST requests. There were other clients (one of them was the official beta from Yahoo! itself) which supported the HTTP proxy, but they used to hang in the middle of some operation (damn frustrating!).

Dhar started analyzing the messenger protocol by studying data dumps from the Windows IM client (how he got those dumps is another story, and it won't be revealed to the public for security reasons), and we began with a small console application which would log on to Yahoo!, list your buddies, and log out. Then we added functionality to send a message to a buddy. Next we added the ability to add buddies. But the console application was not interactive. It was used to test whether we had decoded the protocol properly or not. So to test any feature we would code that part and then run the program. It would log on, execute the feature, log out, and then end itself.

In the course of 3 months we had decoded enough of the YMSG protocol to be able to write a fairly good IM client. So we thought, why not have a full-fledged GUI application? Let me tell you that writing GUI stuff in the Linux/Unix X11 environment was a hell of a lot more difficult in those days. You didn't have the equivalent of Microsoft Visual C++ (Windows) on Linux. A lot of thanks to the guys at Trolltech, who developed Qt, the best multi-platform GUI toolkit in C++. We selected Qt and I started developing the GUI stuff. Some of the fancy artwork (icons, progress bar, visual UI) was designed by Dhar.

We had only one design goal to meet: our app should not hang!! Of course, another goal was to have sufficient features so that people could actually use it. In order that the application not hang, we had to ensure that all operations were non-blocking in nature. This constraint meant that user interaction should not interfere with network operations. We had the following approach:
  • We had one queue (a normal queue implemented through linked lists, not a message queue) to store the messages to be sent from the user to the network. When the user sent a message, it was not delivered directly to the network (otherwise the GUI would hang for the time the message was being sent). Let's call this queue Send.
  • Another queue (call it Receive) was used to store the message received from the network.
  • We had a function, say SendToNetwork(), to get message from the Send queue and deliver it to network.
  • Another function, say ReceiveFromNetwork(), used to put messages in the Receive queue.
  • Hell, we had one more function, say GetMessage(), which used to get messages from the Receive queue and show them on the screen.
The strategy was to somehow keep these functions independent of one another, so that one did not interfere with another, and they had to keep doing their work in a loop until the user exited the program. (You must be thinking of putting them in three different threads, but sorry! we didn't have any idea of multi-threading on Linux.)

Again, Qt came to our help. It has the concept of a timer, through which you can have a function called when the timer expires. We used 3 timers for these three functions, which ensured that they would be called at regular intervals of 10, 50, and 100 milliseconds. So these functions were sort of running concurrently and doing their jobs without any need for multithreading.

Needless to say, the public was impressed by the nice GUI and the special feature of multiple logins from a single PC (actually that is not a big deal; the big deal is to prevent multiple logins). We developed the client to include a lot of features, including multiple profiles for a single login, getting buddy names from the Yahoo! address book, Yahoo! smileys in all their glamour, typing notification, and, most complex of all, file transfer. And our design goal was met. The application never hung in the midst of any damn operation. The user could always give some input and get a response from the system. It was much better than any Yahoo! client available on Linux.

What's the connection between synchro mechanisms and Yahoo! Messenger as the title of the post suggests? Well, that will be continued in the next post to keep the length of this entry manageable.


In my last post, I introduced the concept of Best Friend. In this entry I will add another term to your vocabulary.

DLL is actually computer jargon (it stands for Dynamic Link Library, about which I will talk some other time), but my friends (mostly my wingmates from college) and I use it to convey a totally different meaning.

The term arose with its new meaning during my college days. I behaved somewhat differently from most guys there in many respects, so people thought of me as an eccentric guy. I would always get into arguments with my wingmates over some eccentric behavior of mine. I would try to justify my stand on the basis of some moral principle. Most of the time the argument would get heated and neither side would win. The arguments sometimes seemed quite futile.

But whenever it seemed like I would lose the debate, I would bring in a new principle of my own and start defending myself on its basis. My wingmates got frustrated. They said that I am a man of principles, but my principles are not so rigid: they change when required, especially in the midst of a heated debate.

One of my wingmates, Phanish, had an idea and gave a name to my eccentric and dynamic principles. The term he chose, as you must have guessed, was DLL. He said that I just loaded any damn idiosyncratic principle during an argument to justify myself, much like a computer program loads and unloads a DLL as and when required. This new use of the DLL terminology was accepted by all my wingmates and some college friends.

Since then, DLL has had another meaning, namely an idiosyncratic/eccentric principle. I had a hell of a lot of DLLs to frustrate my friends with in college. I did not have much to gain through those DLLs; they were mostly used to initiate some debate, and I just wanted to know what other people thought about a particular topic.

Hey, please don't think I am just a man of arbitrary principles which can be used and thrown away at my whim. I do have some moral principles which guide me all the time (everybody uses some principles of this kind, whether they accept it or not), but those are not dynamic in nature (they are damn rigid, man!). So they are not included in the list of my DLLs.

Wanna have an idea of the kind of DLLs I used to have during my time in college? Well, here is the one which was most frustrating to people there:

We had 8 semesters in our B. Tech. course at IIT Kharagpur. There was a movie show every Friday in Netaji Auditorium, and we could get tickets for an entire semester for a mere 30-40 rupees. I used to take tickets only in the odd semesters (1, 3, 5, 7) and didn't watch movies in the even semesters. When guys asked me about this, I told them that this was one of my DLLs and that I was going to follow it for my four years in college. Please don't start throwing rotten eggs, tomatoes (and what not!) at me for telling you about this DLL. I did take tickets in the 8th and final semester and unloaded the DLL.

Stay tuned for Pandy's new jargons in forthcoming posts.

My Best Friend

This entry is going to be a bit personal, but I will try to make it interesting to the general reader as well. So you must be guessing that I will introduce you to my Best Friend and bore you with some sentimental stuff. Well, I will definitely introduce you to my Best Friend, but instead of boring you, I will introduce him in an illuminating way.

Our first meeting happened 3 years ago. I was employed at STMicroelectronics, Noida, India. My roommates and I were searching for a house to rent. We didn't know anybody in the town and were moving from place to place, asking anyone who met us on the way about a house for rent. It was very frustrating, especially in the heat of a Delhi-Noida summer. One fine day we met a guy on our way who said he knew about 2-3 houses which we could have on rent. He showed us a house, but we didn't like it. He then told us to see another house. In the meantime he mentioned that he was in the "property" business and kept saying how eager he was to help us.

One of my roommates guessed that he was a broker and asked him flatly about it. He admitted that he was. Now my roommate Gautam said, "We don't believe in this brokerage stuff. We will find a house on our own. We don't need your help." The guy was a bit shocked at the way my roommate spoke. Well, that's when all three of us met our Best Friend, the broker. The term "Best Friend" was coined then and there by my roommate Sunil, and it is used by us (our circle of friends) for a person whom we hate like hell and wish to avoid as much as possible. You could think of it as a pun or as a euphemism.

I am sure as hell that you also must have met many Best Friends in your life. Please use this term to introduce them to your friends. Even if you don't like the term or don't wish to use it, please take note of it as it will be used in a lot of my blog entries. I will use capital letters to distinguish it from the normal "best friend" concept.

At last, I hope that this entry does not enrage any of my Best Friends.

The Da Vinci Code and others ...

I met Dan Brown through one of his books, which topped the NYTimes bestseller list when it was published. I am talking of "The Da Vinci Code". I bought the illustrated version and, frankly speaking, devoured it in 2-3 days. The illustrated version brings you all the art and architecture details; you feel that it is you, instead of Robert Langdon, visiting the Louvre. Do spend some extra bucks to get that hardbound illustrated version.

Next I come to the "others". I have the habit of buying books by looking at the author's name: if I happen to like one book by a particular author, I tend to buy all of his books (if they are available...). Authors like Richard Dawkins, Steven Pinker, Ayn Rand, and G. H. Hardy are my favorites. But let's not talk about them now, because this post is about Dan Brown and his books.

So, following my tradition, I got hold of "Angels & Demons", "Deception Point" and "Digital Fortress" (in that order). I have now read all four of his novels (although in reverse order of publication). Although the plots of the novels differ entirely from each other, I have observed some common features.
  • In the prologue somebody is murdered and the whole novel is about solving this murder mystery.
  • The hero is living his cool life and all of a sudden finds himself forced to solve the murder mystery. At this point he must be saying to himself: "What the heck is all this happening to me?"
  • Luckily his frustration is relieved somewhat by the presence of a beautiful and intelligent (surprise!!) heroine. The heroine has a lot of knowledge about the context of the murder but she is unable to solve the mystery and expects the novice hero to help her.
  • The villain is normally introduced as an Authority (in terms of administrative power or knowledge) and he seems to be helping both hero and heroine to find out the murderer.
  • The act of murder itself is outsourced to some third party who is an expert at executing orders and does not care who his master is. Thus the murderer is also unaware of the villain's identity.
  • The contract killer is a "stud" who almost always outpaces the hero (and the heroine too). Only he is never able to kill the hero (which is again a surprise, as the hero is not a fighter by profession). This is what frustrates the villain; he then intervenes and comes face to face with the hero.
  • The entire novel is divided into many small chapters (3-6 pages) and they are designed in such a way that the reader is hooked for the entire length of the novel.
  • The novels contain a lot of information about the context of the plot, and this information is real. Dan Brown does a hell of a lot of research before writing his stuff. I came to know much more about CERN from "Angels & Demons" than from my physics books. Around 30% of "The Da Vinci Code" is full of information about Christianity. This is where his novels stand out.
  • The entire plot takes place in a span of not more than 2-3 days, and the hero is always short of time to solve the mystery. This makes for a fast-paced novel, which is good for people like me who can't wait too long for the secrets to unfold. I have read each of his novels in 2-3 days, almost matching the timespan of the plot.
In short, I have abstracted out the base class (sorry for the computer jargon if you are not a geek) of all his novels. Somebody looking at this base class would say that all his novels are the same old stuff. But despite so many similarities, the plots of his novels are highly original and completely different from each other. This is what made me read all of them.

Watch out for his sequel to "The Da Vinci Code" which will be released in a year or two.

Sunday, September 04, 2005

Change, Change, Change ...

I wrote this article long back (about a year ago), when one fine morning I woke up at 5.00 AM out of frustration. Hope the topic does not frustrate anyone anymore.

Change is the law of Nature. Nobody denies this fact. Had there been no change, there would have been no concept of time and all of physics would have been dead. Thus change is inevitable. Some changes are favorable to us while others are not. There is a nice little book, “Who Moved My Cheese?” by “…” (well, right now I don’t remember the author’s name), which tells you very nicely about change. The book is highly popular and well acclaimed all over the world. The lesson in the book goes something like this:

Do not be afraid of change.
Anticipate change.
Adapt to change.
(Else you will perish).

I think that the author made his point very well, but it seems to me (it’s my personal view) that he has left some aspects of change totally untouched. I was thinking about those aspects this morning and couldn’t resist putting my ideas into this document. My original motivation for this document was my friends, who have long since called me a rigid person (immune to change).

Anyone who has even the slightest idea about evolution and natural selection will understand the basic fact that those who don’t adapt to change will not survive in nature. However, many don’t realize that an organism doesn’t adapt to change consciously (i.e. by its own will). The genes simply mutate randomly and the ones more suited to the environment are selected by nature. Genes have no foresight and no intention to change as such.

But we humans are a bit different. We do have foresight and free will. So it is really we who can adapt to change by our conscious efforts, and many people do adapt to situations. But the important point the author of “Who Moved My Cheese?” misses is that we are the only creatures who are able to bring about change. We have the free will to act and bring about changes in our environment, rather than just sitting passively, waiting for a change to occur and adapting to it. Nobody, I believe, really thinks much about how a change occurs in the first place. Nowadays much of the change is brought about not by Nature, but by humans.

Adapting to change is what I will call an opportunistic view of life, where you don’t have foresight and you adapt yourself to the situation to reap the maximum short-term benefit from it. This is a typical selfish behavior hardwired (or programmed) into human nature. The genes that made us lack foresight and work toward the short-term goal of replication. But I believe that if all people start having this view of life, then we will be reduced to selfish automata, with no long-term plan or purpose. I believe that all human progress will be halted and civilization will come to an end.

The really important faculty of humans, which has brought about all this development, is our free will: the ability to act consciously, to bring about change (and not just adapt to it). If we are not using that faculty, then why at all are we blessed with minds or brains? Apparently for no purpose? I hope everybody agrees the answer is no. We have minds specifically for bringing about change. Adapting to change does not require any brains (because that is what all other organisms, having no brains or little brains, do). There have been some people (rather smart people; I think the author of that book is one of them) who have publicized this adaptation idea so much that we have practically no desire left to bring about change.

So I ask the readers of this document: where does this adaptive thinking lead? To find an answer you don’t need to do research. Just look around you. See how people have embraced corruption. They have adapted to it remarkably well (I believe such an adaptation is unparalleled in the entire world of living beings). Look at the way people justify corruption in the name of this adaptation and provoke others to adapt as well. See the way girls (not all) have adapted to the new standards of fashion (which have nothing to do with fashion, just with sex). I do not want to add further items to this list of adaptations; I have mentioned only the ones that came to mind quickly.

We have lost all rationale for deciding whether a change is worth adapting to in the long run or not. We are blindly adapting. Nobody is bothered about changing the state of affairs. We have become slaves at the hands of a few smart and cunning people who are bringing about these bad changes in our society. Rather than opposing them, we embrace them. Take an example: most teachers discourage new ideas and want classroom problems to be done in the way they have instructed the students. When a student discovers a new way of doing things, the teacher gets frustrated that the student is not following him. You might be thinking that this happens only in small schools where the teachers don’t have a broad outlook. Well, such a case happened with one of my friends at IIT Kharagpur. In general there is a strong aversion toward new ideas. People having new ideas are thought to be eccentric, and the word eccentric has become almost derogatory. But people fail to realize that all modern scientific development is based on the work of a few eccentrics. Almost all the ideas of modern science were at first treated with aversion. Had the scientific thinkers been adaptive, we would not have enjoyed the so-called benefits or applications of science.

There are some people who have realized that others are too adaptive to change, and they reap a large benefit from it. These people are what I will call the leaders of present-day India. They do not have any ability as such to become leaders in the true sense of the word, but they rely on the fact that the common Indian will follow whatever they say. People adapt to new political ideas without thinking about their impact in the long run, and those who are able to think about it are in a minority, so their effect is nullified. We have chosen to become followers rather than leaders. Following is not all that bad, but it’s a disaster when you follow the wrong guy. Whenever there are many followers, there will emerge a smart non-follower who will capitalize on the fact that others are good followers and will popularize his ideas to the maximum. That’s what has practically happened in India.

The problem, I find, is that it is much more difficult to bring about change than to adapt to it. So there should have been a book called “How to Move Cheese?” rather than “Who Moved My Cheese?” I wonder why there isn’t one. (Perhaps writing such a book is much more difficult, and its chances of selling in the market much lower.)

Don’t you all think that it’s high time we put our minds to use and think actively rather than passively about change? Well, I think so, and that’s why I get into heated arguments with many people over this topic.

I hope I have made my point clear. Further discussion can take place in the comments on this blog entry.

Saturday, September 03, 2005

Finally I too started blogging!!

After a year or so, I have finally started blogging. After a lot of password recovery through email, I have managed to start using this blog account. Watch out for interesting stuff on topics like computer programming, maths, science, philosophy, book reviews, and finally sex... (how could I miss that?)