Optimizations in code.

Certainly, using a framework like .NET or Java alleviates a lot of work for developers, allowing you to do much more in less time. But, as Spider-Man reminds us, “with great power comes great responsibility”.

A few weeks ago I was talking with a very experienced developer from another country. He pointed out that one of the common flaws of Chilean developers is a lack of technical background, especially in math (for logic and abstraction), together with a habit of adopting coding styles without taking into account what happens at the machine level. He comes from the assembly world, so it seems perfectly reasonable to him to see frameworks as a menace to good, optimal coding practices.

That's not far from reality: it's important not only to avoid developing in a “single and procedural” way, but also to understand why the frameworks are there. I know a lot of people who say “I know Java/.NET” mostly because they use it: they open Visual Studio, NetBeans or some other IDE and start to code. But often they just sit down and write procedural code rather than object-oriented code. Is it necessary to overwhelm your project with a lot of object-creation overhead when you simply start with a method that calls another one and waits for the return? Written that way, we are wasting the great majority of the advantages the frameworks give us.

Years ago, I read an excellent paper about optimizations in .NET. Unfortunately I don't remember where I read it, so I will try to reproduce one of its examples.

Consider a 10,000 × 10,000 matrix of integers and the task: sum every value in the matrix into a variable, avoiding an overflow exception.

Two simple solutions are:

          
            int sum = 0;
            for (int i = 0; i < 10000; i++)
              for (int j = 0; j < 10000; j++)
                sum = (sum + matrix[i, j]) % int.MaxValue;

First option

          
            int sum = 0;
            for (int i = 0; i < 10000; i++)
              for (int j = 0; j < 10000; j++)
                sum = (sum + matrix[j, i]) % int.MaxValue;

Second option

You can see that the code is practically the same; the only difference is the matrix indexing. The first starts at a row and loops through its columns, while the second does the opposite: it starts at a column and loops through the rows.

Let's see what the result is with this code:

    
      int sum = 0;
      Stopwatch stopWatch = new Stopwatch();
      stopWatch.Start();
      for (int i = 0; i < 10000; i++)
        for (int j = 0; j < 10000; j++)
          sum = (sum + matrix[i, j]) % int.MaxValue; // here you change between i,j and j,i
      stopWatch.Stop();
      TimeSpan ts = stopWatch.Elapsed;
    

In the first case (i,j) the operation takes about 1-2 seconds (1.7, 1.8, etc.) on my laptop, while the second case (j,i) takes about 13 seconds.
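The timing snippet above isn't self-contained, so here is a minimal runnable version of the benchmark (a sketch: the class and method names are mine, and a zero-filled matrix is enough, since we are measuring access cost, not values):

```csharp
using System;
using System.Diagnostics;

class TraversalBenchmark
{
    const int N = 10000;

    static long TimeSum(int[,] matrix, bool rowMajor)
    {
        int sum = 0;
        var stopWatch = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum = (sum + (rowMajor ? matrix[i, j] : matrix[j, i])) % int.MaxValue;
        stopWatch.Stop();
        Console.WriteLine($"sum = {sum}"); // print so the loop isn't optimized away
        return stopWatch.ElapsedMilliseconds;
    }

    static void Main()
    {
        var matrix = new int[N, N]; // all zeros: we only care about access cost
        Console.WriteLine($"(i, j): {TimeSum(matrix, true)} ms");
        Console.WriteLine($"(j, i): {TimeSum(matrix, false)} ms");
    }
}
```

Note that both versions compute exactly the same sum; only the access order changes.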

As you can see, a little change in the indexing makes a big difference in the performance of the code. The reason is that a matrix is, under the hood, nothing but a simple flat array. The indices i, j are just helpers for locating the information. Let's replace our matrix with an array we can work with directly:

  
    int arrayLength = 10000;
    int[] myArray = new int[arrayLength * arrayLength];
  

If we access matrix[25, 100], it's the same as accessing myArray[25 * arrayLength + 100]. Now, why the difference?
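Before answering, the equivalence itself is easy to check with a tiny sketch (the fill values and names are illustrative; both structures are filled identically and then compared):

```csharp
using System;

class IndexEquivalence
{
    static void Main()
    {
        const int arrayLength = 4; // small size so we can reason by hand
        var myMatrix = new int[arrayLength, arrayLength];
        var myArray = new int[arrayLength * arrayLength];

        // Fill both with the same values, using the row-major formula.
        for (int r = 0; r < arrayLength; r++)
            for (int c = 0; c < arrayLength; c++)
            {
                myMatrix[r, c] = r * arrayLength + c;
                myArray[r * arrayLength + c] = r * arrayLength + c;
            }

        // matrix[r, c] and myArray[r * arrayLength + c] always agree.
        Console.WriteLine(myMatrix[2, 3] == myArray[2 * arrayLength + 3]); // True
    }
}
```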

The difference occurs because in the second option (first the column, then the row) you are “jumping” between elements of the array/matrix (represented as a linear array):

  
    int sum = 0;
    for (int i = 0; i < 10000; i++)
      for (int j = 0; j < 10000; j++)
        sum = (sum + matrix[j, i]) % int.MaxValue;
  

…is the same as

  
    int sum = 0;
    for (int i = 0; i < 10000; i++)
      for (int j = 0; j < 10000; j++)
        sum = (sum + myArray[j * arrayLength + i]) % int.MaxValue;
  

The result is that the second version accesses the elements in this order:

    sum = (sum + myArray[0]) % int.MaxValue;     // i = 0, j = 0
    sum = (sum + myArray[10000]) % int.MaxValue; // i = 0, j = 1 → index 1 * 10000
    sum = (sum + myArray[20000]) % int.MaxValue; // i = 0, j = 2 → index 2 * 10000
    sum = (sum + myArray[30000]) % int.MaxValue; // i = 0, j = 3 → index 3 * 10000
    ... and so on
    sum = (sum + myArray[1]) % int.MaxValue;     // back at i = 1, j = 0

This means the code has to “jump” between positions in the array. Because the array lives in RAM (on the heap, but ultimately in RAM) and its layout is linear (in the best case; if there is no room to allocate the entire array contiguously, the allocation behavior depends on the operating system), you are telling the program to go from one memory position to another 10,000 elements away. Worse, when you finish the j-loop, i becomes 1 and you have to jump all the way back to position 1 of the array.

(Figure: the “jumping” access pattern of the worst-case loop)

In the first case (i, j) you advance exactly one position on each step, so you walk the array in order. This is the most efficient access pattern: the CPU reads memory in consecutive chunks (cache lines), so the next value is usually already at hand. In the (j, i) case the access is the slowest; it has to travel across the whole array and then come back. Think of it as an arm: the machine needs to move its arm to grab the next value and, once it has it, move again, and again. The more the arm has to move, the longer the process takes.

As you can see, this has nothing to do with the framework. The framework tries to compile the best code it can for the CLR, applying some optimizations, but the biggest part of the optimization comes from the person sitting in front of the PC. It's very important to know these kinds of things so that the next time we want to loop through a matrix, we know which option is best. The same applies to any nested loop: now you can think about the best iteration order, or maybe change the algorithm to turn an O(n^3) or O(n^2) solution into a single O(n) pass.
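For instance, once the matrix is viewed as a flat array (reusing the myArray/arrayLength names from above), the two nested loops can be collapsed into one. This sketch doesn't reduce the total work, but it makes the sequential access order explicit and leaves a single loop to reason about:

```csharp
using System;

class FlatSum
{
    static void Main()
    {
        const int arrayLength = 1000;
        var myArray = new int[arrayLength * arrayLength];
        for (int k = 0; k < myArray.Length; k++)
            myArray[k] = 1; // illustrative fill so the sum is non-trivial

        // One strictly sequential pass over the flat storage: the same total
        // work as the nested (i, j) loop, with the friendly order guaranteed.
        int sum = 0;
        for (int k = 0; k < myArray.Length; k++)
            sum = (sum + myArray[k]) % int.MaxValue;

        Console.WriteLine(sum); // 1000000 for this fill
    }
}
```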

I'm not expecting you to know the whole CLR pipeline, how operating systems work, or anything like that. I feel very comfortable with LINQ operators, dynamic objects, and getters and setters in .NET as well. This is just a little reminder that it's always possible to write more efficient code without sacrificing readability. For example, here's some code I saw the other day:

  
    var enumerableElements = a_linq_to_SQL_Call();
    var returnList = from p in enumerableElements select p;
    return returnList;
  

I don't know whether the compiler interprets it as “You must be joking. Let me fix it for you”, removing the second operation from the generated CIL, or whether it actually has to execute the code, even though it does nothing.
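For what it's worth, query expressions are lazily evaluated: as far as I know the compiler still emits a trivial Select call for this kind of degenerate query (so the result is never the source object itself), but nothing runs until someone enumerates it. A small sketch with illustrative names shows the deferral:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class DeferredExecution
{
    static IEnumerable<int> Source()
    {
        Console.WriteLine("source enumerated"); // only printed when someone iterates
        yield return 1;
        yield return 2;
    }

    static void Main()
    {
        var enumerableElements = Source();
        var returnList = from p in enumerableElements select p;
        Console.WriteLine("query built"); // printed first: nothing has run yet

        foreach (var p in returnList)
            Console.WriteLine(p); // enumeration of the source happens here
    }
}
```

So the redundant select costs almost nothing by itself; whether it costs anything at enumeration time is the part I can't vouch for.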