SPO – Lab#5

Algorithm Selection

 

In this lab we are going to look at a few different algorithms for scaling the volume of audio samples. For this, we will use an array of signed integers to simulate sound samples.

 

Potential Algorithms:

  1. Floating point multiplication
  2. Fixed point multiplication
  3. Look up tables

 

In the Floating Point algorithm, we take a floating-point volume factor and multiply each existing sample by it, iterating through the array and changing the values in place.
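A minimal sketch of this approach, assuming 16-bit signed samples and a volume factor of 0.75:

#include <stdio.h>
#include <stdint.h>

#define SAMPLES 1000

int main() {
        int16_t samples[SAMPLES];
        float volume = 0.75f;   /* volume factor between 0.0 and 1.0 */

        for (int i = 0; i < SAMPLES; i++)
                samples[i] = (int16_t)i;   /* stand-in sample data */

        /* Scale each sample by the floating-point factor, in place */
        for (int i = 0; i < SAMPLES; i++)
                samples[i] = (int16_t)(samples[i] * volume);

        printf("sample 100 scaled: %d\n", samples[100]);
}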

 

In Fixed Point Multiplication, we take the decimal value representing the volume factor and multiply it by a power of two to get a fixed-point integer. Each sample is then multiplied by this integer and shifted right to drop the fractional bits, so no floating-point math is needed inside the loop.
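A minimal sketch of the fixed-point version, assuming 8 fractional bits, so the factor 0.75 becomes the integer 192 (0.75 × 256):

#include <stdio.h>
#include <stdint.h>

#define SAMPLES 1000

int main() {
        int16_t samples[SAMPLES];

        for (int i = 0; i < SAMPLES; i++)
                samples[i] = (int16_t)i;   /* stand-in sample data */

        /* Convert the volume factor once: 0.75 * 256 = 192 */
        int32_t factor = (int32_t)(0.75f * 256.0f);

        /* Integer multiply, then shift right 8 bits to drop the
           fractional part; no floating-point math inside the loop.
           (Assumes >> on a negative value is an arithmetic shift,
           which is the case on common compilers.) */
        for (int i = 0; i < SAMPLES; i++)
                samples[i] = (int16_t)((samples[i] * factor) >> 8);

        printf("sample 100 scaled: %d\n", samples[100]);
}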

 

In the Look-up Table method, we pre-calculate the scaled result for every possible sample value and save the table in memory. This increases memory usage, since the pre-calculated values have to be retained in memory so the program can access them quickly, but scaling each sample becomes a simple array lookup.
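A minimal sketch of the table-based version; with 16-bit samples the table needs 65,536 entries (128 KB of memory), which is the memory cost described above:

#include <stdio.h>
#include <stdint.h>

#define SAMPLES 1000

int main() {
        static int16_t lookup[65536];   /* one entry per possible sample */
        int16_t samples[SAMPLES];

        for (int i = 0; i < SAMPLES; i++)
                samples[i] = (int16_t)i;   /* stand-in sample data */

        /* Precompute the scaled result for every possible sample value;
           the cast to uint16_t maps -32768..32767 onto indices 0..65535 */
        for (int v = -32768; v <= 32767; v++)
                lookup[(uint16_t)v] = (int16_t)(v * 0.75f);

        /* Scaling each sample is now just an array lookup */
        for (int i = 0; i < SAMPLES; i++)
                samples[i] = lookup[(uint16_t)samples[i]];

        printf("sample 100 scaled: %d\n", samples[100]);
}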

SPO – Lab#3

Compilation Flags and Assembler Code

This lab focuses on how different compiler options affect the compilation of code and how the generated binary changes with different flags.

The code used here is the world-famous Hello World! program:

#include <stdio.h>
int main() {
        printf("Hello World!\n");
}

 

As expected, compiling and running the above program doesn’t do much: we get “Hello World!” on the screen.

Next, we’ll compile with the following compilation flags:

  1. “-g”  –  This flag generates debugging information for the program and stores it in the compiled executable.
  2. “-O0”  –  This option tells the compiler to apply no optimizations and leave everything at the defaults.
  3. “-fno-builtin”  –  This option tells the compiler not to apply any built-in function optimizations on its own.

On compilation, we get an ELF file. We can view and analyze the code inside this file using the “objdump” program.
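For reference, the exact commands look something like this (assuming the source file is saved as hello.c):

gcc -g -O0 -fno-builtin hello.c -o hello
objdump -d hello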

The “-g” compiler flag increased the size of the binary by about 20%, because of the debugging information added to it.

Using other compiler options:

1) -static

The binary generated with this flag was about 700% larger. Static linking copies the library code the program needs into the executable itself instead of loading shared libraries at run time, which is why the size grows so much; it also means the program behaves the same regardless of which library versions are installed on the system.

2) removing -fno-builtin

The difference in file size was almost negligible, and the only difference in the instructions was the addition of one telling the processor not to load anything into the cache.

3) Adding arguments to the printf function

If we pass more than 6 arguments to the function, the first six (the format string pointer plus the first five values) are passed in registers, as the x86-64 calling convention dictates; for int values the compiler uses the 32-bit sub-registers such as edi and esi. The remaining arguments are pushed onto the stack and read back from there when needed.
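This is easy to see in the disassembly with a call like the following; the first few values land in registers and the rest in stack slots before the call:

#include <stdio.h>

int main() {
        /* With this many arguments the parameter registers run out,
           so the later values are placed on the stack */
        printf("%d %d %d %d %d %d %d %d %d %d\n",
               1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
}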

4) Putting printf() in a separate function

Putting the printf call in a separate function doesn’t have much effect. The disassembly changes to include another section, <output>, which contains the printf statements that the main function had before.
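A sketch of that refactoring, assuming the wrapper is named output() (which is what would produce the <output> symbol in the disassembly):

#include <stdio.h>

/* The printf call now lives in its own function */
void output(void) {
        printf("Hello World!\n");
}

int main() {
        output();
}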

5) Removing -O0 and adding -O3

This sets the compiler’s optimization level to the highest standard level. Programs optimized with -O3 aren’t always stable, because it enables the compiler’s most aggressive, and in some cases more experimental, optimizations.

SPO – Lab#4

Writing Assembler Code

 

This lab is about writing code in assembler and getting it to compile successfully. I’ll be writing a loop that prints the values from 00 to 30.

Following is the code used for this lab.

[Screenshot of the assembler code used for this lab]

 

During this lab, I specifically had problems getting the 2-digit numbers to properly print onto the console.
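The heart of that problem is splitting a number into two ASCII digits. Shown here in C for clarity, this is the logic the assembler version has to implement with a divide instruction and an add of 0x30:

#include <stdio.h>

int main() {
        /* Split each number into tens and ones, then turn each digit
           into its ASCII character by adding '0' (0x30) */
        for (int n = 0; n <= 30; n++) {
                char tens = '0' + n / 10;
                char ones = '0' + n % 10;
                printf("%c%c\n", tens, ones);
        }
}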

The experience of debugging in Assembler was a nightmare. I had never written code in assembler before and I now know why no one ever told me to get my hands dirty with assembler.

In other high-level programming languages, you can check whether your logic is wrong by adding debugging print statements to output the results of calculations or inputs. In assembler, doing that was next to impossible for me; I couldn’t even get the code to compile properly at first, so debugging the logic wasn’t really my first priority.

However, once I got more comfortable with assembler, I was able to do walkthroughs like I would in any other language, and I ultimately completed the lab.

From my experience with assembler, I’d say no language is too tough to learn.

SPO – Lab#2

Code Building

For this lab I chose to build the Nano text editor. I performed the lab on my personal Fedora 28 machine with a 64-bit x86_64 processor.

 

Downloading the .tar file and extracting it was easy enough. There was also detailed documentation covering the various configuration options and the installation process.

Following the instructions was simple. I did need to install the ncurses library, since it wasn’t already present; after that, everything ran as it should.

I then ran the ‘make’ command after the ‘configure’ command. This built the Nano text editor inside the source directory, and I was able to run it from there.

I did have some trouble getting it to work globally, but I eventually sorted it out with some research on Google.
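For reference, the usual Autotools sequence is below; it’s the final ‘make install’ step, run with root privileges, that puts nano on the global path, which is the part I was missing at first:

./configure
make
sudo make install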

SPO600 – Project – Post #3

This post is about the optimization of the FreeTiger hashing algorithm.

FreeTiger hasn’t been updated in over six years, and its C++ implementation was based on an even older version. I had high hopes that I’d be able to make it more efficient and better optimized, but I haven’t been able to.

The algorithm mainly uses memory copy with aligned memory to hash a given string. I have been unable to get any performance improvement over the current design of the algorithm.

The function I focused on for optimization was tiger_1025. Vectorization was out of the question because of the very limited use of loops, and the memory-copy code is sensitive, so I didn’t have much success there either. However, using compilation flags I did see an improvement in the numbers. The improvement was very small and only became noticeable when dealing with very large strings, such as 2 MB inputs.

Here are the results:

Optimization level    Time taken (secs)
O0 (O-zero)           13.31463184
O3 (O-three)          13.12705364

It was entirely possible that the difference I measured was within the margin of error. To guard against that, I used a large sample: the results posted above are averages of 50,000 runs of the tester.

The tests were also run on a Raspberry Pi 3 running Fedora, and the results were basically the same, with a similar small improvement when using large strings in the data set.

The algorithm is already very nicely implemented, and I couldn’t find anything I could do to make it better. I did find another version of the Tiger algorithm, Cryptonite Tiger. That version adds multi-processor support for up to 8 cores and rectifies several cases of unaligned access, but it still makes no changes to the original working algorithm, other than some streamlining of the code by cleanly separating out the declarations for different functions.

Since no changes were made to the code, an upstream request doesn’t make sense; even if it did, no one is maintaining the original repository.

 

This concludes my work on the project for SPO600.

SPO600 – Project – Post #2

This post is about the benchmarking of the Free Tiger hashing algorithm, Stage II of the project.

 

Compilation and build:

Although the package was last updated in 2012, I was able to build it successfully without any changes to the code. The package includes a tester which exercises a particular hashing function with string data ranging from a few bytes to 2 MB. The build ran successfully, and the tester put out numbers indicating how much time it took to hash each given string. The results are attached below. The package doesn’t have any architecture-specific files, and I ran the tests on a 64-bit x86_64 machine running Fedora 28.

 

Data Setup

The test setup inside the tester is pretty straightforward. The test data is in the form of C strings, and the tester exercises one hashing function at a time. I changed the tester to use the tiger_1025 hashing function, which is optimized to handle large string values, and added a block of code which reads a large text file, in this case 2 MB, and feeds it to the hashing function.
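The block I added looked roughly like the following. This is a sketch, not the exact code: the prototype for tiger_1025 and the file name are assumptions, so the real signature should be checked in the package’s header, and the program has to be compiled together with the Free Tiger sources:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Assumed prototype, for illustration only; Tiger produces a
   192-bit (3 x 64-bit) hash */
void tiger_1025(const uint8_t *data, uint64_t length, uint64_t result[3]);

int main() {
        /* Read the whole test file (about 2 MB) into a buffer */
        FILE *f = fopen("testdata.txt", "rb");  /* hypothetical file name */
        if (!f) return 1;
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        rewind(f);

        uint8_t *buf = malloc(size);
        if (!buf || fread(buf, 1, size, f) != (size_t)size) return 1;
        fclose(f);

        /* Feed the buffer to the hashing function */
        uint64_t hash[3];
        tiger_1025(buf, (uint64_t)size, hash);
        printf("%016llx %016llx %016llx\n",
               (unsigned long long)hash[0],
               (unsigned long long)hash[1],
               (unsigned long long)hash[2]);

        free(buf);
}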

 

Benchmarking Results and Analysis

The numbers I got are averages over 50,000 runs of the tester, using the same data set on each iteration. The test gives a pretty good idea of how much time each type of data takes to process, and at the end you also get the processing rate in bits/sec, which is useful in scenarios involving huge data sets.
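The averaging itself can be done with a simple wrapper like this sketch, where run_tester() is a hypothetical stand-in for one full pass of the package’s tester:

#include <stdio.h>
#include <time.h>

#define RUNS 50000

/* Hypothetical stand-in for one iteration of the tester */
static void run_tester(void) {
        /* ... hash the test data set once ... */
}

int main() {
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < RUNS; i++)
                run_tester();
        clock_gettime(CLOCK_MONOTONIC, &end);

        /* Total elapsed time divided by the number of runs */
        double total = (end.tv_sec - start.tv_sec)
                     + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("average per run: %.9f s\n", total / RUNS);
}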

I benchmarked the tiger_1025 function, but to benchmark the other functions you only have to change the call at one location in the tester, so the same data set and the same tester can be reused for every function in the package. The arguments for each function differ a little, but they are documented nicely, so changing the function call is easy.

Benchmark run-times:

String type                   Time taken (secs)
Small strings                 0.0041363
512-bit strings               0.004556093
128-byte / 1024-bit strings   0.005078058
64KB strings                  0.39185025
2+MB strings                  13.31463184

During hashing, most of the time is spent in randomizing and salting, which may explain why the times taken to hash the small strings, 512-bit strings, and 1024-bit strings are so close to each other.

In the graph of the algorithm’s size–time relationship, the curve looks exponential, but that is misleading: the plotted sizes are not evenly spaced (they grow roughly by orders of magnitude), so even a linear relationship would appear to curve steeply. To pin down the exact relationship, a lot more data would be required.

 

For the third stage of this project, I’ll be working on the function discussed here to see if I can improve on the current implementation and hopefully bring down the run-times, or improve the algorithm overall.

 

SPO600 – Project – Post #1

This blog is about the project I am doing as part of the deliverables for this course. The project involves finding an open-source package related to hashes, checksums, or compression, and then optimizing its main algorithm as best we can using the knowledge gained through this course.

The package that I chose for this project is Free Tiger. It is an implementation of the Tiger algorithm in C.

My strategy for benchmarking:

  1. The package is really old and only has the essential source files, so I am planning to create my own automated testing script which will exercise the main hashing functions and report the times. The test will run a number of times to get an average run-time.
  2. I am planning to use a good chunk of data, tailored to test multiple functions within the package.

Strategy for optimization:

  1. From what I have analyzed so far, there are a number of nested ‘for’ loops inside the functions. I will try to bring the run-times down by introducing vectorization (see the sketch after this list).
  2. My second approach will be to try various compilation flags. If the first approach proves useful, I’ll use the compilation flags on top of it to push performance even further.
  3. Some functions in the package generate two hashes by taking two strings as input. I’ll try adding multi-threading there to see whether it has any positive impact on performance.
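As a sketch of what the vectorization approach in point 1 means in practice: a loop like the one below, where each iteration is independent of the others, is the kind GCC can auto-vectorize at -O3 (or with -ftree-vectorize), processing several elements per SIMD instruction:

#include <stdio.h>

#define N 1024

int main() {
        int a[N], b[N], c[N];

        for (int i = 0; i < N; i++) {
                a[i] = i;
                b[i] = 2 * i;
        }

        /* Element-wise loop with no dependency between iterations,
           which the compiler can turn into SIMD instructions */
        for (int i = 0; i < N; i++)
                c[i] = a[i] + b[i];

        printf("%d\n", c[N - 1]);
}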

 

 

Fun? facts…

The algorithm was introduced in 1996 by two developers, Ross Anderson and Eli Biham; it is now over 20 years old, and this implementation was last updated in March 2012.

The SPO600 Blog

This blog is part of my academic endeavours, specifically the course SPO600 – Software Portability and Optimization.

During this course I’ll be working with open-source communities: researching the cultures of two communities and contributing to their projects using the programming knowledge and skills that I’ll develop over the course of this course. LOL