Blog

New Windows Phone is Here

We all watched Apple and Samsung going at each other in court over the past few months. The trial ended badly for Samsung: the jury found it guilty of patent infringement and awarded Apple roughly a billion dollars in damages. I think things started to change for Android that day. Even though the patent case was between two smartphone manufacturers, I believe it will provide solid ground for possible future filings by Apple against Google over the Android operating system. But what does all that mean, and why is it significant? I think this result will drastically affect decision makers in corporate IT departments. The very foundation of Android is now in Apple's crosshairs, and I'm sure every IT manager in the industry is trying to put some distance between their organization and Android devices for now.

Coming back to my post's title, we watched Microsoft introduce its new mobile operating system, which comes with exciting features that integrate the phone ever more tightly with the Windows desktop operating system. With Android handset manufacturers under attack from Apple and RIM's BlackBerry struggling with weak sales, I think this is the moment we will see the market share of Windows Phone 8 start to grow. This is the one shot Microsoft has to aim at both end users and enterprise customers with its new platform. Its slick new Metro UI is also built into the desktop version of Windows 8, so it provides a familiar environment on the smartphone as well. Considering that the PC industry is dominated by the Windows operating system, with the right marketing strategy Windows Phone 8 could disrupt the smartphone market.

If you haven't already, take a look at Windows Phone's official website. I'm really excited and optimistic about the new release of Windows Phone. I think the moment Redmond has been waiting for has come, and we will see the rise of the Windows Phone platform thanks to its integration with the PC and the Xbox.

VoIP and Emergency Calls

If you travel often, whether inside your country or abroad, you are very likely to face roaming charges if you use your mobile phone. For most people, and especially the generation that grew up with rapid technological change, mobile phones are a symbiotic part of life. With the spread of smartphones, we are basically carrying our laptops with us wherever we go. We now see a great shift in traffic for mobile operators, from call traffic to data traffic. The internet is the backbone that connects us to our loved ones, to our school, to our work. As part of this process, we will see a shift to VoIP services even on mobile phones. The reason is simple: with a data plan you can do anything and everything and pay a single price.

However, are VoIP technology and services ready for such a shift? That is a big question the industry has to sit back and think about. The VoIP market is getting bigger and bigger, and even the search engine giant Google wants a part of it, having added a calling service to its widely used mail service, Gmail. Most people assume that since we have broadband access in most areas and can already make calls with such applications, switching to these services won't be a problem. Yet widely used services like Skype do NOT work for emergency calls. There is just one thing VoIP still needs to solve: its problem with emergency call regulations.

There are a few problems with VoIP related to emergency calls, such as calling 911 in the States or 112 in Europe. One problem is rather technical: when the power goes down, your internet connection at home goes with it. However, since we are thinking mostly of mobile devices, in most places your mobile coverage is not greatly affected by a power outage. A backup power system can be a solution for a home VoIP phone, and simply having enough charge in your smartphone is sufficient if you use it as your main VoIP phone.

Another issue with emergency calls and VoIP is location. When you call 112, for example, your location is made available to the authorities for cases where the caller is not even able to state where they are. How do we handle this with VoIP? I think we can use the location services of the mobile carriers as well as the capabilities of the device itself, such as GPS. We need to integrate the whole system in a more functional manner so that this information exchange works.

I think that soon enough we will see more and more people considering this path and switching to VoIP services to replace both their legacy landlines and their main mobile calling service. It is an important issue we need to think through carefully and engineer smartly so that we can keep such problems from occurring in the near future.

For more information about emergency calls and the philosophy behind having a widely used and recognized number, you can take a look at the European Emergency Number Association's website. We are becoming more global every day, and it is important to address and solve such issues to improve quality of life for the whole of society by making emergency help available at an acceptable rate.

Startup Turkey


With the global economy struggling and possible multi-dip recessions at the door, various countries are taking initiatives and trying to find new ways to boost their job markets and economies. The most important part of this search is finding a new approach to rescuing the economy, since government-funded cash injections could not create a stable environment: people understood these stimulus packages to be exactly what they were, temporary.

Now we see a trending topic among these "way-out" programs: start-ups. Research, conducted of course in the United States where people take statistics more seriously than anywhere else, indicates that funding young, driven start-ups creates far more new jobs than funneling rescue packages to large corporations. I think the reason is pretty simple and lies at the very foundation of the famous quote "… be a pirate than to join the navy…" Large corporations, the navy here, tend to be conservative and find it really hard to change the way they work. When corporations talk about an economic savings package, almost all the time they mean they are going to start firing people. Pirates, the start-ups here, usually have nothing to lose, and they have the youthful power to adapt to change and fight for their ideas.

Now I would like to focus on Turkey and compare it with the United States, which recently started a program for entrepreneurs called the "Startup America Partnership". This program has a respectable and powerful board of directors with close ties to the government; in fact, the President of the USA has repeatedly said that he sees tech startups as great potential job creators. Turkey, on the other hand, is still struggling to find its way with entrepreneurs. Several government agencies have funding programs for promising product ideas, but they mostly lack the heart of entrepreneurship culture: mentors. At this point the government just provides the funds, and in almost all cases the funded projects fail because the recipients have little or no idea what to do or how to spend the money.

However, we are now seeing a little movement from private companies with their own small-scale investment spinoffs, which back incubators and try to provide funds to a very limited group of startups. I see one big problem here: in most cases those investment firms actually invest in their parent companies' other spinoffs, which I think is a strange strategy, as it is in itself a vicious cycle. It is understandable and natural for an investment company to try to minimize its risks, but I think it is completely useless and against the nature of startups to limit this funding to such a small circle and, in most cases, exclude the real entrepreneurs of our day: college students and recent graduates.

Another barrier for startups in Turkey is society's view of small companies and the current "love affair" with foreign brands. Based on my experience, people tend not to trust young startups, seeing them as the youths' "summer love". Also, compared to the United States, which is pretty ironic in my opinion, people in Turkey show little or no support for local businesses. One would expect the opposite, since the USA is the "flagship" of globalization. Thus small startups face fierce competition with large corporations, and even when they come up with a better product, the public tends to go and buy the product from their "corporate" provider. For us to see real movement with startups in Turkey, the public needs to change its approach to them; otherwise we will only keep hearing about successful startups in Silicon Valley.

To summarize, even though I have painted a dark picture for startups in Turkey, I still have hope that this business model will survive and flourish over the next 10 years or so, as the country finally accumulates the much needed mentors and a positive public opinion on startups, helped by new organizations supporting young entrepreneurs such as "e-tohum", "girisimfabrikasi.com", "Galata Business Angels" and many others.

What do you think? Let's drop a few lines in the comments and get a debate going!

Git Connection With Redmine Issue Utility

For a large-scale project, it is important to have a decent project management tool. For the sake of smooth development and testing, I would suggest that any developer team use a project management tool that is well integrated into their development process.

Redmine is an open source project management tool in which you can assign roles to different users, arrange work schedules, create Gantt charts, and track bugs. In Redmine, issues can be bug, feature or support tickets. In theory, and in most everyday cases, the ticket creator is responsible for testing, and odds are they are not a developer with access to the source repositories to check the code. The assigned developer will look at the issue and continue working with that ticket in mind. When something is added to the source code, it is important to notify the ticket creator. Instead of going back to Redmine and doing this manually, how can we set up a framework so that every new push to the repository automatically updates the ticket description for follow-up?

The answer is to use the hooks built into the Git version control system. Hooks are predefined scripts that run after certain actions on a git repository. As a starting point, I have developed a "post-update" hook that connects updates in my git repository with their respective issues in Redmine. The project is still in its early days and, for now, partially relies on the user. My goal is to set up my hooks so that no commit can be made without a corresponding issue in Redmine. This will let project developers build a readable project history that project managers can easily use, and it can also populate a know-how database for future projects.
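As a quick note on deployment: a post-update hook lives in the hooks directory of the repository on the server side and must be executable. A minimal sketch, assuming a hypothetical bare repository at /home/git/repositories/myproject.git:

# copy the hook into the server-side repository and make it executable
sudo cp post-update /home/git/repositories/myproject.git/hooks/post-update
sudo chmod +x /home/git/repositories/myproject.git/hooks/post-update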

Currently, my "post-update" hook checks the Redmine database to see whether there is an open ticket for the incoming commit. I have set a rule for commits: every commit message has to start with "Issue:" and then continue with the usual message, for example "Issue:42 fix null pointer in the login handler" (42 being a hypothetical ticket number). The hook first parses the issue number to see whether the commit message follows this format; if not, it sends feedback to the user stating that the commit will be accepted but no action will be taken in Redmine. After verifying that the user used the correct format and entered an issue number, it checks whether an issue with that number actually exists. If it doesn't, the system again accepts the commit but tells the user there is no matching issue in Redmine. Once these two checks pass, the post-update hook connects to the Redmine database and updates the issue description with the commit details, including links to the commit history and the diff. My hook also turns the older description into a quotation, so we keep a full history on the issue. Here is the post-update script that I have developed:

#!/bin/sh
#Author: Egemen Gozoglu egemen@egemengozoglu.com
#Date: 08.07.2011

#Color Codes
BOLD="\033[1;39m"
NORMAL="\033[0;39m"
RED="\033[31m"

#Database variables
db_socket=/home/egemen/redmine-1.2.0-0/mysql/tmp/mysql.sock
db_username=bitnami
db_name=bitnami_redmine
db_password=3e79743af9
server_ip=`ifconfig | grep 'inet addr:'| grep -v '127.0.0.1' | cut -d: -f2 | awk '{print $1}'`

commit_message=$(git show --format=format:%s | head -1)
commit_id=$(git rev-parse HEAD)

issue_id_tmp=$(echo $commit_message | awk -F " " '{print $1}')
issue_id=$(echo $issue_id_tmp | awk -F ":" '{print $2}')

if [ "$issue_id" = ''  ];then
    echo $BOLD$RED"Invalid issue number. No Redmine update will be made."$NORMAL
    echo $BOLD$RED"Please start your commit message like the following format:"$NORMAL;
    echo $BOLD$RED"Issue: ....."$NORMAL
    echo $BOLD$RED"Only the push to your repository will be processed."$NORMAL
    exit 1;
fi

count_issue=`mysql --socket=$db_socket -u $db_username $db_name --password=$db_password -e "SELECT COUNT(*) from issues WHERE id = $issue_id"`

check_issue=$(echo $count_issue | awk -F " " '{print $2}')

if [ "$check_issue" = '0'  ];then
    echo $BOLD$RED"No issue was found with the issue number you provided!"$NORMAL
    echo $BOLD$RED"No Redmine update will be made."$NORMAL
    echo $BOLD$RED"Only the push to your repository will be processed."$NORMAL
    exit 1;
fi

commit_date=$(date);

old_desc=`mysql --skip-column-names --socket=$db_socket -u $db_username $db_name --password=$db_password -e "SELECT description FROM issues WHERE id = $issue_id"`

project_id_tmp=`mysql --socket=$db_socket -u $db_username $db_name --password=$db_password -e "SELECT project_id FROM issues WHERE id = $issue_id"`

project_id=`echo $project_id_tmp | awk -F "project_id " '{print $2}'`

project_html_tmp=`mysql --socket=$db_socket -u $db_username $db_name --password=$db_password -e "SELECT identifier FROM projects WHERE id = $project_id"`

project_html=`echo $project_html_tmp | awk -F "identifier " '{print $2}'`

mysql --socket=$db_socket -u $db_username $db_name --password=$db_password << EOFMYSQL
UPDATE issues SET description = 'Commit Date: $commit_date \n Commit History: "$commit_id":http://$server_ip:8080/redmine/projects/$project_html/repository/revisions/$commit_id \n Diff: "$commit_id":http://$server_ip:8080/redmine/projects/$project_html/repository/revisions/$commit_id/diff \n Commit Message: $commit_message \n\n $old_desc ' WHERE id = $issue_id;
EOFMYSQL

You need to adjust the database socket, database username, database name and database password for your own system. For generating links, the script automatically picks up the server's IP address. Also, pay attention to one detail about Redmine: it only fetches repository data when you actually click the "Repository" tab for your project. So if you push an update, go straight to the issue and click on the commit hash link right away, Redmine will fail to show you the page. Once you click on the Repository tab, all the links for previous commits work perfectly well. This limitation is on the Redmine side and can be solved by adding cron jobs to your system. For more information you can check the following link:

http://www.redmine.org/projects/redmine/wiki/FAQ
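If you go the cron route, one common approach is to have Redmine periodically run Repository.fetch_changesets itself. A minimal crontab sketch, with the Redmine path left as a placeholder for your own installation:

# fetch new commits into Redmine every 10 minutes (path is a placeholder)
*/10 * * * * cd /path/to/redmine && ruby script/runner "Repository.fetch_changesets" -e production > /dev/null 2>&1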

So far this project is just a beginning for me; I am hoping to develop it further based on my needs and the feedback I receive, so please feel free to contact me about it! For those of you who want a snapshot of the code, you can go to my Github repository by following the link below:

https://github.com/egemengozoglu/egemen_repo/blob/master/Redmine_Git_Auto_Issue/post-update

Enjoy!

LXR Installation on Ubuntu 10.10

If you are into programming and have dealt with large-scale projects, I'm sure you have spent time thinking how great it would be to have an application where you could look up your previous functions, layouts and definitions. While developing and maintaining a large, ongoing project, it is crucial to be able to access and compare your current version with previous versions with ease. Cross referencing is exactly what you are looking for. It came mainly out of the GNU/Linux kernel development process, with its constant releases and tons of lines of code.

Linux cross referencing (LXR) lets programmers browse different versions of the code, look up functions by name, see where a particular variable or function definition was used, or even just search for arbitrary words they hope to find in the code. It gives the programmer an opportunity to quickly skim the previous code, find the needed references and continue the development process. Luckily, the cross referencing tool is publicly available, so you can set up your own cross referencing site and start using it.

In this article, I will walk you through every step on a clean Ubuntu 10.10 system. Before we start, you need to install some packages from the Ubuntu universe repositories: apache2, MySQL and exuberant-ctags. To get them, simply run the following command:

sudo apt-get install apache2 mysql-server exuberant-ctags

While installing the mysql-server package, you will be asked to create a root user password for MySQL. Go ahead, create your password and finish the installation. After that, you need to install some Perl libraries so that LXR can connect to your database. Overall, you need the following Perl-related packages, as Perl is the backbone of the cross referencing system:

sudo apt-get install libdbi-perl libdbd-mysql-perl libfile-mmagic-perl libapache2-mod-perl2


In these instructions, I will use the latest available release of LXR, which is 0.9.10 at the moment. The LXR source code is hosted on sourceforge.net at http://sourceforge.net/projects/lxr/files/stable . You will download the code in tgz format from there. I will assume that you downloaded the package to your Desktop, at ~/Desktop/lxr-0.9.10.tgz .

First of all, you need to extract the contents of this compressed package into /usr/local/share . I will also rename the resulting directory to "lxr" for convenience. Use the following commands to do that:

cd /usr/local/share
sudo mv ~/Desktop/lxr-0.9.10.tgz .
sudo tar -zxf lxr-0.9.10.tgz
sudo mv lxr-0.9.10 lxr
cd lxr

Now, at this point, we are done with getting the required packages in place. It is time to configure the system components. We will start with MySQL. Log in to your MySQL server as the root user, using the password that you created previously.

mysql -u root -p #Enter the password when asked.

Inside your MySQL server, run the following queries to set up the system for your cross referencing site:

CREATE USER 'lxr'@'localhost' identified by 'foo'; #Creates the user "lxr" with the password "foo"
#The next line initializes the lxr database by creating the tables and giving the "lxr" user the needed permissions.
source initdb-mysql

One thing to be careful about is where you are when you log in to MySQL. For the initialization step you need to be in your lxr directory, /usr/local/share/lxr; otherwise, when you try to run the initdb-mysql script, the client will complain that it cannot find it. When the process is complete, leave the MySQL server with the command "exit". That's all you need to do to configure the database.

Now we need to configure LXR itself. Remember, we are still in the lxr directory. To configure it, we first copy the default lxr.conf file to the root of the lxr directory.

sudo cp templates/lxr.conf .

Now we can configure. Use the editor of your choice to do the configuration. I will use vim.

sudo vim lxr.conf

First, we will use swish-e instead of glimpse as our text search engine. The reason is that glimpse is not free for commercial use, whereas swish-e is completely free software. So comment out the glimpse-related lines as follows:

#, 'glimpsebin'   =>'/usr/local/bin/glimpse'
#, 'glimpseindex'   => '/usr/local/bin/glimpseindex'

Now we give lxr the path to the swish-e executable, which you can find with the "which swish-e" command; configure it according to your system. My system yielded the following result:

, 'swishbin'   => '/usr/bin/swish-e'

Make sure that the user can actually reach that executable. If you face any problems there, just "chmod" it so that users have execute rights. The next step is the configuration of ctags and of lxr:

, 'ectagsconf' => '/usr/local/share/lxr/lib/LXR/Lang/ectags.conf'
, 'genericconf' => '/usr/local/share/lxr/lib/LXR/Lang/generic.conf'

Now we need to define our server's URL. First, give the base URL as:

,'baseurl' => 'http://localhost/lxr'

Then you can add your server's IP addresses and hostname-based addresses as URL aliases, like the following:

, 'baseurl_aliases' =>
                        [ 'http://17.2.0.193/lxr']

Now it is time to point LXR at the source folder where we keep the source code we want indexed for cross referencing. This is the main folder, containing subdirectories like v1, v2 and so on, with the respective versions of your source under them. As an example, I will use a directory that I just created on my desktop:

, 'sourceroot' => '/home/aselsan/Desktop/lxr_source'

Now I have subdirectories named v1 and v2, containing the source code for my application. We also need to tell LXR to look into these directories and index them. We set this in the lxr.conf file with the following line:

, 'range' => [qw(v1 v2)]
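As an illustration of that layout, the version directories could be prepared like this before indexing (the project paths are hypothetical):

# create the version subdirectories and copy each release into its own folder
mkdir -p /home/aselsan/Desktop/lxr_source/v1 /home/aselsan/Desktop/lxr_source/v2
cp -r ~/projects/myapp-1.0/* /home/aselsan/Desktop/lxr_source/v1/
cp -r ~/projects/myapp-2.0/* /home/aselsan/Desktop/lxr_source/v2/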

You can also put the character "/" there to tell LXR to index everything under your main source directory. However, if you do that, you lose the option of comparing different versions, so I do not suggest it. Let's also set which version should come up by default when we visit the site. I will set it to version 2:

, 'default' => 'v2'

We are now almost done, except for the database connection details and the swish-e index directory. You can see that the part with the MySQL login information is commented out. Delete the "#" characters to enable those lines and set them to your MySQL username and password. You have to use the username lxr; if you set a different password for this user, use that instead of foo:

,'dbpass' =>'foo'
, 'dbuser' => 'lxr'

Finally, we only need to give swish-e a bit of privacy and designate a folder where it can keep its search indexes. Don't put this directory under your main lxr path or under the directory where you keep your source code; set up a separate location. For this tutorial, I just used a directory on my desktop:

, 'swishdir' => '/home/aselsan/Desktop/swish/'

We are now practically done with the lxr.conf file. For more customization, read through the file; it is pretty self-explanatory, and you can adjust it to your needs. Now we can start the indexing operation. Keep in mind that if you have a large source tree this process can take a while and load your CPU. While still in the /usr/local/share/lxr directory, execute the following command:

./genxref --url=http://localhost/lxr --allversions

This will index every non-indexed part of your source code versions. If you want to index only one version, use a switch like "--version=v1" instead of "--allversions". Once the whole source tree has been indexed, later runs will skip the already indexed parts and only index what has changed or been added. Also remember to re-index every time you add or remove something from your source directory.
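If your source tree changes regularly, you could also automate the re-indexing with cron; a rough sketch (the schedule is just an example):

# re-run the indexer nightly at 03:00
0 3 * * * cd /usr/local/share/lxr && ./genxref --url=http://localhost/lxr --allversions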

Now I will move on to the apache2 server configuration. Just create an lxrserver.conf file with your editor:

sudo vim lxrserver.conf

Inside this file, write the following apache directives:

Alias /lxr /usr/local/share/lxr
<Directory /usr/local/share/lxr>
    AllowOverride All
</Directory>

Now we need to copy this file into apache2's conf.d directory, which can be done as follows:

sudo cp lxrserver.conf /etc/apache2/conf.d/

To finish apache2 configuration, we just need to use one more command while we are still in the directory /usr/local/share/lxr :

sudo cp .htaccess_cgi .htaccess

Now, theoretically, you should be able to see your cross reference site at http://localhost/lxr/source . However, in my experience I ran into some Perl-related problems. After a couple of hours of messing around and reading through a lot of material and code, I came to a solution. If your LXR is not working at this point, it is most likely Perl related, and to solve it you need to change the first line of the source, ident, diff and search files in /usr/local/share/lxr . The first line of each of these files currently reads:

#!perl -T

This confused my operating system and caused the Perl scripts to fail. So I decided to look up where my Perl executable is with the "which perl" command and saw that it is located at /usr/bin/perl. I then changed the first line of all four files to the following:

#!/usr/bin/perl
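If you prefer not to edit the four files by hand, a one-liner along these lines should do the same thing (assuming LXR lives in /usr/local/share/lxr):

cd /usr/local/share/lxr
# rewrite the shebang line of the four CGI scripts in place
sudo sed -i '1s|^#!perl -T|#!/usr/bin/perl|' source ident diff search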

Also, make sure that you restart your apache2 web server so that it picks up the changes. You can use the following command:

apache2ctl restart

http://lxr.linux.no/

Now you should have a fully functional cross reference site for your projects. Please remember to give back to the open source community! You can also check LXR's website for more information. Enjoy!

How to set up Gitolite on Ubuntu 10.10

It may look like a simple process, but I think it is always handy to have a guide to peek at when you are installing a system. That's why I am writing this blog post: to lay out the steps for installing Gitolite.

To start off, here is a brief description of what Gitolite actually is. You may like using Github or Sourceforge to host your repositories, but you also have projects that need to stay on the local network and out of public reach. For that purpose, you need software like Gitolite to maintain and administer your git repositories. You can create repositories and set their read, write, clone, pull and other permissions per user or user group. SSH key pairs are normally used to authenticate users to the system.

So where do we start? I will give instructions for installing Gitolite on a clean Ubuntu 10.10 system; I think you can follow them on other Ubuntu releases without a hassle. First, you need the git and apache2 packages installed from the Ubuntu repositories. To do that:

sudo apt-get install apache2 git

Now you need to generate an SSH key pair if you don't already have one. SSH keys live in your user's home directory under the hidden .ssh directory, i.e. "~/.ssh". If you don't have a key pair already, use the following command:

ssh-keygen -t rsa

Now that you have your key pair, copy your public key to the /tmp directory, naming it after your name and surname as in the example below. The public key has the extension ".pub" and is named "id_rsa.pub" in our case. After copying, also set the read and write permissions for this file.

sudo cp ~/.ssh/id_rsa.pub /tmp/name_surname.pub
sudo chmod 666 /tmp/name_surname.pub

Now you can download the latest version of Gitolite from Github; its address is https://github.com/sitaramc/gitolite . Download it and unzip it into your ~/src directory (create the src directory if you don't have one already). Move the resulting directory there and rename it to "gitolite" for the sake of readability. Then go into the gitolite directory:

cd ~/src/gitolite

Once you are in there, switch to the root user:

sudo su

After that, while you are in your gitolite directory, use the following command:

src/gl-system-install

After that process is complete, we need to create a user named "git" with a home directory of /home/git :

adduser --system --shell /bin/sh --gecos 'git version control' --group --disabled-password --home /home/git git

Now let's get out of the root user and switch to the git user:

su - git

If you need to change the password for user git, you can use the following command:

sudo passwd git

While you are the user "git" and in its home directory (/home/git), use the following command, which takes the public SSH key you created earlier:

gl-setup /tmp/name_surname.pub

When the setup starts, it will ask you whether you want to change .gitolite.rc. Just don't change anything and continue. After the setup is complete, while you are still the git user, let's set its username and email for git:

git config --global user.name "git"
git config --global user.email your@email

Now you can leave the git user and get back to your regular user with the "exit" command. I like to use my ~/src folder, so I will now go into that directory and clone my gitolite admin repository so that I can create new repositories and users.

cd ~/src
git clone git@localhost:gitolite-admin

I used localhost here as the server address, assuming your computer is also the server; if you set up gitolite on a remote machine, put the hostname or IP of that server instead of localhost.

At this point we have technically completed our gitolite setup. You are now ready to create new repositories and handle user permissions.

I would also like to give you an administration overview. Once you have cloned the gitolite-admin repository, you will find two directories inside it: one named "keydir" and the other named "conf". If you want to add new users to your repositories, put their public SSH keys in the keydir directory, named after their username. The conf directory contains a file named gitolite.conf, where you do most of your administration. To create a new repo, you can follow this format:

repo test_repo
          RW+ = admin
          R   = @all

This configuration gives you (the admin) read, write and non-fast-forward push rights, and gives read permission to all the users in your git setup. Now you are ready to use gitolite for your own private repositories.
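As a concrete example, adding a new user and granting them access could look like the following; the user name "jane" and the key file are hypothetical:

cd ~/src/gitolite-admin
cp /tmp/jane.pub keydir/jane.pub        # jane's public SSH key, named after the user
# then add "jane" to the relevant repo lines in conf/gitolite.conf, e.g. RW+ = admin jane
git add keydir/jane.pub conf/gitolite.conf
git commit -m "Add user jane"
git push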


A Nice Bash Script for Adding New Git Repos with Dialog User Interface

If you have developed bash scripts with a user interface, I'm sure you have thought about how "basic" they look in the terminal, with ASCII characters like "-" and "+" providing a so-called graphical menu. If you want to enhance your scripts' user interface and make them more user friendly, you should take a look at the "dialog" framework for building such interfaces. If you are an Ubuntu user you can easily get it by typing sudo apt-get install dialog; you can also download it from http://www.hightek.org/dialog/ as a tarball and install it on your system.

To take a brief look at the capabilities of the dialog package, we can list them as follows:

--title: Lets you set the title of the dialog box.
--backtitle: Sets the title of the main background frame. (You could set it to your company's name, for example.)
--yesno: Creates a basic Yes or No menu so the user can confirm script actions.
--msgbox: As you can guess from the name, a basic box with some info that you want to display to users.
--infobox: A nicer message box for letting your users know that some processing is going on behind the scenes.
--inputbox: A message box where you can take text input from the user.
--textbox: This box is a sort of "cat" command that displays a file's contents.
--checklist: If you want your users to select multiple options from a menu, you can use this option.
--radiolist: Works similarly to the checklist option, except that only one entry can be selected.
--menu: As the name implies, a nice menu layout where your user can select a preference.
--gauge: Brings a "progress" bar to the screen, which you can update in a loop to display the percentage-wise progress of your script.
--file: Displays the contents of a directory with its subdirectories and files.
--stdout: In my honest opinion, this is the most critical option you will want to use with dialog. It lets you capture the user's selections as input to your variables.

If you want more details about these options and how to use them, including some nice coloring effects such as changing the background or text colors, you can always refer to the man page for dialog.
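To make the list above more concrete, here is a tiny sketch combining --menu with --stdout; the menu entries are made up:

#!/bin/bash
# show a two-entry menu and capture the chosen tag via --stdout
choice=$(dialog --stdout --title "Action" --backtitle "Egemen Inc." \
        --menu "Please pick an action" 0 0 0 \
        "clone"  "Clone an existing repository" \
        "status" "Show repository status")
echo "You picked: $choice"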

So, how can you use this package in your own scripts? I will go through a really short example. Let's take the case where you have git repositories maintained with "gitolite", and all users have admin rights on the administration repository so that they can create their own repositories. However, you don't want your users randomly wandering around the administration repository, and you want to automate the process. You can use the following bash script for that. Before I start, a reminder: you need the "git" and "dialog" packages on each user's computer. On Ubuntu you can get them like this:

sudo apt-get install git dialog

First, let's start with our script backbone. You can also get the code from my public repo on Github at the following link:

https://github.com/egemengozoglu/egemen_repo/blob/master/Auto_Repo_Naming_by_Users_Gitolite/repo_name_dialog.sh

#!/bin/bash
#Color Codes
BOLD="\033[1;39m"
NORMAL="\033[0;39m"
GREEN="\033[32m"
RED="\033[31m"
#Set a default repo server address and also give the user
#the chance to change the address if he likes
default_git_server=git@yourserver
#Creates the dialog box
git_server=`dialog --stdout --title "Server Address" --backtitle "Egemen Inc." --inputbox 'Please enter the git server address' 0 0 $default_git_server`
#Quit the script if the user selects "Cancel"
if [[ $git_server == '' ]]; then
echo;
echo -e $BOLD$RED"You left the repo name setting script."$NORMAL ;
exit;
fi

#Gets the name of the repo name that user wants to create
git_repo=`dialog --stdout --title "New Repo Name" --backtitle "Egemen Inc." --inputbox 'Please enter the new git repo name that you would like to add' 0 0`
#Quit the script if the user selects "Cancel"
if [[ $git_repo == '' ]]; then
echo;
echo -e $BOLD$RED"You left the repo name setting script."$NORMAL ;
exit;
fi
#Gets the description of the repo that user wants to create
#It is really useful if you are using a web interface to browse
#your repos like GitWeb
repo_desc=`dialog --stdout --title "Repo Description" --backtitle "Egemen Inc." --inputbox 'Please enter the new git repo description' 0 0`
#Quit the script if the user selects "Cancel"
if [[ $repo_desc == '' ]]; then
echo;
echo -e $BOLD$RED"You left the repo name setting script."$NORMAL ;
exit;
fi
echo;
#This part is git related. Script clones the gitolite's admin 
#repo into the system, makes the necessary changes
#And pushes it back to the admin repo with commit description. 
#Also sets the visibility options if you are using gitweb. After the 
#process it deletes the gitolite admin repo from the user's hard-drive.
cd /tmp
git clone $git_server:gitolite-admin
cd gitolite-admin/conf
echo >> gitolite.conf
echo "repo   $git_repo" >> gitolite.conf
echo "          RW+ = @all" >> gitolite.conf
echo "          R   = gitweb daemon" >> gitolite.conf
echo "          $git_repo = \"$repo_desc\"" >> gitolite.conf

git commit -a -m "New repo: $git_repo has just been added."
git push
rm -rf /tmp/gitolite-admin

I think the comments in the code are pretty self-explanatory, so I will just do a quick walkthrough. The script first gets the git server address where the repos are created and stored. Then it asks the user for the name of the new repo they want to create. We also get a repo description for the sake of completeness. In this script, we set the access rights on the new repo to @all, which means every verified user in that git environment can reach the newly created repository. After getting the user input, we pull the gitolite-admin repo (gitolite is what I use for administering git repos on our local network), make the necessary changes in its config file, then commit the changes and push them back to the server. Afterwards we clean the gitolite-admin repo off the user's computer. Note that if you are using a git administration tool other than gitolite, you may need to adjust the part that edits the configuration file.

Another tip about the dialog framework: when a user presses the "Cancel" button, --stdout returns an empty string, so, as in the script above, you can detect this by comparing the return value with ''. This way your script can quit at any point. Also, with the --menu option I saw that the "Cancel" button returned "$", so that is what I used to detect the menu box's cancel action. It is useful to check all the possible return values from your dialogs so that you can write a more stable script.

I have used a simple flow for the script; however, I would suggest using functions for readability and to allow your script to have options such as help and configuration. Also keep in mind that the order of dialog's options matters with respect to the input they expect. You may have noticed that for the height and width inputs I always used "0", so that my dialog boxes are sized automatically; you can set explicit sizes if you like. Below are the dialog screens you should see if you use my sample script:

[Screenshots: the Server Address, Repo Name and Description dialog boxes]
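Coming back to the suggestion about using functions, here is a rough sketch of how the repeated inputbox-and-cancel check could be factored out; the function name is just an example, and it assumes the color variables defined at the top of the script:

#Prompt with an inputbox; return non-zero if the user cancels
ask() {
    local answer
    answer=$(dialog --stdout --backtitle "Egemen Inc." --title "$1" --inputbox "$2" 0 0 "$3")
    if [[ $answer == '' ]]; then
        echo -e $BOLD$RED"You left the repo name setting script."$NORMAL >&2
        return 1
    fi
    echo "$answer"
}
git_server=$(ask "Server Address" "Please enter the git server address" "$default_git_server") || exit 1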

Your Own, Personal Base Station: Femtocell

Mobile devices are now a bigger part of our lives than they have ever been. In today's world, approximately 72% of the world's population, over 5 billion people, owns a mobile phone. The growth of mobile devices and their numbers is breathtaking. However, we still have the problem of "coverage" area; "Sorry, I guess I didn't have any reception." still holds valid ground in some parts of the world.

Take a look back and remember what call quality was like when this mobile craze started back in the 1990s. It was a pain to talk on a mobile phone: the handsets were not this small, and if you walked around or traveled in a car while using your phone, it was highly likely that the call would drop. Wireless carriers have been working on this coverage issue for years now and are improving somewhat. However, the rate of improvement in "good" coverage areas is nothing compared to today's "mobile race."

Now the wireless carriers seem to have decided it is time for their customers to lend a hand, and they are rolling out a new type of base station: the femtocell. A femtocell is more like a wireless router than the generic, unpleasant-looking base stations in the city that no one is happy to see around. It uses the internet backbone in a really efficient way for mobile communication. It is a base station that literally looks like your wireless router and is hooked up to your broadband internet connection. Using a femtocell transceiver, you can get really good reception in your house. A femtocell is stated to cover a circle with a radius of approximately 12 meters (40 feet), an area of roughly 452 square meters (4,865 square feet). That is more than enough for most modern houses and provides excellent 3G coverage in that area.

Looking at the tablet takeover in the portable computer world and the rapid increase in the number of smartphones, one can assume that soon enough you might not need a regular wireless modem at all, as you will have a femtocell device in your house for your 3G and 4G devices. The reason is that demand for data connectivity is now growing much faster than demand for traditional phone calls. And since the reception level will be high, portable devices in the area will also have longer battery lives.

What's in it for the customers? Or have the wireless carriers just found a new way to make more money off them? Well, the femtocell is good and efficient for both sides of the table. This year we are seeing wireless carriers announce their femtocell rollouts and advertise data and calling plans only for femtocell users, with flat rates for unlimited nationwide calls from femtocell stations. That is a really good deal at a reasonable price for small businesses that need to make calls from their offices all year long. On the carrier side, the only thing they need to provide is a stable internet backbone; the rest is taken care of by the customers. There is no need to maintain the base stations or pay for their utilities and taxes. So I think that in the near future we will see more and more wireless network operators announcing femtocell services, so that they can cut their spending on traditional base stations and increase their profits.

Genealogy and Future of Cellular Systems

I think it is a good time to look at where we came from with our cellular systems in order to understand where we are headed. I believe this approach gives us insight into how our cellular systems and mobile devices have evolved over time. I think this decade will be the blooming age of handheld devices in terms of the growth of their communication capacity. We are seeing companies in a big race to win market share in smartphones as well as in the new trend of tablet devices.

Now let's go back in time and look at how this evolution started. Demand for mobile devices and mobile communication systems came from a basic human need: to stay in touch with each other. This need was first met by the widespread use of the public switched telephone network (PSTN). As landlines expanded, they became available to a large part of society. However, the landline solution was not sufficient to satisfy the human urge to communicate.

In the 1980s, first generation (1G) systems were introduced. These systems were based on a digital network; however, they mostly relied on an analog air interface. First generation systems introduced mobility, so people could stay connected on the go. In one sense, this was the first step of the mobile revolution in telecommunications.

Market demand for mobile devices that could communicate pushed developers to improve the network. In the 1990s, second generation (2G) systems were introduced, with improved capabilities such as roaming, security and value-added services like SMS. Standards for 2G networks emerged in Europe as GSM and in North America as cdmaOne, also known as IS-95. SMS took advantage of our "tweeting" need and enabled users to send each other short messages, creating the so-called "Thumb Generation", a reference to the teenage trend of communicating by text message. I think the most interesting of the newly introduced services was the Wireless Application Protocol, also known as WAP. This was a key feature developed in parallel with the increasing demand for internet services; WAP showed companies that mobile devices could become more useful and profitable through data transmission over mobile networks.

The boom in internet usage forced network providers and their vendors to intensify their research on data transmission. We saw a transition phase from 2G systems to 3G systems, coined 2.5G. In this transition we saw the rise of the General Packet Radio Service (GPRS), which offered higher data rates than 2G. GPRS was followed by its enhanced version, Enhanced Data rates for Global Evolution, most commonly known as EDGE. Network providers offered their customers data rates of up to 384 kbit/s. These new technologies provided a steady, continuous internet connection that enabled people to use internet-based services such as e-mail and websites while mobile.

With the beginning of the new millennium, 3G became commercially available. This evolutionary step aimed to establish true mobility around the globe with voice calling, messaging, location-based services and multimedia services. The specifications and standards for 3G are based on UMTS worldwide, and mainly in Europe, and on CDMA2000 in North America. Upgrades to 3G, such as HSPA+, were also made available, providing data transfer rates of around 42 Mbit/s. These high transmission rates also brought in new devices beyond handheld units: many network providers now offer USB sticks for mobile internet connections, and we see the main focus of mobile networks shifting from telephony and short messages to interactive use of internet services. 3G networks, with their high data transfer capabilities, also made video calls from mobile devices possible. However, video calls still account for a small percentage of network usage, due to ergonomic problems; it is not really comfortable to talk while looking at the small screen of your mobile device.

Recently we have also seen the first deployments of so-called 4G networks, the first examples of the Long Term Evolution (LTE) standards. These newborn services are still in their growing phase and do not yet completely fulfill the requirements of 4G. LTE offers data rates of around 364 Mbit/s, whereas fully utilized LTE Advanced networks are expected, on paper, to reach data transmission speeds of around 1 Gbit/s. As can be seen, the evolution of cellular networks is now driven by data transmission rates more than ever.

From all this, what I see is that soon we will have devices specifically engineered for high data transfer rates, and messaging and even telephony will be based on applications that use the internet instead of the fundamental speech function developed back in 1G. I think it is logical to expect network providers to focus more on their data plans than on texting and telephony bundles. The rapid revolution in handheld devices is under way: more and more customers are switching from regular mobile phones to smartphones, and we are also observing a new trend in tablet devices. These high data transfer rates also enable a new kind of solution: cloud computing. Search engine giant Google has been offering an office suite for a couple of years now, with which users can edit documents and spreadsheets and create presentations on Google's cloud servers. Recently, Microsoft also announced that it will start offering a cloud version of its Microsoft Office software, and it is expected that Oracle, with its OpenOffice suite, will take its place in this cloud service boom as well. This evolution in mobile networks, and the demand from customers, will surely increase the importance of making internet-based applications available on mobile devices in the coming future.

Android Is Coming for Enterprise Market

Today we see an ongoing war in the enterprise smartphone market between Apple's trend-setting iPhone and the age-old business phone, the BlackBerry from Research In Motion. Is it possible we will see a new player in this market? It looks like this war will heat up in the near future, as Google's great success, Android, is charging at enterprise customers after securing a huge slice of the consumer smartphone market.

Android's first attempt was the Motorola Droid, offered through Verizon in the United States; however, the Droid did not manage to create a decent share for Android in the enterprise market due to its lack of IT management capabilities. Companies complained that, at the time, Android offered no way to control which applications could be installed or uninstalled on the phone. This raised big concerns, so companies did not embrace the Droid as a business phone for their employees. Google took this input from the companies and added some of the required IT management features to Android with its Froyo release.

Froyo introduced Wi-Fi tethering, Microsoft Exchange support (a must for enterprise customers), APIs for enterprise device management, auto-update for apps and faster JavaScript performance. One of the cool features that was added is Send-to-Android, which launches the application related to a sent text, such as Google Maps for an address. Another big advantage for Android is its Flash support, as iPhone users have trouble with Flash-based websites.

Android also offers really good integration with Google's other applications such as Gmail, Google Talk, Google Calendar and, most importantly, Google Docs. I think Google Docs will become really useful and widely embraced among companies as cloud computing becomes more and more practical. We now see even states moving their statewide operations to cloud-based networks, so it would not be surprising to see companies take advantage of cloud technologies such as Google Docs, which lets you work on your documents from just a browser over the internet. We might see a decrease in the use of Microsoft Office-like software in the near future, since employees usually don't use all of its features, and simple, cheap solutions like Google Docs will help companies cut their spending. This trend will also be boosted by the increasing use of iPad-like tablets in the industry.

Android also enables IT managers to remotely wipe all the data from company phones in case of theft or loss. Administration policies for the company's mobile fleet can now be enforced, such as locking idle devices after a period of time, requiring a password on each device in the system and setting minimum levels of password security. It is also possible to exclude devices from the company network, so that when the relationship between the company and an employee ends, data security can be ensured without taking back the phone.

However, even with these good features, there are still raised eyebrows over the security of Android devices, as there is no strict regulation of the application market like the one Apple enforces on its own. IT managers are also a bit skeptical about the reliability of the operating system, as it is relatively new to the smartphone market. To make people change old habits, such as BlackBerrys as business phones, Android needs to do more than provide what is already provided by its competitors. I think that with Motorola's dedication to creating enterprise-targeted phones such as the Droid Pro, we may see other manufacturers such as HTC and Samsung offering Android-based enterprise smartphones in the near future.