Categories
Landmark Rants

Experiment goes wrong

My blog was recently taken down. I was experimenting on the Azure server that hosts this blog and accidentally left an unsecured port open. As a result, the server was abused by a hacker and Azure shut me down. It was actually nice of them to do so, because otherwise I might never have known something was wrong with my hosting, as these things are easily missed.

But I am back, having closed all the holes, and I hope to proceed with more caution from now on.


Categories
Management MySQL Project Management Server Configuration

Installing Redmine 2.3.3 on Ubuntu 13

Okay, today I installed Redmine on an Azure-hosted Ubuntu 13.04 machine. The steps are easy now that I have them worked out, after two days of trial and error to find the right way to do it. So, let us start the installation, but a little history first.

I saw that Bitnami has put quite a few Redmine VM images on VM Depot, but unfortunately all of them are based on older versions of Ubuntu. I still thought I could upgrade one for my use, so I took the latest [Ubuntu 12] based image and tried 'do-release-upgrade'. It downloaded a lot of things, but in the end it failed to upgrade the machine. So I opted to use a bare Linux machine and install what I need myself. I created a VM using Ubuntu 13.04, which was easy as usual. Once the machine was up, I ran 'do-release-upgrade' once to install all the latest packages, and there were quite a lot of them. It took around 20-30 minutes for my very small VM instance to install them.

Once the machine is ready, we need to install Apache and MySQL [so I can host not just Redmine but a couple of my other websites as well]. To fetch the latest source of Redmine, we also need SVN [package name 'subversion'] on the server. I usually use the SVN version, but you can download the zip/tarball as well.

So install is as follow:

# sudo apt-get install subversion
# sudo apt-get install apache2 libapache2-mod-passenger
# sudo apt-get install mysql-server mysql-client
# sudo apt-get install ruby ruby-dev
# sudo apt-get install imagemagick libmagickwand-dev ruby-rmagick

The statements above install Subversion, Apache, MySQL, Ruby and ruby-dev, ImageMagick, and ruby-rmagick. They are all prerequisites, and you might already have some of them. [You will likely also need libmysqlclient-dev so the mysql2 gem can compile later; if 'bundle install' fails on mysql2, install it with apt-get.] Once this is done:

Download Redmine 2.3.3 via SVN, using this Redmine download page: http://www.redmine.org/projects/redmine/wiki/Download

# svn co http://svn.redmine.org/redmine/branches/2.3-stable redmine-2.3

Now we need the gem Bundler to be installed, so:

# sudo gem install bundler

Now navigate to the Redmine folder. Oh, you can download Redmine into any folder, as long as you are ready to use your <redmine folder>/public as your document root; if not, you can point Apache at a symbolic link to the public folder instead. Just thought I should mention that now. In the next step we create a Gemfile.local to tell the installer to use rack version 1.4.5: by default it installs version 1.5.2, which did not work for me, and I have seen a lot of people have problems with it. So just create a Gemfile.local with one line in it and run bundler to install.

# cd redmine-2.3
# cat > Gemfile.local << "EOF"
gem "rack", "~> 1.4.5"
EOF
# sudo bundle install --without development test mysql
# rake generate_secret_token

Once you have done that, Redmine is installed, or rather just built. We now need a database to store Redmine's data, so create a MySQL database, username, and password as you like. Obviously you don't want to use the root username. Once you have created that user and database, go to the redmine/config folder, where you will find a database.yml.example file. Copy it as database.yml, go to the production section, enter your login info, and change the adapter to mysql2 [it is just the newer MySQL library for Ruby; you can still use [mysql], but it may give errors, so better to change it]. Note that Redmine's Gemfile picks the database gem based on database.yml, so if you ran bundle install before this file existed, run it again now so the mysql2 gem gets installed.

production:
  adapter: mysql2
  database: redmine_default
  host: localhost
  username: redmine
  password: some-secure-plain-text-password
  encoding: utf8
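The database and user referenced above can be created from the mysql console. A minimal sketch, run as the MySQL root user; the database name, username, and password here are just placeholders matching the config above, so substitute your own:

-- e.g. from: mysql -u root -p
CREATE DATABASE redmine_default CHARACTER SET utf8;
CREATE USER 'redmine'@'localhost' IDENTIFIED BY 'some-secure-plain-text-password';
GRANT ALL PRIVILEGES ON redmine_default.* TO 'redmine'@'localhost';
FLUSH PRIVILEGES;

Granting privileges only on the one database keeps the redmine user away from anything else you host on the same MySQL server.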

Now run the following commands to create the database tables and clear cached and session data:

rake db:migrate RAILS_ENV=production 
rake tmp:cache:clear
rake tmp:sessions:clear

Now, the second-to-last step: creating a virtual host (or defining the DocumentRoot) so Apache can serve the Redmine installation. Add the following VirtualHost block; you can add other directives as you like, but keep this as the minimum you need.

<VirtualHost *:80>
    DocumentRoot /usr/local/share/redmine-2.3.0/public
    <Directory /usr/local/share/redmine-2.3.0/public>
        AllowOverride all
        Options -MultiViews
    </Directory>
</VirtualHost>

Restart Apache

# sudo service apache2 restart

Now go to your domain, IP address, or whatever your base URL is, and log in using "admin" as both the username and the password. You are most probably ready to rock. If not, you should enable error logging via the config/environments/production.rb file and then check what error you get from Redmine.
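That logging tweak can be sketched roughly as follows, assuming the standard Rails configure block that Redmine 2.x ships in that file; adjust to taste:

# config/environments/production.rb (fragment)
RedmineApp::Application.configure do
  # Raise verbosity from the default :info while troubleshooting,
  # then set it back once things work
  config.log_level = :debug
end

The resulting output lands in log/production.log under the Redmine folder.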

Categories
Bookmarked Wordpress

Some Cool WordPress Plugin

Lately I have got a lot of WordPress sites that are not just blogs or business websites: they do e-commerce, restaurant ordering, video streaming, and much more. Being a developer, I always stay engaged in building some sort of plugin for them. The following plugins have assisted me big time, saving the work I usually have to do to build some really nasty sites in WordPress:

This is a list of plugins I usually use, but I am not restricted to it. There are a couple more plugins, but I am currently unable to recall them. Meanwhile, we also created a small plugin used by a very limited set of users:

https://github.com/Vikasumit/wpl [Allow only members to see the complete site].

I have a few other plugins coming up, but they are not ready for a public launch yet, as they are not finished or mature products. I love the fact that WordPress plugin development is as cool as working on any other project. You just need some good ideas to make it possible.

Categories
Bookmarked Database MS SQL Server SQL

Advanced SQL: Finding Hierarchical Data Using a Recursive CTE

Often we have a table that stores hierarchical data; for example, any shopping cart will have a product category table where the parent category is stored within the same table. We often need to query such information. The typical structure of the table is:

ID, Name, ParentID

Where ParentID is an ID within the same table, or, for top-level rows, either NULL or zero. In such tables we often want to find the children of children so that we can list them in a tree view, i.e.

L0
  L1
    L2
  L1-1
    L2-1
 ....

Now, to find this type of result you have two options: 1. write the complete logic in your application code, or 2. make SQL do it for you. For those who prefer the second method, here is the sample query I used.
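The query below refers to a @Company table variable; a minimal setup to try it with might look like this (the table variable and its sample rows are hypothetical, and must be declared in the same batch as the query):

DECLARE @Company TABLE
(
    CompanyID       INT PRIMARY KEY,
    Name            NVARCHAR(100),
    ParentCompanyID INT NULL
);

-- Hypothetical sample hierarchy: 9 is the root, 10 and 11 are its
-- children, 12 is a child of 10
INSERT INTO @Company (CompanyID, Name, ParentCompanyID)
VALUES (9,  N'Root Co',       NULL),
       (10, N'Child A',       9),
       (11, N'Child B',       9),
       (12, N'Grandchild A1', 10);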


DECLARE @ParentCompanyID INT = 9;
WITH RecComp
AS
(
    SELECT  crt.CompanyID,
            crt.Name,
            crt.ParentCompanyID,
            1 AS Lvl,
            N'/' + CONVERT(NVARCHAR(4000),crt.CompanyID) + N'/' AS CompanyNode_AsChar
    FROM    @Company crt
    WHERE   crt.ParentCompanyID = @ParentCompanyID
    UNION ALL
    SELECT  cld.CompanyID,
            cld.Name,
            cld.ParentCompanyID,
            prt.Lvl + 1,
            prt.CompanyNode_AsChar + CONVERT(NVARCHAR(4000), cld.CompanyID) + N'/'
    FROM    RecComp prt -- parent
    INNER JOIN @Company cld ON prt.CompanyID = cld.ParentCompanyID
)
SELECT  *,
        CONVERT(HIERARCHYID, CompanyNode_AsChar) AS CompanyNode
FROM    RecComp
ORDER BY CompanyNode;

This query makes use of SQL Server's CTE feature [I personally tested it on SQL Server 2008 R2, 2012, and SQL Azure] and the HIERARCHYID type to order the results as desired. More can be read in my thread on StackOverflow here.


Categories
Joyous Server Configuration

Move to Azure as Trial

Okay, so I activated my Azure trial and thought I would move my blog to it. The blog is currently hosted on an Azure VM [small instance] as a trial. I hope it gives better performance than my previous host, which had lately become too slow to work with.

Categories
Blog: My thoughts MS SQL Server Programming SQL SQlite

Reading Large Binary Files: Child's Play

Yes, that is a funny title, but after you experience it you will agree with me. So, here is the story. I have been working on software that reads recordings from a hardware device into a database. We have 45 records per second for 30 days, so that is 45 x 60 x 60 x 24 x 30, about 116 million record entries, with 45 columns each. We had to make a desktop application, so we chose .NET for it. The first version of the software was released by my company 3 years ago, and as for the reading, well, we were inexperienced at that time in managing that much data, and what we got was about 2 hours to read all that data into the database. Oh, I forgot to mention that the 30 days of entries were from one hardware device, and we have 3-4 devices :), so it took 2 hours to read all of, say, 4 devices. That is not acceptable. So we decided to rewrite the complete software to make use of some parallelism, as my team thought that was the only way it was going to work.

I started the rewrite with only the hope of reducing the 2-hour job to 30-45 minutes. This time we made one change from last time: instead of using an ASCII text file or a SQLite database, we opted to use SQL Server to store our data. The reason: the first pre-release version of the software used text files, and we never got that part working for more than 15 days of data; it always ran out of memory for one reason or another. Then we started using SQLite, which is far lighter on the hardware and sped up reading and information access; however, part of the system still used text files. So, in order to avoid two data sources, we opted for a database only, and since the client already had SQL Server on a separate machine, we thought it was good to have a separate machine storing the database, for the long term and for the obvious LAN benefits. Since the client already had SQL Server and we were using .NET, I decided to go with SQL Server only.

So we started reading the binary file and issuing an INSERT query for each record [just to test the lowest speed], and it went through in 12-13 minutes. Wow, we had already reduced a 30-40 minute job to 12 minutes just by using a database full time. The next challenge was to speed it up with known bulk-import methods. A couple that I tried are:

1. Using a DataSet and its Update feature,

2. Using a long INSERT query to send 100 or 1000 records in one "Execute",

3. Using the SqlBulkCopy feature in .NET. This is the obvious choice for speed, but in a few cases it failed for me, so I had to keep the first two options as well.

In the end SqlBulkCopy was our tool of choice, but it doesn't simply import the data into the database; we have to prepare the ground for it. The way we use SqlBulkCopy, the data comes from CSV files, so we create a CSV file from the binary read, then import this file into a staging SQL table, and then transfer the data from the staging table to the main table. All this is done in 2-3 minutes flat. Yes, a 40-minute job is done in 3 minutes. Period.
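The staging step can be sketched roughly like this. This is a minimal sketch, not our production code: the table and column names are hypothetical, and it buffers rows in a DataTable rather than reading from CSV as our real pipeline does:

using System;
using System.Data;
using System.Data.SqlClient;

class BulkLoadSketch
{
    static void Main()
    {
        // Records parsed from the binary file, buffered in a DataTable
        var table = new DataTable("StagingReadings");
        table.Columns.Add("DeviceID", typeof(int));
        table.Columns.Add("ReadAt", typeof(DateTime));
        table.Columns.Add("Value", typeof(double));
        table.Rows.Add(1, DateTime.UtcNow, 42.5);

        var connString = "Server=.;Database=Readings;Integrated Security=true;";
        using (var conn = new SqlConnection(connString))
        {
            conn.Open();
            using (var bulk = new SqlBulkCopy(conn))
            {
                bulk.DestinationTableName = "dbo.StagingReadings";
                bulk.BatchSize = 10000;    // commit in chunks
                bulk.WriteToServer(table); // bulk insert the buffered rows
            }
            // Then move staging rows to the main table in one
            // set-based statement, e.g.:
            // INSERT INTO dbo.Readings SELECT ... FROM dbo.StagingReadings;
        }
    }
}

The point of the staging table is that the bulk insert can run with minimal constraints and indexes, while the final set-based transfer to the main table happens entirely inside SQL Server.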

The trick is that we reduced the number of disk-write operations needed. We still create a CSV, but previously we were creating an XML file, and we had multiple file-write procedures going; we removed all that. In fact, to achieve that speed we even stopped writing a log file for error tracing, and instead used SQL Server to record those errors for us. Replacing the text log file with a SQL error log by itself sped things up from 5 minutes to 3 minutes. The logs are not that long, but disk writes are very slow compared to database inserts.

All said, I will hardly ever use a plain text file for a long data read in the future; it will be some sort of database for sure.

Categories
Blog: My thoughts

StackOverflow.com: A Review

I have lately started using StackOverflow.com a lot, both to get answers and to help others. It is a really good site, one you can hang around on for a while. I love reading some of the questions; some of them are ones I will run into in the future as a developer myself. The fun thing about this site, compared with all the other forums and similar sites I have used, is the reputation points and badges. I love earning those, though I am still not very active on the site, and hence my profile is not very impressive yet.

However, the thing I don't like is downvoting on questions. Well, actually I love it, but most users do not understand the concept well, on either side. I have seen questions like "Here is my SQL query" followed by a query and nothing else; marking such a question down is a good idea. But then I saw a question, "My SQL query doesn't produce results, here is my query...", and that question too got downvoted, because the person forgot to include the error he got, even though it was clear from the query that it was a syntax error. Well, that is how some people read and reply.

A couple of my questions have answers where I can clearly tell that the person did not even read my question properly. They simply pasted a piece of code they have used or found, without grasping that the question was not about the code but about the concept behind it.

Another thing I observed in my short time on the site is that a few questions seek an answer to a syntax error but are actually conceptually wrong, and the funny thing is such a question gets 4-5 answers, all telling the asker different ways of writing the piece of code involved, but none addressing what the person actually needs [which, admittedly, is not written in the question].

Seeing such situations, frankly, I would suggest this to my fellow programmers: the person asking the question is either not clear on the topic or not in a very good state of mind [perhaps already frustrated by failing at it], while the person replying is replying for the sake of helping. So if you want to help such a person, try to understand what he wants to achieve and what he wants to do, rather than giving him downvotes or useless answers. What is the point if your shared knowledge is actually a burden on the community? Answering in a public community is a responsible job, and a "workaround" is the last thing to offer. And if you do not analyze the question before you answer, that shows how easily you lose concentration on the job, and the software you build is probably not very effective either.

I love the site and love the concept. I have used a lot of forums, but I will stick around on this one for quite a while; however, for that I need to carve more time out of my job. And for a new programmer, it is my belief that just reading through the site will increase your knowledge.