on Sunday, April 29, 2012

Hi guys,


I'm a new user of Kubuntu, a KDE-based distribution, though I have some hands-on experience with Ubuntu Linux. Once I installed Kubuntu, I installed the XAMPP server (the Linux build, often called LAMPP) on it. I was used to XAMPP on Windows, so I had no trouble with the installation itself, but on Linux, permissions got me into a major issue: I could not write into or copy anything into my htdocs folder. After a more elaborate search on the web, I learnt that after installing XAMPP you have to grant yourself the permissions to copy anything into htdocs.
The command that you have to run in the terminal (with sudo) follows:



$ sudo chmod 777 -R /opt/lampp/htdocs

Now that solved my temporary problem.
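As an aside, chmod 777 is a blunt instrument: it opens the folder to every user on the machine. A slightly safer sketch, assuming your own login user is the only one editing htdocs, is to take ownership of the tree instead:

$ sudo chown -R $USER:$USER /opt/lampp/htdocs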
After a couple of days, I started installing Drupal into my htdocs. The first two installation steps were fine, but the verifying-requirements step showed two unresolved errors.
The error messages follow:



* The Drupal installer requires write permissions to ./sites/default during the installation process. If you are unsure how to grant file permissions, please consult the on-line handbook.
* The directory sites/default/files does not exist. An automated attempt to create this directory failed, possibly due to a permissions problem. To proceed with the installation, either create the directory and modify its permissions manually, or ensure that the installer has the permissions to create it automatically. For more information, please see INSTALL.txt or the on-line handbook.

I was confused, but the error was clearly about granting permissions, so the previous step sparked in my mind. What I did was define the permissions again:

$ sudo chmod 777 -R /opt/lampp/htdocs/drupal/sites

Also, do not forget to take a copy of /opt/lampp/htdocs/drupal/sites/default/default.settings.php and paste it into the same folder, renaming it to settings.php :)
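In shell terms, that copy step looks like this (a sketch, assuming Drupal lives at /opt/lampp/htdocs/drupal):

$ cd /opt/lampp/htdocs/drupal/sites/default
$ cp default.settings.php settings.php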


Done!! The installation of Drupal went much smoother after that!

on Saturday, April 28, 2012
The largest student-driven technological event, the Imagine Cup 2012, happened just a few hours ago at the SLIIT Main Auditorium, Malabe, Sri Lanka.
I felt so prestigious wearing a Microsoft t-shirt and helping people out while they thought I was from Microsoft. The feeling was nevertheless a once-in-a-lifetime one. I was accompanied by Praneeth, Dushantha, Amith, Chamal, Dinusha, Ruvini, Milan, Shanika, Randa, Devdun, Shashika ayya and Oshan ayya, and missed Zaman and Hasangi.

The six finalists delivered the final presentations of their projects in front of a massive 300+ audience; the panel of judges included four professionals, academics and fellow students. Three teams came from the Informatics Institute of Technology, two from the University of Moratuwa Faculty of Information Technology, and one from the Sabaragamuwa University of Sri Lanka.

The teams and their projects were:

1. Team “INVICTUS” with the project “Guide Me on the Go” from Sabaragamuwa University

2. Team “Team Dot” with the project “FARM @ H2OME” from Informatics Institute of Technology

3. Team “Sharks” with the project “vLearn” from University of Moratuwa

4. Team “CR Coderz” with the project “Value Life” from Informatics Institute of Technology

5. Team “Casper Creations” with the project “arDesign” from Informatics Institute of Technology

6. Team “V360” with the project “Back2Earth” from University of Moratuwa

All the presentations were just awesome. The entertainment presented by the SLIIT guys was OK, but the hilarious part was the Nigerian SLIIT students’ traditional dance. They danced something that I have never ever seen before. The dance was superb, but the co-ordination was not there :)

From 3 pm to 7.30 pm we waited for the results.

The second runners-up of the event was Team “Sharks” from FIT-UoM,
the first runners-up was Team “CR Coderz” from IIT,
and the WINNER of the night was Team “V360” from FIT-UoM, with the magnificent concept of Back2Earth :)

The team members of V360 are,
1. Thurupathan Vijayakumar https://www.fb.com/thurupathan
2. Rukshan Lakshitha Dangalla https://www.fb.com/Ruki.DG
3. Dinidu Sandaruwan https://www.fb.com/diniduuom
Good Luck guys! Enjoy Sydney 2012!!


on Tuesday, April 10, 2012
Life is life. Life is an interesting journey of ups and downs, but how that journey ends is up to you today. You see, when you fall down, you feel like giving up. When times get tough, it’s not the end. The question is, are you gonna finish strong? The definition of disability is something that hinders you from being able to do something. I think the greatest disability is not having no arms and no legs; the greatest disability is your mind, the choices that you make. The question is, are you gonna make the right choices? Are you gonna make the choice to keep your life in its right perspective? Are you gonna make the choice to "get up instead of give up"? Are you gonna make the choice to dream big?

There is no greater disability in our life than making the decision to give up. Because once you give up, there is no hope. But until you give up, there is that hope.

My passion is to encourage people; to inspire people to be all that they can be. I found my purpose, I found my strength, and I want you to find yours. Don’t be afraid of failure; every time you fall down, every time you fail, you learn something new. You’re ready for the next one; you’ve learnt how not to do something, so learn from it and move on. Leave what’s behind and press forward. You can only win if you don’t give up. Go for it. Don’t let anything hold you back. If nothing’s holding me back, what’s holding you back?

No goal is too big, no dream is too far-fetched. It’s as big as you can dream it.

There are times in life when you don’t see the purpose or good in your situation. But just because you can’t see it, it doesn’t mean it’s not coming. Who goes to the train station, looks down the railway and says, “Aah!! Train’s not here, I’m leaving”? You will wait for the train, because the schedule says the train is coming. So just because you can’t see the hope in your situation, it doesn’t mean that it’s not there. You see, many people think that I have only one foot, just because you can’t see the other one. I’ve gone from a life without limbs to a life without limits. What about you??

-Nick Vujicic


on Saturday, April 07, 2012



Figure 13 below shows the GUI credentials page for your reference.




                             Figure 13



After saving your credentials, you have to use them. In order to do that, you will need to set up the EC2 API and AMI tools on your server using X.509 certificates.

            To install the required cloud user tools, type,

sudo apt-get install euca2ools

Afterwards, check the availability of your local cluster.
Type,

. ~/.euca/eucarc                            # source your Eucalyptus credentials
euca-describe-availability-zones verbose    # list the cluster and its available VM types

The result you get (assuming everything is fine) is:


AVAILABILITYZONE            myowncloud                 192.168.1.1
AVAILABILITYZONE   |- vm types                free / max   cpu   ram disk
AVAILABILITYZONE   |- m1.small                0004 / 0004   1    192     2
AVAILABILITYZONE   |- c1.medium               0004 / 0004   1    256     5
AVAILABILITYZONE   |- m1.large                0002 / 0002   2    512    10
AVAILABILITYZONE   |- m1.xlarge               0002 / 0002   2   1024    20
AVAILABILITYZONE   |- c1.xlarge               0001 / 0001   4   2048    20
 
 
Now, install an image from the store.
The simplest way to install an image is via the web UI.
Access the following URL from your browser to install an image:



https://<cloud-controller-ip-address>:8443/ **
 
** - Use an “https” secured connection, not “http”. Since the cloud controller uses a self-signed certificate, the browser will give you a security warning.
You will have to add an exception to view the page.

<cloud-controller-ip-address> is the IP address that you used when you registered your Cloud Controller.
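If you want to sanity-check the URL from a terminal first, here is a sketch (the IP is the example address from the availability-zone listing above):

curl -k https://192.168.1.1:8443/    # -k tells curl to accept the self-signed certificate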



You have to log in to the UEC web interface. Then go to the Store tab. The following screenshot (figure 14) shows what you will find there.
 
 
 
 
 
                                                 Figure 14
 
You can browse all the images, then download and install the ones you want.

Finally, you have to instantiate an image that you installed. There are multiple ways of doing it. You can use the command line, though that may be the harder route, because you have cool UEC-compatible management tools such as Landscape (figure 7).
You can also use the ElasticFox extension, a Firefox add-on and one of the coolest tools available.
Final step: try it yourself!!!! A command-line sketch follows below.
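For the command-line route, a minimal sketch with euca2ools (the image ID emi-XXXXXXXX is a placeholder; substitute the ID of the image you installed):

euca-add-keypair mykey > ~/.euca/mykey.priv             # create an SSH keypair for logging in
chmod 600 ~/.euca/mykey.priv
euca-run-instances emi-XXXXXXXX -k mykey -t m1.small    # launch one small instance
euca-describe-instances                                 # watch it go from pending to running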



This article, though written by myself, is a collective effort: knowledge from many books, articles and sites went into it, and I have given references to all of them. I honor and thank all of them for their great support.


Reference List 

1. A Quick Start Guide to Cloud Computing by Dr Mark I Williams, pages 6-18

2. Cloud Application Architectures by George Reese, pages 1-29

3. Ubuntu Server Guide by Canonical Ltd. and members of the Ubuntu Documentation Project, pages 4-28

4. Cloud Computing Virtualization Specialist Complete Certification Kit by The Art of Service, pages 9-58

5. Programming Amazon EC2 by Jurg van Vliet and Flavia Paganelli, pages 4-21

6. Eucalyptus Beginner's Guide - UEC Edition (Ubuntu Server 10.10 - Maverick Meerkat) by Johnson D, Kiran Murari, Murthy Raju, Suseendran RB, and Yogesh Girikumar, pages 2-19

7. https://ubuntu.com/cloud


·         Ubuntu Server Edition Installation

Before the installation,

Make sure that the data of your machine is backed up.
One of the simplest ways to back up a system is with a shell script. For example, a script can be used to configure which directories to back up and to pass those directories as arguments to the tar utility, creating an archive file. The archive file can then be moved or copied to another location; it can even be created on a remote file system such as an NFS mount.
The tar utility creates one archive file out of many files or directories. tar can also filter the files through compression utilities, reducing the size of the archive file.
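A minimal sketch of such a script (the directory names and the destination are examples; adjust them to your setup):

#!/bin/sh
# back up a few directories into one compressed archive
backup_dirs="/etc /home /var/www"                # what to back up
archive="/mnt/backup/backup-$(date +%F).tgz"     # destination, e.g. an NFS mount
tar -czf "$archive" $backup_dirs                 # -z filters the archive through gzip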

If this is not the first time an operating system has been installed on your computer, it is likely you will need to re-partition your disk to make room for Ubuntu.
Any time you partition your disk, you should be prepared to lose everything on it should you make a mistake or should something go wrong during partitioning. The programs used in installation are quite reliable, and most have seen years of use, but they also perform destructive actions.

After backing up your system, insert the ISO disc and boot the system via CD-ROM. The boot prompt menu will ask for the language selection, and the installation starts by asking for the keyboard layout.
From the main boot menu, you will see the options for what can actually be installed. Select UEC for the installation. See figure 8.





          
Figure 8



Once you select UEC, the installer checks whether any of the Eucalyptus components are already installed. Refer to figure 9.


 Figure 9

          Continued in Part 8
          http://imthefortune7.blogspot.com/2012/04/my-article-to-fossuser-ubuntu-on-cloud_8337.html




There are five Eucalyptus components that you can install, as shown in figure 10.

                                                



 Figure 10

Let us take a look at each component and the functionality of each.


A. Node Controller (NC)

          A Node Controller is a virtualization-enabled server capable of running KVM (Kernel-based Virtual Machine) as its hypervisor. A hypervisor can also be called a virtual machine manager (VMM). Ubuntu Enterprise Cloud automatically installs KVM if the user selects NC during installation.
The virtual machines which run on the hypervisors and are controlled by UEC are called “instances”. UEC supports not only KVM but also hypervisors like Xen, though Canonical has chosen KVM as its preferred hypervisor.
          The Node Controller is responsible for controlling the life cycle of the instances running on the node; an NC runs on each node. The NC interacts with the OS and the hypervisor running on the node on one side, and with the Cluster Controller (CC) on the other side.
          The NC queries the OS running on the node to determine the node's physical resources, such as the number of cores, the size of memory and the available disk space, and also to learn the state of the VM instances running on the node, and propagates this data up to the CC.
The functionalities of the NC are: collecting data about resource availability and consumption on the node, reporting that data to the CC, and managing the instance life cycle.
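Since the NC depends on hardware virtualization, it is worth verifying that a machine supports it before giving it the NC role. A quick sketch (kvm-ok comes from Ubuntu's cpu-checker package):

egrep -c '(vmx|svm)' /proc/cpuinfo    # a non-zero count means the CPU has VT-x/AMD-V
sudo apt-get install cpu-checker
sudo kvm-ok                           # reports whether KVM acceleration can be used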

B. Cloud Controller (CLC)

          The Cloud Controller (CLC) is the front end to the whole cloud infrastructure.
The CLC provides an EC2/S3-compatible web services interface to the client tools on one side, and interacts with the rest of the components of the Eucalyptus infrastructure on the other side. The CLC also provides a web interface for users to manage certain aspects of the UEC infrastructure.
          It monitors the various resources in the cloud infrastructure and decides which cluster will be used for provisioning the instances; this is called resource arbitration. It also monitors the running instances.


C. Walrus Storage Controller (WS3)

          WS3 provides a persistent simple storage service using REST and SOAP APIs compatible with the S3 APIs. WS3 should be thought of as the simple storage system within the system.
          WS3 takes care of storing machine images, storing snapshots, and storing and serving files using the S3 API.
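Because WS3 speaks the S3 API, the standard euca2ools image workflow goes through it. A hedged sketch (the file and bucket names are examples, and the manifest path is whatever euca-bundle-image reports):

euca-bundle-image -i ubuntu-server.img                    # split and encrypt the image, producing a manifest
euca-upload-bundle -b mybucket -m ubuntu-server.img.manifest.xml
euca-register mybucket/ubuntu-server.img.manifest.xml     # returns an image ID (emi-...)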


D. Cluster Controller (CC)
          The Cluster Controller is responsible for managing one or more Node Controllers and for deploying and managing instances on them.
          The CC manages the networking of the running instances on the nodes under the various networking modes of Eucalyptus. The Cluster Controller talks to the Cloud Controller on one side and the Node Controllers (NCs) on the other.
E. Storage Controller (SC)

          The SC provides persistent block storage for use by the instances. It is very similar to Elastic Block Storage (EBS), the equivalent service provided by AWS.
The functionalities of the SC are: creating persistent EBS-style devices, providing block storage over the AoE or iSCSI protocol to the instances, and allowing the creation of snapshots of volumes.
          AoE, “ATA over Ethernet”, is a network protocol designed for simple, high-performance access to SATA storage devices over Ethernet networks. iSCSI, short for Internet Small Computer System Interface, is an Internet Protocol (IP)-based storage networking standard for linking data storage facilities.
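The SC's EBS-style volumes are driven with the same client tools as the rest of the cloud. A minimal sketch (the zone name matches the earlier listing; the volume and instance IDs are placeholders):

euca-create-volume -s 1 -z myowncloud                        # create a 1 GB volume
euca-attach-volume -i i-XXXXXXXX -d /dev/vdb vol-XXXXXXXX    # attach it to a running instance
euca-create-snapshot vol-XXXXXXXX                            # snapshot it later if you wish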


After you select the cloud installation model, the installer will ask two other cloud-specific questions during the course of the install:
the cluster name, and the range of public IP addresses on the LAN that the cloud can allocate to instances. Figures 11 and 12 illustrate this graphically.





                                      Figure 11

                                     
                                      

Figure 12


Continued in Part 9
http://imthefortune7.blogspot.com/2012/04/my-article-to-fossuser-ubuntu-on-cloud_2223.html



·         Ubuntu Enterprise Cloud and  Eucalyptus

Ubuntu Enterprise Cloud, UEC, is a stack of applications from Canonical included with Ubuntu Server Edition. UEC includes Eucalyptus along with a number of other open source components, and makes it very easy to install and configure the cloud.

There are a few differences between the Ubuntu Server Edition and the Ubuntu Desktop Edition, although both editions use the same apt repositories. The differences lie mainly in the kernel: the Server Edition uses the Deadline I/O scheduler whereas the Desktop Edition uses the CFQ scheduler; preemption is switched off in the Server Edition; and the timer interrupt is 100 Hz in the Server Edition versus 250 Hz in the Desktop Edition.
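You can see which I/O scheduler a disk is currently using with a one-liner (sda is an example device name):

cat /sys/block/sda/queue/scheduler    # the active scheduler is shown in [brackets]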

                                                 Figure 6
 

Well, to make it simple, UEC is a cloud computing technology that allows you to build a private-cloud deployment model in your own environment. You can pool your servers into a centrally managed resource pool behind your firewall, on your own network. In UEC, you can manage all your resources yourself using the UEC management tool Landscape. UEC also allows you to burst into public cloud platforms such as EC2, giving you an added level of flexibility. Figure 7 is a screenshot of the Landscape tool after a private cloud has been created. There are other management tools as well, such as RightScale, CohesiveFT and ElasticFox.

But why does UEC come with EC2 architecture and APIs?
Amazon's EC2 is the momentum leader in the cloud business, so tying up with the leader creates a competitive advantage over other cloud providers and best captures the shift from a product to a service economy. Designing for EC2 guarantees on-demand deployment and scalability, two key benefits of cloud computing, which is obviously cool. The coolest aspect, I suppose, is that the EC2 API already has many implementations, such as Eucalyptus, Globus Nimbus, OpenNebula and others. Also, Ubuntu images rank among the top two most popular AMIs (Amazon Machine Images).

          Eucalyptus is cloud software available under the GPL (General Public License) that helps in creating and managing a private, or even a publicly accessible, cloud. It provides an EC2-compatible cloud computing platform and an S3-compatible cloud storage platform. Eucalyptus has become very popular and is seen as one of the key open source cloud platforms. Since Eucalyptus makes its services available through EC2/S3-compatible APIs, the client tools written for AWS can be used with Eucalyptus as well.

                                                         Figure 7


           Continued in Part 7
              http://imthefortune7.blogspot.com/2012/04/my-article-to-fossuser-ubuntu-on-cloud_8601.html

In 2004, Amazon adopted a formula: if your systems are big, assume they have also been decoupled. The decoupled sections still have to communicate; at least some messages have to be passed or interchanged in order to get productive output from the system. Amazon's solution for this is queuing, which it saw as the simplest, first-in-first-out way of communicating, and SQS is what Amazon named it. By using the Simple Queue Service, according to what AWS says, “developers can move data between the decoupled or distributed components of their applications that perform different tasks, without losing messages or requiring each component to be always available”. One interesting attribute of SQS is that you can rely on the queue as a buffer between your components, which increases elasticity.
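To make the queue-as-buffer idea concrete, here is a sketch using today's AWS command-line interface (which postdates this article); the queue name is an example, and <queue-url> stands for the URL returned by create-queue:

aws sqs create-queue --queue-name demo-queue                             # producer and consumer share this queue
aws sqs send-message --queue-url <queue-url> --message-body "task 42"    # the producer enqueues and moves on
aws sqs receive-message --queue-url <queue-url>                          # the consumer drains it at its own pace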

In every application, a huge drawback is storage capacity; what we actually need is near-infinite storage. To fix this problem Amazon came up with a solution: Amazon Simple Storage Service, or S3, released in 2006, just two years after SQS. S3 allows you to store objects of up to 5 terabytes each, with no limit on the number of objects. The fact that S3 as a service is covered by a service level agreement (SLA) helped the industry fully adopt the concept. Statistics say that in only two years, S3 grew to store 10 billion objects, and in early 2010 AWS reported storing 102 billion objects in S3.
People thought that S3 was the perfect solution, because it resolved the storage issue. But Amazon still thought S3 was not sufficient to decouple a big system into small components. In the same year, 2006, Amazon released another limited beta called Elastic Compute Cloud, or EC2, which was the logical piece missing from the puzzle. Through decoupling, Amazon was expecting a strict SOA implementation: Amazon wanted every small team of an organization not only to build its own infrastructure, but also to have its developers operate their applications themselves. This thinking rooted a new service model which we now call IaaS or HaaS. EC2 turned computing upside down; AWS used Xen virtualization to create a whole new cloud category.
Now let us take a look at virtualization and its functionality in the cloud.

One of the most notable characteristics of cloud computing is scalability, and the key technology that enables scalability is virtualization. Virtualization, in simple terms, means one physical machine takes on multiple roles, so a single machine acts as multiple machines. Virtualization, in its broadest sense, is the emulation of one or more workstations/servers within a single physical computer. The concept of virtualization is not limited to the simulation of entire machines; there are many kinds, each differing in its functionality. One very commonly used in almost all machines nowadays is virtual memory.


Although the physical locations of data may be scattered across a computer’s RAM and hard drive, virtual memory makes it appear that the data is stored contiguously and in order.

RAID (Redundant Array of Independent Disks) is also a form of virtualization, along with disk partitioning, processor virtualization and many other virtualization techniques. Virtualization allows the simulation of hardware through software, but for this to happen, some virtualization software must be installed on your physical hardware. VMware is well-known virtualization software, used by many people globally; other software is also available on the market. VMware is very capable of simulating x86-based hardware resources in order to create a fully functional virtual machine. After installing VMware, you can simply install the OS you want, and other associated software, on the newly created virtual machine. Multiple virtual machines can also be created, each as a separate entity. This separation cuts the interference between the virtual machines, ensuring that every VM works independently.
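Since this series is Ubuntu-centric, here is the same idea sketched with Ubuntu's built-in alternative to VMware, KVM (the file names and sizes are examples):

sudo apt-get install qemu-kvm                                # KVM userspace tools on Ubuntu
qemu-img create -f qcow2 disk.img 5G                         # create a 5 GB virtual disk image
kvm -m 512 -hda disk.img -cdrom ubuntu-server.iso -boot d    # boot a VM from the installer ISO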
Figures 4 and 5 below illustrate what happens when there is no virtualization, and the benefits when virtualization is present.



  
Figure 4

Figure 5


          Continued in Part 6 
          http://imthefortune7.blogspot.com/2012/04/my-article-to-fossuser-ubuntu-on-cloud_4385.html

For the private cloud, again Wikipedia says: private cloud is infrastructure operated solely for a single organization, whether managed internally or by a third party, and hosted internally or externally.
Many large organizations either prefer the private cloud model, or are forced by laws and regulations to take it up, for several important reasons. Unlike the pay-as-you-go model of public clouds, private clouds require significant up-front development costs, data centre costs, ongoing maintenance, hardware, software and internal expertise.



Let us compare both deployment models, with their pros and cons.


Feature       | Private Cloud                                    | Public Cloud
Cost          | Very expensive.                                  | Cheap compared with private.
User access   | Restrictions can be enforced.                    | No strict restrictions enforced.
Customization | Possible; you own the hardware (HaaS).           | Very few customizations available.
User tools    | Available; it is your own cloud.                 | Very few; not tailored to an organization.
Commitments   | Long-term, organization-level commitment.        | Short-term or temporary commitments.
Flexibility   | Less; hardware is dedicated to one organization. | High; public is a mass approach.
Security      | Can meet compliance at almost every level.       | Might not meet enterprise compliance standards.



Community clouds are used by distinct groups or communities of organizations (universities, charity clubs, sports clubs) that have shared concerns, such as compliance or security considerations; the computing infrastructure may be provided by internal or third-party suppliers. The communities benefit from public-cloud capabilities, but they also know who their neighbors are, so they have fewer fears about security and data protection.


Finally, the fourth deployment model is the hybrid cloud, the combination of public and private clouds. An organization can deploy a public cloud for its general computing and a private cloud for the data it considers sensitive, e.g. customer details, policies, confidential reports and so on. Figure 2 below neatly expresses this.

Figure 2

From the definitions, we know the advantages of cloud computing; but on the other hand, the cloud has many challenges to overcome for future adoption.


           Continued in Part 4
           http://imthefortune7.blogspot.com/2012/04/my-article-to-fossuser-ubuntu-on-cloud_07.html