
Tuesday 22 May 2012

Router




A router is a device that forwards data packets between computer networks, creating an overlay internetwork. A router is connected to two or more data lines from different networks. When a data packet comes in on one of the lines, the router reads the address information in the packet to determine its ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. Routers perform the "traffic directing" functions on the Internet. A data packet is typically forwarded from one router to another through the networks that constitute the internetwork until it gets to its destination node.

The most familiar type of router is the home and small office router, which simply passes data, such as web pages and email, between the home computers and the owner's cable or DSL modem, which connects to the Internet through an ISP. More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone.
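The routing-table lookup described above, picking the most specific prefix that matches a destination, can be sketched in a few lines of Python. The prefixes, interface names, and addresses below are invented for illustration:

```python
# A minimal sketch of longest-prefix-match forwarding, the core of what a
# router does with its routing table. Prefixes and next hops are made up.
import ipaddress

ROUTING_TABLE = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth1"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth2"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth0"),   # default route
]

def next_hop(dst: str) -> str:
    """Pick the most specific (longest) prefix that contains dst."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, iface) for net, iface in ROUTING_TABLE if addr in net]
    net, iface = max(matches, key=lambda m: m[0].prefixlen)
    return iface

print(next_hop("10.1.2.3"))   # matches both 10/8 and 10.1/16 -> eth2
print(next_hop("8.8.8.8"))    # only the default route matches -> eth0
```

A real router does this lookup in hardware for every packet; the principle is the same.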

Hub

A hub is used in a wired network to connect Ethernet cables from a number of devices together. The hub allows each device to talk to the others. Hubs aren't used in networks with only wireless connections, since network devices such as routers and adapters communicate directly with one another.

Hubs are such simple devices — they require no configuration, and have no manuals — that their function is now included in other devices such as routers and modems.  NETGEAR no longer sells stand-alone hubs.  If you require a stand-alone appliance, use a switch instead.  Switches provide better performance and features than hubs.
A hub is a rectangular box that serves as the central device to which computers and other devices are connected. To make this possible, a hub is equipped with small holes called ports. Here is an example of a hub:

Although this appears with 4 ports, depending on its type, a hub can be equipped with 4, 5, 12, or more ports. Here is an example of a hub with 8 ports:

When configuring it, you connect an RJ-45 cable from the network card of a computer to one port of the hub.
In most cases for a home-based or a small business network, you may not need a hub.

Applets and Servlets


Applets
  • (Usually) small programs that execute inside a browser.
  • Do much, much more than JavaScript
  • Harness the full power of Java: objects etc.
  • "Java that runs with security restrictions"
  • Were among the first examples of client-side computing
  • Support for browser-enabled Java provided by a plug-in (in some cases)
Servlets
  • Server-side Java
  • Motivation is Java, Java everywhere.
  • Need only one type of programmer for client, server and application.
  • Avoid spawning a new process per request, as classic CGI does (mod_perl also avoids this)
  • Can use the coherent, sophisticated security model provided by J2EE
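Servlets themselves are Java, but the core idea in the bullets above (one long-lived server process dispatching each request to handler code, rather than spawning a process per request as classic CGI did) can be sketched language-neutrally. Here is a minimal stand-in using Python's stdlib HTTP server; the handler class and the page body are invented for the sketch:

```python
# Conceptual sketch (in Python, not Java) of what a servlet container does:
# one long-lived process dispatches each incoming request to handler code.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Analogous to a servlet's doGet(): build the response in-process.
        body = b"<html><body>Hello from a servlet-like handler</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("", 8080), HelloHandler).serve_forever()
```

The container (here, HTTPServer) stays resident; only the handler method runs per request, which is exactly the cost CGI's process-per-request model pays and servlets avoid.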

ASP Vs JSP


ASP
        ASP stands for Active Server Pages
        An ASP file is just the same as an HTML file
        An ASP file can contain text, HTML, XML, and scripts
        Scripts in an ASP file are executed on the server
        An ASP file has the file extension *.asp
        ASP is a programming environment that provides the ability to generate dynamic HTML pages with the help of server-side scripting.
        VBScript is the default scripting language for ASP
How does it work?
        When a browser requests an HTML file, the server returns the file
        When a browser requests an ASP file, IIS/PWS passes the request to the ASP engine
        The ASP engine reads the ASP file, line by line, and executes the scripts in the file
        Finally, the ASP file is returned to the browser as plain HTML.
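The request flow above can be illustrated with a toy "ASP engine", sketched here in Python rather than VBScript. The only statement this sketch understands is an invented `write "text"`; a real engine executes full VBScript or JScript inside the `<% %>` blocks:

```python
# Toy 'ASP engine': execute each <% ... %> block and splice its output into
# the surrounding HTML, then return the result as plain HTML.
import re

def render_asp_like(page: str) -> str:
    def run_block(match: re.Match) -> str:
        out = []
        for line in match.group(1).splitlines():
            line = line.strip()
            if not line or line.startswith("'"):   # VBScript-style comment
                continue
            m = re.fullmatch(r'write\s+"(.*)"', line)  # invented mini-language
            if m:
                out.append(m.group(1))
        return "".join(out)
    return re.sub(r"<%(.*?)%>", run_block, page, flags=re.DOTALL)

page = """<html><body><%
' This prints Hello, ASP World.
write "Hello, ASP World!"
%></body></html>"""
print(render_asp_like(page))  # <html><body>Hello, ASP World!</body></html>
```

The browser only ever sees the final string, which is exactly the "returned as plain HTML" step.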

Example:

<HTML>
<HEAD><TITLE>Hello World</TITLE></HEAD>
<BODY>
<%
' This will print to the browser the
' words Hello, ASP World.
response.write "Hello, ASP World!"
%>
</BODY>
</HTML>



JSP

JavaServer Pages, or JSP for short, is Sun's solution for developing dynamic web sites. JSP provides excellent server-side scripting support for creating database-driven web applications.
JavaServer Pages (JSP) technology is the Java platform technology for delivering dynamic content to web clients in a portable, secure and well-defined way.
JSP pages are a high-level extension of servlets that enables developers to embed Java code in HTML pages. JSP files are compiled into servlets by the JSP engine, and the compiled servlet is then used by the engine to serve requests.
In this section we explain JSP action tags, and in the next section we explain the uses of these tags with examples.

What are ‘cookies’?


Cookies are text files that a Web server can store on a user's hard disk. Cookies allow a Web site to store information (sites visited or credentials for accessing the site) on a user's machine and later retrieve it. The pieces of information are stored as name-value pairs. Cookies are designed to be readable only by the Web site that created them. A name-value pair is simply a named piece of data. It is not a program, and it cannot "do" anything. A Web site can retrieve only the information that it has placed on your machine. It cannot retrieve information from other cookie files or any other information from your machine.
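Since a cookie is nothing but named pieces of data, Python's stdlib http.cookies module can demonstrate both directions: what a server sends, and how it parses what the browser later sends back. The cookie names and values below are made up:

```python
# Cookies are name-value pairs carried in HTTP headers; nothing more.
from http.cookies import SimpleCookie

# Server side: set a cookie on the response.
c = SimpleCookie()
c["last_visit"] = "2012-05-22"
c["last_visit"]["path"] = "/"
print(c.output())  # a Set-Cookie header line

# Server side, later: parse the Cookie header the browser sends back.
incoming = SimpleCookie("last_visit=2012-05-22; theme=dark")
print(incoming["theme"].value)  # dark
```

Note the code can only read cookies handed to it in the request; exactly as described above, a site never sees another site's cookies.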

Monday 21 May 2012

Differences IPv4 Vs IPv6


I compiled these differences between IPv4 and IPv6 a while back. Though it is for my personal reference, I am uploading it to my blog. Hope someone might find this useful.


IPv4: Addresses are 32 bits (4 bytes) in length.
IPv6: Addresses are 128 bits (16 bytes) in length.

IPv4: Address (A) resource records in DNS map host names to IPv4 addresses.
IPv6: Address (AAAA) resource records in DNS map host names to IPv6 addresses.

IPv4: Pointer (PTR) resource records in the IN-ADDR.ARPA DNS domain map IPv4 addresses to host names.
IPv6: Pointer (PTR) resource records in the IP6.ARPA DNS domain map IPv6 addresses to host names.

IPv4: IPSec is optional and must be supported externally.
IPv6: IPSec support is not optional.

IPv4: Header does not identify packet flow for QoS handling by routers.
IPv6: Header contains a Flow Label field, which identifies the packet flow for QoS handling by routers.

IPv4: Both routers and the sending host fragment packets.
IPv6: Routers do not fragment packets; only the sending host does.

IPv4: Header includes a checksum.
IPv6: Header does not include a checksum.

IPv4: Header includes options.
IPv6: Optional data is supported as extension headers.

IPv4: ARP uses broadcast ARP Request frames to resolve an IP address to a MAC/hardware address.
IPv6: Multicast Neighbor Solicitation messages resolve IP addresses to MAC addresses.

IPv4: Internet Group Management Protocol (IGMP) manages membership in local subnet groups.
IPv6: Multicast Listener Discovery (MLD) messages manage membership in local subnet groups.

IPv4: Broadcast addresses are used to send traffic to all nodes on a subnet.
IPv6: A link-local scope all-nodes multicast address is used instead.

IPv4: Configured either manually or through DHCP.
IPv6: Does not require manual configuration or DHCP.

IPv4: Must support a 576-byte packet size (possibly fragmented).
IPv6: Must support a 1280-byte packet size (without fragmentation).
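A few of the rows above, the address lengths and the reverse-DNS (PTR) domains, can be demonstrated with Python's stdlib ipaddress module. The addresses used are standard documentation examples:

```python
# Demonstrating two rows of the comparison with the stdlib ipaddress module.
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")
v6 = ipaddress.ip_address("2001:db8::1")

print(v4.max_prefixlen)    # 32  -> IPv4 addresses are 32 bits
print(v6.max_prefixlen)    # 128 -> IPv6 addresses are 128 bits

# Reverse-DNS names match the PTR rows: IN-ADDR.ARPA vs IP6.ARPA.
print(v4.reverse_pointer)  # 1.2.0.192.in-addr.arpa
print(v6.reverse_pointer)  # ...ends in ip6.arpa, one hex nibble per label
```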


Private Browsing By Securing Tracks On Your Disk


We all know different techniques for browsing the web securely.
However, people are often unaware that the data displayed in their web browser is written to their computer's disk/memory before it appears on screen. There are software tools that can erase traces of browsing history such as URLs, cache, etc. (I won't go into the details of which files are erased for each browser.)
So today I will show you one of the techniques I use to keep browsing data/history from being traced, or to secure it on my computer. This is in addition to the privacy tools already available on the web. It is in no way complete privacy protection, and I am not a computer forensics expert. :)
To demonstrate this technique, we will use two well-known programs: the Mozilla Firefox web browser and TrueCrypt, a free, open source disk encryption tool.
We start by creating an encrypted disk using TrueCrypt. Launch TrueCrypt.exe.
Once the encrypted volume (around 300 MB) is created, let's mount it as "Z:\". If you are on Linux/UNIX, you might use a mount point such as "/mnt/secure_browsing" instead. While mounting the TrueCrypt volume, you will be asked for the password you chose when creating the encrypted volume (or container).
The next step is to create a Mozilla Firefox profile (let's call it the Secure_Browsing profile) on the mounted TrueCrypt volume. To create this profile, launch Firefox using the "firefox -ProfileManager" command.


Once the Secure_Browsing profile is created, launch Mozilla Firefox using that profile. All data from browsing the web with Firefox, including history and cache files, will now be written or logged to the "Z:\Secure_Browsing" folder. Make sure that once Firefox is closed, the mounted TrueCrypt volume or container is unmounted.
Because the volume is encrypted, a password is required to access the browser profile's data (including old data); this password protection is provided by TrueCrypt for the encrypted volume.
And since both Mozilla Firefox and TrueCrypt are available on Windows and UNIX environments, this can be a cross-platform solution.
So the next time you launch Mozilla Firefox, will you use the secure profile or the default one?

Sunday 20 May 2012

OSI Model Vs TCP/IP Model


The TCP/IP model and the OSI model work in a very similar fashion, but they do have subtle differences, and knowing these differences is crucial to learning computer networking. This article compares the TCP/IP model and the OSI model.





Background

The OSI reference model came into existence before the TCP/IP model was formalized. The International Organization for Standardization (ISO) created the OSI reference model so that the similarly working components of a network could be logically grouped into protocol layers. But with the advent of the Internet there arose the need for a streamlined protocol suite that would address the needs of the ever-growing network, so the Defense Advanced Research Projects Agency (DARPA) developed the TCP/IP protocol suite. It addressed many, if not all, of the issues that had arisen with the OSI reference model.



TCP/IP Model Layers

TCP/IP is a suite of protocols named after its two most significant protocols: the Transmission Control Protocol and the Internet Protocol. TCP/IP is made up of layers. Each layer is responsible for a set of network-related tasks and provides service to the layer above it. In all, there are four layers in the TCP/IP reference model.
  • Application Layer: The topmost layer of the TCP/IP suite, responsible for application protocols and for encoding the data carried in packets.
  • Transport Layer: Provides end-to-end delivery of packets and offers its service to the application layer.
  • Internet Layer: Responsible for routing packets across different networks.
  • Link Layer: The layer closest to the network hardware; it provides service to the Internet layer.
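The layering shows up in ordinary socket code: the application writes application-layer bytes (here, a minimal HTTP request) into a TCP socket provided by the transport layer, while the Internet and link layers are handled entirely by the operating system. A sketch, with host and port as parameters:

```python
# Application-layer code over a transport-layer (TCP) socket; the layers
# below are invisible to the program.
import socket

def http_get(host: str, path: str = "/", port: int = 80) -> bytes:
    # Transport layer: the OS hands us a reliable TCP byte stream.
    with socket.create_connection((host, port), timeout=5) as s:
        # Application layer: we speak our own protocol (HTTP) on top of it.
        s.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks)

# Example: http_get("example.com") returns the raw HTTP response bytes.
```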
OSI Model Layers
In the OSI reference model there are seven layers of protocols. Again, each layer provides services to the layer above it. The seven layers are:
  • Physical Layer: The lowest layer of the OSI reference model, responsible for the model's direct interaction with the hardware. The hardware provides service to the physical layer, and the physical layer provides service to the data link layer.
  • Data Link Layer: Errors may occur at the physical layer; where possible, they are detected and corrected by the data link layer. It also provides the means by which entities on the network transfer data to one another.
  • Network Layer: Responsible for routing data from source to destination, while preserving the quality of service requested by the transport layer.
  • Transport Layer: Ensures the reliability of the data, retransmitting data that fails to reach the destination.
  • Session Layer: Responsible for creating, managing and terminating connections (sessions).
  • Presentation Layer: Responsible for the syntax and semantics (the context) of the data exchanged between higher-level entities.
  • Application Layer: The layer closest to the user; software applications that implement socket programming communicate with this layer.
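The way each layer wraps the data handed down by the layer above it (encapsulation) can be sketched with toy headers. The labels below are invented and are not real protocol formats:

```python
# Toy illustration of encapsulation down the stack: each layer prepends its
# own header on the way out, and peels it off in reverse on the way in.
def encapsulate(payload: str) -> str:
    frame = payload
    for layer in ["TCP", "IP", "Ethernet"]:   # transport, network, data link
        frame = f"[{layer} header]{frame}"
    return frame

def decapsulate(frame: str) -> str:
    for layer in ["Ethernet", "IP", "TCP"]:   # removed in reverse order
        frame = frame.removeprefix(f"[{layer} header]")
    return frame

wire = encapsulate("GET / HTTP/1.0")
print(wire)               # [Ethernet header][IP header][TCP header]GET / HTTP/1.0
print(decapsulate(wire))  # GET / HTTP/1.0
```

The receiving stack reverses the process, which is why each layer only ever talks to its peer layer on the other machine.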
TCP/IP Model vs OSI Model

Sr. No. | TCP/IP Reference Model | OSI Reference Model
1 | Defined after the advent of the Internet. | Defined before the advent of the Internet.
2 | Service interfaces and protocols were not clearly distinguished. | Service interfaces and protocols are clearly distinguished.
3 | Supports internetworking. | Internetworking is not supported.
4 | Loosely layered. | Strictly layered.
5 | Protocol-dependent standard. | Protocol-independent standard.
6 | Considered more credible. | Considered less credible.
7 | TCP reliably delivers packets; IP does not. | All packets are reliably delivered.

What is CAPTCHA and How it Works?


A CAPTCHA is a program that can generate and grade tests that humans can pass but current computer programs cannot. For example, humans can read distorted text as the one shown below, but current computer programs can't:
The term CAPTCHA (for Completely Automated Public Turing Test To Tell Computers and Humans Apart) was coined in 2000 by Luis von Ahn, Manuel Blum, Nicholas Hopper and John Langford of Carnegie Mellon University. At the time, they developed the first CAPTCHA to be used by Yahoo.


CAPTCHA or Captcha (pronounced cap-ch-uh), which stands for "Completely Automated Public Turing test to tell Computers and Humans Apart", is a type of challenge-response test used to ensure that a response is generated by a human and not by a computer. In simple words, a CAPTCHA is the word-verification test you come across at the end of a sign-up form when signing up for a Gmail or Yahoo account. The following image shows typical samples of CAPTCHA.
Almost every Internet user has come across CAPTCHAs in daily Internet usage, but only a few are aware of what they are and why they are used. So in this post you will find detailed information on how CAPTCHA works and why it is used.

What Purpose does CAPTCHA Exactly Serve?
CAPTCHA is mainly used to prevent automated software (bots) from performing actions on behalf of actual humans. For example, while signing up for a new email account, you will come across a CAPTCHA at the end of the sign-up form, ensuring that the form is filled out only by a legitimate human and not by automated software or a computer bot. The main goal of CAPTCHA is to pose a test that is simple and straightforward for any human to answer, but almost impossible for a computer to solve.


What is the Need to Create a Test that Can Tell Computers and Humans Apart?

To many, CAPTCHAs may seem silly and annoying, but in fact they can protect systems from malicious attacks by people trying to game the system. Attackers can use automated software to generate a huge number of requests, causing high load on the target server and degrading the system's quality of service, whether through abuse or resource expenditure. This can affect millions of legitimate users and their requests. CAPTCHAs can be deployed to protect systems that are vulnerable to email spam, such as the services of Gmail, Yahoo and Hotmail.

Who Uses CAPTCHA?

CAPTCHAs are mainly used by websites that offer services like online polls and registration forms. For example, Web-based email services like Gmail, Yahoo and Hotmail offer free email accounts to their users. However, during each sign-up, CAPTCHAs are used to prevent spammers from using a bot to generate hundreds of spam email accounts.


Designing a CAPTCHA System.

CAPTCHAs are designed around the fact that computers lack the ability humans have when it comes to processing visual data. It is far easier for a human to look at an image and pick out the patterns than it is for a computer, because computers lack the real intelligence that humans have by default. CAPTCHAs are implemented by presenting users with an image containing distorted or randomly stretched characters which only humans should be able to identify. Sometimes the characters are struck through or set against a noisy background to make it even harder for computers to figure out the patterns.


Most, but not all, CAPTCHAs rely on a visual test, though some websites implement a totally different CAPTCHA system to tell humans and computers apart. For example, a user is presented with four images, three containing pictures of animals and one containing a flower, and is asked to select only the images that contain animals. This Turing test is easy for any human to solve, but almost impossible for a computer.

Breaking the CAPTCHA

The challenge in breaking a CAPTCHA lies in the genuinely hard task of teaching a computer to process information the way humans do. Algorithms with artificial intelligence (AI) have to be designed to make the computer recognize patterns in images the way humans do. However, there is no universal algorithm that can break any CAPTCHA system, so each CAPTCHA algorithm must be tackled individually. An attack might not work 100 percent of the time, but it can work often enough to be worthwhile to spammers.


Where Can I Get a CAPTCHA For My Site?
A free, secure CAPTCHA implementation is available from the reCAPTCHA project.

Applications of CAPTCHAs
CAPTCHAs have several applications for practical security, including (but not limited to):

Preventing Comment Spam in Blogs. Most bloggers are familiar with programs that submit bogus comments, usually for the purpose of raising search engine ranks of some website (e.g., "buy penny stocks here"). This is called comment spam. By using a CAPTCHA, only humans can enter comments on a blog. There is no need to make users sign up before they enter a comment, and no legitimate comments are ever lost!

Protecting Website Registration. Several companies (Yahoo!, Microsoft, etc.) offer free email services. Up until a few years ago, most of these services suffered from a specific type of attack: "bots" that would sign up for thousands of email accounts every minute. The solution to this problem was to use CAPTCHAs to ensure that only humans obtain free accounts. In general, free services should be protected with a CAPTCHA in order to prevent abuse by automated programs.
Online Polls. In November 1999, http://www.slashdot.org released an online poll asking which was the best graduate school in computer science (a dangerous question to ask over the web!). As is the case with most online polls, IP addresses of voters were recorded in order to prevent single users from voting more than once. However, students at Carnegie Mellon found a way to stuff the ballots using programs that voted for CMU thousands of times. CMU's score started growing rapidly. The next day, students at MIT wrote their own program and the poll became a contest between voting "bots." MIT finished with 21,156 votes, Carnegie Mellon with 21,032 and every other school with less than 1,000. Can the result of any online poll be trusted? Not unless the poll ensures that only humans can vote.

Preventing Dictionary Attacks. CAPTCHAs can also be used to prevent dictionary attacks in password systems. The idea is simple: prevent a computer from being able to iterate through the entire space of passwords by requiring it to solve a CAPTCHA after a certain number of unsuccessful logins.
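The idea above can be sketched in a few lines; the threshold and the in-memory failure counter below are invented for illustration, and a real system would persist the counts and add expiry:

```python
# Sketch of CAPTCHA-based login throttling: after a few failed attempts on
# an account, require a CAPTCHA before accepting further attempts.
CAPTCHA_THRESHOLD = 3
failed_attempts: dict[str, int] = {}

def needs_captcha(username: str) -> bool:
    return failed_attempts.get(username, 0) >= CAPTCHA_THRESHOLD

def record_login(username: str, success: bool) -> None:
    if success:
        failed_attempts.pop(username, None)   # reset the counter on success
    else:
        failed_attempts[username] = failed_attempts.get(username, 0) + 1

for _ in range(3):
    record_login("alice", success=False)
print(needs_captcha("alice"))  # True: a bot iterating passwords is now stopped
print(needs_captcha("bob"))    # False: unaffected accounts see no CAPTCHA
```

A password-guessing bot cannot iterate through the password space, because after the threshold every attempt costs it a CAPTCHA it cannot solve.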

Search Engine Bots. It is sometimes desirable to keep webpages unindexed to prevent others from finding them easily. There is an html tag to prevent search engine bots from reading web pages. The tag, however, doesn't guarantee that bots won't read a web page; it only serves to say "no bots, please." Search engine bots, since they usually belong to large companies, respect web pages that don't want to allow them in. However, in order to truly guarantee that bots won't enter a web site, CAPTCHAs are needed.

Worms and Spam. CAPTCHAs also offer a plausible solution against email worms and spam: "I will only accept an email if I know there is a human behind the other computer." A few companies are already marketing this idea.

Guidelines
If your website needs protection from abuse, it is recommended that you use a CAPTCHA. There are many CAPTCHA implementations, some better than others. The following guidelines are strongly recommended for any CAPTCHA:

Accessibility. CAPTCHAs must be accessible. CAPTCHAs based solely on reading text — or other visual-perception tasks — prevent visually impaired users from accessing the protected resource. Such CAPTCHAs may make a site incompatible with Section 508 in the United States. Any implementation of a CAPTCHA should allow blind users to get around the barrier, for example, by permitting users to opt for an audio CAPTCHA.

Image Security. Images of text should be distorted randomly before being presented to the user. Many implementations of CAPTCHAs use undistorted text, or text with only minor distortions. These implementations are vulnerable to simple automated attacks. For example, the CAPTCHAs shown below can all be broken using image processing techniques, mainly because they use a consistent font.

Script Security. Building a secure CAPTCHA is not easy. In addition to making the images unreadable by computers, the system should ensure that there are no easy ways around it at the script level. Common examples of insecurities in this respect include: (1) Systems that pass the answer to the CAPTCHA in plain text as part of the web form. (2) Systems where a solution to the same CAPTCHA can be used multiple times (this makes the CAPTCHA vulnerable to so-called "replay attacks").
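Point (2) above, replay protection, can be sketched by tying each challenge to a random one-time token that the server consumes on first use. The in-memory store below is illustrative; a real deployment would use a shared store with expiry:

```python
# Sketch of replay protection: each CAPTCHA answer is bound to a one-time
# token, invalidated on first use, so a solution cannot be submitted twice.
import secrets

pending: dict[str, str] = {}   # token -> expected answer

def issue_challenge(answer: str) -> str:
    token = secrets.token_hex(16)
    pending[token] = answer
    return token

def verify(token: str, answer: str) -> bool:
    expected = pending.pop(token, None)   # consumed even on a wrong answer
    return expected is not None and secrets.compare_digest(expected, answer)

tok = issue_challenge("x7kf2")
print(verify(tok, "x7kf2"))  # True on first use
print(verify(tok, "x7kf2"))  # False on replay: the token was consumed
```

Note the expected answer never travels to the client, which also addresses point (1), passing the answer in plain text as part of the form.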

Security Even After Wide-Spread Adoption. There are various "CAPTCHAs" that would be insecure if a significant number of sites start using them. An example of such a puzzle is asking text-based questions, such as a mathematical question ("what is 1+1"). Since a parser could easily be written that would allow bots to bypass this test, such "CAPTCHAs" rely on the fact that few sites use them, and thus that a bot author has no incentive to program their bot to solve that challenge. True CAPTCHAs should be secure even after a significant number of websites adopt them.

How Cloud Computing Works





Let's say you're an executive at a large corporation. Your particular responsibilities include making sure that all of your employees have the right hardware and software they need to do their jobs. Buying computers for everyone isn't enough -- you also have to purchase software or software licenses to give employees the tools they require. Whenever you have a new hire, you have to buy more software or make sure your current software license allows another user. It's so stressful that you find it difficult to go to sleep on your huge pile of money every night.
Soon, there may be an alternative for executives like you. Instead of installing a suite of software for each computer, you'd only have to load one application. That application would allow workers to log into a Web-based service which hosts all the programs the user would need for his or her job. Remote machines owned by another company would run everything from e-mail to word processing to complex data analysis programs. It's called cloud computing, and it could change the entire computer industry.
In a cloud computing system, there's a significant workload shift. Local computers no longer have to do all the heavy lifting when it comes to running applications. The network of computers that make up the cloud handles them instead. Hardware and software demands on the user's side decrease. The only thing the user's computer needs to be able to run is the cloud computing system's interface software, which can be as simple as a Web browser, and the cloud's network takes care of the rest.
There's a good chance you've already used some form of cloud computing. If you have an e-mail account with a Web-based e-mail service like Hotmail, Yahoo! Mail or Gmail, then you've had some experience with cloud computing. Instead of running an e-mail program on your computer, you log in to a Web e-mail account remotely. The software and storage for your account doesn't exist on your computer -- it's on the service's computer cloud.

Cloud Computing Architecture
When talking about a cloud computing system, it's helpful to divide it into two sections: the front end and the back end. They connect to each other through a network, usually the Internet. The front end is the side the computer user, or client, sees. The back end is the "cloud" section of the system.
The front end includes the client's computer (or computer network) and the application required to access the cloud computing system. Not all cloud computing systems have the same user interface. Services like Web-based e-mail programs leverage existing Web browsers like Internet Explorer or Firefox. Other systems have unique applications that provide network access to clients.
On the back end of the system are the various computers, servers and data storage systems that create the "cloud" of computing services. In theory, a cloud computing system could include practically any computer program you can imagine, from data processing to video games. Usually, each application will have its own dedicated server.
A central server administers the system, monitoring traffic and client demands to ensure everything runs smoothly. It follows a set of rules called protocols and uses a special kind of software called middleware. Middleware allows networked computers to communicate with each other. Most of the time, servers don't run at full capacity. That means there's unused processing power going to waste. It's possible to fool a physical server into thinking it's actually multiple servers, each running with its own independent operating system. The technique is called server virtualization. By maximizing the output of individual servers, server virtualization reduces the need for more physical machines.
If a cloud computing company has a lot of clients, there's likely to be a high demand for a lot of storage space. Some companies require hundreds of digital storage devices. Cloud computing systems need at least twice the number of storage devices they would otherwise require to keep all their clients' information stored. That's because these devices, like all computers, occasionally break down. A cloud computing system must make a copy of all its clients' information and store it on other devices. The copies enable the central server to access backup machines to retrieve data that otherwise would be unreachable. Making copies of data as a backup is called redundancy.

Cloud Computing Applications
The applications of cloud computing are practically limitless. With the right middleware, a cloud computing system could execute all the programs a normal computer could run. Potentially, everything from generic word processing software to customized computer programs designed for a specific company could work on a cloud computing system.
Why would anyone want to rely on another computer system to run programs and store data? Here are just a few reasons:
Clients would be able to access their applications and data from anywhere at any time. They could access the cloud computing system using any computer linked to the Internet. Data wouldn't be confined to a hard drive on one user's computer or even a corporation's internal network.
It could bring hardware costs down. Cloud computing systems would reduce the need for advanced hardware on the client side. You wouldn't need to buy the fastest computer with the most memory, because the cloud system would take care of those needs for you. Instead, you could buy an inexpensive computer terminal. The terminal could include a monitor, input devices like a keyboard and mouse and just enough processing power to run the middleware necessary to connect to the cloud system. You wouldn't need a large hard drive because you'd store all your information on a remote computer.
Corporations that rely on computers have to make sure they have the right software in place to achieve goals. Cloud computing systems give these organizations company-wide access to computer applications. The companies don't have to buy a set of software or software licenses for every employee. Instead, the company could pay a metered fee to a cloud computing company.
Servers and digital storage devices take up space. Some companies rent physical space to store servers and databases because they don't have it available on site. Cloud computing gives these companies the option of storing data on someone else's hardware, removing the need for physical space on the front end.
Corporations might save money on IT support. Streamlined hardware would, in theory, have fewer problems than a network of heterogeneous machines and operating systems.
If the cloud computing system's back end is a grid computing system, then the client could take advantage of the entire network's processing power. Often, scientists and researchers work with calculations so complex that it would take years for individual computers to complete them. On a grid computing system, the client could send the calculation to the cloud for processing. The cloud system would tap into the processing power of all available computers on the back end, significantly speeding up the calculation.

Cloud Computing Concerns
Perhaps the biggest concerns about cloud computing are security and privacy. The idea of handing over important data to another company worries some people. Corporate executives might hesitate to take advantage of a cloud computing system because they can't keep their company's information under lock and key.
The counterargument to this position is that the companies offering cloud computing services live and die by their reputations. It benefits these companies to have reliable security measures in place. Otherwise, the service would lose all its clients. It's in their interest to employ the most advanced techniques to protect their clients' data.
Privacy is another matter. If a client can log in from any location to access data and applications, it's possible the client's privacy could be compromised. Cloud computing companies will need to find ways to protect client privacy. One way is to use authentication techniques such as user names and passwords. Another is to employ an authorization format -- each user can access only the data and applications relevant to his or her job.
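The two mechanisms just mentioned can be sketched together. Everything below is a made-up illustration: the user, password, and application names are hypothetical, and a real provider would use salted password hashing and a proper directory service rather than in-memory dictionaries.

```python
import hashlib

# Authentication data: username -> SHA-256 digest of the password.
# (Storing a hash rather than the plaintext password; a real system
# would also salt it.)
USERS = {
    "alice": hashlib.sha256(b"s3cret").hexdigest(),
}

# Authorization data: each user can reach only the applications
# relevant to his or her job.
GRANTS = {
    "alice": {"crm", "email"},
}

def authenticate(user, password):
    """Check the supplied credentials against the stored digest."""
    digest = hashlib.sha256(password.encode()).hexdigest()
    return USERS.get(user) == digest

def authorize(user, app):
    """Check whether this user is permitted to use this application."""
    return app in GRANTS.get(user, set())
```

Authentication answers "who are you?"; authorization answers "what are you allowed to touch?" - a cloud provider needs both before handing out data.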
Some questions regarding cloud computing are more philosophical. Does the user or company subscribing to the cloud computing service own the data? Does the cloud computing system, which provides the actual storage space, own it? Is it possible for a cloud computing company to deny a client access to that client's data? Several companies, law firms and universities are debating these and other questions about the nature of cloud computing.
How will cloud computing affect other industries? There's a growing concern in the IT industry about how cloud computing could impact the business of computer maintenance and repair. If companies switch to using streamlined computer systems, they'll have fewer IT needs. Some industry experts believe that the need for IT jobs will migrate to the back end of the cloud computing system.
Another area of research in the computer science community is autonomic computing. An autonomic computing system is self-managing, which means the system monitors itself and takes measures to prevent or repair problems. Currently, autonomic computing is mostly theoretical. But, if autonomic computing becomes a reality, it could eliminate the need for many IT maintenance jobs.
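The monitor-and-repair idea behind autonomic computing can be shown in miniature. This is a toy sketch under obvious simplifying assumptions: the "service" is just a dictionary, and a restart is assumed to fix the fault, which is exactly the kind of self-management that remains largely theoretical.

```python
def autonomic_step(service):
    """One pass of a self-managing control loop: probe the service's
    health, and if it is down, take the repair action (a restart)
    without waiting for a human operator."""
    if not service["healthy"]:
        service["restarts"] += 1   # the automatic "repair" action
        service["healthy"] = True  # assume the restart cured it
    return service

# Simulated service that has just failed; the loop heals it.
svc = {"healthy": False, "restarts": 0}
autonomic_step(svc)
```

A real autonomic system would run this loop continuously and choose among many repair actions, but the monitor-analyze-act cycle is the core of it.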

Cloud Computing

Cloud computing is a general term for anything that involves delivering hosted services over the Internet. These services are broadly divided into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). The name cloud computing was inspired by the cloud symbol that's often used to represent the Internet in flowcharts and diagrams.
A cloud service has three distinct characteristics that differentiate it from traditional hosting. It is sold on demand, typically by the minute or the hour; it is elastic -- a user can have as much or as little of a service as they want at any given time; and the service is fully managed by the provider (the consumer needs nothing but a personal computer and Internet access). Significant innovations in virtualization and distributed computing, as well as improved access to high-speed Internet and a weak economy, have accelerated interest in cloud computing.
A cloud can be private or public. A public cloud sells services to anyone on the Internet. (Currently, Amazon Web Services is the largest public cloud provider.) A private cloud is a proprietary network or a data center that supplies hosted services to a limited number of people. When a service provider uses public cloud resources to create their private cloud, the result is called a virtual private cloud. Private or public, the goal of cloud computing is to provide easy, scalable access to computing resources and IT services.
Infrastructure-as-a-Service providers such as Amazon Web Services supply virtual server instances and storage on demand; customers use the provider's application programming interface (API) to start, stop, access and configure their virtual servers and storage. In the enterprise, cloud computing allows a company to pay for only as much capacity as is needed, and bring more online as soon as it is required. Because this pay-for-what-you-use model resembles the way electricity, fuel and water are consumed, it's sometimes referred to as utility computing.
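The utility-computing billing model works like a power meter, and the arithmetic is simple to sketch. The instance types and hourly rates below are invented for illustration; real providers publish their own price lists.

```python
# Hypothetical price list: dollars per instance-hour.
RATE_PER_HOUR = {"small": 0.02, "large": 0.16}

def utility_bill(usage):
    """usage: list of (instance_type, hours_run) tuples.
    The customer is billed only for the capacity actually
    consumed, like an electricity or water meter."""
    return round(sum(RATE_PER_HOUR[t] * h for t, h in usage), 2)

# 100 small-instance hours plus 24 large-instance hours.
utility_bill([("small", 100), ("large", 24)])  # → 5.84
```

Scaling up is just more rows in the usage list; there is no up-front purchase of capacity the customer might never use.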
Platform-as-a-Service in the cloud is defined as a set of software and product development tools hosted on the provider's infrastructure. Developers create applications on the provider's platform over the Internet. PaaS providers may use APIs, website portals or gateway software installed on the customer's computer. Force.com (an outgrowth of Salesforce.com) and Google Apps are examples of PaaS. Developers should be aware that there are currently no standards for interoperability or data portability in the cloud; some providers will not allow software created by their customers to be moved off the provider's platform.
In the software-as-a-service cloud model, the vendor supplies the hardware infrastructure and the software product, and interacts with the user through a front-end portal. SaaS is a very broad market. Services can be anything from Web-based email to inventory control and database processing. Because the service provider hosts both the application and the data, the end user is free to use the service from anywhere.
