Thursday, October 8, 2009

Definition of Web 2.0 and social software

There is no single agreed definition of the terms Web 2.0 – also known as the Social Web – and
social software, but there is widespread agreement that they apply to a set of characteristics in the context of the internet and applications served over it.5 The characteristics include access and use through a web browser such as, for example, Internet Explorer or Firefox; being both supportive and encouraging of user participation in the sharing, consumption and generation of content, including through remixing and repurposing; and being amenable to developments in functionality consistent with user demand – users can and do, in effect, contribute to service and software design.
At its simplest, social software has been defined as ‘software that supports group interaction’.6
Elaborations include ‘software that allows people to interact and collaborate online or that
aggregates the actions of networked users’;7 ‘a set of internet services and practices that give voice to individual users’;8 and, in the specific context of learning, ‘networked tools that support and encourage individuals to learn together whilst retaining control over their time, space, presence, activity, identity and relationship.’
The most familiar and widely recognised types of Web 2.0 activity include the following:-

Blogging
An internet-based journal or diary in which a user can post text and digital material while others can comment, eg blogger; technorati; twitter.


Conversing
One-to-one or one-to-many conversation between internet users, eg MSN

Media sharing
Uploading or downloading media files for purposes of audience or exchange, eg flickr; YouTube.

Online gaming and virtual worlds
Rule-governed games or themed environments that invite live interaction with other internet users, eg secondlife; worldofwarcraft.


Social bookmarking
Users submit their bookmarked web pages to a central site where they can be found and tagged by other users, eg del.icio.us.


Social networking
Websites that structure social interaction between members who may form sub-groups of ‘friends’, eg myspace; bebo; facebook.


Syndication
Users can subscribe to RSS (Really Simple Syndication) feed-enabled websites so that they are
automatically notified of any changes or updates in content via an aggregator, eg bloglines; Podcast. (A minimal sketch of how an aggregator reads a feed appears after this list.)


Trading
Buying, selling or exchanging through user transactions mediated by internet communications, eg craigslist; e-bay.


Wikis
A web-based service allowing users unrestricted access to create, edit and link pages, eg wikipedia
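To make the Syndication entry above more concrete, here is a minimal sketch, in Python and using only the standard library, of what an aggregator does when it checks an RSS feed for new items. The feed address is a placeholder, and a real aggregator would add scheduling, caching and error handling.

    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.com/feed.xml"  # placeholder: any RSS 2.0 feed address

    def fetch_items(url):
        # Download an RSS 2.0 feed and return (title, link, pubDate) for each item.
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        items = []
        for item in tree.getroot().iter("item"):
            items.append((
                item.findtext("title", default=""),
                item.findtext("link", default=""),
                item.findtext("pubDate", default=""),
            ))
        return items

    # An aggregator would run this periodically and show only entries it has
    # not seen before; here we simply list whatever the feed currently offers.
    for title, link, published in fetch_items(FEED_URL):
        print(published, "-", title, "-", link)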

Sunday, October 4, 2009

The Web history:

The past 5-10 years have witnessed not only an explosion of activity, but the creation of entirely new sectors within the optical industry. As the concept of WDM began to emerge, many new companies developing WDM transport equipment came into existence. The newer enterprises pushed the older established equipment vendors to more aggressive deployment schedules, and a constant downward trend in the corresponding prices of WDM transport equipment followed. In what appeared to be an almost insatiable demand for more bandwidth, a situation arose that allowed the creation of the new companies and the accompanying innovation. Not only did new equipment vendors emerge, but also new national-scale carriers were created. This trend is continuing as the concept of optical layering/networking is gaining acceptance and new optical equipment companies are being formed on a regular basis. They deal not only with “traditional” WDM transport equipment, but also with terrestrial ultra-long-haul systems, regional and metro-optimized systems, and various incarnations of optical cross-connects.

There were hundreds of developments and contributions enabling this burst of activity. Many of the technical innovations are described in this book and its predecessors. However, perhaps the greatest single factor that fueled this phenomenon was the belief and perception that traffic, and hence needed capacity, were growing at explosive rates. This is a remarkable fact, especially when one recalls that around 1990, both the traditional carriers and most of their equipment vendors still expected the traffic demands to not vary much from the voice demand growth (which historically was around 10% per year). In fact, both carriers and equipment vendors were arguing that WDM would not be needed and that going to individual channel rates of at most 10 Gb/s would be adequate. Also, around 1995, the conventional wisdom was that 8-channel WDM systems would suffice well into the foreseeable future. Now it almost appears as if the pendulum has swung the other way. Is too much capacity being deployed, are many of the reported traffic growth rates correct, and if so, will they continue?

As we explained in the previous section, the early skepticism about the need for high-capacity optical transport was rooted in the reality of the telecommunications networks. Up until 1990, they were dominated by voice, which was growing slowly. Then, by the mid-1990s, they came to be dominated (in terms of capacity) by private lines, which were growing three or four times as fast. And then, in the late 1990s, they came to be dominated by the Internet, which was growing faster still.

Before we go through the analyses of traffic growth on the Internet, we must first at least define the Internet and describe its history and structure. This is paramount in helping put much of the later-described growth analyses into perspective.

When one now speaks of the Internet, it is usually described as an evolution from ARPANET to NSFNET, and finally to the commercial Internet that now exists. Arguably, the phenomenal growth of the Internet started in 1986 (more than 17 years after its “birth”) with NSFNET. However, the path was very complicated and full of many twists and turns in its roughly 40-year history [Cerf, Hobbes, Leiner].

From the very early research in packet switching, academia, industry, and the US government have been intertwined as partners. Ironically, the beginnings of the Internet can be traced back to the Cold War and specifically to the launch of Sputnik in 1957.
The US government formed the Advanced Research Projects Agency (ARPA - the name was later changed to DARPA, the Defense Advanced Research Projects Agency, and later back to ARPA) the year after the launch, with the stated goal of establishing a US lead in technology and science (with emphasis on applications for the military). As ARPA was establishing itself, there were several pivotal works [Klein1, Baran] in the early 1960s on packet switching and computer communications. These works and the efforts they spawned laid many of the foundations that enabled the deployment of distributed packet networks. J.C.R. Licklider (of MIT) [LickC] wrote a series of papers in 1962 in which he “envisioned a globally interconnected array of computers which would enable ‘everyone’ to easily access data and programs from any of the sites”. Generically speaking, this idea is not much different from what today’s Internet has become. Of importance is the fact that Licklider was the first head of the computer research program at DARPA (beginning in 1962), and in this role he was instrumental in pushing his concept of networks. Kleinrock published both the first paper on packet switching and the first book on the subject. In addition, Kleinrock convinced several key players of the theoretical feasibility of using packets instead of circuits for communications. One such person was Larry Roberts, one of the initial architects of the ARPANET. In the 1965-66 time frame, ARPA sponsored studies on a “cooperative network of [users] sharing computers” [Leiner], and the first ARPANET plans were begun, with the first design papers on ARPANET being published in 1967. Concurrently, the National Physical Laboratory (NPL) in England deployed an experimental network called the NPL Network, making use of packet switching. It utilized 768 kb/s lines.

A year before the Moon landing, in 1968, the first ARPANET requests for proposals were sent out, and the first ARPANET contracts were awarded. Two of the earliest contracts went to UCLA to develop the Network Measurement Center, and to Bolt, Beranek and Newman (BBN) for the Packet Switch contract (to construct the Interface Message Processors, or IMPs - effectively the routers). Kleinrock headed the Network Measurement Center at UCLA, and it was selected as the first node on the ARPANET. The first IMP was installed at UCLA and the first host computer was connected in September of 1969. The second node was at the Stanford Research Institute (SRI). Two other nodes were added at UCSB and in Utah, so that by the second half of 1969, just months past the first Moon landing, the initial four-node ARPANET became functional. This was truly the initial ARPANET, and thus a case can be made that this was when the Internet was born. The first message carried over the network went from Kleinrock’s lab to SRI. Supposedly the first packet sent over ARPANET was sent by Charley Kline, and as he was trying to log in, the system crashed as the letter “G” of “LOGIN” was entered.

One of the next major innovations for the fledgling Internet (i.e., ARPANET) was the introduction of the first host-to-host protocol, called the Network Control Protocol or NCP, which was first used in ARPANET in 1970. By 1972 all of the ARPANET sites had finished implementing NCP. Hence the users of ARPANET could finally begin to focus on the development of applications - another paramount driver for the phenomenal and sustained growth of the Internet.
It was also in 1970 that the first cross-country link was established for ARPANET by AT&T between UCLA and BBN (at the blinding rate of 56 kb/s). By 1971, the ARPANET had grown to 15 nodes and had 23 hosts. However, perhaps the most influential work that year was the creation of an email program that could send messages across a distributed network. (Email was not among the original design criteria for the ARPANET, and its success caught the creators of this network by surprise.) Ray Tomlinson of BBN developed this, and his original program was based on two previous ones [Hobbes]. Tomlinson modified his program for ARPANET in 1972, and at that point its popularity quickly soared. In fact, it was at this time that the symbol “@” was chosen. Arguably, Internet email as we know it today can trace its origins directly to this work. Internet email was clearly one of the key drivers for the popularity (and hence the phenomenal traffic growth demands) of the Internet and was the first “killer app” for the Net. It was every bit as critical to the Internet’s “success” as the spreadsheet applications were to the popularization of the PC. Internet email provided a new model of how people could communicate with each other and altered the very nature of collaboration.

Although there was already considerable work being done on packet networks outside the US, the first international connections to the ARPANET (to England via Norway) took place in 1973. To put the time frame in perspective, this was the same year that Robert Metcalfe completed his PhD, which described his idea for Ethernet. Also during this year the number of ARPANET “users” was estimated to be 2000, and 75% of all the ARPANET traffic (in terms of bytes) was email. One needs to note that in only 1-2 years from its introduction onto the Internet, email became the predominant type of traffic. The same behavior took place several years later for html (i.e., Web traffic), and to a somewhat lesser degree, this was seen for Napster-like traffic within many networks a few years later.

Several other key developments began to take place in the mid-1970s. The initial design specification for TCP was published by Vint Cerf and Bob Kahn in 1974 [CerfK]. The NCP protocol, which was being utilized at the time, tended to act like a device driver, whereas the future TCP (later TCP/IP) would be much more like a communications protocol. As is discussed later, the evolution from ARPANET’s NCP protocol to TCP (which in 1978 was split into TCP and IP) was critical in allowing the future growth and scalability of today’s Internet. DARPA had three contracts to implement TCP/IP (at the time still called TCP): at Stanford (led by Cerf), BBN (led by Ray Tomlinson) and UCL (led by Kirstein). Stanford produced the detailed specification, and within a year there were three independent implementations of TCP that could interoperate.

It is noted that the basic reasons that led to the separation of TCP (which guaranteed reliable delivery) from IP actually came out of work that was done trying to encode and transport voice through a packet switch. It was found that a tremendous amount of buffering was needed in order to allow for the appropriate reassembly after transmission was completed. This in turn led to trying to find a way to deliver the packets without requiring a guaranteed level of reliability. In essence, the UDP (User Datagram Protocol) was created to allow users to make use of IP.
In addition, it was also in 1978 that the first commercial version of ARPANET came into existence as BBN opened Telenet.

In 1981-82 the first plans were being made to “migrate” from NCP to TCP. It is claimed by some that this event (TCP being established as THE protocol suite for ARPANET) was truly the birth of the Internet - defined as a connected set of networks, specifically those with TCP/IP. A few years later (in 1983) another major development occurred, which later enabled the Internet to scale with the “explosive” growth and popularity of the future Internet. This was the development of the name server (which evolved into the DNS) [Cerf, Leiner]. The name server was developed at the University of Wisconsin [Hobbes]. This made it easy for people to use the network, since hosts were assigned names and it was not necessary to remember numeric addresses. Much of the credit for the invention of the DNS (Domain Name System) goes to Paul Mockapetris of USC/ISI [Cerf].

The year 1983 was also the date of two other key developments on ARPANET. The first one was the cutover from NCP to TCP on the ARPANET. Secondly, ARPANET was split into ARPANET and MILNET. Although the road was convoluted, this split was one of the key bifurcation points that later allowed NSFNET to come into existence. Soon thereafter (in 1984) the number of hosts on the ARPANET had grown to 1000, and the next year, in March 1985, the first registered domain was assigned.

In 1985 NSFNET was created with a backbone speed of 56 kb/s. Initially there were 5 supercomputing centers that were interconnected. One of the paramount benefits of this was that it allowed an explosion of connections (most importantly from universities) to take place. Two years later, in 1987, NSF agreed to work with MERIT Network to manage the NSFNET backbone. The next year (1988) the process of upgrading the NSFNET backbone to one based on T1 (i.e., 1.5 Mb/s links) was begun. In 1987 the number of hosts on the Internet broke the 10,000 mark. Two years later, in 1989, this had grown to around 100,000, and 3 years after that, in 1992, it reached the 1,000,000 value. It is noted that if you look at how the number of hosts had been growing from 1984 to 1992, it was still pretty much tracking a growth curve that was LESS than tripling each year (i.e., doubling every 9 months). In the 1985-86 time frame a key decision was made that had very long-term impact: that TCP/IP would be mandatory for the NSFNET program.

In the 1988-1990 time frame a conscious decision was made to connect the Internet to electronic mail carriers, and by 1992 most of the commercial email carriers in the US were “like the Internet”. This was still another development that cemented email as the single most important application to take advantage of the Internet.

In 1990 the ARPANET ceased to exist, and arguably NSFNET was the essence of the Internet. The following year commercial Internet Service Providers began to emerge (PSI, ANS, Sprint Link, to name a few), and the Commercial Internet eXchange (CIX) was organized in 1991 by commercial ISPs to provide transfer points for traffic. NSF’s lifting of the restriction on the commercial use of the Net was again one of the pivotal decisions. This was again a key bifurcation point, in that it helped set the stage for the complete commercialization of the Net that would follow only a few years later. In 1991 the upgrading of the NSFNET backbone continued as the work to upgrade to T3 (i.e., 45 Mb/s links) began.
It is also interesting to note that it was the next year (1992) that the term “surfing the Internet” was first coined by Jean Armour Polly [Polly], only two years before the ARPANET/Internet celebrated its 25th anniversary.

It was in the 1993-1995 time period that several major events emerged which fueled an almost explosive growth in the popularity of the Internet. One of the key ones was the introduction of “browsers”, most notably Mosaic. This led to the creation of Netscape, which went public in 1995. Even as early as 1994, WWW (i.e., predominantly html) traffic was increasing in volume on the Net. By then it was the second most popular type of traffic, surpassed only by ftp traffic. However, in 1995 WWW traffic surpassed ftp as the greatest source of traffic. In addition, the traditional online dial-up systems such as AOL, Prodigy and CompuServe began to provide Internet access.

In 1996 the Net truly became public with NSFNET being phased out. Soon thereafter, major infrastructure improvements were made within the transport part of the Internet. The Internet began to upgrade much of its backbone to OC3-OC12 (up to 622 Mb/s) links, and in 1999 upgrades began for much of the Net to OC-48 (2.5 Gb/s) links.

Internet Safety Resources

NetSmartz:
Teach children how to be safer on- and offline with NetSmartz, NCMEC’s
award-winning, interactive, educational safety program. Learn more at NetSmartz.org.

NetSmartz411:
Parents' and guardians' premier online resource for answering questions about
Internet safety, computers, and the Web. Learn more at NetSmartz411.org.

CyberTipline:
Has your child ever been sent inappropriate material by someone he or she met
online? Has your child ever inadvertently encountered inappropriate material? You can make a
report of these types of incidents at CyberTipline.com.

Internet Safety for teens

Our Campaigns -

Don't Believe the Type
A site where teens can learn more about online dangers and make the web a safer place to surf.

Help Delete Online Predators
Created by NCMEC and the Ad Council, this site helps families learn how to better protect their children's online lives and help delete online predators.

Think Before You Post
Created by NCMEC, the U.S. Department of Justice, and the Ad Council, this site informs teens how sharing and posting personal information online can put them at risk.
The Internet offers an array of entertainment and educational resources for children but also
presents some risks. Approximately one in seven youths (10 to 17 years) experiences a sexual
solicitation or approach while online.
The National Center for Missing & Exploited Children (NCMEC) is committed to helping all
audiences — from kids to parents and guardians to law-enforcement officers and educators —
learn the aspects of Internet safety.
You can’t watch kids every minute, but you can use strategies to help them benefit from the
Internet and avoid its risks.
NCMEC urges you to do one of the single most important things to promote safety — talk to
kids about the rewards and risks of Internet use.

Wednesday, May 13, 2009

An Introduction to Disk Space

When you are looking for a Web hosting account, and even after you have signed up for one, a lot of Web hosting jargon gets thrown around. From bandwidth limits to server uptime, all of these terms are important.


When you buy a Web hosting account, you have to learn how to be a good Web hosting client. One of the most important jargon keywords they will throw your way is disk space. So what is disk space?

Disk space is the space you actually rent on the Web hosting server. It is the place where you put all of your HTML files, images, scripts and anything else you might want to upload to your Web hosting account. You can also use disk space by creating email accounts on your Web hosting space. Each email account takes a little disk space so that you have room to store the messages on the Web hosting server.
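As a rough, hedged illustration of how those files add up, the short Python sketch below totals the size of every file under a local copy of a site (the folder name "public_html" is only an example). A hosting control panel reports the same kind of number as your used disk space.

    import os

    def folder_size_bytes(path):
        # Walk the directory tree and add up the size of every regular file in it.
        total = 0
        for dirpath, _dirnames, filenames in os.walk(path):
            for name in filenames:
                filepath = os.path.join(dirpath, name)
                if os.path.isfile(filepath):   # skip broken links and the like
                    total += os.path.getsize(filepath)
        return total

    site_folder = "public_html"                # example: a local copy of your site
    used = folder_size_bytes(site_folder)
    print("%s uses %.1f MB of disk space" % (site_folder, used / (1024 * 1024)))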

Other customers rent space all around you on a shared server but most of the time you do not know who they are and they do not know who you are.

Web hosting accounts, and more importantly the disk space you are given in those accounts, can be any size at all, and some Web hosts offer more than others. It all depends on how much space is assigned to the Web hosting plan you purchased. As part of the plan, you are renting the space from the Web hosting provider.

For example, you might pay five dollars a month for Web hosting. You are renting the space for that amount of money. If you cannot pay for your rent, then you will be kicked out of the space you rent, and somebody else will take your lot.

Just a few years ago, Web hosting accounts were a lot more expensive than they are now, and you did not get close to what you are getting now. Depending on the disk space and bandwidth, a few years ago you might have paid four times as much for what a small Web hosting plan gives you today.

These days though, things are a lot cheaper. With more data centers and bigger pipes on the Internet, we can get more bandwidth to and from our Web hosting accounts. Since the cost of hard drives has been going down, you are also able to get more disk space for less than you would have paid even five years ago.

Where is the world of disk space going in the future? Eventually prices will even out. With new technology though, hard drive space will grow and so will your Web hosting disk space. You will start getting even more bang for your buck in the future.

Once you decide to take that leap and purchase a Web hosting account, you do need to learn about every inch of the Web hosting world. Educating yourself about all the different parts of your Web hosting account will help you in the long run. And disk space is unmistakably one of those important parts. Without disk space, there could be no Web hosting at all.

What operating system is best for your web site?

One of the first things you will need to do in setting up your business on the Internet is to find a web host. The Web expands every day, and so do our choices. Just how do you find a web host to meet your business needs? There are literally tens of thousands, each one with a different focus and services designed for a specific segment of the market.

The operating system used by your hosting company may limit your flexibility as your company begins to grow. As you study your options, you might consider the following tips:

An operating system or "platform," such as Microsoft's Windows NT or DOS, is the basic set of commands that tells your computer how to open applications and store files. In the early days of Internet activity, most servers operated on a UNIX platform, an extremely powerful and flexible system that requires considerable technical expertise to administer. UNIX is still as popular as ever, but today you have a choice: Windows NT and UNIX variations such as Sun Microsystems' Solaris and Berkeley's BSD.

Experts offer significantly different opinions as to which platform works best for web sites; we will, however, give a brief description and analysis of the different systems. In the end, the choice depends largely on your budget and what you want to do with your site.

Linux

Linux, a version of UNIX, is a very versatile platform that serves a number of functions well. It is particularly suitable for meeting your Internet requirements, such as mailing, streaming, Web serving, and file serving. Linux is a very cost-effective choice: it uses hardware efficiently and allows for more web sites per server, thereby lowering the cost of hosting per account. Linux servers are compatible with certain Microsoft extensions and applications, for example MS SQL (a database program) or Microsoft FrontPage (a web authoring tool). Many engineers prefer the flexibility, security, and control of Linux servers. Linux is Open Source (free) software, and a host of free programs are available to users of Linux.

Microsoft Windows NT/2000

Windows 2000's graphical user interface makes it user-friendly and provides a familiar interface for most IT teams to work with. It integrates well with other Microsoft applications, and there is a wealth of commercial applications available for this platform. Particularly attractive is the integration with Active Server Pages (ASP), which allows the creation of dynamic web pages linked to SQL databases and other legacy back-office systems.

Sun Solaris

Sun Solaris servers offer the highest level of resources and power - these are the most robust servers! Sun has a proven track record and is deployed in many large Fortune 500 corporations. It is a mature platform and there are a large number of applications and development tools available. Because of Sun's capacity and stability it is ideal for high-traffic functions, such as database servers, high-traffic Web servers and mission-critical servers.

Cobalt RaQ

The RaQ was designed for virtual (shared) hosting of multiple Web sites. Its simple administration makes it a great first Web server. Its flexible administration interface also allows you to share administration responsibilities among your staff.

FreeBSD

FreeBSD is a version of BSD that was designed for the x86 processor. FreeBSD is a very stable open source operating system, and a good alternative to Linux. It is an extremely well-integrated and tested system, and is inexpensive. There are a large number of free applications available for use with it.

How do I choose? As your site grows in size and complexity, in all likelihood your needs will change and the capability and scalability of a particular platform will come into question. It's best to anticipate this contingency and choose a web host that offers a variety of scalable operating systems and backs them up with technical expertise.
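One quick, informal check you can run yourself is to look at the "Server" header a prospective host's web server sends back, which often hints at the underlying platform. The Python sketch below is only an illustration; the address is a placeholder, and many hosts hide or shorten this header.

    import urllib.request

    def server_header(url):
        # Return the 'Server' response header for a site, if the host exposes it.
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request) as response:
            return response.headers.get("Server", "not disclosed")

    # Placeholder address: substitute a site hosted by the company you are evaluating.
    print(server_header("https://www.example.com/"))
    # Typical answers look like "Apache" (usually UNIX/Linux) or "Microsoft-IIS/..." (Windows).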





Testing web hosting companies

In the final stages of your search for a good web hosting company, a very important step is to test "the finalists". Because most web hosting companies have client support email(s) listed openly on their website, testing the quality and speed of their support is quite easy. All you have to do is send an email with one or more questions.

Let's take things step by step. First you have to find that email address. Usually you can find it in a "contact us" or "about us" section. Different email addresses result in different test results. What I mean by that is that you have to send an email to the SUPPORT team to verify the level of support, NOT the sales team.

There are different types of hosting companies: some web hosting companies answer their support emails a lot faster than they answer sales-related emails, while others do exactly the opposite. You find that strange? Don't! It's all a matter of focus. Generally, good companies focus on their current customers and they regard (not without reason) support tickets as more important than new sales. Sure, the sales department should be reasonably good too, but, as a customer, it's reassuring to know that you come first when time is short.

I'm not saying that you shouldn't send emails to the sales teams. Certain questions are to be sent to the sales department, but the department that really should be tested (customer-wise) is the support department. The reason is that after you sign up for the service you will deal almost exclusively with the support staff.

By sending a test email, several things can be verified (and compared):

1. The amount of time it takes to receive a response.

To get the most out of the test (and be able to make a valid comparison between different companies) you should take special care not to favor any company.

To test the response time accurately you have to ensure that all investigated companies are sent the emails at roughly the same hour (their time, not your time). Today we have hosting companies with staff in the USA, the UK, Australia, Hong Kong, etc. Why not "exactly" the same time? Because this is not rocket science! Of course, be as precise as you possibly can, but don't stress yourself too much.

Another thing is to select the right day of the week. As you might expect, over the weekend the response time can be somewhat longer. But, to put it shortly, I would send the email Saturday night after midnight (their time, remember?). This should be the ultimate test.

Time evaluation: Anything under 6 hours can be considered a very good response in my opinion.
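If you want to be systematic about the timing, a short script can convert "Saturday night after midnight, their time" into your own clock time for each company, and later work out how many hours a reply took. The Python sketch below (3.9 or newer, for zoneinfo) is only an illustration; the company names and time zones are made up.

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # Hypothetical shortlist of hosts and the time zone their support staff works in.
    companies = {
        "host-a.example": "America/New_York",
        "host-b.example": "Europe/London",
        "host-c.example": "Australia/Sydney",
    }

    # When, in THEIR local time, the test email should arrive
    # (here: just after midnight, Saturday night into Sunday).
    local_send = datetime(2009, 10, 11, 0, 30)

    for name, tz in companies.items():
        their_time = local_send.replace(tzinfo=ZoneInfo(tz))
        print(name, "-> send at", their_time.astimezone().strftime("%Y-%m-%d %H:%M"), "your time")

    # Once a reply arrives, the response time is just the difference of two instants.
    sent = datetime(2009, 10, 11, 0, 30, tzinfo=ZoneInfo("America/New_York"))
    received = datetime(2009, 10, 11, 9, 45, tzinfo=ZoneInfo("America/New_York"))
    print("Response time: %.1f hours" % ((received - sent).total_seconds() / 3600))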

2. The quality of the response.

Answer quality has many facets. One of them is the quality of the information. Is the question answered precisely and correctly? Another one is the quantity of information. Is the answer incomplete, complete, or does it provide all you ever wanted and a bit more?

Another is the clarity of the answer. Is the answer easy to understand, does it explain the "tech" words that you might not be familiar with, or does it sound like gibberish?

Another is the structure of the answer. Is it well structured, starting with A and finishing with Z, or is it all a mess?

3. The personal level of the conversation (and/or politeness)

There are different approaches to this politeness issue. Some hosting companies use the "Sir" formula and some web hosting companies use the "you" formula (I don't think there are many using the "Ya" word). It's all a matter of taste. There are web hosting companies employing the "friendly above all" approach and companies employing the "respectful above all" approach.

As I said, it's a matter of taste. I usually prefer the friendly approach because it allows a "personal touch" and a slightly more relaxed conversation. But hey! Who am I to judge you! If you prefer to be called "Sir" or "Ma'am", I am OK with that. Just tell me when you send me an email which type of conversation you prefer and I assure you I will do my best to respect your likes (and dislikes).

I guess those are the things we can test with a test email. Let's now devise such a test email. This will be just a sample to give you a rough idea; you're free to make up your own test.

Note that because you're not hosted by them yet, your question comes from someone they don't know, and they can't verify whether you are a client or not. You could be asked to provide some form of client identification in order to receive an answer, but I doubt this will happen.

"Hi.

I have a small problem. I intend to learn PHP. I just wrote a small script and saved it in a file that I uploaded to the server. Whenever I load it, instead of getting the expected result, the page simply lists the code of the script. Is there something I can do about it?

Thank you very much.

Regards,

Your name"

Of course this test is mainly for UNIX/Linux servers with PHP (the majority of such servers are PHP enabled, but you should make sure about it in each case).

OK... So what are we looking for in the answer?

First of all, because the script doesn't work and it simply gets listed, it's almost obvious that the script file is not parsed by PHP. In 99% of the cases this is because the file extension is not .php (e.g., the file is "scriptfile.html" instead of "scriptfile.php"). Since in basic HTML design the files are saved as .htm or .html files, new web programmers save script files with one of those extensions too. This is a common mistake.
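If you want to check this symptom yourself before (or instead of) emailing support, a tiny script can fetch the page and look for a raw PHP tag in what comes back. This is only a rough sketch, and the address is a placeholder.

    import urllib.request

    def php_served_as_text(url):
        # True if the body still contains a raw '<?php' tag, i.e. the server
        # sent back the script source instead of executing it.
        with urllib.request.urlopen(url) as response:
            body = response.read().decode("utf-8", errors="replace")
        return "<?php" in body

    # Placeholder address: point this at the script you uploaded.
    if php_served_as_text("http://www.example.com/scriptfile.html"):
        print("The script is being listed, not parsed - check the file extension.")
    else:
        print("The script appears to be executed by the server.")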

You can set .htm and .html files to be parsed by PHP too. They should explain this to you, suggest that you either change the file extension to .php or have .htm and .html files parsed by the server, and of course, instruct you how to do it.

Also, a very good support team would offer to make the necessary changes, provided that you tell them the name of your account with them (your website name).

And... this is about it. Simple huh?

Note: I'm sorry I can't provide a test for Windows based servers, but I have no experience with them. I am open to Windows server test suggestions!

Good luck with your tests!

EXPLAINING VIRTUAL PRIVATE SERVER (VPS) SOLUTIONS

A virtual private server (VPS) solution uses a software platform that permits a hosting vendor to multiplex a single dedicated server into multiple "virtual" machines. In essence, a VPS solution is a private and protected Web services infrastructure that operates as an independent server.



A virtual private server allows multiple customers to share the expense of hardware and network connections without sacrificing privacy, performance or preference. For this reason, VPS is considered one of the most sophisticated modes of automation available for provisioning small to mid-sized enterprise Web hosting.

The use of such technology allows hosting providers to save money by simulating the features of a dedicated server multiple times upon a single physical hosting environment, while concurrently allowing them to deliver high-quality Web services to their end users. VPS solutions allow Web hosting resellers to provide a full range of services usually only afforded by dedicated hosting technology. Resellers can therefore offer their clientele full administrative or "root" access to their Web services.

The virtual private server was first implemented by hosting giant NTT/Verio to bridge the gap between shared hosting environments and customized dedicated servers. By using a virtual private server, Web hosting resellers and Web designers can provide small businesses the performance, security, and control of dedicated hosting services at a fraction of the cost.

A virtual private server eliminates the restrictions of virtual hosting by providing all of the administrative features of a dedicated server. Each VPS user therefore receives their own set of services that they can customize to their specific needs. Virtual hosting is limited in comparison because its users do not have root access and software configurations cannot be customized, despite the fact that physical resources are also multiplexed. A virtual private server on the other hand, contains its own unique file system and CGI-BIN, disk space, system resources, bandwidth and memory allotments, which allow for a high level of customization.

Due to the fact that a VPS solution truly simulates a dedicated server, some technical understanding of server administration is required. Any true VPS solution will provide users with "root" or full administrative access; guarantee a specific allocation of server resources, including CPU, memory and bandwidth; and allow the user to manage multiple servers and file areas through a sophisticated control panel.

A virtual private server will ensure "performance isolation" so that heavy traffic or CPU loads will not affect other VPS solutions on the same infrastructure. Other major features that characterize VPS solutions include: "fault tolerance," which ensures that errors affecting one specific private server do not affect others; and "enhanced security," which ensures that e-business applications can be deployed with greater privacy.

The most popular feature that VPS customers use, however, is the virtual private server's capacity for "functional isolation." Because a VPS has its own contained services, it is possible for users to install and customize their own open-source and commercial software packages.

Many virtual private servers on the Unix platform have become so advanced that they even permit users to install Linux RPM packages. This allows users to take source code for new software and package it into source and binary form, such that binaries can be easily installed and tracked, and source can be easily rebuilt. The use of RPM packages also allows VPS users to maintain a database of all packages and their files that can be used for verifying packages and querying for information about files and/or packages.
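As a small, hedged illustration of the kind of package queries mentioned above, the Python sketch below simply shells out to the rpm tool. It assumes an RPM-based Linux virtual private server with rpm installed; the package and file names are only examples.

    import subprocess

    def rpm_query(args):
        # Run an rpm query and return its output as text (empty string on failure).
        result = subprocess.run(["rpm"] + args, capture_output=True, text=True)
        return result.stdout if result.returncode == 0 else ""

    # List every installed package recorded in the RPM database.
    installed = rpm_query(["-qa"]).splitlines()
    print(len(installed), "packages installed")

    # Show details for one package (example name; pick one from the list above).
    print(rpm_query(["-qi", "bash"]))

    # Ask which package owns a given file.
    print(rpm_query(["-qf", "/bin/bash"]))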

Small businesses that run their own e-commerce Web sites also appreciate the functional isolation of their private server, because it allows them to obtain their own secure certificates and shopping cart software for their e-business operations. Many sophisticated VPS solutions will even offer third-party plug-ins or modules, allowing users to take advantage of control panel functionality in order to install everything from the simplest CGI scripts to the most advanced shopping carts.

Due to these advantages, virtual private servers are very popular and are a relatively inexpensive choice for small to mid-sized enterprises seeking to maintain their own Web presence. VPS solutions are the natural choice for SMEs and individuals wishing to upgrade a shared or virtual hosting package. The following are descriptions of popular VPS packages now available through an assortment of major Web host vendors:

Ensim
Ensim's award-winning product line includes control panels, virtual private servers, server management, as well as Microsoft Exchange hosting software.

H-Sphere
H-Sphere is scalable, multi-server, centralized hosting automation software with fully brandable reseller support, comprehensive recurring billing, a trouble ticket system, and fully automated account provisioning and signup. It supports Windows 2000, Linux & FreeBSD. It provides a full-featured, easy-to-use, web-based end-user control panel and a powerful admin user interface.

SW-soft
SW-soft develops the Virtuozzo technology and the HSPcomplete hosting automation solution. SWsoft's products deliver powerful, comprehensive solutions that power data center management and provide excellent return on investment.

Sphera
Sphera is a leading developer of Web hosting automation and management software for Internet data centers, ISPs and hosting providers. Sphera's HostingDirector enables cost-cutting and revenue increases by automating Web hosting management, facilitating sales of value added applications, services and more.

The above hosting software firms develop popular and dependable VPS packages. Consider using a hosting firm that elects to use one of these virtual private server systems.

WHAT OPERATING SYSTEM SHOULD I CHOOSE?

Shared or virtual hosting is usually available on a UNIX or Windows platform. What is the difference between hosting on these operating systems?



Windows Servers are designed to accommodate advanced Microsoft applications. Windows Servers therefore integrate back office offerings such as FrontPage, Access and MS SQL. Windows Servers also offer specific programming environments such as Active Server Pages (ASP), Visual Basic Scripts, and Cold Fusion, which mainly link database applications to the Web. Windows servers usually do not provide an interactive shell, but are accessible through GUI-based remote administration packages such as PCAnywhere or through a customized control panel. Such packages allow you to log into the server's desktop as chief administrator as long as you have full control over your server.

Windows hosting is now an excellent option for both shared and dedicated servers. Thanks to the latest technological developments, such as .NET technology, Windows servers can be more easily multiplexed and managed.

Windows is also an excellent operating system to use if you intend to run your own dedicated server.

Since Windows servers provide unparalleled levels of support, security and integration for the Microsoft family of products, we recommend that consumers select Windows hosting if they need to link a Microsoft-based service to their Internet hosting requirements. Windows hosting, however, is rather complex and labor-intensive and should only be selected if a webmaster has extensive experience in maintaining Windows systems remotely and requires product/Web integration.

If a webmaster does not have experience in deployment or development in the Windows environment, they might opt to host on the UNIX platform.

A large number of hosting solutions are provided on the UNIX platform. This is because the UNIX platform is specifically designed to accommodate heavy Web traffic and server loads. UNIX servers are robust and are recognized for their ability to host multiple sites and serve out gigabytes of traffic.

This platform is also preferred by most webmasters due to their technical requirements. UNIX servers provide a wider degree of flexibility due to their shell environment. Shell environments are interactive, text-based systems that allow webmasters to interact and customize their services in real-time from any computer system worldwide. Unlike Windows systems, UNIX is not limited to special remote administration programs. A typical UNIX system can be accessed from any computer connected to the Internet without special or expensive software.

But the most common reason that webmasters choose the UNIX platform is its uptime. Most UNIX systems with heavy traffic can provide 99 per cent uptime. Windows servers with heavy traffic usually cannot make this same claim, unless specially configured. For this reason, average webmasters should select UNIX as their OS.

If you are a novice, you most likely will not require Windows hosting and you should select one of many UNIX hosting plans. The only time you would need to use Windows hosting is when you are using the specific Windows applications noted above. Microsoft FrontPage may be used on both the UNIX and Windows platforms, since most hosting firms support Microsoft FrontPage server extensions on both.