
Programming the Web [10CS73], Unit 1

UNIT 1: Fundamentals of Web, XHTML

1.1 A Brief Introduction to the Internet

1.1.1 Origins
In the 1960s, the U.S. Department of Defense (DoD) became interested in developing a new large-scale computer network. The purposes of this network were communications, program sharing, and remote computer access for researchers working on defense-related contracts. One fundamental requirement was that the network be sufficiently robust so that even if some network nodes were lost to sabotage, war, or some more benign cause, the network would continue to function. The DoD's Advanced Research Projects Agency (ARPA) funded the construction of the first such network, which connected about a dozen ARPA-funded research laboratories and universities. The first node of this network was established at UCLA in 1969. Because it was funded by ARPA, the network was named ARPAnet.

Despite the initial intentions, the primary early use of ARPAnet was simple text-based communication through e-mail. Because ARPAnet was available only to laboratories and universities that conducted ARPA-funded research, the great majority of educational institutions were not connected. As a result, a number of other networks were developed during the late 1970s and early 1980s, with BITNET and CSNET among them. BITNET, an acronym for Because It's Time Network, began at the City University of New York. It was built initially to provide electronic mail and file transfers. CSNET, an acronym for Computer Science Network, connected the University of Delaware, Purdue University, the University of Wisconsin, the RAND Corporation, and Bolt, Beranek, and Newman (a research company in Cambridge, Massachusetts). Its initial purpose was to provide electronic mail. For a variety of reasons, neither BITNET nor CSNET became a widely used national network.

A new national network, NSFnet, was created in 1986. It was sponsored, of course, by the National Science Foundation (NSF). NSFnet initially connected the NSF-funded supercomputer centers at five universities. Soon after being established, it became available to other academic institutions and research laboratories. By 1990, NSFnet had replaced ARPAnet for most nonmilitary uses, and a wide variety of organizations had established nodes on the new network; by 1992, NSFnet connected more than 1 million computers around the world. In 1995, a small part of NSFnet returned to being a research network. The rest became known as the Internet, although this term was used much earlier for both ARPAnet and NSFnet.

1.1.2 What Is the Internet?
The Internet is a huge collection of computers connected in a communications network. These computers are of every imaginable size, configuration, and manufacturer. In fact, some of the devices connected to the Internet, such as plotters and printers, are not computers at all. The innovation that allows all of these diverse devices to communicate with each other is a single, low-level protocol: the Transmission Control Protocol/Internet Protocol (TCP/IP). TCP/IP became the standard for computer network connections in 1982. It can be used directly to allow a program on one computer to communicate with a program on another computer via the Internet. In most cases, however, a higher-level protocol runs on top of TCP/IP. Nevertheless,
it's important to know that TCP/IP provides the low-level interface that allows most computers (and other devices) connected to the Internet to appear exactly the same. Rather than connecting every computer on the Internet directly to every other computer on the Internet, normally the individual computers in an organization are connected to each other in a local network. One node on this local network is physically connected to the Internet. So, the Internet is actually a network of networks, rather than a network of computers. Obviously, all devices connected to the Internet must be uniquely identifiable.

1.1.3 Internet Protocol Addresses

For people, Internet nodes are identified by names; for computers, they are identified by numeric addresses. This relationship exactly parallels the one between a variable name in a program, which is for people, and the variable's numeric memory address, which is for the machine. The Internet Protocol (IP) address of a machine connected to the Internet is a unique 32-bit number. IP addresses usually are written (and thought of) as four 8-bit numbers, separated by periods. The four parts are used separately by Internet-routing computers to decide where a message must go next to get to its destination. Organizations are assigned blocks of IP addresses, which they in turn assign to their machines that need Internet access, which now include most computers. For example, a small organization may be assigned 256 IP addresses, such as 191.57.126.0 to 191.57.126.255. Very large organizations, such as the Department of Defense, may be assigned 16 million IP addresses, which include IP addresses with one particular first 8-bit number, such as 12.0.0.0 to 12.255.255.255.

Although people nearly always type domain names into their browsers, the IP address works just as well. For example, the IP address for United Airlines (www.ual.com) is 209.87.113.93. So, if a browser is pointed at http://209.87.113.93, it will be connected to the United Airlines Web site.

In late 1998, a new IP standard, IPv6, was approved, although it still is not widely used. The most significant change was to expand the address size from 32 bits to 128 bits. This is a change that will soon be essential because the number of remaining unused IP addresses is diminishing rapidly. The new standard can be found at ftp://ftp.isi.edu/in-notes/rfc2460.txt.

1.1.4 Domain Names

Because people have difficulty dealing with and remembering numbers, machines on the Internet also have textual names. These names begin with the name of the host machine, followed by progressively larger enclosing collections of machines, called domains. There may be two, three, or more domain names. The first domain name, which appears immediately to the right of the host name, is the domain of which the host is a part. The second domain name gives the domain of which the first domain is a part. The last domain name identifies the type of organization in which the host resides, which is the largest domain in the site's name. For organizations in the United States, edu is the extension for educational institutions, com specifies a company, gov is used for the U.S. government, and org is used for many other kinds of organizations. In other countries, the largest domain is often an abbreviation for the country: for example, se is used for Sweden, and kz is used for Kazakhstan. Consider this sample address:
movies.comedy.marxbros.com

Here, movies is the host name and comedy is movies's local domain, which is a part of marxbros's domain, which is a part of the com domain. The host name and all of the domain names together are called a fully qualified domain name.

Because IP addresses are the addresses used internally by the Internet, the fully qualified domain name of the destination for a message, which is what is given by a browser user, must be converted to an IP address before the message can be transmitted over the Internet to the destination. These conversions are done by software systems called name servers, which implement the Domain Name System (DNS). Name servers serve a collection of machines on the Internet and are operated by organizations that are responsible for the part of the Internet to which those machines are connected. All document requests from browsers are routed to the nearest name server. If the name server can convert the fully qualified domain name to an IP address, it does so. If it cannot, the name server sends the fully qualified domain name to another name server for conversion. Like IP addresses, fully qualified domain names must be unique. Figure 1.1 shows how fully qualified domain names requested by a browser are translated into IP addresses before they are routed to the appropriate Web server.
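This conversion can also be observed programmatically. The following is a minimal sketch in JavaScript (Node.js) using the built-in dns module; the host name is illustrative, and the printed address depends on the name server consulted:

    // Ask the resolver for the IP address behind a fully qualified domain name.
    const dns = require('dns');
    dns.lookup('www.example.com', (err, address) => {
      if (err) throw err;
      console.log(address); // the IPv4 address in dotted form, e.g. 93.184.216.34
    });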
Figure 1.1 Domain name conversion

One way to determine the IP address of a Web site is by using telnet on the fully qualified domain name. This approach is illustrated in Section 1.7.1.

By the mid-1980s, a collection of different protocols that run on top of TCP/IP had been developed to support a variety of Internet uses. Among these protocols, the most common were telnet, which was developed to allow a user on one computer on the Internet to log onto and use another computer on the Internet; File Transfer Protocol (ftp), which was developed to transfer files among computers on the Internet; Usenet, which was developed to serve as an electronic bulletin board; and mailto, which was developed to allow messages to be sent from the user of one computer on the Internet to other users of other computers on the Internet. This variety of protocols, each with its own user interface and useful only for the purpose for which it was designed, restricted the growth of the Internet. Users were required to learn all the different interfaces to gain all the advantages of the Internet. Before long, however, a better approach was developed: the World Wide Web.
1.2 The World Wide Web

1.2.1 Origins

In 1989, a small group of people led by Tim Berners-Lee at CERN (Conseil Europeen pour la Recherche Nucleaire, or European Organization for Particle Physics) proposed a new protocol for the Internet, as well as a system of document access to use it. The intent of this new system, which the group named the World Wide Web, was to allow scientists around the world to use the Internet to exchange documents describing their work.

The proposed new system was designed to allow a user anywhere on the Internet to search for and retrieve documents from databases on any number of different document-serving computers connected to the Internet. By late 1990, the basic ideas for the new system had been fully developed and implemented on a NeXT computer at CERN. In 1991, the system was ported to other computer platforms and released to the rest of the world.

For the form of its documents, the system used hypertext, which is text with embedded links to text in other documents to allow nonsequential browsing of textual material. The idea of hypertext had been developed earlier and had appeared in Xerox's NoteCards and Apple's HyperCard in the mid-1980s. From here on, we will refer to the World Wide Web simply as the Web.

The units of information on the Web have been referred to by several different names; among them, the most common are pages, documents, and resources. Perhaps the best of these is documents, although that seems to imply only text. Pages is widely used, but it is misleading in that Web units of information often have more than one of the kind of pages that make up printed media. There is some merit to calling these units resources, because that covers the possibility of nontextual information. This book will use documents and pages more or less interchangeably, but we prefer documents in most situations. Documents are sometimes just text, usually with embedded links to other documents, but they often also include images, sound recordings, or other kinds of media. When a document contains nontextual information, it is called hypermedia.

In an abstract sense, the Web is a vast collection of documents, some of which are connected by links. These documents are accessed by Web browsers, introduced in Section 1.3, and are provided by Web servers, introduced in Section 1.4.

1.2.2 Web or Internet?

It is important to understand that the Internet and the Web are not the same thing. The Internet is a collection of computers and other devices connected by equipment that allows them to communicate with each other. The Web is a collection of software and protocols that has been installed on most, if not all, of the computers on the Internet. Some of these computers run Web servers, which provide documents, but most run Web clients, or browsers, which request documents from servers and display them to users. The Internet was quite useful before the Web was developed, and it is still useful without it. However, most users of the Internet now use it through the Web.

1.3 Web Browsers
When two computers communicate over some network, in many cases one acts as a client and the other as a server. The client initiates the communication, which is often a request for information stored on the server, which then sends that information back to the client. The Web, as well as many other systems, operates in this client-server configuration.

Documents provided by servers on the Web are requested by browsers, which are programs running on client machines. They are called browsers because they allow the user to browse the resources available on servers. The first browsers were text based: they were not capable of displaying graphic information, nor did they have a graphical user interface. This limitation effectively constrained the growth of the Web. In early 1993, things changed with the release of Mosaic, the first browser with a graphical user interface. Mosaic was developed at the National Center for Supercomputer Applications (NCSA) at the University of Illinois. Mosaic's interface provided convenient access to the Web for users who were neither scientists nor software developers. The first release of Mosaic ran on UNIX systems using the X Window system. By late 1993, versions of Mosaic for Apple Macintosh and Microsoft Windows systems had been released. Finally, users of the computers connected to the Internet around the world had a powerful way to access anything on the Web anywhere in the world. The result of this power and convenience was an explosive growth in Web usage.

A browser is a client on the Web because it initiates the communication with a server, which waits for a request from the client before doing anything. In the simplest case, a browser requests a static document from a server. The server locates the document among its servable documents and sends it to the browser, which displays it for the user. However, more complicated situations are common. For example, the server may provide a document that requests input from the user through the browser. After the user supplies the requested input, it is transmitted from the browser to the server, which may use the input to perform some computation and then return a new document to the browser to inform the user of the results of the computation. Sometimes a browser directly requests the execution of a program stored on the server. The output of the program is then returned to the browser.

Although the Web supports a variety of protocols, the most common one is the Hypertext Transfer Protocol (HTTP). HTTP provides a standard form of communication between browsers and Web servers. Section 1.7 presents an introduction to HTTP.

The most commonly used browsers are Microsoft Internet Explorer (IE), which runs only on PCs that use one of the Microsoft Windows operating systems, and Firefox, which is available in versions for several different computing platforms, including Windows, Mac OS, and Linux. Several other browsers are available, such as the close relatives of Firefox and Netscape Navigator, as well as Opera and Apple's Safari. However, because the great majority of browsers now in use are either IE or Firefox, in this book we focus on those two.

1.4 Web Servers

Web servers are programs that provide documents to requesting browsers. Servers are slave programs: they act only when requests are made to them by browsers running on other computers on the Internet.
The most commonly used Web servers are Apache, which has been implemented for a variety of computer platforms, and Microsoft's Internet Information Server (IIS), which runs under Windows operating systems. As of June 2009, there were over 75 million active Web hosts in operation, about 47 percent of which were Apache, about 25 percent of which were IIS, and the remainder of which were spread thinly over a large number of others. (The third-place server was qq.com, a product of a Chinese company, with almost 13 percent.)

1.4.1 Web Server Operation

Although having clients and servers is a natural consequence of information distribution, this configuration offers some additional benefits for the Web. On the one hand, serving information does not take a great deal of time. On the other hand, displaying information on client screens is time consuming. Because Web servers need not be involved in this display process, they can handle many clients. So, it is both a natural and an efficient division of labor to have a small number of servers provide documents to a large number of clients.

Web browsers initiate network communications with servers by sending them URLs (discussed in Section 1.5). A URL can specify one of two different things: the address of a data file stored on the server that is to be sent to the client, or a program stored on the server that the client wants executed, with the output of the program returned to the client. All the communications between a Web client and a Web server use the standard Web protocol, Hypertext Transfer Protocol (HTTP), which is discussed in Section 1.7.

When a Web server begins execution, it informs the operating system under which it is running that it is now ready to accept incoming network connections through a specific port on the machine. While in this running state, the server runs as a background process in the operating system environment. A Web client, or browser, opens a network connection to a Web server, sends information requests and possibly data to the server, receives information from the server, and closes the connection. Of course, other machines exist between browsers and servers on the network, specifically network routers and domain-name servers. This section, however, focuses on just one part of Web communication: the server.

Simply put, the primary task of a Web server is to monitor a communications port on its host machine, accept HTTP commands through that port, and perform the operations specified by the commands. All HTTP commands include a URL, which includes the specification of a host server machine. When the URL is received, it is translated into either a file name (in which case the file is returned to the requesting client) or a program name (in which case the program is run and its output is sent to the requesting client). This process sounds pretty simple, but, as is the case in many other simple-sounding processes, a large number of complicating details are involved.

All current Web servers have a common ancestry: the first two servers, developed at CERN in Europe and NCSA at the University of Illinois. Currently, the most common server configuration is Apache running on some version of UNIX.
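The listen-accept-respond cycle described above can be sketched in a few lines of JavaScript (Node.js) with the built-in http module. This is not how Apache or IIS are implemented; it is only a minimal illustration of a server monitoring a port, with the port number chosen arbitrarily:

    // Create a server that answers every request with a small plain-text document.
    const http = require('http');
    const server = http.createServer((request, response) => {
      response.writeHead(200, { 'Content-Type': 'text/plain' });
      response.end('Hello from the server\n');
    });
    server.listen(8080); // monitor port 8080 for incoming HTTP connections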
1.4.2 General Server Characteristics

Most of the available servers share common characteristics, regardless of their origin or the platform on which they run. This section provides brief descriptions of some of these characteristics.

The file structure of a Web server has two separate directories. The root of one of these is called the document root. The file hierarchy that grows from the document root stores the Web documents to which the server has direct access and normally serves to clients. The root of the other directory is called the server root. This directory, along with its descendant directories, stores the server and its support software.

The files stored directly in the document root are those available to clients through top-level URLs. Typically, clients do not access the document root directly in URLs; rather, the server maps requested URLs to the document root, whose location is not known to clients. For example, suppose that the site name is www.tunias.com (not a real site, at least not yet), which we will assume to be a UNIX-based system. Suppose further that the document root is named topdocs and is stored in the /admin/web directory, making its address /admin/web/topdocs. A request for a file from a client with the URL http://www.tunias.com/petunias.html will cause the server to search for the file with the file path /admin/web/topdocs/petunias.html. Likewise, the URL http://www.tunias.com/bulbs/tulips.html will cause the server to search for the file with the address /admin/web/topdocs/bulbs/tulips.html.

Many servers allow part of the servable document collection to be stored outside the directory at the document root. The secondary areas from which documents can be served are called virtual document trees. For example, the original configuration of a server might have the server store all its servable documents from the primary system disk on the server machine. Later, the collection of servable documents might outgrow that disk, in which case part of the collection could be stored on a secondary disk. This secondary disk might reside on the server machine or on some other machine on a local area network. To support this arrangement, the server is configured to direct request URLs with a particular file path to a storage area separate from the document-root directory. Sometimes files with different types of content, such as images, are stored outside the document root.

Early servers provided few services other than the basic process of returning requested files or the output of programs whose execution had been requested. The list of additional services has grown steadily over the years. Contemporary servers are large and complex systems that provide a wide variety of client services. Many servers can support more than one site on a computer, potentially reducing the cost of each site and making their maintenance more convenient. Such secondary hosts are called virtual hosts. Some servers can serve documents that are in the document root of other machines on the Web; in this case, they are called proxy servers. Although Web servers were originally designed to support only the HTTP protocol, many now support ftp, gopher, news, and mailto. In addition, nearly all Web servers can interact with database systems through Common Gateway Interface (CGI) programs and server-side scripts.
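The URL-to-file-path mapping in the www.tunias.com example can be expressed as a small JavaScript (Node.js) sketch. The function name is hypothetical, and a real server would also normalize the path and reject attempts to escape the document root:

    const path = require('path');
    const documentRoot = '/admin/web/topdocs'; // location unknown to clients

    // Map the path part of a requested URL to a file path under the document root.
    function filePathFor(urlPath) {
      return path.join(documentRoot, urlPath);
    }

    console.log(filePathFor('/bulbs/tulips.html'));
    // -> /admin/web/topdocs/bulbs/tulips.html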
1.4.3 Apache

Apache began as the NCSA server, httpd, with some added features. The name Apache has nothing to do with the Native American tribe of the same name. Rather, it came from the nature of its first version, which was a patchy version of the httpd server. As seen in the usage statistics given at the beginning of this section, Apache is the most widely used Web server. The primary reasons are as follows: Apache is an excellent server because it is both fast and reliable. Furthermore, it is open-source software, which means that it is free and is managed by a large team of volunteers, a process that efficiently and effectively maintains the system. Finally, it is one of the best available servers for UNIX-based systems, which are the most popular for Web servers.

Apache is capable of providing a long list of services beyond the basic process of serving documents to clients. When Apache begins execution, it reads its configuration information from a file and sets its parameters to operate accordingly. A new copy of Apache includes default configuration information for a typical operation. The site manager modifies this configuration information to fit his or her particular needs and tastes. For historical reasons, there are three configuration files in an Apache server: httpd.conf, srm.conf, and access.conf. Only one of these, httpd.conf, actually stores the directives that control an Apache server's behavior. The other two point to httpd.conf, which is the file that contains the list of directives that specify the server's operation. These directives are described at http://httpd.apache.org/docs/2.2/mod/quickreference.html.

1.4.4 IIS

Although Apache has been ported to the Windows platforms, it is not the most popular server on those systems. Because the Microsoft IIS server is supplied as part of Windows, and because it is a reasonably good server, most Windows-based Web servers use IIS. Apache and IIS provide similar varieties of services.

From the point of view of the site manager, the most important difference between Apache and IIS is that Apache is controlled by a configuration file that is edited by the manager to change Apache's behavior. With IIS, server behavior is modified by changes made through a window-based management program, named the IIS snap-in, which controls both IIS and ftp. This program allows the site manager to set parameters for the server. Under Windows XP and Vista, the IIS snap-in is accessed by going to Control Panel, Administrative Tools, and IIS Admin. Clicking on this last selection takes you to a window that allows starting, stopping, or pausing IIS. This same window allows IIS parameters to be changed when the server has been stopped.

1.5 Uniform Resource Locators

Uniform (or universal) resource locators (URLs) are used to identify documents (resources) on the Internet. There are many different kinds of resources, identified by different forms of URLs.

1.5.1 URL Formats

All URLs have the same general format:

scheme:object-address

The scheme is often a communications protocol. Common schemes include http, ftp, gopher, telnet, file, mailto, and news. Different schemes use object addresses that have different forms. Our interest here is in the HTTP protocol, which supports the Web. This protocol is used to request and send eXtensible Hypertext Markup Language (XHTML) documents. In the case of HTTP, the form of the object address of a URL is as follows:

//fully-qualified-domain-name/path-to-document
Another scheme of interest to us is file. The file protocol means that the document resides on the machine running the browser. This approach is useful for testing documents to be made available on the Web without making them visible to any other browser. When file is the protocol, the fully qualified domain name is omitted, making the form of such URLs as follows:

file://path-to-document

Because the focus of this book is on XHTML documents, the remainder of the discussion of URLs is limited to the HTTP protocol.

The host name is the name of the server computer that stores the document (or provides access to it on some other computer). Messages to a host machine must be directed to the appropriate process running on the host for handling. Such processes are identified by their associated port numbers. The default port number of Web server processes is 80. If a server has been configured to use some other port number, it is necessary to attach that port number to the host name in the URL. For example, if the Web server is configured to use port 800, the host name must have :800 attached.

URLs can never have embedded spaces. Also, there is a collection of special characters, including semicolons, colons, and ampersands (&), that cannot appear in a URL. To include a space or one of the disallowed special characters, the character must be coded as a percent sign (%) followed by the two-digit hexadecimal ASCII code for the character. For example, if San Jose is a domain name, it must be typed as San%20Jose (20 is the hexadecimal ASCII code for a space). All of the details characterizing URLs can be found at http://www.w3.org/Addressing/URL/URI_Overview.html.
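This percent-encoding rule is built into JavaScript as the encodeURIComponent function, which can be used to check or produce encoded forms (the sample strings are illustrative):

    console.log(encodeURIComponent('San Jose')); // San%20Jose
    console.log(encodeURIComponent('a&b;c'));    // a%26b%3Bc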
1.5.2 URL Paths

The path to the document for the HTTP protocol is similar to a path to a file or directory in the file system of an operating system and is given by a sequence of directory names and a file name, all separated by whatever separator character the operating system uses. For UNIX servers, the path is specified with forward slashes; for Windows servers, it is specified with backward slashes. Most browsers allow the user to specify the separators incorrectly, for example, using forward slashes in a path to a document file on a Windows server, as in the following:

http://www.gumboco.com/files/f99/storefront.html

The path in a URL can differ from a path to a file because a URL need not include all directories on the path. A path that includes all directories along the way is called a complete path. In most cases, the path to the document is relative to some base path that is specified in the configuration files of the server. Such paths are called partial paths. For example, if the server's configuration specifies that the root directory for files it can serve is files/f99, the previous URL is specified as follows:

http://www.gumboco.com/storefront.html

If the specified document is a directory rather than a single document, the directory's name is followed immediately by a slash, as in the following:

http://www.gumboco.com/departments/

Sometimes a directory is specified (with the trailing slash) but its name is not given, as in the following example:

http://www.gumboco.com/

The server then searches at the top level of the directory in which servable documents are normally stored for something it recognizes as a home page. By convention, this page is often a file named index.html. The home page usually includes links that allow the user to find the other related servable files on the server. If the directory does not have a file that the server recognizes as being a home page, a directory listing is constructed and returned to the browser.

1.6 Multipurpose Internet Mail Extensions

A browser needs some way of determining the format of a document it receives from a Web server. Without knowing the form of the document, the browser would be unable to render it, because different document formats require different rendering tools. The forms of these documents are specified with Multipurpose Internet Mail Extensions (MIME).

1.6.1 Type Specifications

MIME was developed to specify the format of different kinds of documents to be sent via Internet mail. These documents could contain various kinds of text, video data, or sound data. Because the Web has needs similar to those of Internet mail, MIME was adopted as the way to specify document types transmitted over the Web. A Web server attaches a MIME format specification to the beginning of the document that it is about to provide to a browser. When the browser receives the document from a Web server, it uses the included MIME format specification to determine what to do with the document. If the content is text, for example, the MIME code tells the browser that it is text and also indicates the particular kind of text it is. If the content is sound, the MIME code tells the browser that it is sound and then gives the particular representation of sound so that the browser can choose a program to which it has access to produce the transmitted sound.

MIME specifications have the following form:

type/subtype

The most common MIME types are text, image, and video. The most common text subtypes are plain and html. Some common image subtypes are gif and jpeg. Some common video subtypes are mpeg and quicktime. A list of MIME specifications is stored in the configuration files of every Web server. In the remainder of this book, when we say document type, we mean both the document's type and its subtype.

Servers determine the type of a document by using the file name's extension as the key into a table of types. For example, the extension .html tells the server that it should attach text/html to the document before sending it to the requesting browser.
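A sketch of such an extension-to-type table in JavaScript (Node.js) follows; the entries shown are only a sample, and the fallback type is an assumption rather than a universal default:

    const path = require('path');
    const mimeTypes = {
      '.html': 'text/html',
      '.txt':  'text/plain',
      '.gif':  'image/gif',
      '.jpeg': 'image/jpeg',
      '.mpeg': 'video/mpeg'
    };

    // Look up the MIME specification a server would attach to a file.
    function mimeTypeFor(fileName) {
      return mimeTypes[path.extname(fileName)] || 'text/plain';
    }

    console.log(mimeTypeFor('petunias.html')); // text/html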
Browsers also maintain a conversion table for looking up the type of a document by its file name extension. However, this table is used only when the server does not specify a MIME type, which may be the case with some older servers. In all other cases, the browser gets the document type from the MIME header provided by the server.

1.6.2 Experimental Document Types

Experimental subtypes are sometimes used. The name of an experimental subtype begins with x-, as in video/x-msvideo. Any Web provider can add an experimental subtype by having its name added to the list of MIME specifications stored in the Web provider's server. For example, a Web provider might have a handcrafted database whose contents he or she wants to make available to others through the Web. Of course, this raises the issue of how the browser can display the database. As might be expected, the Web provider must supply a program that the browser can call when it needs to display the contents of the database. These programs either are external to the browser, in which case they are called helper applications, or are code modules that are inserted into the browser, in which case they are called plug-ins.

Every browser has a set of MIME specifications (file types) it can handle. All can deal with text/plain (unformatted text) and text/html (HTML files), among others. Sometimes a particular browser cannot handle a specific document type, even though the type is widely used. These cases are handled in the same way as the experimental types described previously. The browser determines the helper application or plug-in it needs by examining the browser configuration file, which provides an association between file types and their required helpers or plug-ins. If the browser does not have an application or a plug-in that it needs to render a document, an error message is displayed.

1.7 The Hypertext Transfer Protocol

All Web communications transactions use the same protocol: the Hypertext Transfer Protocol (HTTP). The current version of HTTP is 1.1, formally defined as RFC 2616, which was approved in June 1999. RFC 2616 is available at the Web site for the World Wide Web Consortium (W3C), http://www.w3.org. This section provides a brief introduction to HTTP.

HTTP consists of two phases: the request and the response. Each HTTP communication (request or response) between a browser and a Web server consists of two parts: a header and a body. The header contains information about the communication; the body contains the data of the communication, if there is any.

1.7.1 The Request Phase

The general form of an HTTP request is as follows:

1. HTTP method Domain part of the URL HTTP version
2. Header fields
3. Blank line
4. Message body

The following is an example of the first line of an HTTP request:
GET /storefront.html HTTP/1.1

Only a few request methods are defined by HTTP, and an even smaller number of these are typically used. Table 1.1 lists the most commonly used methods.

Table 1.1 HTTP request methods

GET     Returns the contents of the specified document
HEAD    Returns the header information for the specified document
POST    Executes the specified document, using the enclosed data
PUT     Replaces the specified document with the enclosed data
DELETE  Deletes the specified document
Among the methods given in Table 1.1, GET and POST are the most frequently used. POST was originally designed for tasks such as posting a news article to a newsgroup. Its most common use now is to send form data from a browser to a server, along with a request to execute a program on the server that will process the data.

Following the first line of an HTTP communication is any number of header fields, most of which are optional. The format of a header field is the field name followed by a colon and the value of the field. There are four categories of header fields:

1. General: For general information, such as the date
2. Request: Included in request headers
3. Response: For response headers
4. Entity: Used in both request and response headers

One common request field is the Accept field, which specifies a preference of the browser for the MIME type of the requested document. More than one Accept field can be specified if the browser is willing to accept documents in more than one format. For example, we might have any of the following:

Accept: text/plain
Accept: text/html
Accept: image/gif

A wildcard character, the asterisk (*), can be used to specify that part of a MIME type can be anything. For example, if any kind of text is acceptable, the Accept field could be as follows:

Accept: text/*
The Host: host name request field gives the name of the host. The Host field is required for HTTP 1.1. The If-Modified-Since: date request field specifies that the requested file should be sent only if it has been modified since the given date.

If the request has a body, the length of that body must be given with a Content-length field, which gives the length of the body in bytes. POST method requests require this field because they send data to the server.

The header of a request must be followed by a blank line, which is used to separate the header from the body of the request. Requests that use the GET, HEAD, and DELETE methods do not have bodies. In these cases, the blank line signals the end of the request.

A browser is not necessary to communicate with a Web server; telnet can be used instead. Consider the following command, given at the command line of any widely used operating system:

> telnet blanca.uccs.edu http

This command creates a connection to the http port on the blanca.uccs.edu server. The server responds with the following:

Trying 128.198.162.60 ...
Connected to blanca
Escape character is '^]'.

The connection to the server is now complete, and HTTP commands such as the following can be given:

GET /~user1/respond.html HTTP/1.1
Host: blanca.uccs.edu
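The same exchange can be scripted. The following JavaScript (Node.js) sketch uses the built-in net module to open a raw connection and send the request by hand; a Connection: close field is added so that the server ends the connection when the response is complete:

    const net = require('net');
    const socket = net.connect(80, 'blanca.uccs.edu', () => {
      // Each header line ends with \r\n; the blank line ends the request.
      socket.write('GET /~user1/respond.html HTTP/1.1\r\n' +
                   'Host: blanca.uccs.edu\r\n' +
                   'Connection: close\r\n' +
                   '\r\n');
    });
    socket.on('data', (chunk) => process.stdout.write(chunk.toString()));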
1.7.2 The Response Phase

The general form of an HTTP response is as follows:

1. Status line
2. Response header fields
3. Blank line
4. Response body

The status line includes the HTTP version used, a three-digit status code for the response, and a short textual explanation of the status code. For example, most responses begin with the following:

HTTP/1.1 200 OK

The status codes begin with 1, 2, 3, 4, or 5. The general meanings of the five categories specified by these first digits are shown in Table 1.2.

Table 1.2 First digits of HTTP status codes

1  Informational
2  Success
3  Redirection
4  Client error
5  Server error

One of the more common status codes is one users never want to see: 404 Not Found, which means the requested file could not be found. Of course, 200 OK is what users want to see, because it means that the request was handled without error. The 500 code means that the server has encountered a problem and was not able to fulfill the request.

After the status line, the server sends a response header, which can contain several lines of information about the response, each in the form of a field. The only essential field of the header is Content-type. The following is the response header for the request given near the end of Section 1.7.1:
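A typical response header for such a request looks like the following; the date, server, and length values are illustrative rather than taken from an actual session:

    HTTP/1.1 200 OK
    Date: Sat, 25 Jul 2009 17:35:00 GMT
    Server: Apache/2.2.3 (CentOS)
    Last-Modified: Tue, 14 Jul 2009 16:45:13 GMT
    Content-Length: 1310
    Content-Type: text/html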
The response header must be followed by a blank line, as is the case for request headers. The response data follows the blank line. In the preceding example, the response body would be the HTML file, respond.html.

In HTTP versions prior to 1.1, when a server finished sending a response to the client, the communications connection was closed. However, the default operation of HTTP 1.1 is that the connection is kept open for a time so that the client can make several requests over a short span of time without needing to reestablish the communications connection with the server. This change led to significant increases in the efficiency of the Web.

1.8 Security
It does not take a great deal of contemplation to realize that the Internet and the Web are fertile grounds for security problems. On the Web server side, anyone on the planet with a computer, a browser, and an Internet connection can request the execution of software on any server computer. He or she can also access data and databases stored on the server computer. On the browser end, the problem is similar: any server to which the browser points can download software that is to be executed on the browser host machine. Such software can access parts of the memory and memory devices attached to that machine that are not related to the needs of the original browser request. In effect, on both ends, it is like allowing any number of total strangers into your house and trying to prevent them from leaving anything in the house, taking anything from the house, or altering anything in the house. The larger and more complex the design of the house, the more difficult it will be to prevent any of those activities. The same is true for Web servers and browsers: the more complex they are, the more difficult it is to prevent security breaches. Today's browsers and Web servers are indeed large and complex software systems, so security is a significant problem in Web applications.

The subject of Internet and Web security is extensive and complicated, so much so that more than a few books that focus on it have been written. Therefore, this one section of one chapter of one book can give no more than a brief sketch of some of the subtopics of security.

One aspect of Web security is the matter of getting one's data from the browser to the server and having the server deliver data back to the browser without anyone or any device intercepting or corrupting those data along the way. Consider just the simplest case, that of transmitting a credit card number to a company from which a purchase is being made. The security issues for this transaction are as follows:

1. Privacy: It must not be possible for the credit card number to be stolen on its way to the company's server.
2. Integrity: It must not be possible for the credit card number to be modified on its way to the company's server.
3. Authentication: It must be possible for both the purchaser and the seller to be certain of each other's identity.
4. Nonrepudiation: It must be possible to prove legally that the message was actually sent and received.

The basic tool to support privacy and integrity is encryption. Data to be transmitted is converted into a different form, or encrypted, such that someone (or some computer) who is not supposed to access the data cannot decrypt it. So, if data is intercepted while en route between Internet nodes, the interceptor cannot use the data because he or she cannot decrypt it. Both encryption and decryption are done with a key and a process (applying the key to the data).

Encryption was developed long before the Internet ever existed. Julius Caesar crudely encrypted the messages he sent to his field generals while at war. Until the middle 1970s, the same key was used for both encryption and decryption, so the initial problem was how to transmit the key from the sender to the receiver. This problem was solved in 1976 by Whitfield Diffie and Martin Hellman of Stanford University, who developed public-key encryption, a process in which a public key and a private key are used, respectively, to encrypt and decrypt messages.

A communicator, say Joe, has an inversely related pair of keys, one public and one private. The public key can be distributed to all organizations that might send Joe messages. All of them can use the public key to encrypt messages to Joe, who can decrypt the messages with his matching private key. This arrangement works because the private key need never be transmitted and also because it is virtually impossible to decrypt the private key from its corresponding public key. The technical wording for this situation is that it is computationally infeasible to determine the private key from its public key.
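Joe's situation can be sketched with the built-in crypto module of JavaScript (Node.js). The key size and the message are illustrative; real systems wrap such primitives in protocols such as TLS rather than calling them directly:

    const crypto = require('crypto');

    // Joe generates his inversely related key pair once.
    const { publicKey, privateKey } =
      crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

    // Anyone holding the public key can encrypt a message to Joe.
    const encrypted = crypto.publicEncrypt(publicKey, Buffer.from('1234 5678 9012 3456'));

    // Only Joe, holding the private key, can decrypt it.
    console.log(crypto.privateDecrypt(privateKey, encrypted).toString());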
The most widely used public-key algorithm is named RSA, developed in 1977 by three MIT professors, Ron Rivest, Adi Shamir, and Leonard Adleman, the first letters of whose last names were used to name the algorithm. Most large companies now use RSA for e-commerce.

Another, completely different security problem for the Web is the intentional and malicious destruction of data on computers attached to the Internet. The number of different ways this can be done has increased steadily over the life span of the Web. The sheer number of such attacks has also grown rapidly. There is now a continuous stream of new and increasingly devious denial-of-service (DoS) attacks, viruses, and worms being discovered, which have caused billions of dollars of damage, primarily to businesses that use the Web heavily. Of course, huge damage also has been done to home computer systems through Web intrusions.

DoS attacks can be created simply by flooding a Web server with requests, overwhelming its ability to operate effectively. Most DoS attacks are conducted with the use of networks of virally infected zombie computers, whose owners are unaware of their sinister use. So, DoS and viruses are often related.

Viruses are programs that often arrive in a system in attachments to e-mail messages or attached to free downloaded programs. Then they attach to other programs. When executed, they replicate and can overwrite memory and attached memory devices, destroying programs and data alike. Two viruses that were extensively destructive appeared in 2000 and 2001: the ILOVEYOU virus and the CodeRed virus, respectively. Worms damage memory, like viruses, but spread on their own, rather than being attached to other files. Perhaps the most famous worm so far has been the Blaster worm, spawned in 2003.

DoS, virus, and worm attacks are created by malicious people referred to as hackers. The incentive for these people apparently is simply the feeling of pride and accomplishment they derive from being able to cause huge amounts of damage by outwitting the designers of Web software systems. Protection against viruses and worms is provided by antivirus software, which must be updated frequently so that it can detect and protect against the continuous stream of new viruses and worms.

1.9 The Web Programmer's Toolbox

This section provides an overview of the most common tools used in Web programming; some are programming languages, some are not. The tools discussed are XHTML, a markup language, along with a few high-level markup document-editing systems; XML, a meta-markup language; JavaScript, PHP, and Ruby, which are programming languages; JSF, ASP.NET, and Rails, which are development frameworks for Web-based systems; Flash, a technology for creating and displaying graphics and animation in XHTML documents; and Ajax, a Web technology that uses JavaScript and XML.

Web programs and scripts are divided into two categories, client side and server side, according to where they are interpreted or executed. XHTML and XML are client-side languages; PHP and Ruby are server-side languages; JavaScript is most often a client-side language, although it can be used for both.

We begin with the most basic tool: XHTML.
1.9.1 Overview of XHTML

At the onset, it is important to realize that XHTML is not a programming language; it cannot be used to describe computations. Its purpose is to describe the general form and layout of documents to be displayed by a browser.

The word markup comes from the publishing world, where it is used to describe what production people do with a manuscript to specify to a printer how the text, graphics, and other elements in the book should appear in printed form. XHTML is not the first markup language used with computers. TeX and LaTeX are older markup languages for use with text; they are now used primarily to specify how mathematical expressions and formulas should appear in print.

An XHTML document is a mixture of content and controls. The controls are specified by the tags of XHTML. The name of a tag specifies the category of its content. Most XHTML tags consist of a pair of syntactic markers that are used to delimit particular kinds of content. The pair of tags and their content together are called an element. For example, a paragraph element specifies that its content, which appears between its opening tag, <p>, and its closing tag, </p>, is a paragraph. A browser has a default style (font, font style, font size, and so forth) for paragraphs, which is used to display the content of a paragraph element.

Some tags include attribute specifications that provide some additional information for the browser. In the following example, the src attribute specifies the location of the img tag's image content:

<img src = "redhead.jpg" />

In this case, the image document stored in the file redhead.jpg is to be displayed at the position in the document in which the tag appears.

XHTML 1.0 was introduced in early 2000 by the W3C as an alternative to HTML 4.01, which was at that time (and still is) the latest version of HTML. XHTML 1.0 is nothing more than HTML 4.01 with stronger syntactic rules. These stronger rules are those of XML (see Section 1.9.4). The current version, XHTML 1.1, was released in May 2001 as a replacement for XHTML 1.0, although, for various reasons, XHTML 1.0 is still widely used. Chapter 2, Introduction to XHTML, provides a description of a large subset of XHTML.

1.9.2 Tools for Creating XHTML Documents

XHTML documents can be created with a general-purpose text editor. There are two kinds of tools that can simplify this task: XHTML editors and what-you-see-is-what-you-get (WYSIWYG, pronounced wizzy-wig) XHTML editors. XHTML editors provide shortcuts for producing repetitious tags such as those used to create the rows of a table. They also may provide a spell-checker and a syntax-checker, and they may color code the XHTML in the display to make it easier to read and edit.
A more powerful tool for creating XHTML documents is a WYSIWYG XHTML editor. Using a WYSIWYG XHTML editor, the writer can see the formatted document that the XHTML describes while he or she is writing the XHTML code. WYSIWYG XHTML editors are very useful for beginners who want to create simple documents without learning XHTML and for users who want to prototype the appearance of a document. Still, these editors sometimes produce poor-quality XHTML. In some cases, they create proprietary tags that some browsers will not recognize.

Two examples of WYSIWYG XHTML editors are Microsoft FrontPage and Adobe Dreamweaver. Both allow the user to create XHTML-described documents without requiring the user to know XHTML. They cannot handle all of the tags of XHTML, but they are very useful for creating many of the common features of documents. Between the two, FrontPage is by far the most widely used. Information on Dreamweaver is available at http://www.adobe.com/; information on FrontPage is available at http://www.microsoft.com/frontpage/.

1.9.3 Plug-ins and Filters

Two different kinds of converters can be used to create XHTML documents. Plug-ins are programs that can be integrated together with a word processor. Plug-ins add new capabilities to the word processor, such as toolbar buttons and menu elements that provide convenient ways to insert XHTML into the document being created or edited. The plug-in makes the word processor appear to be an XHTML editor that provides WYSIWYG XHTML document development. The end result of this process is an XHTML document. The plug-in also makes available all the tools that are inherent in the word processor during XHTML document creation, such as a spell-checker and a thesaurus.

A second kind of converter is a filter, which converts an existing document in some form, such as LaTeX or Microsoft Word, to XHTML. Filters are never part of the editor or word processor that created the document, an advantage because the filter can then be platform independent. For example, a WordPerfect user working on a Macintosh computer can use a filter running on a UNIX platform to provide documents that can be later converted to XHTML. The disadvantage of filters is that creating XHTML documents with a filter is a two-step process: first you create the document, and then you use a filter to convert it to XHTML.

Neither plug-ins nor filters produce XHTML documents that, when displayed by browsers, have the identical appearance of that produced by the word processor. The two advantages of both plug-ins and filters, however, are that existing documents produced with word processors can be easily converted to XHTML and that users can use a word processor with which they are familiar to produce XHTML documents. This obviates the need to learn to format text by using XHTML directly. For example, once you learn to create tables with your word processor, it is easier to use that process than to learn to define tables directly in XHTML. The XHTML output produced by either filters or plug-ins often must be modified, usually with a simple text editor, to perfect the appearance of the displayed document in the browser. Because this new XHTML file cannot be converted to its original form (regardless of how it was created), you will have two different source files for a document, inevitably leading to version problems during maintenance of the document. This is clearly a disadvantage of using converters.

1.9.4 Overview of XML
HTML is defined with the use of the Standard Generalized Markup Language (SGML), which is a language for defining markup languages. (Such languages are called meta-markup languages.) XML (eXtensible Markup Language) is a simplified version of SGML, designed to allow users to easily create markup languages that fit their own needs. XHTML is defined with the use of XML. Whereas XHTML users must use the predefined set of tags and attributes, when a user creates his or her own markup language with XML, the set of tags and attributes is designed for the application at hand. For example, if a group of users wants a markup language to describe data about weather phenomena, that language could have tags for cloud forms, thunderstorms, and low-pressure centers. The content of these tags would be restricted to relevant data. If such data is described with XHTML, cloud forms could be put in generic tags, but then they could not be distinguished from thunderstorm elements, which would also be in the same generic tags.

Whereas XHTML describes the overall layout and gives some presentation hints for general information, XML-based markup languages describe data and its meaning through their individualized tags and attributes. XML does not specify any presentation details.

The great advantage of XML is that application programs can be written to use the meanings of the tags in the given markup language to find specific kinds of data and process it accordingly. The syntax rules of XML, along with the syntax rules for a specific XML-based markup language, allow documents to be validated before any application attempts to process their data. This means that all documents that use a specific markup language can be checked to determine whether they are in the standard form for such documents. Such an approach greatly simplifies the development of application programs that process the data in XML documents.

1.9.5 Overview of JavaScript

JavaScript is a client-side scripting language whose primary uses in Web programming are to validate form data and to create dynamic XHTML documents. The name JavaScript is misleading because the relationship between Java and JavaScript is tenuous, except for some of the syntax. One of the most important differences between JavaScript and most common programming languages is that JavaScript is dynamically typed. This strategy is virtually the opposite of that of strongly typed languages such as C++ and Java.

JavaScript programs are usually embedded in XHTML documents, which are downloaded from a Web server when they are requested by browsers. The JavaScript code in an XHTML document is interpreted by an interpreter embedded in the browser on the client.

One of the most important applications of JavaScript is to dynamically create and modify documents. JavaScript defines an object hierarchy that matches a hierarchical model of an XHTML document. Elements of an XHTML document are accessed through these objects, providing the basis for dynamic documents.

Chapter 4, The Basics of JavaScript, provides a more detailed look at JavaScript. Chapter 5, JavaScript and XHTML Documents, and Chapter 6, Dynamic Documents with JavaScript, discuss the use of JavaScript to provide access to, and dynamic modification of, XHTML documents.
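Dynamic typing can be shown in two assignments; the variable name is arbitrary:

    let quantity = 17;            // quantity currently holds a number
    quantity = 'seventeen';       // the same variable now holds a string
    console.log(typeof quantity); // prints "string"

In a strongly typed language such as Java, the second assignment would be rejected at compile time; in JavaScript, the type of a variable simply follows its most recent assignment.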
1.9.6 Overview of Flash

There are two components of Flash: the authoring environment, which is a development framework, and the player. Developers use the authoring environment to create static graphics, animated graphics, text, sound, and interactivity to be part of stand-alone HTML documents or to be part of other XHTML documents. These documents are served by Web servers to browsers, which use the Flash player plug-in to display the documents. Much of this development is done by clicking buttons, choosing menu items, and dragging and dropping graphics.

Flash makes animation very easy. For example, for motion animation, the developer needs only to supply the beginning and ending positions of the figure to be animated; Flash builds the intervening figures. The interactivity of a Flash application is implemented with ActionScript, a dialect of JavaScript.

Flash is now the leading technology for delivering graphics and animation on the Web. It has been estimated that nearly 99 percent of the world's computers used to access the Internet have a version of the Flash player installed as a plug-in in their browsers.

1.9.7 Overview of PHP

PHP is a server-side scripting language specifically designed for Web applications. PHP code is embedded in XHTML documents, as is the case with JavaScript. With PHP, however, the code is interpreted on the server before the XHTML document is delivered to the requesting client. A requested document that includes PHP code is preprocessed to interpret the PHP code and insert its output into the XHTML document. The browser never sees the embedded PHP code and is not aware that a requested document originally included such code.

PHP is similar to JavaScript, both in terms of its syntactic appearance and in terms of the dynamic nature of its strings and arrays. Both JavaScript and PHP use dynamic data typing, meaning that the type of a variable is controlled by the most recent assignment to it. PHP's arrays are a combination of dynamic arrays and hashes (associative arrays). The language includes a large number of predefined functions for manipulating arrays. PHP allows simple access to XHTML form data, so form processing is easy with PHP. PHP also provides support for many different database management systems. This versatility makes it an excellent language for building programs that need Web access to databases.

1.9.8 Overview of Ajax

Ajax, shorthand for Asynchronous JavaScript + XML, has been around for a few years, but did not acquire its catchy name until 2005. The idea of Ajax is relatively simple, but it results in a different way of viewing and building Web interactions. This new approach produces an enriched Web experience for those using a certain category of Web interactions.

In a traditional (as opposed to Ajax) Web interaction, the user sends messages to the server either by clicking a link or by clicking a form's Submit button. After the link has been clicked or the form has been submitted, the client waits until the server responds with a new document. The entire browser display is then replaced by the new document. Complicated documents take a significant amount of time to be transmitted from the server to the client and more time to be rendered by the browser. In Web applications that require frequent interactions with the client and remain active for a significant amount of time, the delay in receiving and rendering a complete response document can be disruptive to the user.
In an Ajax Web application, there are two variations from the traditional Web interaction. First, the communication from the browser to the server is asynchronous; that is, the browser need not wait for the server to respond. Instead, the browser user can continue whatever he or she was doing while the server finds and transmits the requested document and the browser renders the new document. Second, the document provided by the server usually is only a relatively small part of the displayed document, and therefore it takes less time to be transmitted and rendered. These two changes can result in much faster interactions between the browser and the server.

The x in Ajax, from XML, is there because in many cases the data supplied by the server is in the form of an XML document, which provides the new data to be placed in the displayed document. However, in some cases the data is plain text, which may even be JavaScript code. It can also be XHTML.

The goal of Ajax is to have Web-based applications become closer to desktop (client-resident) applications, in terms of the speed of interactions and the quality of the user experience. Wouldn't we all like our Web-based applications to be as responsive as our word processors?

Ajax has some advantages over the competing technologies of ASP.NET and JSP. First and foremost, the technologies that support Ajax are already resident in nearly all Web browsers and servers. This is in contrast to both ASP.NET and JSP, which still have far-from-complete coverage. Second, using Ajax does not require learning a new tool or language. Rather, it requires only a new way of thinking about Web interactions. Ajax is discussed in more depth in Chapter 10, Introduction to Ajax.
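The asynchronous request at the heart of Ajax is made with the browser's XMLHttpRequest object. The following sketch fetches a document fragment and places it in one element of the displayed page; the URL and the element id are hypothetical:

    const xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
      // readyState 4 means the response is complete; status 200 means it succeeded.
      if (xhr.readyState === 4 && xhr.status === 200) {
        // Replace only one part of the displayed document, not the whole page.
        document.getElementById('results').innerHTML = xhr.responseText;
      }
    };
    xhr.open('GET', 'partialUpdate.html', true); // true requests asynchronous operation
    xhr.send();

The user can keep working while the request is in flight; when the response arrives, only the results element changes.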
1.9.9 Overview of Servlets, JavaServer Pages, and JavaServer Faces

There are many computational tasks in a Web interaction that must occur on the server, such as processing order forms and accessing server-resident databases. A Java class called a servlet can be used for these applications. A servlet is a compiled Java class, an object of which is executed on the server system when requested by the browser.