Overview Of Web Page Optimization Factors Computer Science Essay

Published: November 9, 2015

Survey research on website design shows that a good design is one that facilitates a user's web browsing behavior, leading to better user performance. In this paper, website browsing behavior is examined using a "think aloud" protocol analysis, and the processes at work during browsing are discussed along with the factors that help to optimize web pages. Badly designed websites irritate users and cause them to leave because they cannot find what they need. This paper aims to help improve the performance of a website. It covers both server-side and client-side issues, because many web sites are on intranets, where a system administrator has control over the client and the network. Server-side techniques include improving parallelism, using cache control and HTTP compression, rewriting URLs, and using delta compression. Client-side techniques include lazy-loading JavaScript, caching off-site files on the server and loading them locally, and JavaScript optimization and packing. Response time guidelines are also explored. The paper offers the following web performance techniques to streamline the contents of web pages: a) Query Execution, b) Minimize HTTP requests, c) Resize and optimize images, d) Optimize multimedia, e) Use server-side sniffing, f) Optimize JavaScript for execution speed and file size, g) Convert JavaScript behavior to CSS, h) Convert table layout to CSS layout, i) Replace inline style with CSS rules, j) Minimize initial display time, and k) Load JavaScript wisely. The discussion of CSS optimization reveals how to optimize and modularize CSS to streamline HTML by up to 50%, giving ten techniques that cover shorthand properties and grouping, descendant selectors to replace inline style, and substituting CSS techniques for JavaScript behavior. The concept of Ajax optimization is also explored, with ways to optimize JavaScript code and make Ajax applications more robust. Optimized use of JavaScript also boosts the interactivity of a website.

Keywords: Website design, performance, CSS, HTTP, AJAX, JavaScript

1.1 Introduction

Websites are designed to help users find what they need and to increase their satisfaction while accomplishing their tasks [ABD01]. Web performance optimization simplifies content and helps the server work efficiently so that web pages can be accessed faster. A large number of metrics and tools have been defined for measuring and optimizing search engine marketing (SEM) campaigns, thereby improving the performance of a website.

Users get frustrated when page load times exceed 8 to 10 seconds [BOU00]. According to industry statistics, users generally abandon a web page when its load time exceeds 8 seconds. A JupiterResearch survey found that 33% of broadband shoppers do not wait more than four seconds for a web page to load, while 43% of narrowband users do not wait more than six seconds [AKA06]. Only 10-20% of end-user response time is spent getting the HTML document from the web server to the browser; to reduce web page response time, the remaining 80-90% of the end-user experience must be addressed.

In a survey, Google found that moving from a 10-result page loading in 0.4 seconds to a 30-result page loading in 0.9 seconds decreased traffic and ad revenues by 20% [LIN06]. When the Google Maps home page was reduced from 100 KB to 70-80 KB, traffic went up 10% in the first week and an additional 25% in the following three weeks [FAR06]. Amazon reported similar results: every 100 ms increase in the load time of Amazon.com decreased sales by 1% [KOH07].

Speed being the second most important factor, an attempt can be made to display the initial useful content in less than one or two seconds by layering and streamlining it. Fast display speed increases profits, decreases costs, and improves customer satisfaction [SKA04].

1.2 Factors to be optimized

Streamlining transforms pages so that they display navigable content faster and defer or delay off-site content. Certain techniques can increase the performance of a website, namely server-side techniques and client-side techniques. To maximize web page display speed, the following techniques can be used to optimize web pages: a) Query Execution, b) Minimize HTTP requests, c) Resize and optimize images, d) Optimize multimedia, e) Use server-side sniffing, f) Optimize JavaScript for execution speed and file size, g) Convert JavaScript behavior to CSS, h) Convert table layout to CSS layout, i) Replace inline style with CSS rules, j) Minimize initial display time, and k) Load JavaScript wisely.

1.2.1 Server-Side Optimization Techniques

To optimize high-traffic pages, the following server-side techniques can be used: optimizing parallel downloads, caching frequently used objects, using HTTP compression, delta encoding (delta compression), and rewriting URIs with mod_rewrite.

1) Optimization of Parallel Downloads - Response time depends on the number of components in the page, since each component generates an HTTP request/response. The HTTP 1.1 specification of 1999 recommended that browsers download no more than two objects per hostname [FIE99]. This resulted in slower load times for sites hosted on one domain, with objects loaded two at a time. With improvements in bandwidth and proxy servers, parallelism can be increased by using multiple domains (or subdomains) to deliver objects.

2) Caching Frequently Used Objects - Caching is the temporary storage of frequently accessed data. It helps to avoid costly HTTP requests and thereby improves performance [THE06]. By storing "fresh" objects closer to users, unnecessary HTTP requests can be avoided and DNS "hops" can be minimized, resulting in reduced bandwidth consumption, lower server load, and improved response times. Yahoo! estimates that between 62% and 95% of the time used to fetch a web page is spent making HTTP requests for objects. A caching module named mod_cache helps to accelerate HTTP traffic: it implements a content cache for local or proxied content and improves performance by temporarily storing resources in faster storage.

3) Using HTTP Compression - HTTP compression is a way to compress textual content transferred from web servers to browsers. It uses compression algorithms such as gzip to compress HTML, JavaScript, CSS, XML, and other text-based files at the server. HTTP headers negotiate the compression: a compression-aware browser announces the encodings it accepts in a request header, and an HTTP 1.1-compliant server then delivers the requested document using an encoding that is acceptable to the client (a sketch of points 2 and 3 follows this list).

4) Delta Encoding (Delta Compression) - Delta encoding updates web pages by sending only the differences between versions of a page. The server (proxy or origin) sends only what has changed since the last access, greatly reducing the amount of data sent. Since about 32% of page accesses are first-time visits, about 68% of page visits are eligible for delta compression [SAV04]. Sending deltas for the same URI assumes that the client has accessed the page in the past; according to Mogul, this is the case for only about 30% of web pages [MOG97]. Delta compression for pages at different URIs achieves more modest compression ratios than the same-URI method.

5) Rewriting URIs with mod_rewrite - mod_rewrite maps URIs from one form to another and can be used to abbreviate URIs to save bytes or to create more search-friendly URIs. For example, short URIs such as r/29 can be used in place of longer ones such as http://travel.yahoo.com to save space. Apache, IIS, Manila, and Zope all support this technique. Yahoo! and other popular sites use URI abbreviation to shave 20% to 30% off HTML file size.
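As a concrete illustration of cache control and HTTP compression (points 2 and 3 above), the following is a minimal sketch assuming a Node.js origin server; the platform, the file name index.html, and the port number are assumptions for illustration only, not part of the paper.

const http = require('http');
const zlib = require('zlib');
const fs = require('fs');

http.createServer(function (req, res) {
  res.setHeader('Content-Type', 'text/html; charset=utf-8');
  // Let clients and proxies keep this "fresh" object for one day,
  // avoiding repeat HTTP requests (technique 2).
  res.setHeader('Cache-Control', 'public, max-age=86400');

  const page = fs.createReadStream('index.html');
  const acceptEncoding = req.headers['accept-encoding'] || '';
  if (/\bgzip\b/.test(acceptEncoding)) {
    // The browser is compression-aware, so deliver gzip-encoded content
    // and label it with the Content-Encoding response header (technique 3).
    res.setHeader('Content-Encoding', 'gzip');
    page.pipe(zlib.createGzip()).pipe(res);
  } else {
    page.pipe(res);
  }
}).listen(8080);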

1.2.2 Client-Side Performance Techniques

The loading of some content can be delayed to boost the initial display speed of web pages, and progressive enhancement can be employed to layer functionality over HTML elements. Inline images reduce HTTP requests for the browsers that support them. Some of the client-side techniques are:

1) Delay Script Loading - Scripts in the head of HTML documents are processed before the body content is parsed and displayed, so page load times can be improved by delaying the loading of scripts until the body content has displayed. For nonessential services (advertising, interface enhancements, surveys, etc.) this technique can boost the initial display speed of pages; a sketch is given after this list.

2) Cache Off-Site Files on the Server and Load Locally - For non-real-time content, the off-site file can be cached locally. For more real-time content, such as stock quotes, the data can be grabbed periodically and the previous entry cached; if a fetch returns a bad result, the previous entry can be used [EIS99].

3) JavaScript Optimization and Packing - A number of JavaScript packers remove whitespace and comments and abbreviate variable names, and some packers also remap object names. Rhino, courtesy of the Mozilla Project, analyzes the code with a JavaScript parser, minimizing the possibility of errors.
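The following is a minimal sketch of delayed script loading; the nonessential script name survey-widget.js is hypothetical and used for illustration only.

// Append a script element only after the window's load event, so the
// nonessential script no longer blocks the display of body content.
function loadDeferredScript(src) {
  var script = document.createElement('script');
  script.src = src;
  document.body.appendChild(script);
}

window.addEventListener('load', function () {
  loadDeferredScript('survey-widget.js'); // hypothetical advertising/survey script
});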

1.3 Measurement of Optimization Factors

The following steps can be taken to enhance the performance of a web page.

Step 1: Query Execution - One of the first steps in improving the performance of a site is to look at database connections and query optimization. The query optimization techniques are:

1. Using a WHERE clause - This can yield good performance benefits, because SQL Server returns only the qualifying rows, not all rows from the table(s), reducing network traffic and boosting the overall performance of the query. For example:

SELECT I.itemId IitemId, max(B.amount) bidAmount, max(B.bidDate) lastBid, I.value,
       Co.name CoName, Co.companyId CoCompanyId, B.username
FROM item I
LEFT JOIN company Co ON Co.companyId=I.companyId
LEFT JOIN (SELECT itemId, amount, bidDate, username
           FROM bid B
           LEFT JOIN account A ON A.accountId=B.accountId
           ORDER BY amount DESC) B ON B.itemId=I.itemId
GROUP BY I.itemId
ORDER BY B.bidDate DESC
LIMIT 20

The above query can be rewritten using a WHERE clause:

SELECT I.itemId IitemId, max(B.amount) bidAmount, max(B.bidDate) lastBid, I.value,
       Co.name CoName, Co.companyId CoCompanyId, B.username
FROM item I
LEFT JOIN company Co ON Co.companyId=I.companyId
LEFT JOIN (SELECT itemId, amount, bidDate, username
           FROM bid B
           LEFT JOIN account A ON A.accountId=B.accountId
           ORDER BY amount DESC) B ON B.itemId=I.itemId
WHERE B.bidDate > '2007-11-20 12:00:20'
GROUP BY I.itemId
ORDER BY B.bidDate DESC
LIMIT 20

2. Selecting only the needed columns from the table, not all of the table's columns - SQL Server will then return only the specified columns rather than every column, reducing network traffic and enhancing the overall performance of the query. For example, the query can be written as:

SELECT id, first_name, last_name, subject FROM student_details;

instead of:

SELECT * FROM student_details;

3. Usage of views and stored procedures - A stored procedure is a set of Structured Query Language (SQL) statements with an assigned name that is stored in the database in compiled form so that it can be shared by a number of programs. The client sends only the stored procedure or view name to the server instead of the text of a large, heavy-duty query, reducing network traffic. Stored procedures are also helpful in controlling access to data, preserving data integrity, and improving productivity.

4. Avoiding SQL Server cursors, whenever possible - Subqueries or derived tables can be used for row-by-row operations, as SQL Server cursors degrade performance in comparison with a select statement. For example, write the query as:

SELECT name FROM employee
WHERE (salary, age) = (SELECT MAX(salary), MAX(age) FROM employee_details)
AND dept = 'Electronics';

instead of:

SELECT name FROM employee
WHERE salary = (SELECT MAX(salary) FROM employee_details)
AND age = (SELECT MAX(age) FROM employee_details)
AND emp_dept = 'Electronics';

5. Using sysindexes instead of SELECT COUNT(*) - To determine the total row count in a table, the sysindexes system table can be used. Thus, the following select statement can be used instead of SELECT COUNT(*):

SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2

6. Using constraints in place of triggers, whenever possible - Constraints are much more efficient than triggers and can boost performance, so constraints should be used instead of triggers whenever possible. For example:

CREATE TABLE Employees ( ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY, FirstName VARCHAR(100) NOT NULL DEFAULT '', LastName VARCHAR(100) NOT NULL DEFAULT '');

CREATE TABLE SalariedEmployees ( ID INT NOT NULL PRIMARY KEY REFERENCES Employees(ID), Salary DECIMAL(12,2) NOT NULL);

The "REFERENCES Employees(ID)" defines a foreign key constraint. 7. Table variables used instead of temporary tables - Table variables require less locking and logging resources than temporary tables. Therefore, table variables, available in SQL Server 2000 only, should be used whenever possible. 8. Avoid HAVING clause, whenever possible - HAVING clause is used to filter the rows after all the rows are selected. This is used to restrict the result set returned by the GROUP BY clause. In select statement it contains only WHERE and GROUP BY clauses without HAVING clause improving the performance of the query. For Example: The query is written as

SELECT subject, count(subject)
FROM student_details
WHERE subject != 'Science' AND subject != 'Maths'
GROUP BY subject;

instead of:

SELECT subject, count(subject)
FROM student_details
GROUP BY subject
HAVING subject != 'Science' AND subject != 'Maths';

9. Avoiding the DISTINCT clause, whenever possible - Using the DISTINCT clause degrades performance, so it should only be used when necessary. For example, the query can be written as:

SELECT d.dept_id, d.dept FROM dept d
WHERE EXISTS (SELECT 'X' FROM employee e WHERE e.dept = d.dept);

instead of:

SELECT DISTINCT d.dept_id, d.dept FROM dept d, employee e WHERE e.dept = d.dept;

10. Including the SET NOCOUNT ON statement in stored procedures - This decreases network traffic because the client does not receive the message indicating the number of rows affected by a T-SQL statement.

11. Using select statements with the TOP keyword or the SET ROWCOUNT statement to return only the first n rows - This improves the performance of queries that return a smaller result set and also reduces traffic between the server and the clients.

12. Using the FAST number_rows table hint to quickly return 'number_rows' rows - The first n rows can be retrieved and worked with while the query continues execution and produces its full result set.

13. Using UNION ALL instead of UNION, whenever possible - The UNION ALL statement is much faster than UNION, because UNION ALL does not look for duplicate rows, whereas UNION looks for duplicate rows whether or not they exist. For example:

SELECT id, first_name FROM student_details_class10
UNION ALL
SELECT id, first_name FROM sports_team;

instead of:

SELECT id, first_name FROM student_details_class10
UNION
SELECT id, first_name FROM sports_team;

14. Avoiding optimizer hints in queries - The SQL Server query optimizer generally chooses a good plan on its own; forcing it with optimizer hints can hurt performance.

15. Using user-defined functions to encapsulate code for reuse - User-defined functions (UDFs) contain one or more Transact-SQL statements that can be used to encapsulate code for reuse, reducing network traffic.

16. Specifying the index keys in ascending or descending order - Using the CREATE INDEX statement with the DESC option (descending order) increases the speed of queries that return rows in descending order.

17. Deleting all of a table's rows with TRUNCATE TABLE instead of the DELETE command - TRUNCATE TABLE removes all rows from a table without logging the individual row deletes.

18. Not using Enterprise Manager to access remote servers over a slow link or to maintain very large databases - Enterprise Manager is very resource expensive; stored procedures and T-SQL statements should be used instead for maintaining very large databases.

19. Using SQL Server cursors to let the application fetch a small subset of rows instead of fetching all of a table's rows - SQL Server cursors allow the application to fetch any block of rows from the result set, including the next n rows, the previous n rows, or n rows starting at a certain row number, resulting in reduced network traffic.

20. Choosing between EXISTS, IN, and table joins in the query - IN has the slowest performance; it is efficient when most of the filter criteria are in the sub-query, while EXISTS is efficient when most of the filter criteria are in the main query. For example, write the query as:

SELECT * FROM product p
WHERE EXISTS (SELECT * FROM order_items o WHERE o.product_id = p.product_id)

instead of:

SELECT * FROM product p
WHERE product_id IN (SELECT product_id FROM order_items)

21. Using char/varchar columns instead of nchar/nvarchar - A char/varchar value uses only one byte to store each character, while an nchar/nvarchar value uses two bytes per character, so char/varchar columns use half the space of nchar/nvarchar columns.

22. Using cascading referential integrity constraints instead of triggers - To make cascading deletes or updates, the ON DELETE or ON UPDATE clause is specified in the REFERENCES clause of the CREATE TABLE or ALTER TABLE statement.

23. Calling stored procedures by their fully qualified names - A fully qualified name is an object name that specifies the server name, database name, owner name, and object name. Using fully qualified names can boost performance, as SQL Server has a better chance of reusing a stored procedure's execution plan if it is executed using its fully qualified name.

24. Returning an integer value with the RETURN statement instead of as part of a recordset - Though the RETURN statement is generally used for error checking, it can be used to return an integer value for any other reason. Using the RETURN statement can boost performance because SQL Server does not create a recordset.

25. Not using the prefix "sp_" in stored procedure names - Microsoft does not recommend the prefix "sp_" in user-created stored procedure names, because SQL Server always looks for a stored procedure beginning with "sp_" in the following order: first in the master database, then by the fully qualified name provided, and then using dbo as the owner if one is not specified.

26. Using the sp_executesql stored procedure instead of the EXECUTE statement - The sp_executesql stored procedure supports parameters, so using it instead of the EXECUTE statement improves the readability of the code.

27. Creating indexes on columns with integer values rather than character values - Integer values are usually smaller than character values, so the number of index pages used to store the index keys can be reduced. This reduces the number of reads required to read the index and boosts overall index performance.

Step 2: Minimize HTTP Requests

A simple way to improve response time is to reduce the number of components in a page and thereby the number of HTTP requests. Every object in a web page requires an HTTP request/response and introduces indeterminate delays. With fewer HTTP requests, load times can be reduced. To achieve this, files can be combined and graphics-based techniques converted to CSS: converting graphical text to CSS text, combining external images, scripts, and CSS files, eliminating frames and JavaScript includes, converting spacer cells into CSS margins, replacing JavaScript behavior with CSS :hover techniques, and combining multiple decorative images into one CSS sprite (a grid of images merged into one composite image). The techniques for minimizing HTTP requests are:

a) Convert graphical text to styled text - Graphical text is often used for headers or menu items to achieve a certain look. CSS can be used to style headers instead, and image replacement schemes can substitute static or dynamic images for text where needed.

b) Use text overlays - Here graphical text is separated from its background image. Keeping high-quality text in a JPEG would otherwise require increasing the quality of the whole image; a graphical text overlay over a smaller background image costs one additional HTTP request but saves bytes.

c) Convert spacer cells to CSS margins or padding - Spacer cells typically use a single-pixel GIF that is stretched to enforce the spacing distance; CSS margins or padding achieve the same spacing without the extra image.

d) Combine remaining images into a map or sprite - The number of HTTP requests can be reduced by combining images into one composite image and mapping any links using an image map. Instead of multiple HTTP requests, this technique requires only one.

e) Combine and optimize CSS and JavaScript files - Combining the CSS and JavaScript files referenced in the head of HTML documents reduces requests; separate style sheets can still be created and imported into pages as needed.

f) Suture CSS or JavaScript files - HTTP requests can be reduced by automatically combining external CSS or JavaScript files, suturing them together on the server (see the sketch after this list).

g) Cache dynamic files - For caching dynamic files, headers such as "header('Cache-control: must-revalidate');" can be added to the top of the file.

h) Put CSS at the top, JavaScript at the bottom - According to Steve Souders, moving stylesheets to the top, in the head element, makes pages load faster by allowing them to render progressively; external JavaScript files should be moved to the bottom of the page, or their loading delayed or deferred in the head.

i) Eliminate (i)frames and JavaScript includes - More than 52% of web pages use frames, the majority of which are iframes used to display advertising [LIV07]. Iframes and JavaScript includes can be harmful to web performance because they introduce extra HTTP requests and can include entire web pages within other web pages, so they should be eliminated where possible.
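The following is a minimal sketch of the "suturing" idea in item f, assuming a Node.js server; the bundle URL /bundle.js and the file names menu.js, rollover.js, and validation.js are hypothetical. Three external scripts are concatenated into a single response, so the browser issues one HTTP request instead of three.

const fs = require('fs');
const http = require('http');

// Hypothetical scripts that would otherwise be three separate <script src> requests.
const bundleFiles = ['menu.js', 'rollover.js', 'validation.js'];

http.createServer(function (req, res) {
  if (req.url === '/bundle.js') {
    res.setHeader('Content-Type', 'application/javascript');
    // Join the files with a defensive semicolon so one file cannot break the next.
    const bundle = bundleFiles
      .map(function (name) { return fs.readFileSync(name, 'utf8'); })
      .join('\n;\n');
    res.end(bundle);
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080);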

Step 3: Resize and Optimize Images

Images should be cropped and resized to their final dimensions and optimized with a good-quality graphics program such as Photoshop or Fireworks. The main aim is to reduce each image to the lowest acceptable quality and resolution for the Web (72 dpi).

Step 4: Optimize Multimedia

Multimedia accounts for a small share of server requests but the majority of traffic on the Internet. The techniques used for optimizing multimedia are:

a) Optimize videos for the Web - Web movies should be short in duration, small in dimension, and optimized with the appropriate codec.

b) Video frame rates and dimensions - Higher frame rates (frames per second, or fps) make motion appear more fluid. For greater usability the frame rate can be reduced to as little as 8 fps, although frame rates lower than 12 to 15 fps reduce users' perception of video quality [GUL06].

c) Video production tips: minimize noise and movement - The more noise there is in a video, the less it can be compressed and the larger the final result; for highly optimized, high-quality videos, camera motion can be minimized with a tripod.

d) Editing video - After capturing video with minimal noise, the frames are edited and tested for playback. Longer videos can be broken up into smaller segments.

e) Compressing videos for the web - After the video has been prepared and adjusted, it is compressed so that it can be successfully streamed or downloaded to the target audience. This process is called encoding in the industry.

Step 5: Use Server-Side Sniffing

Browser sniffing is a technique used in websites and web applications to determine which web browser a visitor is using, so that browser-appropriate content can be served to the visitor. The web server communicates with the client through the HTTP protocol, and the information exchanged between client and server includes information about the browser being used to view the website.
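As a minimal sketch (assuming a Node.js server; the stylesheet names legacy.css and modern.css are hypothetical), the User-Agent request header can be inspected on the server and browser-appropriate markup returned:

const http = require('http');

http.createServer(function (req, res) {
  // The User-Agent header identifies the visitor's browser.
  const userAgent = req.headers['user-agent'] || '';
  res.setHeader('Content-Type', 'text/html');
  if (/MSIE|Trident/.test(userAgent)) {
    // Serve a stylesheet tailored to older Internet Explorer versions.
    res.end('<link rel="stylesheet" href="legacy.css">');
  } else {
    res.end('<link rel="stylesheet" href="modern.css">');
  }
}).listen(8080);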

Step 6: Optimize JavaScript for Execution Speed and File Size

JavaScript can be optimized to minimize file size. W3compiler, an automated tool, can be used to automatically abbreviate and whitespace-optimize scripts.
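As a small, hand-made illustration of what such abbreviation amounts to (the function and variable names are invented for this example, not taken from W3compiler's output):

// Before: readable names and whitespace add bytes that must be downloaded.
function calculateShippingCost(orderTotal) {
  var shippingCost = orderTotal * 0.05;
  return shippingCost;
}

// After: whitespace removed and identifiers abbreviated; behavior is unchanged.
function c(t){var s=t*0.05;return s}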

Step 7: Convert JavaScript Behavior to CSS

JavaScript is commonly used for form validation, menus and rollovers, browser sniffing, statistics, and Ajax applications. For menus and rollovers in particular, CSS can take over the behavior, controlling drop-down menus and rollover effects with the :hover pseudo-class.
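For illustration, the following sketch shows the kind of rollover script that can be retired; the element id and image names are hypothetical, and a CSS equivalent is shown in the trailing comments.

// Old-style JavaScript rollover: two event handlers and an image swap per link.
var navHome = document.getElementById('nav-home');
navHome.onmouseover = function () { this.src = 'home-over.gif'; };
navHome.onmouseout  = function () { this.src = 'home-off.gif'; };

// The same effect needs no JavaScript when expressed with background images
// and the :hover pseudo-class, e.g.:
//   #nav-home { background-image: url(home-off.gif); }
//   #nav-home:hover { background-image: url(home-over.gif); }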

Step 8: Convert Table Layout to CSS Layout

CSS can be used to position the entire layout of a page or to format smaller sections of web pages. Multicolumn layouts can be created using CSS floats and margins applied to divs.

Step 9: Replace Inline Style with CSS Rules

Replacing table layout with CSS layout saves bandwidth and reduces maintenance headaches. Stripping down the markup and replacing any inline style with CSS rules helps to optimize the HTML further.

Step 10: Minimize Initial Display Time

The perceived speed of a web page can be improved by loading something useful fast. Useful content can be made to display quickly by layering tables or divs.

Step 11: Load JavaScript Wisely

External scripts referenced in the head of a page are harmful because they delay the display of body content. Human-computer interaction (HCI) research has shown that delays before viewing a page are less frustrating than delays after the page has loaded. Post-loading delays are a common problem with Ajax-enabled pages, and Ajax can make things especially difficult for narrowband users: even with HTTP compression, the latency of grabbing many separate files can cause indeterminate delays.

1.4 CSS Optimization

Cascading Style Sheet (CSS) optimization transforms HTML by abstracting inline style and behavior into minimal stylesheets. CSS optimization helps the web page load faster and makes it more search engine-friendly.

1.4.1 Different measures used for optimizing CSS

There are ten best practices for optimizing CSS:

a) Replace Inline Style with Type Selectors - Web pages that use inline style hardcode the presentation directly within the HTML; type selectors move that presentation into the stylesheet, streamlining the markup.

b) Use Descendant Selectors - Descendant (or contextual) selectors target elements contained within other elements, using the inherent structure of the markup.

c) Group Selectors with Common Declarations - Multiple selectors that share the same declarations can be grouped into one rule, separated by commas, to save space (for example, h1, h2, h3 { color: #036; }).

d) Group Declarations with Common Selectors - CSS also allows grouping multiple declarations that share the same selector into one rule set, separated by semicolons.

e) Combine Common Styles into Shared Classes - Common declarations can be combined into separate, shared classes during optimization.

f) Use Inheritance to Eliminate Duplicate Declarations - Inheritance can be used so that declarations set higher in the document tree do not have to be repeated on descendant elements.

g) Use CSS Shorthand - CSS properties and colors can be written in longhand or shorthand notation. Longhand CSS explicitly spells out every related property in detail, while shorthand properties use the shorthand built into CSS for popular properties including font, border, and margin (for example, margin: 10px 5px; instead of four separate margin properties).

h) Abbreviate Long Class and ID Names - Long class names are easy for designers to understand, but users must download the extra bytes.

i) Use CSS2 and CSS3.x Techniques - Attribute selectors, introduced in CSS2, allow targeting elements whose attributes match certain characteristics. In CSS2.1, these selectors can match attribute characteristics in four ways: [att], [att=val], [att~=val], and [att|=val].

j) Replace JavaScript Behavior with CSS Techniques - JavaScript behavior such as rollovers and menus can be converted to CSS, styled text can be used in place of graphical text, and CSS-styled lists can be used in place of tables.

1.5 Ajax Optimization

Asynchronous JavaScript and XML (Ajax), a term coined by Jesse James Garrett, is a way to boost the interactivity of websites. It is a combination of Cascading Style Sheets (CSS), XHTML, JavaScript, and XML or JavaScript Object Notation (JSON), used to exchange data with the server asynchronously.

The best practices for optimizing the performance, stability, and usability of Ajax applications are: applying Ajax appropriately to a problem, using a well-constructed and supported Ajax library, minimizing the JavaScript code footprint, reducing HTTP request requirements, choosing the correct data format for transmission, ensuring that network availability and performance concerns are addressed, employing a JavaScript cache, polling carefully for user input, providing a fallback mechanism for search engines and accessibility when JavaScript is turned off, and saving state with the fragment identifier.
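Two of these practices, employing a JavaScript cache and reducing HTTP requests, can be sketched as follows; the endpoint /quotes and its symbol parameter are hypothetical and used only for illustration.

// Cache earlier Ajax responses so repeated requests for the same data
// are answered locally instead of generating another HTTP request.
var responseCache = {};

function fetchQuote(symbol, callback) {
  if (responseCache[symbol]) {
    callback(responseCache[symbol]);
    return;
  }
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/quotes?symbol=' + encodeURIComponent(symbol), true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      responseCache[symbol] = xhr.responseText;
      callback(xhr.responseText);
    }
  };
  xhr.send(null);
}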

1.6 JavaScript Optimization

The main techniques for JavaScript optimization are:

1) Remove JavaScript Comments - Comments delimited by // or /* */ can be removed, as they only increase file size.

2) Reduce Whitespace Carefully - JavaScript is fairly whitespace-agnostic, so the whitespace between operators can easily be reduced.

3) Use JavaScript Shorthand - A number of shorthand JavaScript statements can be used to shave off bytes.

4) Use String Constant Macros - For strings, or parts of strings, that repeat over and over again, a simple string macro that remaps them to a global variable can shave off a number of bytes.

5) Avoid Optional Constructs and Kill Dead Code Fast - Optional constructs and dead code can be removed without harming the script.

6) Shorten User-Defined Variables and Function Names - Lengthy variable and function names should be avoided.

7) Remap Built-in Objects - Beyond long user-defined variable names, much of the bulk of JavaScript code comes from repeated references to built-in objects such as window, document, and navigator, which can be remapped to short variables.
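A small before/after sketch of techniques 3, 6, and 7 above (the variable and element names are invented for this example):

// Before: long user-defined names and repeated references to the document object.
var departmentField = document.getElementById('dept');
var departmentName = departmentField.value;
var formIsValid;
if (departmentName.length > 0) {
  formIsValid = true;
} else {
  formIsValid = false;
}

// After: remap the built-in document object, abbreviate user-defined names,
// and use the ternary-operator shorthand in place of the if/else block.
var d = document;
var n = d.getElementById('dept').value;
var v = n.length > 0 ? true : false;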

1.7 Data Analysis

The performance of a website is determined in terms of page load time and request/response time. In general, page load times shall be optimized by using caching whenever possible and retrieving data from the local database. For heavy pages that require significant data, an incremental load strategy shall be used to improve the user experience. Calls to web services that can impact performance or cost shall be made only when necessary, based on business rules. Pages on any web site can be categorized into the following page types: 1) content-only pages, 2) content-only pages with heavy media and documents (e.g., pages having Flash/video components or document downloads), 3) dynamic pages that retrieve data from the database or a cache, and 4) dynamic pages that interface synchronously with external systems via web services or an API.

Performance Statistics Based on Page Type for Websites

Page Type | Response Time (Normal Load) | Response Time (Peak Load) | Comment
Content-only page | 1-5 sec | 3-6 sec | Static content page.
Content-only page with heavy media or documents | 3-5 sec | 5-8 sec | Static pages; use of CDN cache tools such as Akamai.
Dynamic pages which retrieve data from database / file systems or cache | 4-6 sec | 6-8 sec | -
Dynamic pages interfacing synchronously with external systems (e.g., web services) | 4-6 sec (includes transaction time) | 6-8 sec (includes transaction time) | Transactions can be credit card based or any external system interaction.

1.7.1 Request Response Parameters:

The following table lists the possible benchmarks that are applicable across web applications.

Request/Response Parameter | Value
HTTP request packet | 1500 bytes
HTTP objects | 2.5 KB to 10 KB
Graphics (resolution optimization) | Resolution for web graphics should be no more than 72 dpi; many high-end workstations support higher resolutions (96 dpi).
Total size of all images used on a page | Less than 30 KB
Maximum size of an individual web page | Less than 20-30 KB including all graphics
Loading time of a page | Under 15 seconds
Cookies in total (maximum) | 300
Cookies per domain (maximum) | 20
Bytes per cookie | 4096 bytes

1.7.2 Determining Acceptable Response Delays

The term response delay refers to how long an application takes to acknowledge or fulfill a particular user request. Some user interface events require shorter response delays than others. For example, an application's response to a user's mouse click or key press needs to be much faster than its response to a request to save a file. Table below shows the maximum acceptable response delay for typical interface events.

Table: Maximum Acceptable Response Delays for Typical Events

User Interface Event | Maximum Acceptable Response Delay
Mouse click; pointer movement; window movement or resizing; key press; button press; drawing gesture; other user-input events involving hand-eye coordination | 0.1 second (100 milliseconds)
Displaying progress indicators; completing ordinary user commands (for example, closing a dialog box); completing background tasks (for example, reformatting a table) | 1.0 second
Displaying a graph or anything else that a typical user would expect to take time (for example, displaying a new list of all a company's financial transactions for an accounting period) | 10.0 seconds
Accepting and processing all user input to any task | 10.0 seconds

Akamai and JupiterResearch identify "4 seconds" as the threshold of acceptability for retail web page response times.

According to the press release (Cambridge, MA, November 6, 2006), four seconds is the maximum length of time an average online shopper will wait for a web page to load before potentially abandoning a retail site. This is one of several key findings in a report released by Akamai Technologies, Inc. (NASDAQ: AKAM) and commissioned through JupiterResearch, which examines consumer reaction to a poor online shopping experience. The basic advice regarding response times has remained about the same for thirty years [CAR91] [MIL68] [MYE85].

1.8 Conclusion

With the advent of broadband, high-speed users no longer wait 8 to 10 seconds for a page to load. The average web page has more than 50 external objects, and object overhead now dominates web page delays, so minimizing HTTP requests is one of the most important tasks for web performance optimizers. Web page optimization streamlines pages so that they download and display faster. When website performance improves, bailout rates and bandwidth bills go down while conversion rates and profits rise. To reduce the overhead of multiple objects, which causes much of the delay in a web page, the number of objects referenced within the page should be minimized. To fully optimize CSS, the HTML markup needs to be transformed: table-based artificial structure and redundant inline styles should be replaced with standards-based semantic markup, using CSS rules that act on similar elements via external stylesheets. Including Ajax communication in a website or application places a heavy emphasis on JavaScript and network management.