Introduction:
DynaTrace AJAX Edition provides a Performance Report that analyzes every visited page.
The Performance Report lists problematic JavaScript handlers and functions that have a significant impact on page load time. Optimizing client-side code can improve page load times significantly. The report lets us look not only at performance metrics per web page, but also at individual user interactions.
For example: what exactly happened while entering a keyword into the search field? How many JavaScript, CSS and image files were downloaded?
The dynaTrace architecture begins with smart agent technology. The dynaTrace agents have the following unique characteristics:-
* Smart agents are very lightweight, consisting of only a few hundred kilobytes.
* Smart agents consume very little system resources and memory with a 10MB maximum limit.
* Smart agents automatically deploy inserting sensors at logical points within the application – no application code needs to be modified.
* Smart agent sensors self-learn the application – capturing the start of transactions and tracing them across tiers and technologies without the need to know the application, understand it, or expect architects to map it.
* Smart agents unveil each and every performance Hotspot whether it is caused by synchronization, I/O, waiting for shared resources or just by consuming lots of CPU cycles.
* Smart agents measure and forward not only timing metrics, but correlated code-level context as well – methods and arguments, memory and threads, SQL statements and bind values, CPU, exceptions, logs, synchronization and more.
* Smart agents and their sensors can penetrate 3rd party code – services, frameworks, complete applications – eliminating blind-spots without source code.
* Smart agents are “native” so the same agent technology can trace heterogeneous applications, across Web, Java, .NET, C/C++ and mainframe environments.
* Smart agents don’t do heavy calculations which are known to slow application performance – rather smart agents buffer and stream all captured information to the global collector asynchronously offloading data processing away from the application.
* Smart agents are firewall friendly: only a single outbound firewall port needs to be opened for any number of agents.
* Smart agents are remotely configured and maintained centrally through the dynaTrace Server and a simple graphical management interface.
* Smart agents support “hot” sensor placement and configuration for 24×7 and dynamic use.
* Smart agents come with special virtualization-aware high-resolution timers that don’t skew, as can happen with other timers (such as the ones provided by JVMs).
The global collector sits between the smart agents and the dynaTrace server; it immediately serves smart agents and instructs their sensor placement.
* The global collector’s task is to buffer, compress and encrypt smart agent measurements and stream them asynchronously to the dynaTrace server.
* In addition, the global collector manages dynaTrace plug-ins that gather system and operating system metrics, even hypervisor metrics for VMware environments.
We can compare high-level performance metrics, and we can also compare the actual Network Requests, Caching Behavior and JavaScript execution across browsers and browser versions.
Performance report :-
Categories for performance comparison:-
DynaTrace AJAX Edition downloads live performance data and compares performance results against live websites. When we open the Performance Report we can compare our results, and the report shows how well our application does in areas like Caching, JavaScript, Server-Side and Load Times.
Summary :-
The Summary gives an overall website performance rank, which includes caching, network and server related information.
Caching:-
Browsers can cache content such as images, javascript or css files. Caching resources greatly improves the experience for revisiting users, as they do not need to download the same resources again, saving both roundtrips to the server and transfer size. The browser needs to be told which elements to cache and which not to cache. This is achieved by specifying HTTP caching headers that define how long a resource can be cached by the browser before requesting an updated version of that resource from the web server.
dynaTrace AJAX Edition lists all resources with no cache setting or with a date in the past in the first table on the Caching tab of the Performance Report. It analyzes the Expires and Cache-Control:max-age header and calculates the actual expires date.
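As a sketch of this calculation, the following function derives an effective expiry date from the two headers mentioned above (assumptions: header names are lower-cased, and `Cache-Control: max-age` takes precedence over `Expires`, per HTTP semantics):

```javascript
// Sketch: derive an effective expiry date from response headers,
// similar in spirit to what the Performance Report describes.
// Assumes header names have been normalized to lower case.
function effectiveExpiry(headers, now = new Date()) {
  const cc = headers['cache-control'] || '';
  const match = cc.match(/max-age=(\d+)/);
  if (match) {
    // max-age is relative, in seconds, from the time of the response
    return new Date(now.getTime() + Number(match[1]) * 1000);
  }
  if (headers['expires']) {
    const d = new Date(headers['expires']);
    return isNaN(d.getTime()) ? null : d;
  }
  return null; // no cache setting -> would be flagged by the report
}

function isCacheable(headers, now = new Date()) {
  const expiry = effectiveExpiry(headers, now);
  return expiry !== null && expiry > now;
}
```

A resource for which `isCacheable` returns false (no cache setting, or a date in the past) is exactly the kind of entry that lands in the first table of the Caching tab.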
Performance recommendations :-
The ultimate goal of caching is to reduce the number of roundtrips to the server. If we know that certain objects are not going to change in the future, we can set an expiration date far in the future and thereby reduce the number of roundtrips.
Ranking:-
If there are fewer than 5 resources without cache settings, the page is given a score of 100.
Example :-
Take a page that has a total of 50 resources such as images, css and javascript files. If 10 out of the 50 are not cached at all or have an expiration header in the past, the first 5 fall under the allowance and the remaining 5 degrade the rank by 15% (5 out of 50 = 10%, multiplied by a factor of 1.5), so the rank goes down to 85.
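One plausible reading of this ranking rule, with the 5-resource allowance and the 1.5 factor taken from the example above (the exact dynaTrace formula may differ), can be sketched as:

```javascript
// Sketch of the caching rank heuristic. Assumptions: the first 5
// uncached resources are free; the rest are penalized at their
// percentage of total resources, multiplied by a factor of 1.5.
function cachingRank(totalResources, uncachedResources) {
  const allowance = 5;
  if (uncachedResources < allowance) return 100;
  const penalized = uncachedResources - allowance;
  const penaltyPercent = (penalized / totalResources) * 100 * 1.5;
  return Math.round(100 - penaltyPercent);
}
```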
Network:-
When entering a URL in the browser, the browser starts by downloading the initial HTML document. After that it downloads all referenced images, style sheets, javascript files and any other media types such as flash components. Every resource that needs to be downloaded from the web server requires one roundtrip. The total download time of a page depends on network speed, latency and the speed of the web server in delivering the requested resources. The more resources there are to download, the longer it takes until the end user can see the full page.
In order to download the resources the browser establishes a physical network connection to the web server that hosts them. The number of connections depends on the type and version of the browser, e.g. IE7 uses 2 connections per domain; IE8 and Firefox 3.5 use up to 6 connections per domain.
The more resources there are on a web site, the more roundtrips are necessary for the browser to download them all. Because of the limited number of parallel downloads (determined by the number of physical connections), certain resources have to wait a long time to be downloaded, which increases overall page load time. The more resources there are to download from one domain, the longer the wait time becomes for individual resources.
The main goals are to avoid unnecessary roundtrips and to reduce the size of individual resources. This allows the browser to use the available connections to download all resources quickly, which speeds up page load time and improves the end-user experience.
Avoid Redirects, HTTP 400s and HTTP 500s
HTTP redirects indicate the new location of a resource. A redirect leads to an additional roundtrip from browser to server, as the redirect tells the browser where to really find the resource that was requested. The browser first has to follow the redirect; only once the document is downloaded can it start downloading additional resources such as CSS files.
Requests that result in an HTTP 400 (authorization issues) are another example of unnecessary roundtrips, as the user does not get the content that was requested. There are 2 common sources for HTTP 400s: a) the application code generated HTML that references resources not accessible to the current user, e.g. if I am not a premium member of a website I may not be authorized to download certain resources; b) access control on resources is wrongly configured, or resources are incorrectly deployed into secured folders.
HTTP 500s (server errors) are caused by failing application code; this is a problem that needs to be investigated by analyzing the code execution traces on the application server.
The dynaTrace AJAX Edition lists all HTTP 300s, 400s and 500s in the first table on the Network Tab on the Performance Report. It shows you exactly which request (URL Column) returned with which HTTP Status and also lets you know how much time could be saved by avoiding these requests.
This edition lists all css, images and js files in separate tables on the Network Tab on the Performance Report. It also calculates how much time there might be to save when merging these resources and therefore reducing the number of roundtrips.
Performance recommendations and savings:-
The ultimate goal is to reduce the number of roundtrips to the server. Getting rid of unnecessary calls and merging resources to reduce the roundtrips allows the browser to make more efficient use of the available network connections.
Reducing the size of images, css and javascript not only speeds up loading your web site but also reduces the memory footprint and CPU usage of the browser.
Optimizing images (CSS Sprites and Compacting)
CSS sprites can be used in various settings. Large websites can combine multiple single images in a meaningful manner, creating clearly separated “chunks” of the master image.
Optimizing Style Sheets (Merge CSS Files)
If there are multiple css files on a single page merge the content, get rid of potential duplicate style definitions and compress the file by getting rid of spaces, empty lines or comments. This not only saves network roundtrips but also reduces the overall size of the transferred content and reduces the parsing time on the browser.
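A naive sketch of the compaction step described above (real minifiers such as cssnano or the YUI Compressor handle many more edge cases, e.g. strings and `calc()` expressions):

```javascript
// Naive CSS compaction: strip comments, collapse whitespace,
// and drop spaces around punctuation.
function minifyCss(css) {
  return css
    .replace(/\/\*[\s\S]*?\*\//g, '')   // remove /* comments */
    .replace(/\s+/g, ' ')               // collapse runs of whitespace
    .replace(/\s*([{}:;,])\s*/g, '$1')  // trim around punctuation
    .trim();
}
```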
Optimizing JavaScript (Merge and Minimize JavaScript Files)
Minimize and compact the file using available javascript compressors/minifiers.
Rank Calculations:-
The AJAX Edition calculates a rank based on the number of avoidable roundtrips. A page gets a score of 100 if there are no redirects, 400s or 500s and if there are no images, css or js files that could be merged.
The AJAX Edition assumes that most images, css and js files served from the same domain can be merged, so that we do not end up with more than 1 css file, 6 images and 2 js files from the same domain.
Example:-
Take a page that needs a total of 50 roundtrips to fully load. If we have 2 HTTP redirects on that page, they degrade the rank by 2. If one domain serves 3 CSS files, 2 of them can be avoided, degrading the rank by another 2. If one of the domains serves 16 images, we assume that 10 can be saved by merging images into fewer; we reduce the rank by one point for every 5 images, which reduces the rank for this page by another 2. If we also have one domain that serves 3 javascript files, the rank gets penalized by 1 point. We end up with a rank of 100-2-2-2-1=93, which corresponds to an A Grade.
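The example above can be reproduced with a small sketch of the heuristic (the per-domain limits and penalty sizes are taken from the text; the exact dynaTrace formula may differ):

```javascript
// Sketch of the network rank heuristic. Assumptions: at most 1 css,
// 6 images and 2 js files per domain; 1 point per avoidable
// redirect/error; 1 point per extra css or js file; 1 point per
// 5 images that could be merged into sprites.
function networkRank(page) {
  let rank = 100;
  rank -= page.redirectsAndErrors;            // HTTP 3xx/4xx/5xx
  for (const d of page.domains) {
    rank -= Math.max(0, (d.css || 0) - 1);    // mergeable css files
    rank -= Math.max(0, (d.js || 0) - 2);     // mergeable js files
    const extraImages = Math.max(0, (d.images || 0) - 6);
    rank -= Math.floor(extraImages / 5);      // spriteable images
  }
  return rank;
}
```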
JavaScript and Ajax Performance:-
It has a unique capability to trace all JavaScript execution on the web page. It also traces calls into the Browser DOM (Document Object Model) and is able to capture method arguments and return values.
By getting this level of details on JavaScript execution it is easy to identify slow running JavaScript handlers, custom javascript code, slow access to the DOM and expensive or inefficient calls into 3rd party frameworks such as jQuery.
Optimizing javascript execution:-
The JavaScript/AJAX Tab on the Performance Report shows data similar to the HotSpot View. It analyzes all JavaScript executions on a page and provides an aggregated list showing all methods with their overall performance contribution. The list is filtered to script blocks and calls to external libraries such as jQuery. This list gives you a good starting point for performance improvement efforts. The view also shows who called these problematic methods (Back Traces), which methods were called (Forward Traces) and the actual JavaScript source code.
The PurePath view also shows the actual JavaScript code. Bad performance often comes from excessive use of string manipulations, manipulations of the DOM, DOM object lookups using CSS Selectors, problematic 3rd party javascript libraries and too many or long running XHR calls.
XHR calls:-
Suppose we use AJAX to load detailed product information for every product individually. This means 10 XHR calls when 10 products are displayed. This will of course work, but it means 10 roundtrips to the server that make the user wait for the final result, and the server needs to handle 10 additional requests, putting additional pressure on the server infrastructure.
Instead of making 10 individual requests it is recommended to combine these calls into a single batch call requesting the product details for all 10 products on the page.
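A hypothetical sketch of such a batch call (the `/products?ids=...` endpoint and its response shape are assumptions for illustration; `fetchFn` is injectable so the logic can be exercised without a network):

```javascript
// One roundtrip instead of productIds.length roundtrips.
// The endpoint URL and response format are illustrative assumptions.
async function loadProductDetails(productIds, fetchFn = fetch) {
  const url = '/products?ids=' + productIds.join(',');
  const response = await fetchFn(url);
  return response.json(); // assumed: array of product detail objects
}
```

The server-side handler has to accept the batched request, but in exchange it serves one request instead of ten, and the user sees all product details arrive together.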
This AJAX Edition shows all XHR/AJAX calls on the JavaScript/AJAX Tab of the Performance Report:
Performance savings:-
The goal is to avoid JavaScript execution in the early stages of page load to achieve fast overall page load.
Ranking:-
This Edition calculates a Rank based on the number of JavaScript files and on long-running script blocks. We consider 2 JavaScript files as good but penalize the Rank for every additional file, as these files can usually be merged, reducing roundtrips and script parsing.
Script blocks that execute longer than 20ms are considered to have potential for improvement. The longer a script block executes, the more impact it has on overall performance and therefore the lower the Rank. We take the overall execution time of blocks that execute longer than 20ms; every 50ms reduces the Page Rank by 1 point. XHR calls are also considered for the rank calculation.
Example:-
Take a page that has a total of 5 JavaScript files, with 4 script blocks executing longer than 20ms: 2 execute in 500ms, 1 takes 700ms and the last one takes 1s. The page also makes 4 XHR calls.
The Rank gets degraded by 3 because of too many JavaScript files. We also have a total of 2620ms (480+480+680+980) of script-block execution time above the 20ms threshold, which leads to a Rank reduction of 52 (2620/50). We do not penalize for XHR as the page is below the 5-XHR threshold.
We end up with a Page Rank of 45 which corresponds to an F Grade.
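The ranking rules above can be sketched as follows (the example suggests that only the time above the 20ms threshold is counted per block; the penalty for XHR calls above the threshold is not specified in the text, so 1 point per extra call is an assumption):

```javascript
// Sketch of the JavaScript rank heuristic. Assumptions: 2 script
// files are free and each extra file costs 1 point; for blocks
// running longer than 20ms, the time above 20ms is summed and every
// full 50ms costs 1 point; up to 5 XHR calls are free.
function jsRank(fileCount, blockTimesMs, xhrCount) {
  let rank = 100;
  rank -= Math.max(0, fileCount - 2);
  const slowTime = blockTimesMs
    .filter(t => t > 20)
    .reduce((sum, t) => sum + (t - 20), 0);
  rank -= Math.floor(slowTime / 50);
  rank -= Math.max(0, xhrCount - 5); // assumed penalty shape
  return rank;
}
```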
Server-Side
With growing load we see a shift in response time from Transfer Time to Server Time as being the main contributor. It is typically easier to scale static content such as images, css or js as web servers and load-balancers are doing a good job with this. But requests to the application server that need to query information from the database or fetch data from other resources face new scalability and performance challenges under increasing load. That is why it is important to focus on server-side requests and analyze the response under certain load.
This Edition provides a table on the Server-Side tab on the Performance Report that lists all requests that match the following criteria which are very likely to be handled by the application server.
* First request on the page -> usually returns the initial HTML
* Requests that return HTML -> generated content (this also may include static HTML pages)
* Requests on URLs ending with aspx, jsp, php
* Requests that send GET or POST parameters data to the server
* All XHR/AJAX Requests
The Server column shows the time to first byte: the duration from the browser making an HTTP request to the first byte of the response being received. This time is made up of the socket connection time, the time taken to send the HTTP request and the time taken to get the first byte of the page. It is as close to server-side processing time as we can get by analyzing the network requests sent by the browser: the time from the last byte of the HTTP request sent until the first byte received. It also includes some network latency.
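In a browser that supports the Resource Timing API, a comparable time-to-first-byte approximation can be sketched as:

```javascript
// Sketch: approximate the Server column from a Resource Timing
// entry. responseStart - requestStart is the gap between sending
// the request and receiving the first byte, which, as noted above,
// still includes some network latency.
function serverTime(entry) {
  return entry.responseStart - entry.requestStart;
}

// In a browser this would be fed with real entries, e.g.:
// performance.getEntriesByType('resource').map(serverTime);
```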
Performance saving:-
The ultimate goal is to optimize the response time of server-side calls and to reduce the number of calls that are made. Highly interactive web sites in particular make many calls to the application server via XHR, to retrieve more data as the user browses through the site or simply to report progress. The JavaScript frameworks used for this can lead to very chatty application behavior, causing additional stress on the application server and ultimately leading to resource congestion, performance and scalability issues.
Ranking
This edition calculates a rank based on the number of requests to the application server as well as the Server Time. The more requests, the lower the ranking, as we assume that requests can be merged into fewer. Up to 6 server-side requests are fine; every additional request is penalized by 1 Rank point.
The slower the Server Time, the more performance improvement is possible. Every request that takes more than 200ms of Server Time has potential to be improved. We reduce the Rank by 1 for a Server Time between 200ms and 400ms, by 2 between 400ms and 1000ms, and by 4 for times longer than 1s.
Example:-
Take a page that has a total of 10 requests matching the criteria described above. This reduces the Rank by 4. 2 requests take between 400ms and 1000ms, which reduces the Rank by 4, and one request takes more than 1s, which reduces the Rank by an additional 4.
The total Rank of this page is therefore 88, which corresponds to a Grade B.
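A sketch of this heuristic, reproducing the example (thresholds and penalties as stated above; the exact dynaTrace formula may differ):

```javascript
// Sketch of the server-side rank heuristic. Assumptions: up to 6
// application-server requests are free, each extra one costs 1 point;
// per-request Server Time costs 1 point (200-400ms), 2 points
// (400-1000ms) or 4 points (>1s).
function serverRank(serverTimesMs) {
  let rank = 100;
  rank -= Math.max(0, serverTimesMs.length - 6);
  for (const t of serverTimesMs) {
    if (t > 1000) rank -= 4;
    else if (t > 400) rank -= 2;
    else if (t > 200) rank -= 1;
  }
  return rank;
}
```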
KPI
The best way to measure the performance of a website is by looking at certain Key Performance Indicators (KPIs) that tell us how fast or slow the web site is for the end user.
Factors such as page load time, number of network roundtrips and transferred size are important performance indicators for a web page. It is possible to extend the list of existing KPIs to also include metrics such as Time to First Impression, Time to Fully Loaded and Time Spent in JavaScript. This document describes a list of KPIs that should be tracked on every page, what we consider good and bad values, and what we can do to improve these KPIs.
Time to first impression :-
This is the time from when the URL is entered into the browser until the user has the first visual indication of the page that gets loaded. The first visual indication is the first drawing activity by the browser and can be traced with dynaTrace AJAX Edition.
Time to onload event :-
This is the time until the browser triggers the onLoad event, which happens when the initial document and all referenced objects are fully downloaded. JavaScript onLoad handlers use this event to manipulate the initial state of the page. This event is one of the options explained earlier for downloading additional content or delay-loading content.
PurePath Technology :-
PurePath provides the true transaction trace from entry point into your application to database and back. This entry point can be at browser click, at Web tier, or at the application tier.
PurePath is a combination of timings gathered directly from the running transactions of an application plus data context (like log messages, SQL statements, exceptions, and any method arguments or return values) captured with auto-sensors. We can see all application performance, scalability and stability characteristics through the lens of the application, down through multiple layers of infrastructure.
PurePaths are reassembled by the dynaTrace server in real time. The dynaTrace server can be configured to support various transaction time intervals, from milliseconds to hours, depending on application requirements.
PurePath analytics automatically calculate hotspots, longest running transactions, high number of database calls per request, memory leaks and duplicate threads.
The PurePath technology automatically detects I/O, synchronization, wait time and CPU hotspots for each tier, application component or API.
* Accurate
> True transaction-level detail, all the time, end to end.
* Lightweight
> Get transaction-level visibility, all the time, for maximum business impact.
> Production proven in the most demanding, high-load environments.
* End to End
> Know where your problems are – all tiers from user click to database of record and back.
> Identify issues early in the lifecycle or when they happen.
* Code level deep
> Realize immediate results – easy to use and no application architecture knowledge required.
> Hotspots enable quick analysis of the slowest transactions to isolate performance problems.
* Multilayer
> PurePaths go everywhere – physical, virtual or cloud (public, private and hybrid).
> Performance, stability and scalability issues that surface under load are diagnosed through this.
> Grows with the application.
* Automatic
> Automatically discovers transaction flows.
> Automatically alerts for memory leaks and CPU-related issues.
Hotspot
DynaTrace automatically pinpoints not only Java, .NET and JavaScript method performance issues, but also those of components, synchronization, resources, calls to external systems, remoting and database execution, all in the context of selected transactions and correlated with environmental influences such as virtualization, latencies and configurations. A single click from the slow transaction to the API and Tier hotspots is all it takes to find the culprit, and from there a single click takes you to the root cause – the actual low-level methods – allowing you to understand and fix the problem without ever reproducing it.
High CPU consumption is often caused by poorly written code. The more 3rd-party code and open-source libraries there are, and the more components are woven together, the less transparent the runtime execution becomes and the more performance and CPU resources are wasted.
As dynaTrace follows each request throughout the application, it captures the actual CPU resource consumption along with response time and GC metrics. This not only tells us which component or method causes overly high CPU consumption, but also identifies which transaction is causing the CPU load. Additionally it precisely reports whether the CPU load was caused by the garbage collector or the application code in this particular transaction, and tells us in which line of code, transaction and thread this happened.
Web Browser Hotspots
DynaTrace lightweight Web Browser Agent captures performance metrics from inside the Web Browser and enables the deepest performance diagnostics of:
* DOM interaction
* XHR calls
* Server side tracing of XHR calls
* Background execution
* Browser side content caching
* Browser connection limits
* Resource loading
* Page loading
* Page rendering
* Asynchronous client events
* Data driven problems
User Experience Report :-
Every user action (mouse and keyboard interactions) needs to be optimized in order to deliver an acceptable end-user experience. This report analyzes the number of end-user actions.
The screenshot below shows the report for a Web 2.0 web application and tells you the number of normal Page Views, Web 2.0 Actions and their ratio. Only a very small part of the time is actually spent in the initial page load; the rest is spent in JavaScript, XHR calls and DOM manipulations triggered by user actions on the same URL.
Instead of loading a new page for every user interaction JavaScript loads additional information from the Web Server and merges this into the current page. When executing actions (through mouse or keyboard) JavaScript handlers take care of executing these actions.
Timeline:-
The Timeline View can be opened for the complete session by double-clicking on the Timeline node in the Cockpit. The drill-down opens the Timeline view for that particular page, automatically splitting network requests into individual domains. The timeline then shows the click event, an XmlHttpRequest (XHR) event followed by an onError, and later another XmlHttpRequest. Hovering over the events shows on which DOM elements the events were actually triggered. Hovering over the JavaScript shows how long it took to execute the event handlers, and hovering over the network requests shows which additional resources were downloaded. We also see what type of rendering the browser had to do, and that the onError event handler is triggered and runs for 240ms.
Database Access :-
DynaTrace pinpoints those transactions that perform heavy database access. A drill-down to the database analyzer reveals SQL executions, SQL prepares and bind values for fast root-cause analysis. It reveals improper configurations such as caching behavior, and creates the transparency required to know where to craft SQL manually, reducing a large volume of SQL executions to a small number of optimized executions that run faster, cause fewer database roundtrips and produce smaller result sets.
It also captures the utilization of each connection pool to see which ones are over-used, causing high acquisition times when the application requests a database connection. Thus dynaTrace immediately unveils inappropriate pool configuration – a very common but easy to address problem in enterprise applications.