DynaTrace AJAX Edition provides a Performance Report that analyzes every visited page.
The dynaTrace architecture begins with smart agent technology. The dynaTrace agents have the following unique characteristics:
* Smart agents are very lightweight, consisting of only a few hundred kilobytes.
* Smart agents consume very few system resources and little memory, with a 10MB maximum limit.
* Smart agents automatically deploy inserting sensors at logical points within the application – no application code needs to be modified.
* Smart agent sensors self-learn the application – capturing the start of transactions and tracing them across tiers and technologies without requiring anyone to know the application, understand it, or have architects map it.
* Smart agents unveil each and every performance hotspot, whether it is caused by synchronization, I/O, waiting for shared resources or simply by consuming many CPU cycles.
* Smart agents measure and forward not only timing metrics, but correlated code-level context as well – methods and arguments, memory and threads, SQL statements and bind values, CPU, exceptions, logs, synchronization and more.
* Smart agents and their sensors can penetrate 3rd party code – services, frameworks, complete applications – eliminating blind-spots without source code.
* Smart agents are “native” so the same agent technology can trace heterogeneous applications, across Web, Java, .NET, C/C++ and mainframe environments.
* Smart agents don’t do heavy calculations, which are known to slow application performance – rather they buffer and stream all captured information to the global collector asynchronously, offloading data processing away from the application.
* Smart agents are firewall friendly: only a single outbound firewall port needs to be opened for any number of agents.
* Smart agents are remotely configured and maintained centrally through the dynaTrace Server and a simple graphical management interface.
* Smart agents support “hot” sensor placement and configuration for 24×7 and dynamic use.
* Smart agents come with special virtualization-aware high-resolution timers that don’t skew, as can happen with other timers (such as those provided by JVMs).
The global collector sits between the smart agents and the dynaTrace server; it immediately serves smart agents and instructs their sensor placement.
* The global collector’s task is to buffer, compress and encrypt smart agent measurements and stream them asynchronously to the dynaTrace server.
* In addition, the global collector manages dynaTrace plug-ins that gather system and operating system metrics, even hypervisor metrics for VMware environments.
Performance report :-
Categories for performance comparison:-
The Summary gives an overall website performance rank, which includes caching-, network- and server-related information.
dynaTrace AJAX Edition lists all resources with no cache setting, or with an expiration date in the past, in the first table on the Caching tab of the Performance Report. It analyzes the Expires and Cache-Control: max-age headers and calculates the actual expiration date.
Performance recommendations :-
The ultimate goal of caching is to reduce the number of roundtrips to the server. If we know that certain objects are not going to change in the future, we can set an expiration date so that the browser serves them from its cache, reducing the number of roundtrips.
If there are fewer than 5 resources without cache settings, the page is given a score of 100.
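The expiry analysis described above can be sketched in a few lines of Python. This is only an approximation of the rule (Cache-Control: max-age takes precedence over Expires, per HTTP caching semantics), not dynaTrace’s actual implementation:

```python
from datetime import datetime, timedelta
from email.utils import parsedate_to_datetime

def effective_expiry(headers, now=None):
    """Derive when a resource expires from its response headers.

    Cache-Control: max-age takes precedence over Expires per HTTP
    caching rules. Returns None when no cache setting is present --
    the case the Caching tab flags."""
    now = now or datetime.utcnow()
    for directive in headers.get("Cache-Control", "").split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            return now + timedelta(seconds=int(directive.split("=", 1)[1]))
    if "Expires" in headers:
        return parsedate_to_datetime(headers["Expires"])
    return None

def is_cacheable(headers, now=None):
    """A resource counts as cacheable if its expiry lies in the future."""
    now = now or datetime.utcnow()
    expiry = effective_expiry(headers, now)
    return expiry is not None and expiry > now
```

Resources for which `effective_expiry` returns `None` or a past date are the ones that end up in the report’s first Caching table.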
In order to download resources, the browser establishes physical network connections to the web server that hosts them. The number of connections depends on the type and version of the browser, e.g. IE7 uses 2 connections per domain, while IE8 and Firefox 3.5 use up to 6 connections per domain.
The more resources a web site has, the more roundtrips the browser needs to download them all. Because of the limited number of parallel downloads (bounded by the number of physical connections), some resources have to wait a long time to be downloaded, which increases overall page load time. The more resources there are to download from one domain, the longer individual resources wait to be downloaded.
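The effect of the per-domain connection limit can be illustrated with a toy asyncio simulation; the resource timings below are invented for illustration:

```python
import asyncio

async def load_page(resource_times, max_connections):
    """Simulate downloading resources over a limited number of parallel
    connections per domain (e.g. 2 for IE7, up to 6 for IE8/Firefox 3.5).
    Each resource "download" is just a sleep of the given duration."""
    connections = asyncio.Semaphore(max_connections)

    async def fetch(duration):
        async with connections:      # wait until a connection is free
            await asyncio.sleep(duration)

    await asyncio.gather(*(fetch(t) for t in resource_times))

# Twelve resources of 10ms each need roughly six "rounds" over 2
# connections, but only about two rounds over 6 connections.
```

With identical resources, raising the connection limit from 2 to 6 cuts the simulated page load time roughly threefold, which is exactly the waiting effect described above.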
The main goals are to avoid unnecessary roundtrips and to reduce the size of individual resources. This allows the browser to use the available connections to download all resources quickly, which speeds up page load time and improves the end-user experience.
Avoid Redirects, HTTP 400s and HTTP 500s
HTTP redirects indicate the new location of a resource. A redirect leads to an additional roundtrip from browser to server, because the redirect tells the browser where to really find the requested resource. The browser first has to follow the redirect; only once the document is downloaded can it start downloading additional resources such as CSS files.
Requests that result in an HTTP 400 (authentication issues) are another example of unnecessary roundtrips, as the user does not get the content that was requested. There are two common sources of HTTP 400s: a) the application code generated HTML that references resources not accessible to the current user, e.g. a user who is not a premium member of a website may not be authorized to download certain resources; b) access control on resources is wrongly configured, or resources are incorrectly deployed into secured folders.
HTTP 500s (server errors) are caused by failing application code; this is a problem that needs to be investigated by analyzing code execution traces on the application server.
The dynaTrace AJAX Edition lists all HTTP 300s, 400s and 500s in the first table on the Network tab of the Performance Report. It shows exactly which request (URL column) returned which HTTP status, and also how much time could be saved by avoiding these requests.
This edition lists all CSS, image and JS files in separate tables on the Network tab of the Performance Report. It also calculates how much time could be saved by merging these resources and thereby reducing the number of roundtrips.
Performance recommendations and savings:-
The ultimate goal is to reduce the number of roundtrips to the server. Getting rid of unnecessary calls and merging resources to reduce the roundtrips allows the browser to make more efficient use of the available network connections.
Optimizing images (CSS Sprites and Compacting)
CSS sprites can be used in various settings. Large websites can combine multiple individual images in a meaningful manner, creating clearly separated “chunks” of the master image.
Optimizing Style Sheets (Merge CSS Files)
If there are multiple CSS files on a single page, merge their content, remove potential duplicate style definitions, and compact the result by removing spaces, empty lines and comments. This not only saves network roundtrips but also reduces the overall size of the transferred content and the parsing time in the browser.
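A very rough sketch of such a merge-and-compact step in Python (real minifiers such as cssmin or the YUI Compressor handle many more cases):

```python
import re

def merge_css(sources):
    """Concatenate several CSS sources and strip comments, empty lines
    and redundant whitespace. Deliberately naive: it does not detect
    duplicate style definitions."""
    combined = "\n".join(sources)
    combined = re.sub(r"/\*.*?\*/", "", combined, flags=re.DOTALL)  # comments
    combined = re.sub(r"\s+", " ", combined)                        # whitespace runs
    combined = re.sub(r"\s*([{}:;,])\s*", r"\1", combined)          # around punctuation
    return combined.strip()
```

Serving the merged file replaces several roundtrips with one and shrinks the transferred bytes.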
This AJAX Edition calculates a rank based on the number of avoidable roundtrips. A page gets a score of 100 if there are no redirects, 400s or 500s, and no images, CSS or JS files that could be merged.
The AJAX Edition assumes that most images, CSS and JS files served from the same domain can be merged, so that we do not end up with more than 1 CSS file, 6 images and 2 JS files from the same domain.
Suppose we use AJAX to load detailed product information for every product individually. That means 10 XHR calls for every 10 products displayed. This works, of course, but it causes 10 roundtrips to the server while the user waits for the final result, and the server needs to handle 10 additional requests, putting additional pressure on the server infrastructure.
Instead of making 10 individual requests it is recommended to combine these calls into a single batch call requesting the product details for all 10 products on the page.
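The difference is simply the shape of the requests. The endpoint names in this sketch are hypothetical; the real URLs depend on the application:

```python
def detail_urls_individual(product_ids):
    """One XHR call -- and one roundtrip -- per product."""
    return [f"/product/detail?id={pid}" for pid in product_ids]

def detail_url_batched(product_ids):
    """A single batch call requesting all product details at once."""
    return "/product/details?ids=" + ",".join(str(pid) for pid in product_ids)
```

Ten products then cost one roundtrip instead of ten, at the price of a slightly larger single response.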
Script blocks that execute for longer than 20ms are considered to have potential for improvement. The longer a script block executes, the more impact it has on overall performance and the lower the resulting rank. We take the overall execution time of blocks that execute longer than 20ms; every 50ms reduces the Page Rank by 1 point. XHR calls are also considered in the rank calculation.
We end up with a Page Rank of 45, which corresponds to an F Grade.
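The JavaScript scoring rule described above can be sketched as a small function. This approximates the stated rules only (blocks over 20ms count; every full 50ms of their combined time costs a point); it is not the tool’s exact algorithm:

```python
def javascript_rank(block_times_ms):
    """Start at 100; sum the execution time of all script blocks longer
    than 20ms and subtract one point per full 50ms of that total."""
    slow_total = sum(t for t in block_times_ms if t > 20)
    return max(0, 100 - int(slow_total // 50))
```

Under these rules, a page whose slow script blocks add up to 2750ms would score 45 – the F-grade example above.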
With growing load we see a shift in response time from Transfer Time to Server Time as the main contributor. It is typically easier to scale static content such as images, CSS or JS, as web servers and load balancers do a good job with this. But requests to the application server that need to query information from the database, or fetch data from other resources, face new scalability and performance challenges under increasing load. That is why it is important to focus on server-side requests and analyze response times under load.
This edition provides a table on the Server-Side tab of the Performance Report that lists all requests matching the following criteria, which are very likely to be handled by the application server:
* First request on the page -> usually returns the initial HTML
* Requests that return HTML -> generated content (this also may include static HTML pages)
* Requests on URLs ending with aspx, jsp, php
* Requests that send GET or POST parameters data to the server
* All XHR/AJAX Requests
The Server column shows the time to first byte: the duration from the browser making an HTTP request to the first byte of the response being received. This time is made up of the socket connection time, the time taken to send the HTTP request, and the time taken to get the first byte of the page. It is as close to server-side processing time as we can get by analyzing the network requests sent by the browser: effectively the time from the last byte of the HTTP request being sent until the first byte of the response is received. It also includes some network latency.
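A minimal socket-level sketch of that measurement – the timer starts once the request’s last byte has been sent and stops on the first response byte, so the result still includes network latency:

```python
import socket
import time

def time_to_first_byte(host, port=80, path="/"):
    """Measure the time from the last byte of the HTTP request being
    sent until the first byte of the response arrives."""
    with socket.create_connection((host, port), timeout=10) as sock:
        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        sock.sendall(request.encode("ascii"))
        start = time.monotonic()          # last request byte is out
        sock.recv(1)                      # block until the first response byte
        return time.monotonic() - start
```

Browser-side tools measure the same interval from captured network activity rather than by opening their own sockets.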
This edition calculates a rank based on the number of requests to the application server as well as the Server Time. The more requests, the lower the ranking, as we assume that requests can be merged into fewer. Up to 6 server-side requests are fine; every additional request is penalized by 1 rank point.
The slower the Server Time, the more performance improvement is possible. Every request that takes more than 200ms of Server Time has potential to be improved. We reduce the rank by 1 for a Server Time between 200ms and 400ms, by 2 between 400ms and 1000ms, and by 4 for times longer than 1s.
Take a page with a total of 10 requests matching the criteria above; the 4 requests beyond the allowed 6 reduce the rank by 4. Two requests take between 400ms and 1000ms, which reduces the rank by another 4, and one request takes more than 1s, which reduces the rank by an additional 4.
The total rank of this page is therefore 88, which corresponds to a Grade B.
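These server-side rules can likewise be sketched as a function; applied to the example page it reproduces the rank of 88. Again, this approximates the stated rules rather than the product’s exact algorithm:

```python
def server_side_rank(server_times_ms):
    """Start at 100. Each request beyond the allowed 6 costs one point;
    each Server Time of 200-400ms costs 1, 400-1000ms costs 2, and
    anything over 1s costs 4."""
    rank = 100
    rank -= max(0, len(server_times_ms) - 6)   # too many server requests
    for t in server_times_ms:
        if t > 1000:
            rank -= 4
        elif t > 400:
            rank -= 2
        elif t > 200:
            rank -= 1
    return max(0, rank)
```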
The best way to measure the performance of a website is to look at certain Key Performance Indicators (KPIs) that tell us how fast or slow the web site feels to the end user.
Time to first impression :-
This is the time from when the URL is entered into the browser until the user has the first visual indication of the page being loaded. The first visual indication is the first drawing activity by the browser and can be traced with dynaTrace AJAX Edition.
Time to onload event :-
This is the time from when the URL is entered into the browser until the browser fires the onload event, i.e. until the initial document and the resources it references have been loaded.
PurePath Technology :-
PurePath provides a true transaction trace from the entry point into your application down to the database and back. This entry point can be a browser click, the Web tier, or the application tier.
PurePath is a combination of timings gathered directly from the running transactions of an application plus data context (like log messages, SQL statements, exceptions, and any method arguments or return values) captured with auto-sensors. We can see all application performance, scalability and stability characteristics through the lens of the application, down through multiple layers of infrastructure.
PurePaths are reassembled by the dynaTrace server in real time. The dynaTrace server can be configured to support various transaction time intervals, from milliseconds to hours, depending on application requirements.
PurePath analytics automatically calculate hotspots, longest running transactions, high number of database calls per request, memory leaks and duplicate threads.
The PurePath technology automatically detects I/O, synchronization, wait time and CPU hotspots for each tier, application component or API.
* True transaction-level detail, all the time, end to end
> Get transaction-level visibility, all the time, for maximum business impact.
> Production proven in the most demanding, high-load environments.
* End to end
> Know where your problems are – all tiers, from user click to database of record and back.
> Identify issues early in the lifecycle or when they happen.
* Code-level deep
> Realize immediate results – easy to use, with no application architecture knowledge required.
> Hotspots enable quick analysis of the slowest transactions to isolate performance problems.
> PurePaths go everywhere – physical, virtual or cloud (public, private and hybrid).
> Performance, stability and scalability issues that surface under load are diagnosed through PurePath.
> Grows with the application.
> Automatically discovers transaction flows.
> Automatically alerts on memory leaks and CPU-related issues.
High CPU consumption is caused by poorly written code. The more 3rd-party code, open-source libraries and weaving of components there is, the less transparent the runtime execution becomes, and the more performance and CPU resources are wasted.
As dynaTrace follows each request throughout the application, it captures the actual CPU resource consumption along with response time and GC metrics. This not only tells us which component or method causes overly high CPU consumption, but also identifies which transaction is causing the CPU load. Additionally, it precisely reports whether the CPU load was caused by the garbage collector or by the application code in this particular transaction, and tells us in which line of code, transaction and thread this happened.
Web Browser Hotspots
The dynaTrace lightweight Web Browser Agent captures performance metrics from inside the web browser and enables the deepest performance diagnostics of:
* DOM interaction
* XHR calls
* Server side tracing of XHR calls
* Background execution
* Browser side content caching
* Browser connection limits
* Resource loading
* Page loading
* Page rendering
* Asynchronous client events
* Data driven problems
User Experience Report :-
Every user action (mouse and keyboard interactions) needs to be optimized in order to deliver an acceptable end-user experience. This report analyzes the number of end-user actions.
Database Access :-
DynaTrace pinpoints the transactions that perform heavy database access. A drill-down to the database analyzer reveals SQL executions, SQL prepares and bind values for fast root-cause analysis. It reveals improper configurations such as caching behavior, and creates the transparency required to know where to craft SQL manually – reducing a large volume of SQL executions to a small number of optimized executions that run faster, cause fewer database roundtrips and produce smaller result sets.
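The kind of rewrite described – collapsing many individual SQL executions into one optimized statement – can be illustrated with sqlite3; the product table here is a made-up example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO product VALUES (?, ?)",
                 [(i, f"product-{i}") for i in range(1, 11)])

ids = [1, 2, 3, 4, 5]

# The N+1 pattern: one SQL execution (and database roundtrip) per id.
one_by_one = [conn.execute("SELECT name FROM product WHERE id = ?",
                           (i,)).fetchone()[0] for i in ids]

# Optimized: a single execution with an IN clause returns the same rows.
placeholders = ",".join("?" * len(ids))
batched = [row[0] for row in conn.execute(
    f"SELECT name FROM product WHERE id IN ({placeholders}) ORDER BY id", ids)]
```

Both variants return the same data, but the batched form issues one SQL execution instead of five – exactly the reduction in database roundtrips the analyzer makes visible.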
It also captures the utilization of each connection pool to show which ones are over-used, causing high acquisition times when the application requests a database connection. Thus dynaTrace immediately unveils inappropriate pool configuration – a very common but easy-to-address problem in enterprise applications.