
    Recently, our performance testing results have been very strange: suddenly much slower or much faster than the previous run. This situation has lasted for quite a long time, and the unstable results left us confused, unwilling to trust anything, and caused a lot of back-and-forth work (such as changes to the test scripts).
    However, this is a normal part of performance testing, and we have to face it. This article tries to explain why performance testing results keep changing and how we can reduce the variation.

    NOTE: this article only introduces some basic ideas; I hope it triggers more discussion on this topic so that eventually we are no longer confused.

    Is the Performance of a Java Application Inherently Unstable?

    Measuring the performance and understanding the behaviour of Java programs is challenging. Many factors, from program characteristics, VM techniques and implementations, OS strategies, and database optimization to hardware platform performance, can affect the final measured result.

    1. What's the status of hardware?

    As we all know, hardware is the foundation of performance. However, we usually don't know the exact performance baseline of our hardware. Most people have noticed one fact: two computers may perform very differently even when they have exactly the same hardware, so we need a tool to compare them.
    Even on the same machine, the server's status can vary greatly at different times. We cannot assume the server always performs at the same level: the disk may become slower and slower, a virus may consume CPU time, or another process may occupy a huge amount of memory.

    2. JVM Optimization

    As we know, the Sun HotSpot JVM actually includes both a dynamic compiler and a virtual machine that interprets bytecodes.


    When bytecodes are first loaded, they are run through the interpreter. The profiler keeps a record of runtimes for each method. When a method is found to be taking a lot of time, HotSpot compiles and optimizes it. Every future call to that method uses the native machine instructions produced by the compiler.

    As with a JIT, the results of the compilation are not kept between runs. Because the bytecodes are more compact, this saves loading time as well as storage space. It also retains portability, and allows the optimization to reflect the way the program is currently being used. Currently, no attempt is made to save information that might help future runs become efficient more quickly.

    In other words, the HotSpot JVM optimizes hot methods at runtime, so the execution time of a method changes over the lifetime of the process.
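    The effect is easy to see with a tiny experiment. The following is a minimal sketch (not from the original post; the class name, workload, and iteration counts are arbitrary assumptions) that times the same method in repeated batches: the early batches run mostly interpreted and are noticeably slower, while later batches use the compiled code. Running it with -XX:+PrintCompilation shows when HotSpot compiles the method.

    public class JitWarmupDemo {

        // A small method that HotSpot will eventually treat as "hot" and compile.
        static long sumOfSquares(int n) {
            long sum = 0;
            for (int i = 0; i < n; i++) {
                sum += (long) i * i;
            }
            return sum;
        }

        public static void main(String[] args) {
            // Time the same workload in repeated batches: per-batch time usually
            // drops once the method has been compiled to native code.
            for (int batch = 1; batch <= 10; batch++) {
                long start = System.nanoTime();
                long result = 0;
                for (int i = 0; i < 10000; i++) {
                    result += sumOfSquares(1000);
                }
                long elapsedMs = (System.nanoTime() - start) / 1000000;
                System.out.println("batch " + batch + ": " + elapsedMs + " ms (checksum " + result + ")");
            }
        }
    }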

    3. SQL Server Optimization

    MS SQL Server is a very powerful database, and it performs a lot of optimization at runtime to improve query performance. SQL Server performance optimization is a very complicated topic; here we just emphasize a few points:
    (1) Every statement is compiled into an execution plan by the Query Optimizer, based on the query text, the database schema, and statistics. SQL Server 2005 has a pool of memory that is used to store both execution plans and data buffers, and the percentage of the pool allocated to either execution plans or data buffers fluctuates dynamically, depending on the state of the system. SQL Server 2005 also has an efficient algorithm to find an existing execution plan for a given SQL statement; in most systems, the minimal resources used by this lookup are less than the resources saved by reusing existing plans instead of compiling every SQL statement (a small JDBC sketch of plan reuse follows this list).
     
    (2) Disk I/O is a core characteristic of a database. SQL Server has a component called the buffer manager that manages reading and writing database pages and caches them in memory to reduce file I/O. This is also a dynamic process.
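    As a minimal sketch of the plan-reuse point above (not from the original post; the connection URL, credentials, and table/column names are hypothetical), a single parameterized statement lets SQL Server reuse one cached execution plan instead of compiling a new statement for every value:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PlanReuseSketch {
        public static void main(String[] args) throws Exception {
            // Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver"); // only needed for pre-JDBC-4 drivers
            String url = "jdbc:sqlserver://localhost;databaseName=PerfDemo"; // assumed server/database
            Connection con = DriverManager.getConnection(url, "testUser", "testPassword");
            try {
                // One statement text with a parameter marker -> one plan, reused for every id.
                PreparedStatement ps = con.prepareStatement(
                        "SELECT name FROM dbo.Product WHERE id = ?");
                try {
                    for (int id = 1; id <= 100; id++) {
                        ps.setInt(1, id);
                        ResultSet rs = ps.executeQuery();
                        while (rs.next()) {
                            System.out.println(rs.getString("name"));
                        }
                        rs.close();
                    }
                } finally {
                    ps.close();
                }
                // By contrast, concatenating the id into the SQL text ("... WHERE id = 1",
                // "... WHERE id = 2", ...) tends to produce a separate plan per statement
                // and extra compilation work at runtime.
            } finally {
                con.close();
            }
        }
    }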

    4. Application Errors 

    Another very important factor affecting performance results is application runtime errors. As most of us have noticed again and again, application errors greatly affect the stability of the server.
    A recent example: the SM performance testing found that the latest run was much slower than the previous one and that throughput was very unstable during the test. However, once one of the report tests was removed from the test scripts, the results became much more reasonable. As we all know, the report component is one of the root causes of server crashes and out-of-memory issues, so testing performance with the report module included does not make much sense.

    How can we get trusted results?

    1. Fix Errors Before Testing

    First of all, fix the errors. There are several kinds of errors we must watch for:
    (1) Lots of exceptions in the log file
    (2) Server crashes and out-of-memory issues
    Having no serious errors is a precondition for stable performance testing results.
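    As a minimal sketch (the log file location and the keywords to look for are assumptions), a quick scan of the server log can tell you whether a run was error-free before you trust its numbers:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class LogErrorScan {
        public static void main(String[] args) throws IOException {
            // The default path is a placeholder; pass the real server log as an argument.
            String logFile = args.length > 0 ? args[0] : "server.log";
            int suspicious = 0;
            BufferedReader reader = new BufferedReader(new FileReader(logFile));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    if (line.contains("Exception") || line.contains("OutOfMemoryError")) {
                        suspicious++;
                    }
                }
            } finally {
                reader.close();
            }
            System.out.println(suspicious + " suspicious lines found in " + logFile);
        }
    }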

    2. Run a Benchmark Tool Before Running the Application

    There are lots of good PC performance benchmark tools. I selected PassMark PerformanceTest (http://www.passmark.com/download/pt_download.htm), which is very easy to use and makes results easy to understand and compare.


    I did a quick test on my laptop and on one of our high-performance servers. The results show huge differences:


    Machine                     Overall            CPU                Memory           Disk
    My Laptop                   388.9              943.5              441.3            155.4
    High-Performance Server     1627.4             3155.9             835.7            1541.2
    Comparison Result           around 4.2 times   around 3.5 times   around 2 times   around 9.9 times


    The results show that disk performance is the biggest difference between the two machines, so disk-sensitive operations may dominate the overall result if the test runs on my laptop.


    Another very interesting benchmark is the JVM benchmark SPECjvm2008 (http://www.spec.org/jvm2008/docs/UserGuide.html), which is designed to measure the performance of a JRE (a JVM and its associated libraries). It also measures the performance of the operating system and hardware in the context of executing the JRE. It has a complicated evaluation process, and we may only need to look at the final score. The following is an example:

    The score of my laptop is 35.6 ops/m, while the score of the high-performance server is 51.8 ops/m.

    3. Run a Little Bit Longer

    As we saw in the previous sections, modern software applies many optimization techniques at runtime, so response times keep changing during the test. Giving the system a little more time as warm-up before measuring is reasonable. How long the warm-up should be is a trade-off that needs to be backed by experience.
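    A minimal warm-up-then-measure harness might look like the following sketch (not from the original post; the workload and the warm-up/measurement iteration counts are illustrative assumptions that a real test would tune from experience):

    public class WarmupHarness {

        // Stand-in for the operation under test; replace with the real call.
        static long operationUnderTest() {
            long sum = 0;
            for (int i = 0; i < 100000; i++) {
                sum += i % 7;
            }
            return sum;
        }

        public static void main(String[] args) {
            long sink = 0;

            // Warm-up phase: give HotSpot time to profile and compile the hot path.
            for (int i = 0; i < 20000; i++) {
                sink += operationUnderTest();
            }

            // Measurement phase: only these iterations are timed and reported.
            int measured = 5000;
            long start = System.nanoTime();
            for (int i = 0; i < measured; i++) {
                sink += operationUnderTest();
            }
            long avgMicros = (System.nanoTime() - start) / measured / 1000;
            System.out.println("average per call: " + avgMicros + " us (checksum " + sink + ")");
        }
    }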
     

    4. Dig into Enough Detail (even a single line of code if needed)

    Last but actually most important: do not be fooled by an overall result. Because performance testing and tuning is a very time-consuming activity that needs a lot of patience, we always try to get evidence or clues from the big picture. However, a big picture or overall result keeps changing, and it cannot tell you much about the real performance from a testing and tuning perspective. A product-wide test across multiple functionalities can serve as an overall summary of product performance, but it is not very helpful for bottleneck troubleshooting.

    The only way to find the real performance issue is to dig into a specific module, function, or even a single line of code or query. Only with such detailed measurements can you say with confidence whether the tuning works or not.
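    As a small sketch of this idea (not from the original post; the operation and sample counts are placeholders), timing one specific operation and reporting min/avg/max makes the variation visible instead of hiding it behind a single overall number:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class DetailedTiming {

        // Placeholder for the specific method, query, or code path being investigated.
        static void suspectOperation() {
            long sum = 0;
            for (int i = 0; i < 50000; i++) {
                sum += i * 31L;
            }
            if (sum == -1) {
                System.out.println(sum); // keeps the loop from being optimized away entirely
            }
        }

        public static void main(String[] args) {
            List<Long> samplesMicros = new ArrayList<Long>();
            for (int i = 0; i < 1000; i++) {
                long start = System.nanoTime();
                suspectOperation();
                samplesMicros.add((System.nanoTime() - start) / 1000);
            }
            Collections.sort(samplesMicros);
            long total = 0;
            for (long s : samplesMicros) {
                total += s;
            }
            System.out.println("min=" + samplesMicros.get(0) + "us"
                    + " avg=" + (total / samplesMicros.size()) + "us"
                    + " max=" + samplesMicros.get(samplesMicros.size() - 1) + "us"
                    + " over " + samplesMicros.size() + " calls");
        }
    }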


    In summary, we constantly run into result-stability issues during performance testing. This article tries to explain why a Java application performs differently at different times (there are many factors involved) and how we can get more trustworthy results. This is a first attempt at the topic; I hope further discussion will make things clearer.



    posted on 2009-03-09 23:25 by Justin Chen  Reads(1532)  Comments(1)  Category: Performance

    FeedBack:
    # re: Why Performance Testing Result is NOT Stable - How can we avoid it? 2009-03-22 06:40 duguo
    If you have monitors on the key resources, such as CPU or I/O, they should tell you more about your application's status.

    Good luck
      
