

    [Repost] How Google Tests Software - Part Five

    Google's test classification is deliberately broad: Small Tests, Medium Tests, Large Tests
    ------
    By James Whittaker
    Instead of distinguishing between code, integration and system testing, Google uses the language of small, medium and large tests emphasizing scope over form. Small tests cover small amounts of code and so on. Each of the three engineering roles may execute any of these types of tests and they may be performed as automated or manual tests.
    Small Tests are mostly (but not always) automated and exercise the code within a single function or module. They are most likely written by a SWE or an SET and may require mocks and faked environments to run, but TEs often pick these tests up when they are trying to diagnose a particular failure. For small tests the focus is on typical functional issues such as data corruption, error conditions, and off-by-one errors. The question a small test attempts to answer is: does this code do what it is supposed to do?
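    As an illustration of the idea (this is not Google's code; the function and names are hypothetical), a small test exercising a single function might look like this, focusing on exactly the issues named above: boundaries, off-by-one errors, and error conditions.

```python
import unittest

# Hypothetical function under test: parses a TCP port number from a
# string, with explicit error conditions for bad or out-of-range input.
def parse_port(text):
    """Return the port as an int; raise ValueError if malformed or out of range."""
    port = int(text)  # raises ValueError on non-numeric input
    if not 0 <= port <= 65535:
        raise ValueError("port out of range: %d" % port)
    return port

class ParsePortSmallTest(unittest.TestCase):
    # Typical small-test concerns: boundary values (off-by-one) and
    # error conditions, all within one function.
    def test_boundaries(self):
        self.assertEqual(parse_port("0"), 0)
        self.assertEqual(parse_port("65535"), 65535)

    def test_error_conditions(self):
        with self.assertRaises(ValueError):
            parse_port("65536")  # one past the upper bound
        with self.assertRaises(ValueError):
            parse_port("not-a-port")  # corrupt input
```

    Run with `python -m unittest` in the usual way; the point is the scope, not the framework.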
    Medium Tests can be automated or manual and involve two or more features and specifically cover the interaction between those features. I've heard any number of SETs describe this as "testing a function and its nearest neighbors." SETs drive the development of these tests early in the product cycle as individual features are completed and SWEs are heavily involved in writing, debugging and maintaining the actual tests. If a test fails or breaks, the developer takes care of it autonomously. Later in the development cycle TEs may perform medium tests either manually (in the event the test is difficult or prohibitively expensive to automate) or with automation. The question a medium test attempts to answer is does a set of near neighbor functions interoperate with each other the way they are supposed to?
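    To make "a function and its nearest neighbors" concrete, here is a minimal sketch (all class names are hypothetical, not from the article): two features, a cache and a fetcher, tested together against a faked environment rather than a real backend. The assertion is on the interaction between them, not on either feature alone.

```python
class FakeBackend:
    """Faked environment standing in for a real network service."""
    def __init__(self):
        self.calls = 0

    def get(self, key):
        self.calls += 1
        return "value-for-" + key

class Cache:
    """Feature 1: a simple in-memory cache."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def put(self, key, value):
        self._store[key] = value

class CachedFetcher:
    """Feature 2: a fetcher that consults the cache before the backend."""
    def __init__(self, backend, cache):
        self.backend = backend
        self.cache = cache

    def fetch(self, key):
        cached = self.cache.get(key)
        if cached is not None:
            return cached
        value = self.backend.get(key)
        self.cache.put(key, value)
        return value

def medium_test_fetch_uses_cache():
    # The interaction under test: a repeated fetch must be served from
    # the cache, so the fake backend sees exactly one call.
    backend = FakeBackend()
    fetcher = CachedFetcher(backend, Cache())
    assert fetcher.fetch("k") == "value-for-k"
    assert fetcher.fetch("k") == "value-for-k"
    assert backend.calls == 1
```

    Either feature could pass its own small tests while this interaction is still broken, which is exactly the gap medium tests exist to cover.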
    Large Tests cover three or more (usually more) features and represent real user scenarios to the extent possible. There is some concern with overall integration of the features, but large tests tend to be more results driven, i.e., did the software do what the user expects? All three roles are involved in writing large tests, and everything from automation to exploratory testing can be the vehicle to accomplish it. The question a large test attempts to answer is: does the product operate the way a user would expect?
    The actual language of small, medium and large isn’t important. Call them whatever you want. The important thing is that Google testers share a common language to talk about what is getting tested and how those tests are scoped. When some enterprising testers began talking about a fourth class they dubbed "enormous," every other tester in the company could imagine a system-wide test covering nearly every feature and running for a very long time. No additional explanation was necessary.
    The primary driver of what gets tested and how much is a very dynamic process and varies wildly from product to product. Google prefers to release often and leans toward getting a product out to users so we can get feedback and iterate. The general idea is that if we have developed some product or a new feature of an existing product we want to get it out to users as early as possible so they may benefit from it. This requires that we involve users and external developers early in the process so we have a good handle on whether what we are delivering is hitting the mark.
    Finally, the mix between automated and manual testing definitely favors the former for all three sizes of tests. If it can be automated and the problem doesn’t require human cleverness and intuition, then it should be automated. Only those problems, in any of the above categories, which specifically require human judgment, such as the beauty of a user interface or whether exposing some piece of data constitutes a privacy concern, should remain in the realm of manual testing.
    Having said that, it is important to note that Google performs a great deal of manual testing, both scripted and exploratory, but even this testing is done under the watchful eye of automation. Industry leading recording technology converts manual tests to automated tests to be re-executed build after build to ensure minimal regressions and to keep manual testers always focusing on new issues. We also automate the submission of bug reports and the routing of manual testing tasks. For example, if an automated test breaks, the system determines the last code change that is the most likely culprit, sends email to its authors and files a bug. The ongoing effort to automate to within the “last inch of the human mind” is currently the design spec for the next generation of test engineering tools Google is building.
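    The culprit-finding step described above can be sketched roughly as follows. This is a guess at the shape of such a system, not Google's implementation: walk recent changes newest-first and flag the first one that touched a file the failing test depends on.

```python
def likely_culprit(changes, failing_test_deps):
    """Return the most recent change touching a file the failing test
    depends on, or None if no change overlaps.

    `changes` is a list of dicts ordered oldest-to-newest, each with
    'author' and 'files' keys (a hypothetical change-log format).
    """
    deps = set(failing_test_deps)
    for change in reversed(changes):  # newest first
        if deps & set(change["files"]):
            return change
    return None

# Once a culprit is identified, the surrounding system would email the
# change's author and file a bug automatically, as the article describes.
```

    For example, if the newest change touching `net/socket.cc` was authored by alice, a test depending on that file would be routed to her first.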
    Those tools will be highlighted in future posts. However, my next target is going to revolve around The Life of an SET. I hope you keep reading.

    posted on 2011-06-04 15:54 by XXXXXX, read (290), comments (0), category: Uncategorized