

[Repost] How Google Tests Software - Part Five

Google's test classification is deliberately broad: Small Tests, Medium Tests, Large Tests.
    ------
    By James Whittaker
    Instead of distinguishing between code, integration and system testing, Google uses the language of small, medium and large tests emphasizing scope over form. Small tests cover small amounts of code and so on. Each of the three engineering roles may execute any of these types of tests and they may be performed as automated or manual tests.
Small Tests are mostly (but not always) automated and exercise the code within a single function or module. They are most likely written by a SWE or an SET and may require mocks and faked environments to run, but TEs often pick these tests up when they are trying to diagnose a particular failure. For small tests the focus is on typical functional issues such as data corruption, error conditions and off-by-one errors. The question a small test attempts to answer is: does this code do what it is supposed to do?
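To make the "small test" idea concrete, here is a minimal sketch in Python. The `charge` function and its payment gateway are hypothetical, not from the post; the point is that the test exercises one function in isolation, fakes its environment with a mock, and probes error conditions:

```python
# Hypothetical "small test": one function, faked environment, focus on
# error conditions and boundary values. charge() is an invented example.
from unittest import mock

def charge(account, amount, gateway):
    """Charge an amount to an account via a payment gateway."""
    if amount <= 0:
        raise ValueError("amount must be positive")  # error condition
    return gateway.submit(account, amount)

def test_rejects_non_positive_amount():
    # Boundary value: zero must be rejected before the gateway is touched.
    try:
        charge("acct-1", 0, mock.Mock())
        assert False, "expected ValueError"
    except ValueError:
        pass

def test_submits_to_gateway():
    # The gateway is a mock, so the test runs without any real service.
    fake_gateway = mock.Mock()
    fake_gateway.submit.return_value = "ok"
    assert charge("acct-1", 10, fake_gateway) == "ok"
    fake_gateway.submit.assert_called_once_with("acct-1", 10)
```

Tests in this shape can be collected by a runner such as pytest; nothing outside the process is needed, which is what keeps them "small."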
    Medium Tests can be automated or manual and involve two or more features and specifically cover the interaction between those features. I've heard any number of SETs describe this as "testing a function and its nearest neighbors." SETs drive the development of these tests early in the product cycle as individual features are completed and SWEs are heavily involved in writing, debugging and maintaining the actual tests. If a test fails or breaks, the developer takes care of it autonomously. Later in the development cycle TEs may perform medium tests either manually (in the event the test is difficult or prohibitively expensive to automate) or with automation. The question a medium test attempts to answer is does a set of near neighbor functions interoperate with each other the way they are supposed to?
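A medium test of "a function and its nearest neighbors" might look like the following sketch. Both features (`Inventory` and `Cart`) are invented for illustration; the essential property is that nothing between the two features is mocked, so the test pins down their interaction:

```python
# Hypothetical "medium test": two neighboring features exercised
# together, with no mock between them. Inventory and Cart are invented.
class Inventory:
    def __init__(self, stock):
        self._stock = dict(stock)

    def reserve(self, item, qty):
        if self._stock.get(item, 0) < qty:
            raise LookupError(f"not enough {item}")
        self._stock[item] -= qty

class Cart:
    def __init__(self, inventory):
        self._inventory = inventory
        self.items = []

    def add(self, item, qty):
        # The cart reserves stock immediately; this cross-feature
        # behavior is exactly what a medium test should cover.
        self._inventory.reserve(item, qty)
        self.items.append((item, qty))

def test_cart_and_inventory_interoperate():
    inv = Inventory({"widget": 2})
    cart = Cart(inv)
    cart.add("widget", 2)
    assert cart.items == [("widget", 2)]
    try:
        cart.add("widget", 1)  # stock is exhausted
        assert False, "expected LookupError"
    except LookupError:
        pass
```

A small test of `Cart` alone would mock the inventory; the medium test deliberately does not, answering the "do near neighbors interoperate?" question.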
Large Tests cover three or more (usually more) features and represent real user scenarios to the extent possible. There is some concern with overall integration of the features, but large tests tend to be more results driven, i.e., did the software do what the user expects? All three roles are involved in writing large tests and everything from automation to exploratory testing can be the vehicle to accomplish it. The question a large test attempts to answer is: does the product operate the way a user would expect?
    The actual language of small, medium and large isn’t important. Call them whatever you want. The important thing is that Google testers share a common language to talk about what is getting tested and how those tests are scoped. When some enterprising testers began talking about a fourth class they dubbed enormous every other tester in the company could imagine a system-wide test covering nearly every feature and that ran for a very long time. No additional explanation was necessary.
    The primary driver of what gets tested and how much is a very dynamic process and varies wildly from product to product. Google prefers to release often and leans toward getting a product out to users so we can get feedback and iterate. The general idea is that if we have developed some product or a new feature of an existing product we want to get it out to users as early as possible so they may benefit from it. This requires that we involve users and external developers early in the process so we have a good handle on whether what we are delivering is hitting the mark.
    Finally, the mix between automated and manual testing definitely favors the former for all three sizes of tests. If it can be automated and the problem doesn’t require human cleverness and intuition, then it should be automated. Only those problems, in any of the above categories, which specifically require human judgment, such as the beauty of a user interface or whether exposing some piece of data constitutes a privacy concern, should remain in the realm of manual testing.
    Having said that, it is important to note that Google performs a great deal of manual testing, both scripted and exploratory, but even this testing is done under the watchful eye of automation. Industry leading recording technology converts manual tests to automated tests to be re-executed build after build to ensure minimal regressions and to keep manual testers always focusing on new issues. We also automate the submission of bug reports and the routing of manual testing tasks. For example, if an automated test breaks, the system determines the last code change that is the most likely culprit, sends email to its authors and files a bug. The ongoing effort to automate to within the “last inch of the human mind” is currently the design spec for the next generation of test engineering tools Google is building.
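The culprit-finding flow described above can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions, not Google's actual tooling: the `Change` type, the file-overlap heuristic, and the returned bug-report dict are all invented to keep the sketch self-contained and runnable.

```python
# Hedged sketch of the described flow: when an automated test breaks,
# find the most recent change touching the failing area and route a bug
# to its author. All names and the heuristic here are hypothetical.
from dataclasses import dataclass

@dataclass
class Change:
    id: str
    author: str
    files: set

def find_likely_culprit(recent_changes, failing_files):
    """Return the newest change that touched any failing file, else None."""
    for change in reversed(recent_changes):  # list is ordered oldest first
        if change.files & failing_files:
            return change
    return None

def on_test_breakage(recent_changes, failing_files):
    culprit = find_likely_culprit(recent_changes, failing_files)
    if culprit is None:
        return None
    # The real system would email the author and file a bug; returning a
    # plain dict keeps this sketch free of external dependencies.
    return {
        "assignee": culprit.author,
        "change": culprit.id,
        "summary": f"Test breakage likely caused by change {culprit.id}",
    }
```

In practice the heuristic would weigh build history and code ownership rather than raw file overlap, but the routing shape (detect, attribute, notify, file) is the same.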
    Those tools will be highlighted in future posts. However, my next target is going to revolve around The Life of an SET. I hope you keep reading.

posted on 2011-06-04 15:54 XXXXXX Views (290) Comments (0) Category: Uncategorized
