jinfeng_wang (G-G-S,D-D-U!)
http://m.tkk7.com/jinfeng_wang/category/700.html

Things to Watch Out for When Testing Database-Driven Programs
jinfeng_wang, Sun, 28 May 2006 07:52 GMT
http://m.tkk7.com/jinfeng_wang/archive/2006/05/28/48597.html

1. Do not initialize the fixture in the TestCase constructor; use the setUp() and tearDown() methods instead.
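
A minimal sketch of what this looks like in JUnit 3.x. The class name, table and the HSQLDB in-memory JDBC URL are illustrative assumptions, not part of the original post:

    // The connection is (re)created for every test method in setUp() and
    // released in tearDown(), instead of being built in the constructor.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import junit.framework.TestCase;

    public class CustomerDaoTest extends TestCase {
        private Connection conn;

        protected void setUp() throws Exception {
            super.setUp();
            Class.forName("org.hsqldb.jdbcDriver");            // illustrative driver
            conn = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        }

        protected void tearDown() throws Exception {
            conn.close();
            super.tearDown();
        }

        public void testConnectionIsAvailable() throws Exception {
            assertNotNull(conn);
        }
    }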

2. Do not rely on, or assume, the order in which tests run: JUnit uses a Vector to store the test methods, so different platforms may pull the test methods out of the Vector in different orders.

3. Avoid writing TestCases that have side effects. For example, if later tests depend on certain specific transaction data, do not commit that transaction data; a simple rollback is enough.

For us, a commit is sometimes unavoidable, and therefore so is the side effect. For example, after an "insert" is executed the database obviously contains one extra row, so each subsequent test must undo its own side effect itself. Here that means "deleting the row that was just inserted" (and this cleanup work must not introduce side effects of its own, such as deleting other data by mistake).

"Side effect" here also covers "affecting the surrounding environment". Since quite a few people are working on this now, it is best to separate everyone's test servers, for example one Database instance per person (it can be created a little smaller) or one database per person. Note that the values that differ from person to person should be shared as constants across all of your own programs, rather than scattered in various places; otherwise, when the test server is switched later, every program has to be changed.

To make sure the tests can easily be run anywhere, please keep the test data on everyone's database servers identical; otherwise it will not be easy to take the code to FJ and run it there just as easily. So a "test data set" needs to be prepared: the structure of the database objects (schemas, tables, stored procedures, and so on) must be kept consistent, and the data content of the databases must be kept consistent as well.

4. When you subclass a test class, remember to call the parent class's setUp() and tearDown() methods.
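
A small sketch of that rule, extending the illustrative CustomerDaoTest above (all names invented):

    public class OracleCustomerDaoTest extends CustomerDaoTest {
        protected void setUp() throws Exception {
            super.setUp();        // let the parent build its fixture first
            // ... additional fixture specific to this subclass ...
        }

        protected void tearDown() throws Exception {
            // ... tear down the extra fixture first ...
            super.tearDown();     // then let the parent clean up
        }
    }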

5. Keep the test code next to the production code, so that both are compiled and updated together. (Ant has a task that supports JUnit.)

6. Test classes and test methods should follow a consistent naming scheme, for example prefixing the name of the class under test with "Test" to form the test class name.
We may need to adapt this here and keep the test method names consistent with the numbering of our test cases.

7. 紜繚嫻嬭瘯涓庢椂闂存棤鍏籌紝涓嶈渚濊禆浣跨敤榪囨湡鐨勬暟鎹?br />榪涜嫻嬭瘯銆傚鑷村湪闅忓悗鐨勭淮鎶よ繃紼嬩腑寰堥毦閲嶇幇嫻嬭瘯銆?

8. If the software you write is aimed at the international market, take internationalization into account when writing tests. Do not test only with your native-language Locale.

9. Make as much use as possible of the assert/fail methods and the exception-handling support that JUnit provides; this keeps the code concise.
This point is critical: the quality of the assert statements directly affects the correctness of the test, because the asserts are exactly what check the correctness of the item currently under test.
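
A small illustration of the difference good assertions make. The Account class and its values are hypothetical:

    public void testBalanceAfterDeposit() {
        Account a = new Account();      // Account is an invented example class
        a.deposit(100);

        // weak: a failure only reports "assertion failed"
        assertTrue(a.getBalance() == 100);

        // better: a failure reports the expected and actual values
        assertEquals("balance after a 100 deposit", 100, a.getBalance());

        // for expected exceptions, fail() makes the intent explicit
        try {
            a.withdraw(500);
            fail("withdraw() beyond the balance should have thrown");
        } catch (IllegalArgumentException expected) {
            // expected
        }
    }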

10. Tests should be as small as possible and execute quickly.


==========
1) Generate all of the database test data automatically with a database-access program: the user only has to modify the ConnectionString and run the program, and it creates the database / database tables / storage structures and inserts the data automatically.
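
A minimal sketch of such a generator in Java, assuming a JDBC driver (HSQLDB here) and an invented customer table; pointing it at a different server only requires editing CONNECTION_STRING:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class TestDataGenerator {
        // illustrative value; edit for your own test server
        static final String CONNECTION_STRING = "jdbc:hsqldb:mem:testdb";

        public static void main(String[] args) throws Exception {
            Class.forName("org.hsqldb.jdbcDriver");   // illustrative driver class
            Connection conn =
                DriverManager.getConnection(CONNECTION_STRING, "sa", "");
            Statement st = conn.createStatement();
            // create the test schema, then seed it with known rows
            st.executeUpdate(
                "CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(64))");
            st.executeUpdate(
                "INSERT INTO customer (id, name) VALUES (1001, 'test-1001')");
            st.executeUpdate(
                "INSERT INTO customer (id, name) VALUES (1002, 'test-1002')");
            st.close();
            conn.close();
            System.out.println("Test data set created.");
        }
    }
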
2) To keep several testers from interfering with one another, it is recommended that each person use his or her own separate database. Otherwise one person's mistake can disturb other people's work.
3) In your own programs, put everything that depends on the environment into one separate class and share it as static constants. This makes it very easy to switch environments and run the tests again, i.e. the test environment becomes easy to move.
4) As for the database table structure, I suggest that each test table contain a primary key, and that when we insert data we keep the numbering of the test case, the test-case program, and the data used inside the test-case program consistent with one another, so that when a problem appears, data problems can be quickly isolated or ruled out.


stub VS mock
jinfeng_wang, Mon, 25 Apr 2005 08:41 GMT
http://m.tkk7.com/jinfeng_wang/archive/2005/04/25/3721.html

Before looking at the difference between the two, one point needs to be clear: both exist for the same goal, namely to stand in for the dependencies of the code under test, so that what would otherwise be an "integration test" is reduced to a "unit test".

mock: using a package such as EasyMock, the "dependency" is injected into the code under test from the test code, and the results returned by its method calls are simulated programmatically.

stub: you write the code that replaces the "dependency" yourself; the stub is itself a simplified implementation of that dependency.

In practice, whenever a mock can be used, a stub should not be the first choice. Sometimes, however, a stub is unavoidable: for example, when testing legacy code that does not support "injection", the substitution has to be moved up a level and done with a stub instead.
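
A small sketch of the contrast, using an invented RateService dependency. The mock half uses the EasyMock 1.x-style MockControl API (the same style the Mocquer article further down this page describes); treat the exact calls as an assumption to check against your EasyMock version:

    import org.easymock.MockControl;   // EasyMock 1.x-style API (assumed)

    // The dependency to be replaced in the unit test.
    interface RateService {
        double rateFor(String currency);
    }

    // Stub: a hand-written, simplified implementation of the dependency.
    class FixedRateServiceStub implements RateService {
        public double rateFor(String currency) {
            return 1.5;                // always the same canned answer
        }
    }

    public class RateServiceMockingTest extends junit.framework.TestCase {
        public void testWithMock() {
            // Mock: the behaviour is programmed from inside the test itself.
            MockControl control = MockControl.createControl(RateService.class);
            RateService rates = (RateService) control.getMock();
            rates.rateFor("USD");           // expect this call...
            control.setReturnValue(1.5);    // ...and answer it with 1.5
            control.replay();

            // here the code under test would be exercised with "rates" injected
            assertEquals(1.5, rates.rateFor("USD"), 0.0);

            control.verify();   // a mock also checks how it was used; a stub cannot
        }
    }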


Junit Test Practices (zz)
jinfeng_wang, Mon, 25 Apr 2005 08:22 GMT
http://m.tkk7.com/jinfeng_wang/archive/2005/04/25/3717.html

 

Now that we've seen JUnit in action, let's step back a little and look at some good practices for writing tests. Although we'll discuss implementing them with JUnit, these practices are applicable to whatever test tool we may choose to use.

 

Write Tests to Interfaces

Wherever possible, write tests to interfaces, rather than classes. It's good OO design practice to program to interfaces, rather than classes, and testing should reflect this. Different test suites can easily be created to run the same tests against implementations of an interface (see Inheritance and Testing later).

When testing a class, test only the interfaces the class exposes to the outside, and test its internal interfaces only where appropriate. When a class inherits methods from another class, the tests for those methods should not be written in the subclass's test case but in the parent class's test case.

 

Don't Bother Testing JavaBean Properties

It's usually unnecessary to test property getters and setters. It's usually a waste of time to develop such tests. Also, bloating test cases with code that isn't really useful makes them harder to read and maintain.

 

Maximizing Test Coverage

Test-first development is the best strategy for ensuring that we maximize test coverage. However, sometimes tools can help to verify that we have met our goals for test coverage. For example, a profiling tool such as Sitraka's JProbe Profiler (discussed in Chapter 15) can be used to examine the execution path through an application under test and establish what code was (and wasn't) executed. Specialized tools such as JProbe Coverage (also part of the JProbe Suite) make this much easier. JProbe Coverage can analyze one or more test runs along with the application codebase, to produce a list of methods, and even lines of source code, that weren't executed. The modest investment in such a tool is likely to be worthwhile when it's necessary to implement a test suite for code that doesn't already have one.

 

Don't Rely on the Ordering of Test Cases

When using reflection to identify test methods to execute, JUnit does not guarantee the order in which it runs tests. Thus tests shouldn't rely on other tests having been executed previously. If ordering is vital, it's possible to add tests to a TestSuite object programmatically. They will be executed in the order in which they were added. However, it's best to avoid ordering issues by using the setUp() method appropriately.
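
A minimal sketch of building an explicitly ordered suite; CustomerDaoTest and its test methods are the illustrative names used earlier, and the sketch assumes the test class declares the usual TestCase(String name) constructor:

    import junit.framework.Test;
    import junit.framework.TestSuite;

    public class OrderedSuite {
        public static Test suite() {
            // tests added programmatically run in insertion order
            TestSuite suite = new TestSuite("ordered example");
            suite.addTest(new CustomerDaoTest("testInsertCustomer"));
            suite.addTest(new CustomerDaoTest("testConnectionIsAvailable"));
            return suite;
        }
    }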

 

Avoid Side Effects

For the same reasons, it's important to avoid side effects when testing. A side effect occurs when one test changes the state of the system being tested in a way that may affect subsequent tests. Changes to persistent data in a database are also potential side effects.

 

Read Test Data from the Classpath, Not the File System

It's essential that tests are easy to run. A minimum of configuration should be required. A common cause of problems when running a test suite is for tests to read their configuration from the file system. Using absolute file paths will cause problems when code is checked out to a different location; different file location and path conventions (such as /home/rodj/tests/foo.dat or C:\Documents and Settings\rodj\foo.dat) can tie tests to a particular operating system. These problems can be avoided by loading test data from the classpath, with the Class.getResource() or Class.getResourceAsStream() methods. The necessary resources are usually best placed in the same directory as the test classes that use them.
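
A small sketch of loading a fixture file from the classpath rather than the file system; it assumes foo.dat sits next to the test class and happens to be a properties-style file (both assumptions for illustration):

    import java.io.InputStream;
    import java.util.Properties;
    import junit.framework.TestCase;

    public class ClasspathDataTest extends TestCase {
        public void testLoadFixture() throws Exception {
            // resolved relative to this class, wherever the project is checked out
            InputStream in = getClass().getResourceAsStream("foo.dat");
            assertNotNull("foo.dat must be on the classpath", in);
            Properties fixture = new Properties();
            fixture.load(in);
            in.close();
            assertFalse(fixture.isEmpty());
        }
    }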

 

Avoid Code Duplication in Test Cases

Test cases are an important part of the application. As with application code, the more code duplication they contain, the more likely they are to contain errors. The more code test cases contain the more of a chore they are to write and the less likely it is that they will be written. Avoid this problem by a small investment in test infrastructure. We've already seen the use of a private method by several test cases, which greatly simplifies the test methods using it.
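
One common shape of that small investment, sketched here with invented names, is a private helper shared by several test methods:

    public class OrderTest extends junit.framework.TestCase {
        // helper used by several tests: builds a valid order in one place
        private Order newOrder(int quantity) {
            Order o = new Order("test-customer");   // Order is hypothetical
            o.addItem("widget", quantity);
            return o;
        }

        public void testCountForSingleItem() {
            assertEquals(1, newOrder(1).itemCount());
        }

        public void testCountForManyItems() {
            assertEquals(5, newOrder(5).itemCount());
        }
    }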


 

Inheritance and Testing

We need to consider the implications of the inheritance hierarchy of classes we test. A class should pass all tests associated with its superclasses and the interfaces it implements. This is a corollary of the "Liskov Substitution Principle", which we'll meet in Chapter 4.

When using JUnit, we can use inheritance to our advantage. When one JUnit test case extends another (rather than extending junit.framework.TestCase directly), all the tests in the superclass are executed, as well as tests added in the subclass. This means that JUnit test cases can use an inheritance hierarchy paralleling the concrete inheritance hierarchy of the classes being tested.

In another use of inheritance among test cases, when a test case is written against an interface, we can make the test case abstract, and test individual implementations in concrete subclasses. The abstract superclass can declare a protected abstract method returning the actual object to be tested, forcing subclasses to implement it.

It's good practice to subclass a more general JUnit test case to add new tests for a subclass of an object or a particular implementation of an interface.
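
A sketch of the abstract-superclass pattern described above, with an invented Stack interface (not java.util.Stack) and a hypothetical ArrayStack implementation:

    import junit.framework.TestCase;

    // Abstract test written against the interface; each implementation gets a
    // concrete subclass that supplies the object under test.
    public abstract class AbstractStackTest extends TestCase {
        // forces subclasses to say which implementation is being tested
        protected abstract Stack createStack();

        public void testPushThenPop() {
            Stack s = createStack();
            s.push("x");
            assertEquals("x", s.pop());
        }
    }

    // One concrete subclass per implementation; it inherits all the tests.
    public class ArrayStackTest extends AbstractStackTest {
        protected Stack createStack() {
            return new ArrayStack();   // ArrayStack is hypothetical
        }
    }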




Mock Objects in Unit Tests (zz)
jinfeng_wang, Sun, 20 Mar 2005 10:16 GMT
http://m.tkk7.com/jinfeng_wang/archive/2005/03/20/2262.html

Mock Objects in Unit Tests
by Lu Jian, 01/12/2005

The use of mock objects is a widely employed unit testing strategy. It shields external and unnecessary factors from testing and helps developers focus on a specific function to be tested.

EasyMock is a well-known mock tool that can create a mock object for a given interface at runtime. The mock object's behavior can be defined in the test case before the code under test uses it. EasyMock is based on java.lang.reflect.Proxy, which can create dynamic proxy classes/objects according to given interfaces. But it has an inherent limitation from its use of Proxy: it can create mock objects only for interfaces.

Mocquer is a similar mock tool, but one that extends the functionality of EasyMock to support mock object creation for classes as well as interfaces.

Introduction to Mocquer

Mocquer is based on the Dunamis project, which is used to generate dynamic delegation classes/objects for specific interfaces/classes. For convenience, it follows the class and method naming conventions of EasyMock, but uses a different approach internally.

MockControl is the main class in the Mocquer project. It is used to control the mock object's life cycle and behavior definition. There are four kinds of methods in this class.

Life Cycle Control Methods

    public void replay();
    public void verify();
    public void reset();

The mock object has three states in its life cycle: preparing, working, and checking. Figure 1 shows the mock object life cycle.

[Figure 1. Mock object life cycle]

Initially, the mock object is in the preparing state. The mock object's behavior can be defined in this state. replay() changes the mock object's state to the working state. All method invocations on the mock object in this state will follow the behavior defined in the preparing state. After verify() is called, the mock object is in the checking state. MockControl will compare the mock object's predefined behavior and actual behavior to see whether they match. The match rule depends on which kind of MockControl is used; this will be explained in a moment. The developer can use replay() to reuse the predefined behavior if needed. Call reset(), in any state, to clear the history state and change to the initial preparing state.

Factory Methods

    public static MockControl createNiceControl(...);
    public static MockControl createControl(...);
    public static MockControl createStrictControl(...);

Mocquer provides three kinds of MockControls: Nice, Normal, and Strict. The developer can choose an appropriate MockControl in his or her test case, according to what is to be tested (the test point) and how the test will be carried out (the test strategy). The Nice MockControl is the loosest. It does not care about the order of method invocation on the mock object, or about unexpected method invocations, which just return a default value (that depends on the method's return type). The Normal MockControl is stricter than the Nice MockControl, as an unexpected method invocation on the mock object will lead to an AssertionFailedError. The Strict MockControl is, naturally, the strictest. If the order of method invocation on the mock object in the working state is different than that in the preparing state, an AssertionFailedError will be thrown. The table below shows the differences between these three kinds of MockControl.

                         Nice            Normal                 Strict
    Unexpected Order     Doesn't care    Doesn't care           AssertionFailedError
    Unexpected Method    Default value   AssertionFailedError   AssertionFailedError

There are two versions of each factory method.

    public static MockControl createXXXControl(Class clazz);
    public static MockControl createXXXControl(Class clazz,
            Class[] argTypes, Object[] args);

If the class to be mocked is an interface or it has a public/protected default constructor, the first version is enough. Otherwise, the second version of the factory method is used to specify the signature and provide arguments to the desired constructor. For example, assuming ClassWithNoDefaultConstructor is a class without a default constructor:

    public class ClassWithNoDefaultConstructor {
        public ClassWithNoDefaultConstructor(int i) {
            ...
        }
        ...
    }

The MockControl can be obtained through:

    MockControl control = MockControl.createControl(
            ClassWithNoDefaultConstructor.class,
            new Class[]{Integer.TYPE},
            new Object[]{new Integer(0)});

Mock object getter method

    public Object getMock();

Each MockControl contains a reference to the generated mock object. The developer can use this method to get the mock object and cast it to the real type.

    //get mock control
    MockControl control = MockControl.createControl(Foo.class);
    //get the mock object from the mock control
    Foo foo = (Foo) control.getMock();

Behavior definition methods

    public void setReturnValue(... value);
    public void setThrowable(Throwable throwable);
    public void setVoidCallable();
    public void setDefaultReturnValue(... value);
    public void setDefaultThrowable(Throwable throwable);
    public void setDefaultVoidCallable();
    public void setMatcher(ArgumentsMatcher matcher);
    public void setDefaultMatcher(ArgumentsMatcher matcher);

MockControl allows the developer to define the mock object's behavior for each method invocation on it. When in the preparing state, the developer can call one of the mock object's methods first to specify which method invocation's behavior is to be defined. Then, the developer can use one of the behavior definition methods to specify the behavior. For example, take the following Foo class:

    //Foo.java
    public class Foo {
        public void dummy() throws ParseException {
            ...
        }
        public String bar(int i) {
            ...
        }
        public boolean isSame(String[] strs) {
            ...
        }
        public void add(StringBuffer sb, String s) {
            ...
        }
    }

The behavior of the mock object can be defined as in the following:

    //get mock control
    MockControl control = MockControl.createControl(Foo.class);
    //get mock object
    Foo foo = (Foo) control.getMock();
    //begin behavior definition
    //specify which method invocation's behavior
    //is to be defined
    foo.bar(10);
    //define the behavior -- return "ok" when the
    //argument is 10
    control.setReturnValue("ok");
    ...
    //end behavior definition
    control.replay();
    ...

Most of the more than 50 methods in MockControl are behavior definition methods. They can be grouped into the following categories.

setReturnValue()

These methods are used to specify that the last method invocation should return the value given as the parameter. There are seven versions of setReturnValue(), each of which takes a primitive type as its parameter, such as setReturnValue(int i) or setReturnValue(float f). setReturnValue(Object obj) is used for a method that takes an object instead of a primitive. If the given value does not match the method's return type, an AssertionFailedError will be thrown.

It is also possible to add the number of expected invocations into the behavior definition. This is called the invocation times limitation.

    MockControl control = ...
    Foo foo = (Foo) control.getMock();
    ...
    foo.bar(10);
    //define the behavior -- return "ok" when the
    //argument is 10. And this method is expected
    //to be called just once.
    control.setReturnValue("ok", 1);
    ...

The code segment above specifies that the method invocation, bar(10), can only occur once. How about providing a range?

    ...
    foo.bar(10);
    //define the behavior -- return "ok" when the
    //argument is 10. And this method is expected
    //to be called at least once and at most 3 times.
    control.setReturnValue("ok", 1, 3);
    ...

Now bar(10) is limited to be called at least once and at most three times. More appealingly, a Range can be given to specify the limitation.

    ...
    foo.bar(10);
    //define the behavior -- return "ok" when the
    //argument is 10. And this method is expected
    //to be called at least once.
    control.setReturnValue("ok", Range.ONE_OR_MORE);
    ...

Range.ONE_OR_MORE is a predefined Range instance, which means the method should be called at least once. If there is no invocation-count limitation specified in setReturnValue(), such as setReturnValue("Hello"), it will use Range.ONE_OR_MORE as its default invocation-count limitation. There are another two predefined Range instances: Range.ONE (exactly once) and Range.ZERO_OR_MORE (there's no limit on how many times you can call it).

There is also a special set-return-value method: setDefaultReturnValue(). It defines the return value of the method invocation regardless of the method parameter values. The invocation times limitation is Range.ONE_OR_MORE. This is known as the method parameter values insensitive feature.

    ...
    foo.bar(10);
    //define the behavior -- return "ok" when calling
    //bar(int) regardless of the argument value.
    control.setDefaultReturnValue("ok");
    ...

setThrowable

setThrowable(Throwable throwable) is used to define the method invocation's exception throwing behavior. If the given throwable does not match the exception declaration of the method, an AssertionFailedError will be thrown. The invocation times limitation and method parameter values insensitive features can also be applied.

    ...
    try {
        foo.dummy();
    } catch (Exception e) {
        //skip
    }
    //define the behavior -- throw ParseException
    //when dummy() is called. And this method is
    //expected to be called exactly once.
    control.setThrowable(new ParseException("", 0), 1);
    ...

setVoidCallable()

setVoidCallable() is used for a method that has a void return type. The invocation times limitation and method parameter values insensitive features can also be applied.

    ...
    try {
        foo.dummy();
    } catch (Exception e) {
        //skip
    }
    //define the behavior -- no return value
    //when dummy() is called. And this method is
    //expected to be called at least once.
    control.setVoidCallable();
    ...

Set ArgumentsMatcher

In the working state, the MockControl will search the predefined behavior when any method invocation happens on the mock object. There are three factors in the search criteria: method signature, parameter values, and invocation times limitation. The first and third factors are fixed. The second factor can be skipped by the parameter values insensitive feature described above. More flexibly, it is also possible to customize the parameter value match rule. setMatcher() can be used in the preparing state with a customized ArgumentsMatcher.

    public interface ArgumentsMatcher {
        public boolean matches(Object[] expected, Object[] actual);
    }

The only method in ArgumentsMatcher, matches(), takes two arguments. One is the expected parameter values array (null, if the parameter values insensitive feature is applied). The other is the actual parameter values array. A true return value means that the parameter values match.

    ...
    foo.isSame(null);
    //set the argument match rule -- always match
    //no matter what parameter is given
    control.setMatcher(MockControl.ALWAYS_MATCHER);
    //define the behavior -- return true when
    //isSame() is called. And this method is expected
    //to be called at least once.
    control.setReturnValue(true, 1);
    ...

There are three predefined ArgumentsMatcher instances in MockControl. MockControl.ALWAYS_MATCHER always returns true when matching, no matter what parameter values are given. MockControl.EQUALS_MATCHER calls equals() on each element in the parameter value array. MockControl.ARRAY_MATCHER is almost the same as MockControl.EQUALS_MATCHER, except that it calls Arrays.equals() instead of equals() when the element in the parameter value array is an array type. Of course, the developer can implement his or her own ArgumentsMatcher.

A side effect of a customized ArgumentsMatcher is that it can define the method invocation's out parameter values.

    ...
    //just to demonstrate the function
    //of out parameter value definition
    foo.add(null, null);
    //set the argument match rule -- always
    //match no matter what parameter is given.
    //Also define the value of the out parameter.
    control.setMatcher(new ArgumentsMatcher() {
        public boolean matches(Object[] expected, Object[] actual) {
            ((StringBuffer) actual[0]).append(actual[1]);
            return true;
        }
    });
    //define the behavior of add().
    //This method is expected to be called at least once.
    control.setVoidCallable(1);
    ...

setDefaultMatcher() sets the MockControl's default ArgumentsMatcher instance. If no specific ArgumentsMatcher is given, the default ArgumentsMatcher will be used. This method should be called before any method invocation behavior definition. Otherwise, an AssertionFailedError will be thrown.

    //get mock control
    MockControl control = ...;
    //get mock object
    Foo foo = (Foo) control.getMock();
    //set default ArgumentsMatcher
    control.setDefaultMatcher(MockControl.ALWAYS_MATCHER);
    //begin behavior definition
    foo.bar(10);
    control.setReturnValue("ok");
    ...

If setDefaultMatcher() is not used, MockControl.ARRAY_MATCHER is the system default ArgumentsMatcher.

An Example

Below is an example that demonstrates Mocquer's usage in unit testing.

Suppose there is a class named FTPConnector.

    package org.jingle.mocquer.sample;

    import java.io.IOException;
    import java.net.SocketException;

    import org.apache.commons.net.ftp.FTPClient;

    public class FTPConnector {
        //ftp server host name
        String hostName;
        //ftp server port number
        int port;
        //user name
        String user;
        //password
        String pass;

        public FTPConnector(String hostName, int port,
                            String user, String pass) {
            this.hostName = hostName;
            this.port = port;
            this.user = user;
            this.pass = pass;
        }

        /**
         * Connect to the ftp server.
         * The max retry times is 3.
         * @return true if succeed
         */
        public boolean connect() {
            boolean ret = false;
            FTPClient ftp = getFTPClient();
            int times = 1;
            while ((times <= 3) && !ret) {
                try {
                    ftp.connect(hostName, port);
                    ret = ftp.login(user, pass);
                } catch (SocketException e) {
                } catch (IOException e) {
                } finally {
                    times++;
                }
            }
            return ret;
        }

        /**
         * Get the FTPClient instance.
         * It seems that this method is nonsense
         * at first glance. Actually, this method
         * is very important for unit testing using
         * mock technology.
         * @return FTPClient instance
         */
        protected FTPClient getFTPClient() {
            return new FTPClient();
        }
    }

The connect() method tries to connect to an FTP server and log in. If it fails, it retries up to three times. If the operation succeeds, it returns true; otherwise, it returns false. The class uses org.apache.commons.net.ftp.FTPClient to make a real connection. There is a protected method, getFTPClient(), in this class that looks like nonsense at first glance. Actually, this method is very important for unit testing using mock technology. I will explain that later.

A JUnit test case, FTPConnectorTest, is provided to test the connect() method logic. Because we want to isolate the unit test environment from any other factors such as an external FTP server, we use Mocquer to mock the FTPClient.

    package org.jingle.mocquer.sample;

    import java.io.IOException;

    import org.apache.commons.net.ftp.FTPClient;
    import org.jingle.mocquer.MockControl;

    import junit.framework.TestCase;

    public class FTPConnectorTest extends TestCase {
        /*
         * @see TestCase#setUp()
         */
        protected void setUp() throws Exception {
            super.setUp();
        }

        /*
         * @see TestCase#tearDown()
         */
        protected void tearDown() throws Exception {
            super.tearDown();
        }

        /**
         * test FTPConnector.connect()
         */
        public final void testConnect() {
            //get strict mock control
            MockControl control =
                    MockControl.createStrictControl(FTPClient.class);
            //get mock object
            //why final? try to remove it
            final FTPClient ftp = (FTPClient) control.getMock();

            //Test point 1
            //begin behavior definition
            try {
                //specify the method invocation
                ftp.connect("202.96.69.8", 7010);
                //specify the behavior
                //throw IOException when connect() is called
                //with parameters "202.96.69.8" and 7010.
                //This method should be called exactly three times
                control.setThrowable(new IOException(), 3);
                //change to working state
                control.replay();
            } catch (Exception e) {
                fail("Unexpected exception: " + e);
            }

            //prepare the instance
            //the overridden method is the bridge to
            //introduce the mock object
            FTPConnector inst = new FTPConnector(
                    "202.96.69.8", 7010, "user", "pass") {
                protected FTPClient getFTPClient() {
                    //do you see why the ftp variable
                    //is declared final now?
                    return ftp;
                }
            };
            //in this case, connect() should return false
            assertFalse(inst.connect());
            //change to checking state
            control.verify();

            //Test point 2
            try {
                //return to preparing state first
                control.reset();
                //behavior definition
                ftp.connect("202.96.69.8", 7010);
                control.setThrowable(new IOException(), 2);
                ftp.connect("202.96.69.8", 7010);
                control.setVoidCallable(1);
                ftp.login("user", "pass");
                control.setReturnValue(true, 1);
                control.replay();
            } catch (Exception e) {
                fail("Unexpected exception: " + e);
            }
            //in this case, connect() should return true
            assertTrue(inst.connect());
            //verify again
            control.verify();
        }
    }

A strict MockControl is created. The mock object variable declaration has a final modifier because the variable will be used in the inner anonymous class; otherwise a compilation error will be reported.

There are two test points in the test method. The first test point is when FTPClient.connect() always throws an exception, meaning FTPConnector.connect() will return false as the result.

    try {
        ftp.connect("202.96.69.8", 7010);
        control.setThrowable(new IOException(), 3);
        control.replay();
    } catch (Exception e) {
        fail("Unexpected exception: " + e);
    }

The MockControl specifies that, when connect() is called on the mock object with the parameters "202.96.69.8" as the host IP and 7010 as the port number, an IOException will be thrown. This method invocation is expected to be called exactly three times. After the behavior definition, replay() changes the mock object to the working state. The try/catch block here is to follow the declaration of FTPClient.connect(), which has an IOException defined in its throws clause.

    FTPConnector inst = new FTPConnector(
            "202.96.69.8", 7010, "user", "pass") {
        protected FTPClient getFTPClient() {
            return ftp;
        }
    };

The code above creates an FTPConnector instance with its getFTPClient() overridden. It is the bridge that introduces the created mock object into the target to be tested.

    assertFalse(inst.connect());

The expected result of connect() should be false at this test point.

    control.verify();

Finally, change the mock object to the checking state.

The second test point is when FTPClient.connect() throws exceptions two times and succeeds on the third time, and FTPClient.login() also succeeds, meaning FTPConnector.connect() will return true as the result.

This test point follows the procedure of the previous test point, except that the mock object should change to the preparing state first, using reset().

Conclusion

Mock technology isolates the target to be tested from other external factors. Integrating mock technology into the JUnit framework makes the unit test much simpler and neater. EasyMock is a good mock tool that can create a mock object for a specified interface. With the help of Dunamis, Mocquer extends the function of EasyMock: it can create mock objects not only for interfaces, but also for classes. This article gave a brief introduction to Mocquer's usage in unit testing. For more detailed information, please refer to the references below.

References

  • Mocquer project: mocquer.dev.java.net
  • Sample code for this article: https://mocquer.dev.java.net/files/documents/2565/9652/samples.jar
  • The Dunamis project: dunamis.dev.java.net
  • "Dynamic Delegation and Its Application"
  • EasyMock: www.easymock.org
  • JUnit: www.junit.org

Lu Jian is a senior Java architect/developer with four years of Java development experience.


Should we be doing more automated testing? (zz)
jinfeng_wang, Wed, 09 Mar 2005 06:49 GMT
http://m.tkk7.com/jinfeng_wang/archive/2005/03/09/1879.html

Summary
To help developers decide whether they should be doing more automated testing, Ben Teese presents two questions in this article: Are developers being realistic about the testing they will complete on their applications? And when does automatic testing make sense for their applications? (2,300 words; March 7, 2005)

This article is about whether we, meaning professional software developers, should be doing more automated testing. This article is targeted to those who find themselves repeating manual tests over and over again, be it developers, testers, or anyone else.

In this article I ask:

  • Are we realistic about how much testing we're going to do?
  • When does it become feasible to automate testing?

Note that this article doesn't discuss whether we should be testing (be it automated or manual). Nor is it about any particular type of testing, be it unit testing, system testing, or user-acceptance testing.

Instead, this article is intended to act as a prompt for discussion and contains opinions based upon my own experience.

A real-world example, Part 1
Let's start with the job that I recently worked on. It involved small changes to a moderately complex online Website: nothing special, just some new dynamically-generated text and images.

Because the system had no unit tests and was consequently not designed in a way that facilitated unit testing, isolating and unit-testing my code changes proved to be difficult. Consequently, my unit tests were more like miniature system tests in that they indirectly tested my changes by exercising the new functionality via the Web interface.

I figured automating the tests would prove pointless, as I guessed I would be running them only a couple of times. So I wrote some plain English test plans that described the manual steps for executing the tests.

Coding and testing the first change was easy. Coding and testing the second change was easy too, but then I also had to re-execute the tests for the first change to make sure I hadn't broken anything. Coding and testing the third change was easy, but then I had to re-execute the tests for the first and second changes to make sure I hadn't broken them. Coding and testing the fourth change was easy, but...well, you get the picture.

What a drag
Whenever I had to rerun the tests, I thought: "Gee, running these tests is a drag."

I would then run the tests anyway and, on a couple of occasions, found that I had introduced a defect. On such occasions, I thought: "Gee, I'm glad I ran those tests."

Since these two thoughts seemed to contradict each other, I started measuring how long it was actually taking me to run the tests.

Once I had obtained a stable development build, I deployed my changes into a system-testing environment where somebody else would test them. However, because the environment differed, I figured I should re-execute the tests just to make sure they worked there.

Somebody then system-tested my changes and found a defect (something that wasn't covered by my tests). So I had to fix the defect in my development environment, rerun the tests to make sure I hadn't introduced a side effect, and then redeploy.

The end result
By the time I'd finished everything, I had executed the full test suite about eight times. My time measurements suggested that each test cycle took about 10 minutes to execute. So that meant I had spent roughly 80 minutes on manual testing. And I was thinking to myself: "Would it have been easier if I'd just automated those tests early on?"

Do you ever test anything just a couple of times?
I believe the mistake I made in underestimating the effort required to test my work is a mistake also made by other developers. Developers are renowned for underestimating effort, and I don't think that test-effort estimation is any different. In fact, given the disregard many developers have for testing, I think they would be more likely to underestimate the effort required to test their code than they would be to underestimate anything else.

The main cause of this test-effort blow-out is not that executing the test cycle in itself takes longer than expected, but that the number of test cycles that need to be executed over the life of the software is greater than expected. In my experience, it seems that most developers think they'll only test their code a couple of times at most. To such a developer I ask this question: "Have you ever had to test anything just a couple of times?" I certainly haven't.

But what about my JUnit tests?
Sure, you might write lots of low-level JUnit tests, but I'm talking about the higher-level tests that test your system's end-to-end functionality. Many developers consider writing such tests, but put the task off because it seems like a lot of effort given the number of times they believe they will execute the tests. They then proceed to manually execute the tests and often reach a point where the task becomes a drag, which is usually just after the point when they thought they wouldn't be executing the tests any more.

Alternately, the developer working on a small piece of work on an existing product (as I was doing) can also fall into this trap. Because it's such a small piece of work, there's no point in writing an automated test. You're only going to execute it a couple of times, right? Not necessarily, as I learned in my own real-world example.

Somebody will want to change your software
While developers typically underestimate the number of test cycles, I think they're even less likely to consider the effort required for testing the software later. Having finished a piece of software and probably manually testing it more times than they ever wanted to, most developers are sick of the software and don't want to think about it any more. In doing so, they are ignoring the likelihood that at some time, somebody will have to test the code again.

Many developers believe that once they write a piece of software, it will require little change in the future and thus require no further testing. Yet in my experience, almost no code that I write (especially if it's written for somebody else) goes through the code-test-deploy lifecycle once and is never touched again. In fact, even if the person that I'm writing the code for tells me that it's going to be thrown away, it almost never is (I've worked on a number of "throw-away" prototypes that were subsequently taken into production and have stayed there ever since).

Even if the software doesn't change, the environment will
Even if nobody changes your software, the environment that it lives within can still change. Most software doesn't live in isolation; thus, it cannot dictate the pace of change.

Virtual machines are upgraded. Database drivers are upgraded. Databases are upgraded. Application servers are upgraded. Operating systems are upgraded. These changes are inevitable; in fact, some argue that, as a best practice, administrators should proactively ensure that their databases, operating systems, and application servers are up-to-date, especially with the latest patches and fixes.

Then there are the changes within your organization's proprietary software. For example, an enterprise datasource developed by another division in your organization is upgraded, and you are entirely dependent upon it. Alternately, suppose your software is deployed to an application server that is also hosting some other in-house application. Suddenly, for the other application to work, it becomes critical that the application server is upgraded to the latest version. Your application is going along for the ride whether it wants to or not.

Change is constant, inevitable, and entails risk. To mitigate the risk, you test; but as we've seen, manual testing quickly becomes impractical. I believe that more automated testing is the way around this problem.

But what about change management?
Some argue that management should be responsible for coordinating changes; they should track dependencies and ensure that if one of your dependencies changes, you will retest. Cross-system changes will be synchronized with releases. However, in my experience, these dependencies are complex and rarely tracked successfully. I propose an alternate approach: making software systems better able to both test themselves and cope with inevitable change.




About the author
Ben Teese is a software engineer at Shine Technologies.


