Purpose: rapid development of high-performance, highly reliable network servers and client programs
Strength: provides an asynchronous, event-driven network application framework and tooling
Put plainly: a handy tool for working with Sockets
What if there were no Netty?
Ancient times: java.net + java.io
Modern era: java.nio
Others: Mina, Grizzly
Why not Mina?
1. Both are the work of Trustin Lee; Netty came later;
2. Mina couples its core too tightly with certain features, so users cannot drop them when they are not needed, which costs performance; Netty fixed this design problem;
3. Netty's documentation is clearer, and most of Mina's features are also available in Netty;
4. Netty has a shorter release cycle, so new versions ship faster;
5. Their architectures do not differ much; Mina lives under Apache while Netty lives under JBoss and is tightly integrated with it; Netty supports Google protocol buffers and has more complete IoC container support (Spring, Guice, JBoss MC and OSGi);
6. Netty is simpler to use than Mina; in Netty you can define your own handling of upstream and/or downstream events, and use decoders and encoders to decode and encode the payload;
7. Netty and Mina differ in how they handle UDP: Netty exposes UDP's connectionless nature, whereas Mina abstracts UDP at a higher level so it can be treated as a "connection-oriented" protocol, something that is harder to achieve with Netty.
Netty's features
Design
Unified API for different transport types (blocking and non-blocking)
Based on a flexible and extensible event-driven model
Highly customizable thread model
Reliable connectionless datagram socket support (UDP)
Performance
Better throughput, lower latency
Lower resource consumption
Minimized unnecessary memory copying
Security
Complete SSL/TLS and STARTTLS support
Runs well in restricted environments such as Applets and Android
Robustness
No more OutOfMemoryError caused by connections that are too fast, too slow, or overloaded
No more inconsistent NIO read/write rates under high-speed networks
Ease of use
Comprehensive JavaDoc, user guide and examples
Concise and simple
Depends only on JDK 1.5
Let's look at some examples!
Server side:
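The server-side code from the original slide did not survive extraction, so here is a minimal sketch of a Netty 3.x echo server. The port 8080, the class name and the anonymous echo handler are illustrative assumptions, not taken from the original.

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

public class EchoServer {
    public static void main(String[] args) {
        // The ChannelFactory holds the boss and worker thread pools.
        ServerBootstrap bootstrap = new ServerBootstrap(
                new NioServerSocketChannelFactory(
                        Executors.newCachedThreadPool(),    // boss threads
                        Executors.newCachedThreadPool()));  // worker threads

        // Every accepted Channel gets its own pipeline built by this factory.
        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                return Channels.pipeline(new SimpleChannelUpstreamHandler() {
                    @Override
                    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
                        // Echo whatever was received back to the client.
                        e.getChannel().write(e.getMessage());
                    }
                });
            }
        });

        bootstrap.bind(new InetSocketAddress(8080));
    }
}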
Client side:
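Likewise, the client-side code is reconstructed below as a hedged Netty 3.x sketch; the host/port and the "hello" payload are assumptions.

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;
import org.jboss.netty.util.CharsetUtil;

public class EchoClient {
    public static void main(String[] args) {
        ClientBootstrap bootstrap = new ClientBootstrap(
                new NioClientSocketChannelFactory(
                        Executors.newCachedThreadPool(),    // boss threads
                        Executors.newCachedThreadPool()));  // worker threads

        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                return Channels.pipeline(new SimpleChannelUpstreamHandler() {
                    @Override
                    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
                        // Send a greeting as soon as the connection is established.
                        e.getChannel().write(ChannelBuffers.copiedBuffer("hello", CharsetUtil.UTF_8));
                    }

                    @Override
                    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
                        // Print the echoed message and close the connection.
                        System.out.println("received: " + e.getMessage());
                        e.getChannel().close();
                    }
                });
            }
        });

        bootstrap.connect(new InetSocketAddress("127.0.0.1", 8080));
    }
}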
Netty overall architecture
Netty components
ChannelFactory
Boss
Worker
Channel
ChannelEvent
Pipeline
ChannelContext
Handler
Sink
Core server-side classes
NioServerSocketChannelFactory
NioServerBossPool
NioWorkerPool
NioServerBoss
NioWorker
NioServerSocketChannel
NioAcceptedSocketChannel
DefaultChannelPipeline
NioServerSocketPipelineSink
Channels
ChannelFactory
The Channel factory, a very important class
Holds the parameters used at startup
NioServerSocketChannelFactory
NioClientSocketChannelFactory
NioDatagramChannelFactory
These are the NIO variants; there are also OIO and Local ones
SelectorPool
The thread pools of Selectors; default thread counts:
NioServerBossPool: 1
NioClientBossPool: 1
NioWorkerPool: 2 × number of processors
NioDatagramWorkerPool
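As a sketch of how these defaults relate to the ChannelFactory: the boss/worker counts can be overridden through the factory constructors. The worker count of 4 below is purely illustrative.

import java.util.concurrent.Executors;

import org.jboss.netty.channel.ChannelFactory;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

public class FactoryConfig {
    public static void main(String[] args) {
        // Default: 1 boss thread and 2 * availableProcessors() worker threads.
        // The three-argument constructor overrides the worker count (4 here is illustrative).
        ChannelFactory factory = new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),   // boss executor
                Executors.newCachedThreadPool(),   // worker executor
                4);                                // workerCount
        System.out.println("factory created: " + factory);
    }
}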
Selector
The selector, a very core component
NioServerBoss
NioClientBoss
NioWorker
NioDatagramWorker
Channel
The channel, the conduit for network I/O
NioServerSocketChannel
NioClientSocketChannel
NioAcceptedSocketChannel
NioDatagramChannel
Sink
Responsible for interacting with the underlying layer
e.g. bind, write, close, etc.
NioServerSocketPipelineSink
NioClientSocketPipelineSink
NioDatagramPipelineSink
Pipeline
Maintains all of the Handlers
ChannelContext
One per Channel; the intermediary between a Handler and the Pipeline
Handler
The processor for Channel events
ChannelPipeline
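To make the Pipeline / ChannelHandlerContext / Handler relationship concrete, here is a hedged sketch of building a pipeline with a string decoder, a string encoder and a business handler; the handler names and the "echo: " reply are assumptions.

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.handler.codec.string.StringDecoder;
import org.jboss.netty.handler.codec.string.StringEncoder;
import org.jboss.netty.util.CharsetUtil;

public class PipelineExample {
    public static ChannelPipeline buildPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        // Upstream: bytes -> String before the business handler sees them.
        pipeline.addLast("decoder", new StringDecoder(CharsetUtil.UTF_8));
        // Downstream: String -> bytes on the way out.
        pipeline.addLast("encoder", new StringEncoder(CharsetUtil.UTF_8));
        // The business handler; each handler sees events through its own ChannelHandlerContext.
        pipeline.addLast("handler", new SimpleChannelUpstreamHandler() {
            @Override
            public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
                String msg = (String) e.getMessage();
                // The write travels back down the pipeline through the encoder.
                ctx.getChannel().write("echo: " + msg);
            }
        });
        return pipeline;
    }
}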
Excellent design: event-driven
Excellent design: the thread model
Caveats
The buffer position during decoding (see the decoder sketch after this list)
Closing a Channel
More Handlers
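Regarding the first caveat (the buffer position during decoding): if a custom decoder reads part of a frame and then finds the rest has not arrived yet, it must rewind the readerIndex (or return null before reading anything). A minimal sketch using Netty 3.x's FrameDecoder, assuming a simple 4-byte length-prefixed frame format:

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.handler.codec.frame.FrameDecoder;

public class LengthPrefixedDecoder extends FrameDecoder {
    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
        // Not even the 4-byte length header yet: return null, the position is untouched.
        if (buffer.readableBytes() < 4) {
            return null;
        }
        buffer.markReaderIndex();
        int length = buffer.readInt();
        if (buffer.readableBytes() < length) {
            // The body is incomplete; rewind the position we already consumed
            // so the next call sees the whole frame from the start.
            buffer.resetReaderIndex();
            return null;
        }
        return buffer.readBytes(length);
    }
}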
Closing a Channel
A Channel that is no longer needed can be closed directly;
1. ChannelFuture plus a Listener
2. writeComplete
A Channel that has been idle for a while can also be closed
TimeoutHandler
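A hedged sketch of the ChannelFuture-listener approach and the idle-timeout approach (the handler names, timeout values and response object are illustrative; the writeComplete variant is omitted):

import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler;
import org.jboss.netty.handler.timeout.IdleStateEvent;
import org.jboss.netty.handler.timeout.IdleStateHandler;
import org.jboss.netty.util.HashedWheelTimer;

public class CloseExamples {

    // 1. Close the Channel once the write completes, via a listener on the ChannelFuture.
    static void writeAndClose(Channel channel, Object response) {
        ChannelFuture future = channel.write(response);
        future.addListener(ChannelFutureListener.CLOSE);
    }

    // 2. Close Channels that have been idle for a while, using the timeout handlers
    //    shipped in org.jboss.netty.handler.timeout.
    static void addIdleClose(ChannelPipeline pipeline) {
        HashedWheelTimer timer = new HashedWheelTimer();
        pipeline.addLast("idleDetector", new IdleStateHandler(timer, 60, 30, 0)); // reader/writer/all idle seconds
        pipeline.addLast("idleCloser", new IdleStateAwareChannelHandler() {
            @Override
            public void channelIdle(ChannelHandlerContext ctx, IdleStateEvent e) {
                e.getChannel().close();
            }
        });
    }
}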
ChannelEvent
In 3.x, every I/O operation created a ChannelEvent object. For each read / write, it additionally created a new ChannelBuffer. It simplified the internals of Netty quite a lot because it delegates resource management and buffer pooling to the JVM. However, it often was the root cause of GC pressure and uncertainty which are sometimes observed in a Netty-based application under high load. 4.0 removes event object creation almost completely by replacing the event objects with strongly typed method invocations. 3.x had catch-all event handler methods such as handleUpstream() and handleDownstream(), but this is not the case anymore. Every event type has its own handler method now:
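A hedged sketch of the difference (the class names are illustrative):

// Netty 3.x: catch-all callback that inspects the ChannelEvent subtype.
class Handler3x extends org.jboss.netty.channel.SimpleChannelUpstreamHandler {
    @Override
    public void handleUpstream(org.jboss.netty.channel.ChannelHandlerContext ctx,
                               org.jboss.netty.channel.ChannelEvent e) throws Exception {
        // e may be a MessageEvent, a ChannelStateEvent, an ExceptionEvent, ...
        super.handleUpstream(ctx, e);
    }
}

// Netty 4.x: no ChannelEvent objects; each event type has its own strongly typed method.
class Handler4x extends io.netty.channel.ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(io.netty.channel.ChannelHandlerContext ctx) { }

    @Override
    public void channelRead(io.netty.channel.ChannelHandlerContext ctx, Object msg) { }

    @Override
    public void exceptionCaught(io.netty.channel.ChannelHandlerContext ctx, Throwable cause) { }
}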
Separation of concerns: The Reactor pattern decouples application-independent demultiplexing and dispatching mechanisms from application-specific hook method functionality. The application-independent mechanisms become reusable components that know how to demultiplex events and dispatch the appropriate hook methods defined by Event Handlers. In contrast, the application-specific functionality in a hook method knows how to perform a particular type of service.
Improve modularity, reusability, and configurability of event-driven applications: The pattern decouples application functionality into separate classes. For instance, there are two separate classes in the logging server: one for establishing connections and another for receiving and processing logging records. This decoupling enables the reuse of the connection establishment class for different types of connection-oriented services (such as file transfer, remote login, and video-on-demand). Therefore, modifying or extending the functionality of the logging server only affects the implementation of the logging handler class.
Improves application portability: The Initiation Dispatcher's interface can be reused independently of the OS system calls that perform event demultiplexing. These system calls detect and report the occurrence of one or more events that may occur simultaneously on multiple sources of events. Common sources of events may include I/O handles, timers, and synchronization objects. On UNIX platforms, the event demultiplexing system calls are called select and poll [1]. In the Win32 API [16], the WaitForMultipleObjects system call performs event demultiplexing.
Provides coarse-grained concurrency control: The Reactor pattern serializes the invocation of event handlers at the level of event demultiplexing and dispatching within a process or thread. Serialization at the Initiation Dispatcher level often eliminates the need for more complicated synchronization or locking within an application process.
Efficiency: Threading may lead to poor performance due to context switching, synchronization, and data movement [2];
Programming simplicity: Threading may require complex concurrency control schemes;
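To ground the Reactor terminology above in code: a minimal, illustrative demultiplex-and-dispatch loop written with plain java.nio (not Netty itself); the port and the echo behaviour are assumptions.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class MiniReactor {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();                  // the synchronous event demultiplexer
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress(8080));
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                                // wait for ready events
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                     // dispatch to the "accept" hook
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {                // dispatch to the "read" hook
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    if (client.read(buf) < 0) {
                        client.close();                       // peer closed the connection
                    } else {
                        buf.flip();
                        client.write(buf);                    // echo back
                    }
                }
            }
        }
    }
}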