[Repost] The Top Ten Algorithms of the Twentieth Century
2008-10-12 11:03
Seven great algorithms of the twentieth century:
1946: the Monte Carlo method;
1951: the decompositional approach to matrix computations;
1959–61: the QR algorithm for computing matrix eigenvalues;
1962: Quicksort;
1965: the fast Fourier transform;
1977: the integer relation detection algorithm;
1987: the fast multipole algorithm.
Below are the ten best algorithms of the twentieth century:
The criterion for selecting the best algorithms of the computer age was the greatest influence on the development and practice of science and engineering in the 20th century. Here, in chronological order, are the ten best algorithms of the 20th century.
1. The Monte Carlo Method
In 1946, John von Neumann, Stan Ulam, and Nick Metropolis, all working at the Los Alamos Scientific Laboratory, devised the Metropolis algorithm, also known as the Monte Carlo method. The Metropolis algorithm aims to obtain approximate solutions to numerical problems with unmanageably many degrees of freedom, and to combinatorial problems of factorial size, by mimicking a random process. Digital computers were powerful tools for deterministic calculation, but at the time nobody knew how to handle stochastic (nondeterministic) problems; the Metropolis algorithm can be regarded as one of the earliest methods for generating random numbers and attacking such problems.
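To make the flavor of the method concrete, here is a minimal sketch (my illustration, not the Metropolis algorithm itself): estimating π by mimicking a random process, namely sampling points uniformly in the unit square and counting the fraction that land inside the quarter circle.

```python
import random

def monte_carlo_pi(n_samples=1_000_000):
    """Estimate pi by random sampling: the fraction of points of the unit
    square lying inside the quarter circle tends to pi/4."""
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

print(monte_carlo_pi())  # about 3.1416; the error shrinks like 1/sqrt(n_samples)
```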
2. The Simplex Method for Linear Programming
In 1947, George Dantzig of the RAND Corporation created the simplex method for linear programming. In terms of widespread application, Dantzig's algorithm has been one of the most successful of all time: linear programming has a decisive influence on industries whose economic survival depends on the ability to optimize within budgetary and other constraints. (Of course, the “real” problems of industry are often nonlinear; the use of linear programming is sometimes dictated by the computational budget, which simplifies the model.) The simplex method is an elegant way of arriving at optimal answers. Although theoretically susceptible to exponential delays, the algorithm is highly efficient in practice, which in itself says something interesting about the nature of computation.
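As a toy illustration of the kind of problem the simplex method solves, the sketch below feeds a two-variable linear program to SciPy's general-purpose linprog solver. Note that modern SciPy defaults to a HiGHS-based solver rather than the classical simplex tableau; the example shows the problem class, not Dantzig's exact procedure.

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
c = [-3.0, -2.0]
A_ub = [[1.0, 1.0], [1.0, 3.0]]
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal vertex (4, 0) with objective value 12
```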
3. Krylov Subspace Iteration Methods
In 1950, Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, all from the Institute for Numerical Analysis at the National Bureau of Standards, initiated the development of Krylov subspace iteration methods. These algorithms address the seemingly simple task of solving equations of the form Ax = b. The hidden catch, of course, is that A is a huge n×n matrix, so that the algebraic answer x = b/A is not easy to compute (indeed, matrix “division” is not a particularly useful concept). Iterative methods, such as solving equations of the form Kx(k+1) = Kx(k) + b - Ax(k) with a simpler matrix K that is ideally “close” to A, lead to the study of Krylov subspaces. Named for the Russian mathematician Nikolai Krylov, a Krylov subspace is spanned by the powers of a matrix applied to an initial “residual” vector r(0) = b - Ax(0). Lanczos found a nifty way to generate an orthogonal basis for such a subspace when A is symmetric. For systems that are both symmetric and positive definite, Hestenes and Stiefel proposed an even niftier method, known as the conjugate gradient method. Over the past 50 years numerous researchers have improved and extended these algorithms; the current suite includes techniques for nonsymmetric systems, with acronyms like GMRES and Bi-CGSTAB. (GMRES and Bi-CGSTAB premiered in the SIAM Journal on Scientific and Statistical Computing in 1986 and 1992, respectively.)
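A minimal sketch of the Hestenes-Stiefel conjugate gradient method mentioned above, for symmetric positive definite A; each iterate lives in the growing Krylov subspace spanned by r(0), A r(0), A^2 r(0), and so on. This is a bare-bones teaching version, not a production solver.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve Ax = b for symmetric positive definite A by conjugate gradients."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x              # initial residual r(0) = b - Ax(0)
    p = r.copy()               # first search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # optimal step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # next direction, A-conjugate to the others
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))  # matches np.linalg.solve(A, b)
```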
4. The Decompositional Approach to Matrix Computations
In 1951, Alston Householder of Oak Ridge National Laboratory formalized the decompositional approach to matrix computations. The ability to factor matrices into triangular, diagonal, orthogonal, and other special forms has turned out to be extremely useful: it has enabled software developers to produce flexible and efficient matrix packages, and it also facilitates the analysis of rounding errors, one of the recurring bugbears of numerical linear algebra. (In 1961, James Wilkinson of the National Physical Laboratory in London published a seminal paper in the Journal of the ACM, titled “Error Analysis of Direct Methods of Matrix Inversion,” based on the LU decomposition of a matrix as a product of lower and upper triangular factors.)
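A sketch of the simplest such factorization, the LU decomposition mentioned in Wilkinson's paper: Gaussian elimination recorded as A = LU. Pivoting is omitted for brevity, so this is illustrative only; real matrix packages pivot for numerical stability.

```python
import numpy as np

def lu_decompose(A):
    """Factor A = L @ U, with L unit lower triangular and U upper triangular.
    No pivoting (illustration only); library routines pivot for stability."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]       # elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]    # zero out entry (i, k)
    return L, U

A = np.array([[2.0, 1.0], [4.0, 5.0]])
L, U = lu_decompose(A)
print(np.allclose(L @ U, A))  # True
```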
5. The Fortran Optimizing Compiler
In 1957, John Backus led a team at IBM that developed the Fortran optimizing compiler. The creation of Fortran may rank as the single most important event in the history of computer programming: scientists (and others) could finally tell the computer what they wanted it to do without having to descend into the hellish netherworld of machine code. Although modest by modern compiler standards (Fortran I consisted of a mere 23,500 assembly-language instructions), the early compiler was nonetheless capable of surprisingly sophisticated computations. As Backus himself recalled in a history of Fortran I, II, and III published in 1998 in the IEEE Annals of the History of Computing, the compiler “produced code of such efficiency that its output would startle the programmers who studied it.”
6. The QR Algorithm for Computing Eigenvalues
In 1959–61, J.G.F. Francis of Ferranti Ltd., London, found a stable method for computing eigenvalues, known as the QR algorithm. Eigenvalues are arguably the most important numbers associated with matrices, and they can be the trickiest to compute. It is relatively easy to transform a square matrix into one that is “almost” upper triangular, meaning one with a single extra set of nonzero entries just below the main diagonal; but chipping away those final nonzeros without launching an avalanche of error is nontrivial. The QR algorithm is just the ticket. Based on the QR decomposition, which writes A as the product of an orthogonal matrix Q and an upper triangular matrix R, the method iteratively turns A(k) = Q(k)R(k) into A(k+1) = R(k)Q(k), with a few bells and whistles to accelerate convergence to upper triangular form. By the mid-1960s the QR algorithm had turned once-formidable eigenvalue problems into routine calculations.
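The iteration described above is short enough to show directly. The sketch below is the unshifted “textbook” QR iteration on a small symmetric matrix; production implementations first reduce A to Hessenberg form and add shifts (the bells and whistles), which this toy version deliberately omits.

```python
import numpy as np

def qr_eigenvalues(A, iterations=200):
    """Unshifted QR iteration: factor A(k) = QR, set A(k+1) = RQ, repeat.
    Each A(k+1) is similar to A(k), so eigenvalues are preserved; for nice
    symmetric matrices the iterates approach a diagonal matrix."""
    Ak = A.astype(float).copy()
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return np.diag(Ak)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(sorted(qr_eigenvalues(A)))      # approx [1.382, 3.618]
print(sorted(np.linalg.eigvalsh(A)))  # library answer, for comparison
```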
7. Quicksort
In 1962, Tony Hoare of Elliott Brothers, Ltd., London, presented Quicksort. Putting N things in numerical or alphabetical order is mind-numbingly mundane; the intellectual challenge lies in devising ways of doing so quickly. Hoare's algorithm uses the age-old recursive strategy of divide and conquer: pick one element as a “pivot,” separate the rest into piles of “big” and “small” elements (as compared with the pivot), and then repeat the procedure on each pile. Although it is possible to get stuck doing all N(N-1)/2 comparisons (especially if you use as your pivot the first item of a list that is already sorted!), Quicksort runs on average with O(N log N) efficiency, and its elegant simplicity has made it the poster child of computational complexity.
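Hoare's scheme fits in a few lines. The sketch below is a simple copying version that picks the middle element as pivot, sidestepping the already-sorted worst case noted above; in-place partitioning, as in Hoare's original, is faster but less transparent.

```python
def quicksort(items):
    """Recursive Quicksort: pivot, split into smaller/equal/larger piles, recurse."""
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([33, 10, 55, 71, 29, 3, 18]))  # [3, 10, 18, 29, 33, 55, 71]
```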
8. The Fast Fourier Transform
In 1965, James Cooley of the IBM T.J. Watson Research Center and John Tukey of Princeton University and AT&T Bell Laboratories unveiled the fast Fourier transform (FFT). Easily the most far-reaching algorithm in applied mathematics, the FFT revolutionized signal processing. The underlying idea goes back to Gauss (who needed it to calculate the orbits of asteroids), but it was the Cooley–Tukey paper that made clear how easily Fourier transforms can be computed. Like Quicksort, the FFT relies on a divide-and-conquer strategy to reduce an ostensibly O(N^2) chore to an O(N log N) frolic. But unlike Quicksort, its implementation is (at first sight) nonintuitive and less than straightforward. This in itself gave computer science an impetus to investigate the inherent complexity of computational problems and algorithms.
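The divide-and-conquer step is easiest to see in the recursive radix-2 form: split the signal into even- and odd-indexed halves, transform each, and stitch the results together with “twiddle factors.” A minimal sketch (input length must be a power of two):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # transform of the even-indexed samples
    odd = fft(x[1::2])    # transform of the odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

print(fft([1, 2, 3, 4]))  # [10, -2+2j, -2, -2-2j], matching numpy.fft.fft
```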
9. The Integer Relation Detection Algorithm
In 1977, Helaman Ferguson and Rodney Forcade of Brigham Young University advanced an integer relation detection algorithm. The problem is an old one: given a bunch of real numbers, say x(1), x(2), ..., x(n), are there integers a(1), a(2), ..., a(n) (not all zero) for which
a(1)x(1) + a(2)x(2) + ... + a(n)x(n) = 0?
For n = 2, the venerable Euclidean algorithm does the job, computing the terms in the continued-fraction expansion of x(1)/x(2). If x(1)/x(2) is rational, the expansion terminates and, with proper unraveling, gives the “smallest” integers a(1) and a(2). If the Euclidean algorithm does not terminate, or if you simply get tired of computing it, then the unraveling procedure at least provides lower bounds on the size of the smallest integer relation. Ferguson and Forcade's generalization is more powerful, although it is much harder to implement (and to understand). Their detection algorithm has been used, for example, to find the precise coefficients of the polynomials satisfied by the third and fourth bifurcation points of the logistic map, b(3) = 3.544090 and b(4) = 3.564407. (The latter polynomial is of degree 120; its largest coefficient is 257^30.) The algorithm has also proved useful in simplifying calculations with Feynman diagrams in quantum field theory.
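The n = 2 case is simple enough to sketch. The code below is an illustration of the Euclidean/continued-fraction route described above, not Ferguson and Forcade's general algorithm: it expands x(1)/x(2) as a continued fraction, tracks the convergents, and unravels them into a relation a(1)x(1) + a(2)x(2) = 0. Exact rationals are used, since for irrational inputs the expansion never terminates.

```python
from fractions import Fraction

def integer_relation_n2(x1, x2):
    """Find integers a1, a2 with a1*x1 + a2*x2 == 0 for rational x1, x2,
    via the continued-fraction expansion of x1/x2. The final convergent
    p/q equals x1/x2 in lowest terms, so q*x1 - p*x2 == 0."""
    ratio = Fraction(x1) / Fraction(x2)
    term = ratio.numerator // ratio.denominator
    p_prev, p = 1, term   # convergent numerators
    q_prev, q = 0, 1      # convergent denominators
    remainder = ratio - term
    while remainder != 0:
        ratio = 1 / remainder
        term = ratio.numerator // ratio.denominator
        remainder = ratio - term
        p_prev, p = p, term * p + p_prev
        q_prev, q = q, term * q + q_prev
    return q, -p

a1, a2 = integer_relation_n2(355, 113)
print(a1, a2, a1 * 355 + a2 * 113)  # 113 -355 0
```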
10. The Fast Multipole Algorithm
In 1987, Leslie Greengard and Vladimir Rokhlin of Yale University invented the fast multipole algorithm. This algorithm overcomes one of the biggest headaches of N-body simulations: accurately computing the motions of N particles interacting via gravitational or electrostatic forces (think stars in a galaxy, or atoms in a protein) would seem to require O(N^2) computations, one for each pair of particles; the fast multipole algorithm gets by with O(N). It does so by using multipole expansions (net charge or mass, dipole moment, quadrupole moment, and so forth) to approximate the effects of a distant group of particles on a local group, with a hierarchical decomposition of space used to define ever-larger groups as distances increase. One distinct advantage of the fast multipole algorithm is that it comes equipped with rigorous error estimates, a feature many methods lack.
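The full algorithm is intricate, but its central trick (summarize a far-away group by a few moments instead of visiting every member) can be shown in a toy form. The sketch below keeps only the leading monopole term, the net mass placed at the group's center of mass, for 1D gravity with G = 1. The names, the single-level grouping, and the 1D setting are all simplifying assumptions of mine, not Greengard and Rokhlin's actual scheme.

```python
import numpy as np

def monopole_force(local_x, group_pos, group_mass):
    """Approximate the pull of a distant particle group on a point at local_x
    using only the monopole term: total mass at the center of mass.
    (The real FMM keeps higher moments plus a hierarchy of boxes.)"""
    m_total = group_mass.sum()
    center = (group_pos * group_mass).sum() / m_total
    d = center - local_x
    return m_total * np.sign(d) / d**2

rng = np.random.default_rng(0)
pos = rng.uniform(100.0, 101.0, size=1000)   # a tight group, far from x = 0
mass = rng.uniform(0.5, 1.5, size=1000)

exact = np.sum(mass / pos**2)            # direct sum: 1000 pairwise terms
approx = monopole_force(0.0, pos, mass)  # one evaluation replaces them all
print(exact, approx)                     # nearly equal
```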
Conclusion
What new insights and algorithms will the 21st century bring? The complete answer obviously will not be known for another hundred years. One thing, however, seems certain: just as the 20th century managed to produce its ten best algorithms, the new century will be neither restful nor dull for us.
[Reposted from] http://alpswy.spaces.live.com/
By Barry A. Cipra
Algos is the Greek word for pain. Algor is Latin, to be cold. Neither is the root for algorithm, which stems instead from al-Khwarizmi, the name of the ninth-century Arab scholar whose book al-jabr wa’l muqabalah devolved into today’s high school algebra textbooks. Al-Khwarizmi stressed the importance of methodical procedures for solving problems. Were he around today, he’d no doubt be impressed by the advances in his eponymous approach.
Some of the very best algorithms of the computer age are highlighted in the January/February 2000 issue of Computing in Science & Engineering, a joint publication of the American Institute of Physics and the IEEE Computer Society. Guest editors Jack Dongarra of the University of Tennessee and Oak Ridge National Laboratory and Francis Sullivan of the Center for Computing Sciences at the Institute for Defense Analyses put together a list they call the “Top Ten Algorithms of the Century.”
“We tried to assemble the 10 algorithms with the greatest influence on the development and practice of science and engineering in the 20th century,” Dongarra and Sullivan write. As with any top-10 list, their selections—and non-selections—are bound to be controversial, they acknowledge. When it comes to picking the algorithmic best, there seems to be no best algorithm. Without further ado, here’s the CiSE top-10 list, in chronological order. (Dates and names associated with the algorithms should be read as first-order approximations. Most algorithms take shape over time, with many contributors.)
1. Monte Carlo Method
1946: John von Neumann, Stan Ulam, and Nick Metropolis, all at the Los Alamos Scientific Laboratory, cook up the Metropolis algorithm, also known as the Monte Carlo method. The Metropolis algorithm aims to obtain approximate solutions to numerical problems with unmanageably many degrees of freedom and to combinatorial problems of factorial size, by mimicking a random process. Given the digital computer’s reputation for deterministic calculation, it’s fitting that one of its earliest applications was the generation of random numbers.
2. Simplex Method
1947: George Dantzig, at the RAND Corporation, creates the simplex method for linear programming. In terms of widespread application, Dantzig’s algorithm is one of the most successful of all time: Linear programming dominates the world of industry, where economic survival depends on the ability to optimize within budgetary and other constraints. (Of course, the “real” problems of industry are often nonlinear; the use of linear programming is sometimes dictated by the computational budget.) The simplex method is an elegant way of arriving at optimal answers. Although theoretically susceptible to exponential delays, the algorithm in practice is highly efficient—which in itself says something interesting about the nature of computation.
3. Krylov Subspace Iteration
1950: Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, all from the Institute for Numerical Analysis at the National Bureau of Standards, initiate the development of Krylov subspace iteration methods. These algorithms address the seemingly simple task of solving equations of the form Ax = b. The catch, of course, is that A is a huge n x n matrix, so that the algebraic answer x = b/A is not so easy to compute. (Indeed, matrix “division” is not a particularly useful concept.) Iterative methods—such as solving equations of the form Kx(i+1) = Kx(i) + b – Ax(i) with a simpler matrix K that’s ideally “close” to A—lead to the study of Krylov subspaces. Named for the Russian mathematician Nikolai Krylov, Krylov subspaces are spanned by powers of a matrix applied to an initial “remainder” vector r(0) = b – Ax(0). Lanczos found a nifty way to generate an orthogonal basis for such a subspace when the matrix is symmetric. Hestenes and Stiefel proposed an even niftier method, known as the conjugate gradient method, for systems that are both symmetric and positive definite. Over the last 50 years, numerous researchers have improved and extended these algorithms. The current suite includes techniques for non-symmetric systems, with acronyms like GMRES and Bi-CGSTAB. (GMRES and Bi-CGSTAB premiered in SIAM Journal on Scientific and Statistical Computing, in 1986 and 1992, respectively.)
4. Decompositional Approach to Matrix Computations
1951: Alston Householder of Oak Ridge National Laboratory formalizes the decompositional approach to matrix computations. The ability to factor matrices into triangular, diagonal, orthogonal, and other special forms has turned out to be extremely useful. The decompositional approach has enabled software developers to produce flexible and efficient matrix packages. It also facilitates the analysis of rounding errors, one of the big bugbears of numerical linear algebra. (In 1961, James Wilkinson of the National Physical Laboratory in London published a seminal paper in the Journal of the ACM, titled “Error Analysis of Direct Methods of Matrix Inversion,” based on the LU decomposition of a matrix as a product of lower and upper triangular factors.)
5. Fortran Optimizing Compiler
1957: John Backus leads a team at IBM in developing the Fortran optimizing compiler. The creation of Fortran may rank as the single most important event in the history of computer programming: Finally, scientists (and others) could tell the computer what they wanted it to do, without having to descend into the netherworld of machine code. Although modest by modern compiler standards—Fortran I consisted of a mere 23,500 assembly-language instructions—the early compiler was nonetheless capable of surprisingly sophisticated computations. As Backus himself recalls in a recent history of Fortran I, II, and III, published in 1998 in the IEEE Annals of the History of Computing, the compiler “produced code of such efficiency that its output would startle the programmers who studied it.”
6. QR Algorithm for Computing Eigenvalues
1959–61: J.G.F. Francis of Ferranti Ltd., London, finds a stable method for computing eigenvalues, known as the QR algorithm. Eigenvalues are arguably the most important numbers associated with matrices—and they can be the trickiest to compute. It’s relatively easy to transform a square matrix into a matrix that’s “almost” upper triangular, meaning one with a single extra set of nonzero entries just below the main diagonal. But chipping away those final nonzeros, without launching an avalanche of error, is nontrivial. The QR algorithm is just the ticket. Based on the QR decomposition, which writes A as the product of an orthogonal matrix Q and an upper triangular matrix R, this approach iteratively changes A(i) = QR into A(i+1) = RQ, with a few bells and whistles for accelerating convergence to upper triangular form. By the mid-1960s, the QR algorithm had turned once-formidable eigenvalue problems into routine calculations.
7. Quicksort
1962: Tony Hoare of Elliott Brothers, Ltd., London, presents Quicksort. Putting N things in numerical or alphabetical order is mind-numbingly mundane. The intellectual challenge lies in devising ways of doing so quickly. Hoare’s algorithm uses the age-old recursive strategy of divide and conquer to solve the problem: Pick one element as a “pivot,” separate the rest into piles of “big” and “small” elements (as compared with the pivot), and then repeat this procedure on each pile. Although it’s possible to get stuck doing all N(N – 1)/2 comparisons (especially if you use as your pivot the first item on a list that’s already sorted!), Quicksort runs on average with O(N log N) efficiency. Its elegant simplicity has made Quicksort the poster child of computational complexity.
8. Fast Fourier Transform
1965: James Cooley of the IBM T.J. Watson Research Center and John Tukey of Princeton University and AT&T Bell Laboratories unveil the fast Fourier transform. Easily the most far-reaching algorithm in applied mathematics, the FFT revolutionized signal processing. The underlying idea goes back to Gauss (who needed to calculate orbits of asteroids), but it was the Cooley–Tukey paper that made it clear how easily Fourier transforms can be computed. Like Quicksort, the FFT relies on a divide-and-conquer strategy to reduce an ostensibly O(N^2) chore to an O(N log N) frolic. But unlike Quicksort, the implementation is (at first sight) nonintuitive and less than straightforward. This in itself gave computer science an impetus to investigate the inherent complexity of computational problems and algorithms.
9. Integer Relation Detection Algorithm
1977: Helaman Ferguson and Rodney Forcade of Brigham Young University advance an integer relation detection algorithm. The problem is an old one: Given a bunch of real numbers, say x1, x2, . . . , xn, are there integers a1, a2, . . . , an (not all 0) for which a1x1 + a2x2 + . . . + anxn = 0? For n = 2, the venerable Euclidean algorithm does the job, computing terms in the continued-fraction expansion of x1/x2. If x1/x2 is rational, the expansion terminates and, with proper unraveling, gives the “smallest” integers a1 and a2. If the Euclidean algorithm doesn’t terminate—or if you simply get tired of computing it—then the unraveling procedure at least provides lower bounds on the size of the smallest integer relation. Ferguson and Forcade’s generalization, although much more difficult to implement (and to understand), is also more powerful. Their detection algorithm, for example, has been used to find the precise coefficients of the polynomials satisfied by the third and fourth bifurcation points, B3 = 3.544090 and B4 = 3.564407, of the logistic map. (The latter polynomial is of degree 120; its largest coefficient is 257^30.) It has also proved useful in simplifying calculations with Feynman diagrams in quantum field theory.
10. Fast Multipole Algorithm
1987: Leslie Greengard and Vladimir Rokhlin of Yale University invent the fast multipole algorithm. This algorithm overcomes one of the biggest headaches of N-body simulations: the fact that accurate calculations of the motions of N particles interacting via gravitational or electrostatic forces (think stars in a galaxy, or atoms in a protein) would seem to require O(N^2) computations—one for each pair of particles. The fast multipole algorithm gets by with O(N) computations. It does so by using multipole expansions (net charge or mass, dipole moment, quadrupole, and so forth) to approximate the effects of a distant group of particles on a local group. A hierarchical decomposition of space is used to define ever-larger groups as distances increase. One of the distinct advantages of the fast multipole algorithm is that it comes equipped with rigorous error estimates, a feature that many methods lack.
The Top 10 Algorithms of the 20th Century
1. Monte Carlo method. 1946: John von Neumann, Stan Ulam, and Nick Metropolis.
2. Simplex method. 1947: George Dantzig. Anyone who has studied operations research knows this one :)
3. Krylov subspace iteration. 1950: Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos. I came across this during my internship at Lenovo. Krylov subspace: span{S, A*S, A^2*S, ..., A^(k-1)*S}.
4. Matrix decomposition methods. 1951: Alston Householder.
5. Fortran optimizing compiler. 1957: John Backus. I am not sure why this counts as an algorithm, but Fortran really was a milestone in scientific computing.
6. QR algorithm. 1959–61: J.G.F. Francis.
7. Quicksort. 1962: Tony Hoare. Seen in this light, research on computer sorting is not all that old.
8. The FFT. 1965: James Cooley.
9. Integer relation detection algorithms. 1977: Helaman Ferguson and Rodney Forcade. An algorithm that once kept me tossing and turning at night.
10. Fast multipole algorithm. 1987: Leslie Greengard and Vladimir Rokhlin. Used for N-body simulations; I do not know it well.