In this lecture, we'll continue our formal treatment of asymptotic notation.
We've already discussed big O notation, which is by far the most important and ubiquitous concept that's part of asymptotic notation, but, for completeness, I do want to tell you about a couple of close relatives of big O, namely omega and theta.
If big O is analogous to less than or equal to, then omega and theta are analogous to greater than or equal to, and equal to, respectively.
But let's treat them a little more precisely.
The formal definition of omega notation closely mirrors that of big O notation.
We say that one function, T of N, is big omega of another function, F of N, if eventually, that is for sufficiently large N, it's lower bounded by a constant multiple of F of N.
And we quantify the ideas of a constant multiple and eventually in exactly the same way as before, namely via explicitly giving two constants, C and N naught, such that T of N is bounded below by C times F of N for all sufficiently large N.
That is, for all N at least N naught.
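To make the definition concrete, here is a small numerical sketch. The functions T and F and the witnesses C and N naught below are made-up examples for illustration, not anything fixed by the lecture:

```python
# Illustrative check of the big-omega definition:
# T(n) >= c * f(n) for all n >= n0.
# T, f, c, and n0 are hypothetical examples.

def T(n):
    return n * n / 2 + 3 * n   # example running-time function

def f(n):
    return n * n               # candidate lower-bound function

c, n0 = 1 / 2, 1               # witnesses: constant multiple and threshold

# T(n) = n^2/2 + 3n >= (1/2) * n^2 for every n >= 1,
# so T(n) is big omega of n^2 with these witnesses.
assert all(T(n) >= c * f(n) for n in range(n0, 10_000))
```

Checking a finite range of N, as here, is of course only a sanity check; the actual claim is proved by the algebra in the comment.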
There's a picture just like there was for big O notation.
Perhaps we have a function T of N which looks something like this green curve.
And then we have another function F of N which is above T of N.
But then when we multiply F of N by one half, we get something that, eventually, is always below T of N.
So in this picture, this is an example where T of N is indeed big Omega of F of N.
As far as what the constants are, well, the multiple that we use, C, is obviously just one half.
That's what we're multiplying F of N by.
And as before, N naught is the crossing point between the two functions.
So, N naught is the point after which C times F of N always lies below T of N forevermore.
So that's Big Omega.
Theta notation is the equivalent of equals, and so it just means that the function is both Big O of F of N and Omega of F of N.
An equivalent way to think about this is that, eventually, T of N is sandwiched between two different constant multiples of F of N.
I'll write that down, and I'll leave it to you to verify that the two notions are equivalent.
That is, one implies the other and vice versa.
So what do I mean by T of N is eventually sandwiched between two multiples of F of N? Well, I just mean we choose two constants.
A small one, C1, and a big constant, C2, and for all N at least N naught, T of N lies between those two constant multiples.
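The same sort of sketch works for the theta sandwich. Again, the functions and the constants C1, C2, and N naught are hypothetical examples chosen for illustration:

```python
# Illustrative check of the theta-notation "sandwich":
# c1 * f(n) <= T(n) <= c2 * f(n) for all n >= n0.
# T, f, c1, c2, and n0 are hypothetical examples.

def T(n):
    return n * n / 2 + 3 * n

def f(n):
    return n * n

c1, c2, n0 = 1 / 2, 4, 1       # small multiple, big multiple, threshold

# For n >= 1: (1/2) n^2 <= n^2/2 + 3n <= 4 n^2,
# so T(n) is theta of n^2.
assert all(c1 * f(n) <= T(n) <= c2 * f(n) for n in range(n0, 10_000))
```

Note this single check certifies both directions at once: the left inequality is the omega statement and the right inequality is the big O statement.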
One way in which algorithm designers can be quite sloppy is by using big O notation instead of theta notation.
That's a common convention, and I will follow that convention often in this class.
Let me give you an example.
Suppose we have a subroutine, which does a linear scan through an array of length N.
It looks at each entry in the array and does a constant amount of work with each entry.
So the merge subroutine would be more or less an example of a subroutine of that type.
So even though the running time of such a subroutine is patently theta of N (it does constant work for each of N entries, so it's exactly theta of N), we'll often just say that it has running time O of N.
We won't bother to make the stronger statement that it's theta of N.
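As a hedged sketch of the kind of subroutine being described (the function name and behavior here are illustrative, not the merge subroutine itself):

```python
# A linear scan through an array: constant work per entry, n entries,
# so the running time is theta of n, usually just reported as O(n).

def contains(array, target):
    """Return True if target appears in array; examines each entry once."""
    for entry in array:        # n iterations ...
        if entry == target:    # ... constant work in each
            return True
    return False

print(contains([3, 1, 4, 1, 5], 4))   # True
```

In the worst case (the target is absent), the loop runs all n iterations, which is where the theta-of-n lower bound comes from.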
The reason we do that is that, as algorithm designers, what we really care about is upper bounds.
We want guarantees on how long our algorithms are going to run, so naturally we focus on the upper bounds and not so much on the lower bound side.
So don't get confused.
Once in a while, there will be a quantity which is obviously theta of F of N, and I'll just make the weaker statement that it's O of F of N.
The next quiz is meant to check your understanding of these three concepts: Big O, Big Omega, and Big Theta notation.