The correct answer is the second one.
Namely, that if you have an array with no split inversions, then everything in the first half is less than everything in the second half.
Why? Well, consider the contrapositive.
Suppose you had even one element in the first half which was bigger than some element in the second half.
That pair of elements alone would constitute a split inversion.
Okay? So if you have no split inversions, then everything in the left half is smaller than everything in the right half of the array.
Now, more to the point, think about the execution of the merge subroutine on an array with this property.
On an input array A where everything in the left half is less than everything in the right half.
What is merge gonna do? All right, so remember it's always looking for whichever is smaller: the first element remaining in B or the first element remaining in C, and that's what it copies over.
Well, if everything in B is less than everything in C, everything in B is gonna get copied over into the output array D before C ever gets touched.
Okay? So merge has an unusually trivial execution on input arrays with no split inversions, with zero split inversions.
First it just goes through B and copies it over.
Then it just concatenates C.
Okay there's no interleaving between the two.
So no split inversions means nothing gets copied from C until it absolutely has to, until B is exhausted.
So this suggests that perhaps copying elements over from the second subarray, C, has something to do with the number of split inversions in the original array, and that is, in fact, the case.
So we're gonna see a general pattern: copies from the second array, C, expose split inversions in the original input array A.
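To make this concrete, here is a minimal sketch of merge that also records which input array each output element came from (the names b, c, and d follow the lecture's notation; the helper name merge_trace is my own):

```python
def merge_trace(b, c):
    """Merge two sorted lists, recording which side each output element came from."""
    d, trace = [], []
    i = j = 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            d.append(b[i]); trace.append("B"); i += 1
        else:
            d.append(c[j]); trace.append("C"); j += 1
    # One side is exhausted; concatenate the leftovers of the other.
    d += b[i:]; trace += ["B"] * (len(b) - i)
    d += c[j:]; trace += ["C"] * (len(c) - j)
    return d, trace

# No split inversions: everything in b is smaller than everything in c,
# so merge copies all of b before ever touching c.
print(merge_trace([1, 2, 3], [4, 5, 6]))  # ([1, 2, 3, 4, 5, 6], ['B', 'B', 'B', 'C', 'C', 'C'])
```

With split inversions present, the trace interleaves the two sides instead of exhausting B first.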
So let's look at a more detailed example, to see what that pattern is.
Let's return to the example from the previous video.
Which is an array with six elements ordered one, three, five, two, four, six.
So we do our recursive calls, and afterwards the left half of the array is sorted and the right half of the array is sorted.
And the counting done by those two recursive calls is going to return zero inversions for each.
Remember, in this example, it turns out all of the inversions are split inversions.
So, now let's trace through the merge sub-routine invoked on these two sorted sub-arrays, and try to spot a connection with the number of split inversions in the original 6-element array.
So, we initialize indices i and j to point to the first element of each of these sub-arrays.
So, this left one is b, and this right one is c and the output is d.
Now, the first thing we do is we copy the one over from b into the output array.
So, one goes there, and we advance this index over to the three.
And here nothing really interesting happened; there's no reason to count any split inversions.
And indeed, the number one is not involved in any split inversions, since one is smaller than all of the other elements, and it's also in the first index.
Things are much more interesting, when we copy over the element two from the second array c.
And, notice at this point, we have diverged from the trivial execution that we would see with an array with no split inversions.
Now, we're copying something over from c before we've exhausted copying b.
So we're hoping this will expose some split inversions.
So we copy over the two.
And we advance the second pointer J into C.
And the thing to notice is this exposes two split inversions.
The two split inversions that involve the element two.
And those inversions are (3,2) and (5,2).
So why did this happen? Well, the reason we copied two over is because it's smaller than all the elements we haven't yet looked at in both B and C.
So in particular, two is smaller than the remaining elements in B, the three and the five.
But also, because B is the left array, the indices of the three and the five have to be less than the index of this two, so these are inversions.
Two is further to the right in the original input array, and yet it's smaller than these remaining elements in B.
So there are two elements remaining in B, and those give the two split inversions that involve the element two.
So now, let's go back to the merge subroutine. So what happens next? Well, next, we make a copy from the first array, and we've already realized that nothing really interesting happens when we copy from the first array, at least with respect to split inversions.
Then we copy the four over, and yet again, we discover a split inversion, the remaining one, which is (5,4).
Again, the reason is, given that four was copied over before what's left in B, it's gotta be smaller than those remaining elements, but by virtue of being in the right array, it's also gotta have a bigger index.
So it's gotta be a split inversion.
And now the rest of the merge subroutine executes without any real incident.
The five gets copied over and we know copies from the left array are boring.
And then we copy the six over; copies from the right array are generally interesting, but not if the left array is empty.
So that doesn't involve any split inversions.
And you will recall from the earlier video that these were the inversions in the original array: (3,2), (5,2), and (5,4).
We discovered them all in an automated way, by just keeping an eye out when we copy from the right array C.
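For a sanity check, brute-force enumeration over the example array recovers the same three inversions (the function name inversions_brute_force is my own):

```python
def inversions_brute_force(a):
    """All pairs (a[i], a[j]) with i < j and a[i] > a[j], by direct enumeration."""
    n = len(a)
    return [(a[i], a[j]) for i in range(n) for j in range(i + 1, n) if a[i] > a[j]]

# The six-element example from the lecture; all of its inversions are split.
print(inversions_brute_force([1, 3, 5, 2, 4, 6]))  # [(3, 2), (5, 2), (5, 4)]
```

This quadratic check is only for verifying small examples; the whole point of the lecture is to do better.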
So this is indeed a general principle, so let me state the general claim.
So the claim is that, not just in this specific example and this specific execution, but no matter what the input array is and no matter how many split inversions there might be, the split inversions that involve an element of the second half of the array are precisely those formed with the elements remaining in the first array when that element gets copied over to the output array.
So this is exactly the pattern that we saw in the example.
Look at what happens in the right array: in C, we had the elements two, four, and six.
Remember, every split inversion has to, by definition, involve one element from the first half, and one element from the second half.
So to count split inversions, we can just group them according to which element of the second array they involve.
So out of the two, four, and six: the two is involved in the split inversions (3,2) and (5,2).
The three and the five were exactly the elements remaining in B when we copied over the two.
The split inversion involving four is exactly (5,4), and five is exactly the element that was remaining in B when we copied over the four.
There are no split inversions involving six, and indeed the array B was empty when we copied the six over into the output array D.
So what's the general argument? It's quite simple.
Let's just zoom in and fixate on a particular element x that belongs to the first half of the array, that's among the first half of the elements, and let's just examine which y's, which elements of the second array, the second half of the original input array, form split inversions with x.
So there are two cases, depending on whether x is copied to the output array D before or after y. Now, if x is copied to the output before y, well then, since we populate D in sorted order, that means x has got to be smaller than y, so there's not going to be any split inversion.
On the other hand, if y is copied to the output D before x, then again, because we populate D left to right in sorted order, that's gotta mean that y is less than x.
Now x is still hanging out in the left array, so it has a smaller index than y, which comes from the right array.
So this is indeed a split inversion.
So putting these two cases together, this says that the elements x of the array B that form split inversions with y are precisely those that get copied to the output array after y. And those are exactly the elements remaining in B when y gets copied over.
So, that proves the general claim.
So, this slide was really the key insight.
Now we understand exactly why counting split inversions is easy as we're merging together two sorted sub-arrays.
It's a simple matter to just translate this into code and get a linear-time implementation of a subroutine that both merges and counts the number of split inversions.
Which then, in the overall recursive algorithm, will give an n log n running time, just as in merge sort.
So, let's just spend a quick minute filling in those details.
So, I'm not gonna write out the pseudocode; I'm just going to write out what you need to augment the merge pseudocode, discussed a few slides ago, by, in order to count split inversions as you're doing the merging.
And this will follow immediately from the previous claim, which indicated how split inversions relate to the number of elements remaining in the left array as you're doing the merge.
So, the idea is the natural one, as you're doing the merging, according to the previous pseudocode of the two sorted sub-arrays, you just keep a running total of the number of split inversions that you've encountered, all right? So you've got your sorted sub-array b, you've got your sorted sub-array c.
You're merging these into an output array D.
And as you traverse through D, with k going from 1 to n, you just start the count at zero and you increment it by something each time you do a copy over, from either B or C.
So, what's the increment? Well, what did we just see? We saw that copies involving B don't count.
We're not gonna count split inversions when we copy over from B.
Only when we copy from C, right? Every split inversion involves exactly one element from each of B and C.
So I may as well count them via the elements in C.
And how many split inversions are involved with the given element of C? Well, it's exactly how many elements of B remain when it gets copied over.
So that tells us how to increment this running count.
And it follows immediately from the claim on the previous slide that this implementation of the running total counts precisely the number of split inversions that the original input array A possesses.
And you'll recall that the left inversions are counted by the first recursive call, the right inversions are counted by the second recursive call.
Every inversion is either left or right or split, it's exactly one of those three types.
So with our three different subroutines, the two recursive ones and this one here we successfully count up all of the inversions of the original input array.
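Putting the three subroutines together, here is a short Python sketch of the whole algorithm (the function names sort_and_count and merge_and_count_split are my own; the counting logic is exactly the augmented merge just described):

```python
def sort_and_count(a):
    """Return (sorted copy of a, number of inversions in a), in O(n log n) time."""
    n = len(a)
    if n <= 1:
        return list(a), 0
    b, left = sort_and_count(a[:n // 2])    # inversions within the left half
    c, right = sort_and_count(a[n // 2:])   # inversions within the right half
    d, split = merge_and_count_split(b, c)  # split inversions, counted during the merge
    return d, left + right + split

def merge_and_count_split(b, c):
    """Merge sorted b and c; each copy from c adds len(b) - i split inversions."""
    d, count = [], 0
    i = j = 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            d.append(b[i]); i += 1
        else:
            d.append(c[j]); j += 1
            count += len(b) - i  # elements still remaining in b
    d += b[i:] + c[j:]
    return d, count

print(sort_and_count([1, 3, 5, 2, 4, 6]))  # ([1, 2, 3, 4, 5, 6], 3)
```

Note that the only change from plain merge sort is the single line incrementing count when an element is copied from c.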
So that's the correctness of the algorithm; what's the running time? When we analyzed merge sort, we began by just analyzing the running time of merge, and then we discussed the running time of the entire merge sort algorithm.
Let's do the same thing here briefly.
So what's the running time of the subroutine for this merging while simultaneously counting the split inversions? There's the work that we do in the merging, and we already know that that's linear, and then the only additional work here is incrementing this running count.
And that's constant time for each element of D, right? Each time we do a copy over, we do a single addition to our running count.
So constant time per element of D, or linear time overall.
So I'm being a little sloppy here, sloppy in a very conventional way, in writing O(n) plus O(n) is equal to O(n).
Be careful when you make statements like that, right? If you added O(n) to itself n times, it would not be O(n).
But if you add O(n) to itself a constant number of times, it is still O(n).
So you might, as an exercise, want to write out a formal version of what this means.
Basically, there's some constant c1 such that the merge step takes at most c1 times n steps.
There's a constant c2 so that the rest of the work takes at most c2 times n steps.
So when we add them, we get at most (c1 + c2) times n steps, which is still O(n), because c1 + c2 is a constant.
Okay? So linear work for the merge, linear work for the running count; that's linear work in the subroutine overall.
And now, by exactly the same argument we used in merge sort, because we have two recursive calls on inputs of half the size and we do linear work outside of the recursive calls, the overall running time is O(n log n).
So we really just piggybacked on merge sort, increasing the constant factor a little bit to do the counting along the way, but the running time remains O(n log n).