Chapter 5: Initialization & Cleanup (1)
1. Guaranteeing initialization with the constructor
In Java, "initialization" and "creation" are bound together; the two cannot be separated.
2. A constructor has no return value, which is distinctly different from returning void.
3. Method overloading: same method name, different formal parameter lists.
Note:
1. Even a difference in the order of the parameters is enough to distinguish two methods, as in this example:
public class OverloadingOrder {
    static void f(String s, int i) {
        System.out.println("String: " + s + ", int: " + i);
    }
    static void f(int i, String s) {
        System.out.println("int: " + i + ", String: " + s);
    }
    public static void main(String[] args) {
        f("String first", 11);
        f(99, "Int first");
    }
} /* Output:
String: String first, int: 11
int: 99, String: Int first
*///:~
Generally, don't do this: it makes the code hard to maintain.
2. Distinguishing overloaded methods by their return types does not work: when you call a method without caring about its return value, the compiler has no way to know which version you mean. Suppose the program contained only:

void f() {}
int f() { return 1; }

public static void main(String[] args) {
    f();
}

The line f(); does not use the return value; it calls the method only for its other effects, so the two versions cannot be told apart (the compiler rejects them as duplicates).
4. Passing arguments to overloaded methods
- An integer constant is treated as an int, so the overload that accepts an int parameter is chosen.
- If the actual argument's type is smaller than the formal parameter type declared in the method, the actual type is promoted (widened).
- char behaves slightly differently: if no overload accepts exactly a char, the char is promoted straight to int.
- If a method accepts a smaller primitive type and you pass in a larger actual argument, you must perform the narrowing conversion with an explicit cast; otherwise the compiler reports an error.
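The promotion rules above can be sketched in a small program (the class and method names are only illustrative):

```java
// Illustrates which overload the compiler picks for primitive arguments.
public class PromotionDemo {
    static String f(int i)    { return "f(int)"; }
    static String f(long l)   { return "f(long)"; }
    static String f(double d) { return "f(double)"; }

    public static void main(String[] args) {
        System.out.println(f(5));          // integer literal is an int -> f(int)
        System.out.println(f('a'));        // no f(char): char is promoted to int
        System.out.println(f(5.0f));       // float is widened -> f(double)
        // f(long) only matches a double after an explicit narrowing cast:
        System.out.println(f((long) 5.0)); // -> f(long)
    }
}
```

Removing the cast on the last call makes it resolve to f(double) instead; passing a double where only f(int) existed would be a compile error.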
5. The default constructor
- If your class contains no constructors, the compiler automatically creates a default (no-arg) constructor for you.
- However, once you define any constructor (with or without arguments), the compiler no longer creates the default one for you.
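A minimal sketch of both cases (class names are made up for illustration):

```java
// With no constructors defined, the compiler supplies a default one.
class Bird {}

// Defining any constructor suppresses the automatic default constructor.
class Tree {
    int height;
    Tree(int height) { this.height = height; }
}

public class DefaultConstructorDemo {
    public static void main(String[] args) {
        Bird b = new Bird();     // OK: uses the synthesized default constructor
        Tree t = new Tree(10);   // OK: matches the defined constructor
        // Tree t2 = new Tree(); // compile error: no no-arg constructor exists
        System.out.println(t.height);
    }
}
```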
6. The this keyword
Several objects can be defined in a single statement:
Banana a = new Banana(),
       b = new Banana();
The this keyword can be used only inside a method, where it is a reference to "the object the method was called on." Note that when calling another method of the same class from within a method, you do not need this; just call it directly. The expectation is that you use this only where necessary.
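One place where this is genuinely necessary: returning it from a method so that calls can be chained on the same object. A minimal sketch:

```java
// Returning this from a method allows several operations
// to be chained on a single object.
public class Leaf {
    int i = 0;

    Leaf increment() {
        i++;
        return this;   // hand back a reference to the current object
    }

    void print() { System.out.println("i = " + i); }

    public static void main(String[] args) {
        Leaf x = new Leaf();
        x.increment().increment().increment().print();  // prints "i = 3"
    }
}
```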
7. Calling a constructor from a constructor
Normally, writing this denotes a reference to the current object. Inside a constructor, however, giving this an argument list produces an explicit call to the constructor matching that argument list.
Note:
- Although you can call one constructor with this, you cannot call two.
- The constructor call must be placed at the very beginning, or the compiler reports an error.
- When a constructor parameter has the same name as a data member, use
this.memberName
to resolve the clash.
- Outside constructors, the compiler forbids calling a constructor through this in any other method.
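All of the rules above fit in one sketch (names are illustrative):

```java
// Calling one constructor from another with this(...).
public class Flower {
    int petalCount;
    String s = "initial value";

    Flower(int petals) { petalCount = petals; }
    Flower(String ss)  { s = ss; }

    Flower(String s, int petals) {
        this(petals);   // must be the first statement in the constructor
        // this(s);     // compile error: can't call a second constructor
        this.s = s;     // "this." resolves the parameter/field name clash
    }

    public static void main(String[] args) {
        Flower f = new Flower("hi", 47);
        System.out.println(f.s + ", petals: " + f.petalCount);  // hi, petals: 47
    }
}
```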
8. The meaning of static
A static method is a method without this. You cannot call non-static methods from inside a static method (not strictly impossible: if you pass an object reference into the static method (a static method may also create objects of its own), you can then call non-static methods and access non-static members through that reference; but usually, if that is the effect you want, you simply write a non-static method). The reverse, however, is allowed. A static method can be called through the class itself without any object having been created, which corresponds to a "class method" in Smalltalk.
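The workaround mentioned in the note (passing a reference into a static method) can be sketched as follows; the names are illustrative:

```java
// A static method has no this, but it can reach non-static members
// through an explicit object reference.
public class StaticDemo {
    int value = 7;
    int instanceValue() { return value; }   // non-static method

    static int viaReference(StaticDemo obj) {
        // return instanceValue();          // compile error: no this here
        return obj.instanceValue();         // fine through a reference
    }

    public static void main(String[] args) {
        // Called through the class itself; no instance of the caller needed:
        System.out.println(StaticDemo.viaReference(new StaticDemo()));  // 7
    }
}
```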
9. Cleanup: finalization with finalize()
- finalize() is not the destructor of C++: no matter how an object was created (even when the object contains other objects), the garbage collector is responsible for releasing all the memory the object occupies. The reason finalize() exists is that storage may have been allocated in a C-like way rather than via Java's usual mechanisms. This happens chiefly with "native methods," a way of calling non-Java code from Java. Native methods currently support only C and C++, but those can call code written in other languages, so in effect any code can be called. Inside the non-Java code, the C malloc() family might be used to allocate storage, so finalize() needs to call free() through a native method.
- How finalize() works: once the garbage collector is ready to release the storage your object occupies, it first calls the object's finalize(), and only on the next garbage-collection pass does it actually reclaim the object's memory.
Original text: When the garbage
collector is ready to release the storage used for your object, it will first call finalize( ), and
only on the next garbage-collection pass will it reclaim the object’s memory.
- Java里的對(duì)象并非總是被垃圾回收筏养,記住三點(diǎn):
- 1.對(duì)象可能不被垃圾回收。
- 2.垃圾回收并不等于析構(gòu)常拓。
- 3.垃圾回收只與內(nèi)存有關(guān)渐溶。
-
Note:
a. As long as the program is not on the verge of running out of storage, the space occupied by objects may never be released. If the program ends and the garbage collector never released the storage of any object you created, those resources are returned to the operating system in their entirety as the program exits (this explains points 1 and 2).
b. The only reason to have a garbage collector is to reclaim memory the program no longer uses. So any behavior associated with garbage collection (the finalize() method in particular) must likewise be associated with memory and its reclamation.
- Finally, finalizers are unpredictable, often dangerous, and in general unnecessary.
10. Cleanup: garbage collection
- Java does not allow you to create local (stack-based) objects; you must always use new. There is no delete for releasing objects, because the garbage collector releases the storage for you. Still, the garbage collector does not fully replace a destructor (and you must certainly never call finalize() directly, so that is not a solution either). If you need to perform cleanup other than releasing storage, you still have to call an appropriate Java method explicitly: the equivalent of a destructor, only less convenient. In general you cannot rely on finalize(); you must create separate "cleanup" methods and call them explicitly.
- Remember that neither garbage collection nor finalization is guaranteed to happen. If the JVM is not close to running out of memory, it will not waste time recovering memory through garbage collection.
- As long as an object holds parts that were not properly cleaned up, the program carries a subtle bug. finalize() can be used to eventually discover such cases, even though it is not always called.
- A finalize() usage example:
//: initialization/TerminationCondition.java
package io.github.wzzju; /* Added by Eclipse.py */
// Using finalize() to detect an object that
// hasn't been properly cleaned up.
class Book {
    boolean checkedOut = false;
    Book(boolean checkOut) {
        checkedOut = checkOut;
    }
    void checkIn() {
        checkedOut = false;
    }
    void checkOut() {
        checkedOut = true;
    }
    protected void finalize() {
        try {
            super.finalize(); // Call the base-class version
        } catch (Throwable e) {
            e.printStackTrace();
        }
        if (checkedOut)
            System.out.println("Error: checked out");
    }
}

public class TerminationCondition {
    public static void main(String[] args) {
        Book novel = new Book(true);
        // Proper cleanup:
        novel.checkIn();
        novel.checkOut();
        // Drop the reference, forget to clean up:
        new Book(true);
        // Force garbage collection & finalization:
        System.gc();
    }
} /* Output:
Error: checked out
*///:~
Note:
1. System.gc() is used to force garbage collection and finalization (finalize()).
2. Always assume that the base-class version of finalize() also does something important, so call it using super.
3. Even though novel's checkedOut field is also true, no message is printed for it: finalize() was not called for novel (why? presumably because a stack reference still points at that object, so it is live and is not collected?).
11. How does the garbage collector work?
- In Java, all objects (primitive types excepted) are allocated on the heap.
- The garbage collector has a marked effect on raising the speed of object creation. It means that allocating storage for heap objects in Java can be nearly as fast as creating storage on the stack in other languages.
- How the garbage collector works (quoting the original text):
For example, you can think of the C++ heap as a yard where each object stakes out its own
piece of turf. This real estate can become abandoned sometime later and must be reused. In
some JVMs, the Java heap is quite different; it’s more like a conveyor belt that moves
forward every time you allocate a new object. This means that object storage allocation is
remarkably rapid. The “heap pointer” is simply moved forward into virgin territory, so it’s
effectively the same as C++’s stack allocation. (Of course, there’s a little extra overhead for
bookkeeping, but it’s nothing like searching for storage.)
You might observe that the heap isn’t in fact a conveyor belt, and if you treat it that way,
you’ll start paging memory—moving it on and off disk, so that you can appear to have more
memory than you actually do. Paging significantly impacts performance. Eventually, after
you create enough objects, you’ll run out of memory. The trick is that the garbage collector
steps in, and while it collects the garbage it compacts all the objects in the heap so that you’ve
effectively moved the “heap pointer” closer to the beginning of the conveyor belt and farther
away from a page fault. The garbage collector rearranges things and makes it possible for the
high-speed, infinite-free-heap model to be used while allocating storage.
To understand garbage collection in Java, it’s helpful to learn how garbage-collection schemes
work in other systems. A simple but slow garbage-collection technique is called reference
counting. This means that each object contains a reference counter, and every time a
reference is attached to that object, the reference count is increased. Every time a reference
goes out of scope or is set to null, the reference count is decreased. Thus, managing
reference counts is a small but constant overhead that happens throughout the lifetime of
your program. The garbage collector moves through the entire list of objects, and when it
finds one with a reference count of zero it releases that storage (however, reference counting
schemes often release an object as soon as the count goes to zero). The one drawback is that
if objects circularly refer to each other they can have nonzero reference counts while still
being garbage. Locating such self-referential groups requires significant extra work for the
garbage collector. Reference counting is commonly used to explain one kind of garbage
collection, but it doesn’t seem to be used in any JVM implementations.
In faster schemes, garbage collection is not based on reference counting. Instead, it is based
on the idea that any non-dead object must ultimately be traceable back to a reference that
lives either on the stack or in static storage. The chain might go through several layers of
objects. Thus, if you start in the stack and in the static storage area and walk through all the
references, you’ll find all the live objects. For each reference that you find, you must trace
into the object that it points to and then follow all the references in that object, tracing into
the objects they point to, etc., until you’ve moved through the entire Web that originated with
the reference on the stack or in static storage. Each object that you move through must still
be alive. Note that there is no problem with detached self-referential groups—these are
simply not found, and are therefore automatically garbage.
In the approach described here, the JVM uses an adaptive garbage-collection scheme, and
what it does with the live objects that it locates depends on the variant currently being used.
One of these variants is stop-and-copy. This means that—for reasons that will become
apparent—the program is first stopped (this is not a background collection scheme). Then,
each live object is copied from one heap to another, leaving behind all the garbage. In
addition, as the objects are copied into the new heap, they are packed end-to-end, thus
compacting the new heap (and allowing new storage to simply be reeled off the end as
previously described).
Of course, when an object is moved from one place to another, all references that point at the
object must be changed. The reference that goes from the heap or the static storage area to
the object can be changed right away, but there can be other references pointing to this object that will be encountered later during the “walk.” These are fixed up as they are found (you
could imagine a table that maps old addresses to new ones).
There are two issues that make these so-called “copy collectors” inefficient. The first is the
idea that you have two heaps and you slosh all the memory back and forth between these two
separate heaps, maintaining twice as much memory as you actually need. Some JVMs deal
with this by allocating the heap in chunks as needed and simply copying from one chunk to
another.
The second issue is the copying process itself. Once your program becomes stable, it might be
generating little or no garbage. Despite that, a copy collector will still copy all the memory
from one place to another, which is wasteful. To prevent this, some JVMs detect that no new
garbage is being generated and switch to a different scheme (this is the “adaptive” part). This
other scheme is called mark-and-sweep, and it’s what earlier versions of Sun’s JVM used all
the time. For general use, mark-and-sweep is fairly slow, but when you know you’re
generating little or no garbage, it’s fast.
Mark-and-sweep follows the same logic of starting from the stack and static storage, and
tracing through all the references to find live objects. However, each time it finds a live
object, that object is marked by setting a flag in it, but the object isn’t collected yet. Only
when the marking process is finished does the sweep occur. During the sweep, the dead
objects are released. However, no copying happens, so if the collector chooses to compact a
fragmented heap, it does so by shuffling objects around.
“Stop-and-copy” refers to the idea that this type of garbage collection is not done in the
background; instead, the program is stopped while the garbage collection occurs. In the Sun
literature you’ll find many references to garbage collection as a low-priority background
process, but it turns out that the garbage collection was not implemented that way in earlier
versions of the Sun JVM. Instead, the Sun garbage collector stopped the program when
memory got low. Mark-and-sweep also requires that the program be stopped.
As previously mentioned, in the JVM described here memory is allocated in big blocks. If you
allocate a large object, it gets its own block. Strict stop-and-copy requires copying every live
object from the source heap to a new heap before you can free the old one, which translates
to lots of memory. With blocks, the garbage collection can typically copy objects to dead
blocks as it collects. Each block has a generation count to keep track of whether it’s alive. In
the normal case, only the blocks created since the last garbage collection are compacted; all
other blocks get their generation count bumped if they have been referenced from
somewhere. This handles the normal case of lots of short-lived temporary objects.
Periodically, a full sweep is made—large objects are still not copied (they just get their
generation count bumped), and blocks containing small objects are copied and compacted.
The JVM monitors the efficiency of garbage collection and if it becomes a waste of time
because all objects are long-lived, then it switches to mark-and-sweep. Similarly, the JVM
keeps track of how successful mark-and-sweep is, and if the heap starts to become
fragmented, it switches back to stop-and-copy. This is where the “adaptive” part comes in, so
you end up with a mouthful: “Adaptive generational stop-and-copy mark-and-sweep.”
There are a number of additional speedups possible in a JVM. An especially important one
involves the operation of the loader and what is called a just-in-time (JIT) compiler. A JIT
compiler partially or fully converts a program into native machine code so that it doesn’t
need to be interpreted by the JVM and thus runs much faster. When a class must be loaded
(typically, the first time you want to create an object of that class), the .class file is located,
and the bytecodes for that class are brought into memory. At this point, one approach is to
simply JIT compile all the code, but this has two drawbacks: It takes a little more time,
which, compounded throughout the life of the program, can add up; and it increases the size
of the executable (bytecodes are significantly more compact than expanded JIT code), and
this might cause paging, which definitely slows down a program. An alternative approach is
lazy evaluation, which means that the code is not JIT compiled until necessary. Thus, code that never gets executed might never be JIT compiled. The Java HotSpot technologies in
recent JDKs take a similar approach by increasingly optimizing a piece of code each time it is
executed, so the more the code is executed, the faster it gets.
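The reference-counting drawback described in the excerpt can be mimicked with a toy counter. This is purely an illustration of the bookkeeping and the circular-reference problem, not how any JVM actually works; all names are made up:

```java
import java.util.ArrayList;
import java.util.List;

// Toy reference counter showing why cycles defeat the scheme.
class Counted {
    int refCount = 0;
    List<Counted> refs = new ArrayList<>();

    void attach(Counted target) {   // take a reference to target
        refs.add(target);
        target.refCount++;
    }
}

public class RefCountDemo {
    public static void main(String[] args) {
        Counted a = new Counted();
        Counted b = new Counted();
        a.attach(b);
        b.attach(a);   // circular reference
        // Even if the stack references a and b went away, each object's
        // count would stay at 1 because they point at each other:
        // garbage that a pure reference counter can never reclaim.
        System.out.println(a.refCount + " " + b.refCount);  // 1 1
    }
}
```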
- Question 1: Objects are allocated on the heap, but where does a reference to an object live? (the stack, or the static storage area?)
- Question 2: Where do the references held inside an object live? (the heap?)