MongoDB Status Parameters Explained (Part 2)

Other parameters

Instance information:

"host" : ,
"advisoryHostFQDNs" : ,
"version" : ,
"process" : ,
"pid" : ,
"uptime" : ,
"uptimeMillis" : ,
"uptimeEstimate" : ,
"localTime" : ISODate(“"),

host: The system's hostname. On Unix/Linux systems, this is the same as the output of the hostname command.
advisoryHostFQDNs: New in version 3.2. An array of fully qualified domain names (FQDNs).
version: The MongoDB version of the current MongoDB process.
process: The current MongoDB process; possible values are mongos or mongod.
pid: The ID of the process.
uptime: The total number of seconds that the current MongoDB process has been active, i.e. how long it has been up.
uptimeMillis: The number of milliseconds that the current MongoDB process has been active.
uptimeEstimate: The uptime in seconds as calculated from MongoDB's internal coarse-grained time-keeping system.
localTime: An ISODate representing the current server time, expressed in UTC.

Assertions (asserts):

A document that reports on the number of assertions raised since the MongoDB process started. While assert errors are generally uncommon, you should check the log file for more information if any of these values are non-zero. In many cases these errors are trivial, but they are worth investigating.

> db.serverStatus().asserts
{ "regular" : 0, "warning" : 0, "msg" : 0, "user" : 5, "rollovers" : 0 }
asserts.regular
The number of regular assertions raised since the MongoDB process started. Check the log file for more information about these messages.

asserts.warning
Changed in version 4.0.
Starting in MongoDB 4.0, this field returns 0.
In earlier versions, this field returned the number of warnings raised since the MongoDB process started.

asserts.msg
The number of message assertions raised since the MongoDB process started. Check the log file for more information about these messages.

asserts.user
The number of "user asserts" that have occurred since the MongoDB process last started. These are errors that a user can generate, such as running out of disk space or duplicate keys. You can prevent these assertions by fixing the problem in your application or deployment. Check the MongoDB log for more information.

asserts.rollovers
The number of times the rollover counters have rolled over since the MongoDB process last started. The counters roll over to zero after 2^30 assertions. Use this value to provide context for the other values in the asserts data structure.
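
A quick way to read these counters is to sample them twice and look at the delta; a growing asserts.user value is usually the first thing to correlate with the logs. A minimal mongo shell sketch (the 60-second window is an arbitrary choice, not a MongoDB recommendation):

var a1 = db.serverStatus().asserts;
sleep(60 * 1000);                              // wait 60 seconds, then sample again
var a2 = db.serverStatus().asserts;
print("user asserts in interval: " + (a2.user - a1.user));
print("rollovers so far        : " + a2.rollovers);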

extra_info: a document that provides additional information about the underlying system.

"extra_info" : {
   "note" : "fields vary by platform.",
   "heap_usage_bytes" : ,
   "page_faults" : 
},

> db.serverStatus().extra_info
{ "note" : "fields vary by platform", "page_faults" : 21 }

extra_info.note: The string literal "fields vary by platform."
extra_info.heap_usage_bytes: The total size in bytes of heap space used by the database process. Only available on Unix/Linux systems.
extra_info.page_faults: The total number of page faults. The extra_info.page_faults counter climbs when the server hits a performance bottleneck, runs low on memory, or the data set grows. Limited and sporadic page faults do not necessarily indicate a problem.
Windows distinguishes "hard" page faults, which involve disk I/O, from "soft" page faults, which only require moving memory pages. MongoDB counts both hard and soft page faults in this statistic.
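
Because a single cumulative page_faults figure means little on its own, it is usually sampled over an interval and read as a rate. A minimal mongo shell sketch (the 10-second window is an arbitrary assumption):

var secs = 10;                                  // sample window in seconds
var f1 = db.serverStatus().extra_info.page_faults;
sleep(secs * 1000);
var f2 = db.serverStatus().extra_info.page_faults;
print("page faults per second: " + ((f2 - f1) / secs));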

globalLock: a document that reports on the database lock state.

shard01:SECONDARY> db.serverStatus().globalLock
{
    "totalTime" : NumberLong("4639816000"),
    "currentQueue" : {
        "total" : 0,
        "readers" : 0,
        "writers" : 0
    },
    "activeClients" : {
        "total" : 0,
        "readers" : 0,
        "writers" : 0
    }
}
1. globalLock: A document that reports on the database lock state. Generally, the locks document provides more detailed data on lock use.
2. globalLock.totalTime: The time, in microseconds, since the database last started and the global lock was created. This is roughly equivalent to the total server uptime.
3. globalLock.currentQueue: A document with counts of the operations queued because of the lock.
4. globalLock.currentQueue.total: The total number of operations waiting for a lock (i.e. the sum of globalLock.currentQueue.readers and globalLock.currentQueue.writers). A consistently small queue, especially of shorter operations, is not a concern. Consider this value together with the read/write information in globalLock.activeClients (see the sketch after this list).
5. globalLock.currentQueue.readers: The number of operations queued waiting for a read lock. A consistently small read queue, especially of shorter operations, is not a concern.
6. globalLock.currentQueue.writers: The number of operations queued waiting for a write lock. A consistently small write queue, especially of shorter operations, is not a concern.
7. globalLock.activeClients: A document with counts of connected clients that are performing read or write operations. Consider this value together with globalLock.currentQueue.
8. globalLock.activeClients.total: The total number of internal client connections to the database, including system threads as well as queued readers and writers. Because it includes system threads, this value is higher than the sum of activeClients.readers and activeClients.writers.
9. globalLock.activeClients.readers: The number of active client connections performing read operations.
10. globalLock.activeClients.writers: The number of active client connections performing write operations.
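
The queue and active-client counters are most useful when read together, as suggested in items 4 and 7 above: a queue that keeps growing while the active-client counts stay small points at lock contention rather than raw load. A minimal mongo shell sketch:

var gl = db.serverStatus().globalLock;
print("queued readers/writers: " + gl.currentQueue.readers + "/" + gl.currentQueue.writers);
print("active readers/writers: " + gl.activeClients.readers + "/" + gl.activeClients.writers);
if (gl.currentQueue.total > gl.activeClients.total) {
    print("more operations are waiting for locks than are running - possible contention");
}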

locks

A document that reports data for each lock type and mode.

> db.serverStatus().locks
{
    "Global" : {
        "acquireCount" : {
            "r" : NumberLong(133211),
            "w" : NumberLong(250),
            "W" : NumberLong(5)
        }
    },
    "Database" : {
        "acquireCount" : {
            "r" : NumberLong(66546),
            "w" : NumberLong(232),
            "R" : NumberLong(4),
            "W" : NumberLong(18)
        },
        "acquireWaitCount" : {
            "r" : NumberLong(2),
            "W" : NumberLong(2)
        },
        "timeAcquiringMicros" : {
            "r" : NumberLong(132),
            "W" : NumberLong(304)
        },
        "deadlockCount" : {
            <mode> : NumberLong(<num>),
            ...
        }
    },
    "Collection" : {
        "acquireCount" : {
            "r" : NumberLong(52895),
            "w" : NumberLong(232)
        }
    },
    "oplog" : {
        "acquireCount" : {
            "r" : NumberLong(13647)
        }
    }
}

locks.<type>.acquireCount: The number of times the lock was acquired in the specified mode.
locks.<type>.acquireWaitCount: The number of times the locks.acquireCount lock acquisitions had to wait because the lock was held in a conflicting mode.
locks.<type>.timeAcquiringMicros: The cumulative wait time, in microseconds, for lock acquisitions.
Dividing locks.timeAcquiringMicros by locks.acquireWaitCount gives an approximate average wait time for a particular lock mode (a sketch follows this list).
locks.<type>.deadlockCount: The number of times a deadlock was encountered while acquiring the lock.
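
The average-wait calculation mentioned above can be scripted directly against this output. The sketch below walks every lock type and mode in the serverStatus().locks document and prints the approximate mean acquisition wait; modes with no recorded waits are skipped, and the division relies on the shell coercing NumberLong values to doubles, so treat the result as approximate:

var locks = db.serverStatus().locks;
for (var type in locks) {
    var waits  = locks[type].acquireWaitCount || {};
    var micros = locks[type].timeAcquiringMicros || {};
    for (var mode in waits) {
        if (waits[mode] > 0) {
            print(type + "." + mode + " average wait (us): " + (micros[mode] / waits[mode]));
        }
    }
}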

MongoDB network usage (network)

wantRepl:PRIMARY> db.serverStatus().network
{
    "bytesIn" : NumberLong("2573428351867"),
    "bytesOut" : NumberLong("3889407355888"),
    "physicalBytesIn" : NumberLong("2568906769497"),
    "physicalBytesOut" : NumberLong("797923925390"),
    "numRequests" : NumberLong(136468356),
    "compression" : {
        "snappy" : {
            "compressor" : {
                "bytesIn" : NumberLong("3589137805219"),
                "bytesOut" : NumberLong("497232509340")
            },
            "decompressor" : {
                "bytesIn" : NumberLong("15326981527"),
                "bytesOut" : NumberLong("21068338987")
            }
        }
    },
    "serviceExecutorTaskStats" : {
        "executor" : "passthrough",
        "threadsRunning" : 31
    }
}

network.bytesIn: The number of bytes of network traffic received by the database. Use this value to make sure that network traffic sent to the mongod process is consistent with expectations and with overall application traffic.
network.bytesOut: The number of bytes of network traffic sent by the database. Use this value to make sure that network traffic sent by the mongod process is consistent with expectations and with overall application traffic.
network.numRequests: The total number of distinct requests that the server has received. Use this value to provide context for the network.bytesIn and network.bytesOut values, and to make sure that MongoDB's network utilization is consistent with expectations and application use.
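
Dividing the byte counters by numRequests gives a rough average message size, which helps distinguish traffic growth caused by more requests from growth caused by larger documents. A minimal mongo shell sketch:

var net = db.serverStatus().network;
print("avg bytes in per request : " + (net.bytesIn / net.numRequests));
print("avg bytes out per request: " + (net.bytesOut / net.numRequests));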

opcounters

> db.serverStatus().opcounters
{
    "insert" : 0,
    "query" : 49,
    "update" : 6,
    "delete" : 0,
    "getmore" : 0,
    "command" : 174
}
opcounters.insert: The total number of insert operations received since the mongod instance last started.
opcounters.query: The total number of queries received since the mongod instance last started.
opcounters.update: The total number of update operations received since the mongod instance last started.
opcounters.delete: The total number of delete operations since the mongod instance last started.
opcounters.getmore: The total number of "getmore" operations since the mongod instance last started. This counter can be high even when the query count is low. Secondary nodes send getMore operations as part of the replication process.
opcounters.command: The total number of commands issued to the database since the mongod instance last started.
opcounters.command counts all commands except the write commands: insert, update, and delete.
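
Like the other counters in serverStatus, opcounters are cumulative since startup, so per-second rates are obtained by sampling twice. A minimal mongo shell sketch (the 5-second window is an arbitrary assumption):

var secs = 5;
var o1 = db.serverStatus().opcounters;
sleep(secs * 1000);
var o2 = db.serverStatus().opcounters;
["insert", "query", "update", "delete", "getmore", "command"].forEach(function (op) {
    print(op + " per second: " + ((o2[op] - o1[op]) / secs));
});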

repl

shard01:PRIMARY> db.serverStatus().repl
{
    "topologyVersion" : {
        "processId" : ObjectId("5fdaff1a68fa4882da69da73"),
        "counter" : NumberLong(6)
    },
    "hosts" : [
        "localhost:29018",
        "localhost:29019",
        "localhost:29020"
    ],
    "setName" : "shard01",
    "setVersion" : 2,
    "ismaster" : true,
    "secondary" : false,
    "primary" : "localhost:29019",
    "me" : "localhost:29019",
    "electionId" : ObjectId("7fffffff0000000000000003"),
    "lastWrite" : {
        "opTime" : {
            "ts" : Timestamp(1608194231, 1),
            "t" : NumberLong(3)
        },
        "lastWriteDate" : ISODate("2020-12-17T08:37:11Z"),
        "majorityOpTime" : {
            "ts" : Timestamp(1608194231, 1),
            "t" : NumberLong(3)
        },
        "majorityWriteDate" : ISODate("2020-12-17T08:37:11Z")
    },
    "rbid" : 1
}

repl: A document that reports on the replica set configuration. repl is only present when the current host is a member of a replica set.
repl.hosts: An array of the current replica set members' hostname and port information ("host:port").
repl.setName: A string with the name of the current replica set. This value reflects the --replSet command line argument or the replSetName value in the configuration file.
repl.ismaster: A boolean that indicates whether the current node is the primary of the replica set.
repl.secondary: A boolean that indicates whether the current node is a secondary member of the replica set.
repl.primary: New in version 3.0.
The hostname and port information ("host:port") of the replica set's current primary member.
repl.me: New in version 3.0: the hostname and port information ("host:port") of the current member of the replica set.
repl.rbid: New in version 3.0. The rollback identifier. Used to determine whether a rollback has occurred on this mongod instance.
repl.replicationProgress: Changed in version 3.2: previously named serverStatus.repl.slaves.
New in version 3.0.
An array with one document for each member of the replica set that reports replication progress to this member. Typically this member is the primary, or a secondary when chained replication is used.
To include this output, you must pass the repl option to serverStatus, as in the following:
db.serverStatus({ "repl": 1 })
db.runCommand({ "serverStatus": 1, "repl": 1 })
The content of the repl.replicationProgress section depends on the source of each member's replication. It supports internal operation and is for internal and diagnostic use only.
repl.replicationProgress[n].rid: An ObjectId used as an ID for the replica set member. For internal use only.
repl.replicationProgress[n].optime: Information, as reported from this member, about the last operation from the oplog that the member has applied.
repl.replicationProgress[n].host: The name of the host in [hostname]:[port] format for this member of the replica set.
repl.replicationProgress[n].memberID: The integer identifier for this member of the replica set.

sharding

New in version 3.2: when run on mongos, the command returns sharding information.
Changed in version 3.6: starting in MongoDB 3.6, shard members also return sharding information.

mongos> db.serverStatus().sharding
{
    "configsvrConnectionString" : "configRepl/localhost:29024",
    "lastSeenConfigServerOpTime" : {
        "ts" : Timestamp(1608194582, 2),
        "t" : NumberLong(3)
    },
    "maxChunkSizeInBytes" : NumberLong(67108864)
}

1. sharding: A document with data about the sharded cluster. lastSeenConfigServerOpTime is only present on a mongos or a shard member, not on a config server node.
2. sharding.configsvrConnectionString: The connection string for the config servers.
3. sharding.lastSeenConfigServerOpTime: The latest optime of the CSRS primary as seen by the mongos or shard member. The optime document includes: ts, the Timestamp of the operation, and t, the term in which the operation was originally generated on the primary. lastSeenConfigServerOpTime is present only when the sharded cluster uses CSRS (config servers as a replica set).
4. sharding.maxChunkSizeInBytes: New in version 3.6. The maximum size limit of a chunk. If the chunk size was recently updated on the config servers, maxChunkSizeInBytes may not reflect the most recent value.
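
From a mongos, a quick sanity check is to convert maxChunkSizeInBytes to megabytes and turn the last seen config optime into a wall-clock date. A minimal sketch; it assumes the legacy shell exposes the seconds component of a Timestamp as .t:

var sh = db.serverStatus().sharding;
print("config servers     : " + sh.configsvrConnectionString);
print("max chunk size (MB): " + (sh.maxChunkSizeInBytes / (1024 * 1024)));
// .t is assumed to be the seconds component of the Timestamp value
print("last config optime : " + new Date(sh.lastSeenConfigServerOpTime.ts.t * 1000));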

shardingStatistics

shard01:PRIMARY> db.serverStatus().shardingStatistics
{
    "countStaleConfigErrors" : NumberLong(2),
    "countDonorMoveChunkStarted" : NumberLong(0),
    "totalDonorChunkCloneTimeMillis" : NumberLong(0),
    "totalCriticalSectionCommitTimeMillis" : NumberLong(0),
    "totalCriticalSectionTimeMillis" : NumberLong(0),
    "countDocsClonedOnRecipient" : NumberLong(0),
    "countDocsClonedOnDonor" : NumberLong(0),
    "countRecipientMoveChunkStarted" : NumberLong(0),
    "countDocsDeletedOnDonor" : NumberLong(0),
    "countDonorMoveChunkLockTimeout" : NumberLong(0),
    "countDonorMoveChunkAbortConflictingIndexOperation" : NumberLong(0),
    "unfinishedMigrationFromPreviousPrimary" : NumberLong(0),
    "catalogCache" : {
        "numDatabaseEntries" : NumberLong(2),
        "numCollectionEntries" : NumberLong(1),
        "countStaleConfigErrors" : NumberLong(0),
        "totalRefreshWaitTimeMicros" : NumberLong(1041596),
        "numActiveIncrementalRefreshes" : NumberLong(0),
        "countIncrementalRefreshesStarted" : NumberLong(152),
        "numActiveFullRefreshes" : NumberLong(0),
        "countFullRefreshesStarted" : NumberLong(1),
        "countFailedRefreshes" : NumberLong(0)
    },
    "rangeDeleterTasks" : 0
}

shardingStatistics: A document with metrics on metadata refreshes on a sharded cluster.
shardingStatistics.countStaleConfigErrors: The total number of times that threads hit a stale config exception. Since a stale config exception triggers a refresh of the metadata, this number is roughly proportional to the number of metadata refreshes. Only present on a running shard.
shardingStatistics.countDonorMoveChunkStarted: The total number of times that the moveChunk command has started on the shard (of which this node is a member) as part of the chunk migration process. This number increases whether or not the migration succeeds. Only present on a running shard.
shardingStatistics.totalDonorChunkCloneTimeMillis: The cumulative time, in milliseconds, taken by the clone phase of chunk migrations from this shard (of which this node is a member). Specifically, for each migration from this shard, the tracked time starts with the moveChunk command and ends before the destination shard enters the catch-up phase to apply changes that occurred during the migration. Only present on a running shard.
shardingStatistics.totalCriticalSectionCommitTimeMillis: The cumulative time, in milliseconds, spent in the update-metadata phase of chunk migrations from this shard. During the update-metadata phase, all operations on the collection are blocked. Only present on a running shard.
shardingStatistics.totalCriticalSectionTimeMillis: The cumulative time, in milliseconds, taken by the catch-up phase and the update-metadata phase of chunk migrations from this shard (of which this node is a member). To calculate the duration of the catch-up phase, subtract: totalCriticalSectionTimeMillis - totalCriticalSectionCommitTimeMillis. Only present on a running shard.
shardingStatistics.catalogCache: A document with statistics about the cluster's routing-information cache.
shardingStatistics.catalogCache.numDatabaseEntries: The total number of database entries currently in the catalog cache.
shardingStatistics.catalogCache.numCollectionEntries: The total number of collection entries (across all databases) currently in the catalog cache.
shardingStatistics.catalogCache.countStaleConfigErrors: The total number of times that threads hit a stale config exception. A stale config exception triggers a refresh of the metadata.
shardingStatistics.catalogCache.totalRefreshWaitTimeMicros: The cumulative time, in microseconds, that threads had to wait for a refresh of the metadata (a small averaging sketch follows this list).
shardingStatistics.catalogCache.numActiveIncrementalRefreshes: The number of incremental catalog cache refreshes currently waiting to complete.
shardingStatistics.catalogCache.countIncrementalRefreshesStarted: The cumulative number of incremental refreshes that have started.
shardingStatistics.catalogCache.numActiveFullRefreshes: The number of full catalog cache refreshes currently waiting to complete.
shardingStatistics.catalogCache.countFullRefreshesStarted: The cumulative number of full refreshes that have started.
shardingStatistics.catalogCache.countFailedRefreshes: The cumulative number of full or incremental refreshes that have failed.
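
As mentioned for totalRefreshWaitTimeMicros above, a rough figure for how costly routing-table refreshes have been can be derived by averaging the cumulative wait over all refreshes started. A minimal mongo shell sketch, to be run on a shard member or mongos that exposes these counters:

var cc = db.serverStatus().shardingStatistics.catalogCache;
var refreshes = cc.countIncrementalRefreshesStarted + cc.countFullRefreshesStarted;
if (refreshes > 0) {
    print("average refresh wait (us): " + (cc.totalRefreshWaitTimeMicros / refreshes));
}
print("failed refreshes         : " + cc.countFailedRefreshes);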

storageEngine

shard01:PRIMARY> db.serverStatus().storageEngine
{
    "name" : "wiredTiger",
    "supportsCommittedReads" : true,
    "oldestRequiredTimestampForCrashRecovery" : Timestamp(1608201974, 1),
    "supportsPendingDrops" : true,
    "dropPendingIdents" : NumberLong(0),
    "supportsTwoPhaseIndexBuild" : true,
    "supportsSnapshotReadConcern" : true,
    "readOnly" : false,
    "persistent" : true,
    "backupCursorOpen" : false
}
1. storageEngine: A document with data about the current storage engine.
2. storageEngine.name: The name of the current storage engine.
3. storageEngine.supportsCommittedReads: New in version 3.2. A boolean that indicates whether the storage engine supports "majority" read concern.
4. storageEngine.persistent: New in version 3.2.6. A boolean that indicates whether the storage engine persists data to disk.
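
Scripts that depend on "majority" read concern or on on-disk durability can check these flags before proceeding. A minimal mongo shell sketch:

var se = db.serverStatus().storageEngine;
if (!se.supportsCommittedReads) {
    print("engine " + se.name + " does not support 'majority' read concern");
}
if (!se.persistent) {
    print("engine " + se.name + " does not persist data to disk");
}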

transactions

shard01:PRIMARY> db.serverStatus().transactions
{
    "retriedCommandsCount" : NumberLong(0),
    "retriedStatementsCount" : NumberLong(0),
    "transactionsCollectionWriteCount" : NumberLong(0),
    "currentActive" : NumberLong(0),
    "currentInactive" : NumberLong(0),
    "currentOpen" : NumberLong(0),
    "totalAborted" : NumberLong(0),
    "totalCommitted" : NumberLong(0),
    "totalStarted" : NumberLong(0),
    "totalPrepared" : NumberLong(0),
    "totalPreparedThenCommitted" : NumberLong(0),
    "totalPreparedThenAborted" : NumberLong(0),
    "currentPrepared" : NumberLong(0)
}

1. transactions: A document with data about retryable writes and multi-document transactions.
2. transactions.retriedCommandsCount: The total number of retries received after the corresponding retryable write command had already been committed. That is, a retryable write is attempted again even though the write already succeeded and a record of the transaction and session exists in the config.transactions collection, for example because the initial write response was lost to the client.
Note: MongoDB does not re-execute committed writes.
The total includes all sessions. It does not include retryable writes that occur internally as part of chunk migrations. New in version 3.6.3.
3. transactions.retriedStatementsCount: The total number of write statements associated with the retried commands counted in transactions.retriedCommandsCount.
4. transactions.transactionsCollectionWriteCount: The total number of writes to the config.transactions collection, triggered when a new retryable write statement is committed.
5. For update and delete commands, since only single-document operations are retryable, there is one write per statement.
6. For insert operations, there is one write per batch of documents inserted, unless a failure causes each document to be inserted separately.
7. The total also includes writes to a server's config.transactions collection that occur as part of a migration. New in version 3.6.3.
8. transactions.currentActive: The total number of open transactions currently executing a command. New in version 4.0.2.
9. transactions.currentInactive: The total number of open transactions not currently executing a command. New in version 4.0.2.
10. transactions.currentOpen: The total number of open transactions. A transaction is opened when the first command runs as part of that transaction, and it stays open until the transaction either commits or aborts (a consistency sketch follows this list). New in version 4.0.2.
11. transactions.totalAborted: The total number of transactions aborted on this server since the mongod process last started. New in version 4.0.2.
12. transactions.totalCommitted: The total number of transactions committed on this server since the mongod process last started. New in version 4.0.2.
13. transactions.totalStarted: The total number of transactions started on this server since the mongod process last started. New in version 4.0.2.
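
The open-transaction counters fit together as currentOpen = currentActive + currentInactive (see item 10 above), and the lifetime totals give an abort ratio. A minimal mongo shell sketch; the comparison and division rely on the shell coercing NumberLong values to numbers:

var tx = db.serverStatus().transactions;
print("open == active + inactive: " + (tx.currentOpen == tx.currentActive + tx.currentInactive));
if (tx.totalStarted > 0) {
    print("abort ratio: " + (tx.totalAborted / tx.totalStarted));
}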

mem

shard01:PRIMARY> db.serverStatus().mem
{ "bits" : 64, "resident" : 24, "virtual" : 5496, "supported" : true }

mem: A document that reports on the system architecture of the mongod and its current memory use.
mem.bits: Either 64 or 32, indicating whether the MongoDB instance was compiled for a 64-bit or 32-bit architecture.
mem.resident: The value of mem.resident is roughly equivalent to the amount of RAM, in megabytes (MB), currently used by the database process. During normal use this value tends to grow. On a dedicated database server, this number tends to approach the total amount of system memory.
mem.virtual: mem.virtual displays the total amount of virtual memory, in megabytes (MB), used by the mongod process.
With journaling enabled and the MMAPv1 storage engine, the mem.virtual value is at least twice mem.mapped. If mem.virtual is significantly larger than mem.mapped (e.g. 3 or more times), this may indicate a memory leak.
mem.supported: A boolean that indicates whether the underlying system supports extended memory information. If false, meaning that the system does not support extended memory information, the database server may not be able to access other mem values.
mem.mapped: Only for the MMAPv1 storage engine. The amount of mapped memory, in megabytes (MB), used by the database. Because MongoDB uses memory-mapped files, this value is likely to be roughly equivalent to the total size of your database or databases.
mem.mappedWithJournal: Only for the MMAPv1 storage engine. The amount of mapped memory, in megabytes (MB), including the memory used for journaling. This value is always twice mem.mapped. This field is only included if journaling is enabled.
mem.note: The mem.note field appears if mem.supported is false, and displays the text: "not all mem info support on this platform".
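
A quick look at resident versus virtual memory (both reported in MB) is often the first memory check on an instance. A minimal mongo shell sketch:

var m = db.serverStatus().mem;
print("resident MB: " + m.resident + ", virtual MB: " + m.virtual);
if (m.supported === false) {
    print("extended memory information is not available on this platform");
}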

metrics

shard01:PRIMARY> db.serverStatus().metrics
{
    "commands" : {
        "aggregate" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(580982)
        },
        "buildInfo" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(8729344)
        },
        "collStats" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(30)
        },
        "connectionStatus" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(5)
        },
        "count" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(284679)
        },
        "create" : {
            "failed" : NumberLong(1),
            "total" : NumberLong(1)
        },
        "createIndexes" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(462633)
        },
        "createUser" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(1)
        },
        "dbStats" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(8)
        },
        "delete" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(87983)
        },
        "drop" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(7)
        },
        "endSessions" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(4360233)
        },
        "find" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(11631918)
        },
        "getCmdLineOpts" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(1)
        },
        "getFreeMonitoringStatus" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(1)
        },
        "getLastError" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(17677)
        },
        "getLog" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(1)
        },
        "getMore" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(17435278)
        },
        "insert" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(4724448)
        },
        "isMaster" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(16034722)
        },
        "killCursors" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(15)
        },
        "listCollections" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(164)
        },
        "listDatabases" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(22)
        },
        "listIndexes" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(67271)
        },
        "logout" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(20)
        },
        "ping" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(8504)
        },
        "replSetGetRBID" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(1)
        },
        "replSetGetStatus" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(8711552)
        },
        "replSetHeartbeat" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(5042177)
        },
        "replSetUpdatePosition" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(17123049)
        },
        "rolesInfo" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(2)
        },
        "saslContinue" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(9074388)
        },
        "saslStart" : {
            "failed" : NumberLong(8),
            "total" : NumberLong(4537203)
        },
        "serverStatus" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(4355749)
        },
        "update" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(19842123)
        },
        "usersInfo" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(1)
        },
        "whatsmyuri" : {
            "failed" : NumberLong(0),
            "total" : NumberLong(4355789)
        }
    },
    "cursor" : {
        "timedOut" : NumberLong(30),
        "open" : {
            "noTimeout" : NumberLong(0),
            "pinned" : NumberLong(1),
            "total" : NumberLong(1)
        }
    },
    "document" : {
        "deleted" : NumberLong("3425958759"),
        "inserted" : NumberLong("3432065606"),
        "returned" : NumberLong("7275879520"),
        "updated" : NumberLong(53083893)
    },
    "getLastError" : {
        "wtime" : {
            "num" : 94439,
            "totalMillis" : 464983
        },
        "wtimeouts" : NumberLong(0)
    },
    "operation" : {
        "scanAndOrder" : NumberLong(8336779),
        "writeConflicts" : NumberLong(160097)
    },
    "query" : {
        "planCacheTotalSizeEstimateBytes" : NumberLong(3135730),
        "updateOneOpStyleBroadcastWithExactIDCount" : NumberLong(0),
        "upsertReplacementCannotTargetByQueryCount" : NumberLong(0)
    },
    "queryExecutor" : {
        "scanned" : NumberLong("5825296338"),
        "scannedObjects" : NumberLong("12705570863")
    },
    "record" : {
        "moves" : NumberLong(0)
    },
    "repl" : {
        "executor" : {
            "pool" : {
                "inProgressCount" : 0
            },
            "queues" : {
                "networkInProgress" : 0,
                "sleepers" : 2
            },
            "unsignaledEvents" : 0,
            "shuttingDown" : false,
            "networkInterface" : "DEPRECATED: getDiagnosticString is deprecated in NetworkInterfaceTL"
        },
        "apply" : {
            "attemptsToBecomeSecondary" : NumberLong(1),
            "batchSize" : NumberLong(0),
            "batches" : {
                "num" : 0,
                "totalMillis" : 0
            },
            "ops" : NumberLong(0)
        },
        "buffer" : {
            "count" : NumberLong(0),
            "maxSizeBytes" : NumberLong(268435456),
            "sizeBytes" : NumberLong(0)
        },
        "initialSync" : {
            "completed" : NumberLong(0),
            "failedAttempts" : NumberLong(0),
            "failures" : NumberLong(0)
        },
        "network" : {
            "bytes" : NumberLong(0),
            "getmores" : {
                "num" : 0,
                "totalMillis" : 0
            },
            "ops" : NumberLong(0),
            "readersCreated" : NumberLong(0)
        },
        "preload" : {
            "docs" : {
                "num" : 0,
                "totalMillis" : 0
            },
            "indexes" : {
                "num" : 0,
                "totalMillis" : 0
            }
        }
    },
    "storage" : {
        "freelist" : {
            "search" : {
                "bucketExhausted" : NumberLong(0),
                "requests" : NumberLong(0),
                "scanned" : NumberLong(0)
            }
        }
    },
    "ttl" : {
        "deletedDocuments" : NumberLong(11648),
        "passes" : NumberLong(168171)
    }
}

metrics: A document that returns various statistics reflecting the current use and state of a running mongod instance.
metrics.commands: New in version 3.0. A document that reports on the use of database commands. The fields in metrics.commands are the names of database commands, and each value is a document that reports the total number of times the command was executed as well as the number of failed executions.
metrics.commands.<command>.failed: The number of times <command> failed on this mongod.
metrics.commands.<command>.total: The number of times <command> was executed on this mongod.
metrics.document: A document that reflects document access and modification patterns. Compare these values with the data in the opcounters document, which tracks total numbers of operations.
metrics.document.deleted: The total number of documents deleted.
metrics.document.inserted: The total number of documents inserted.
metrics.document.returned: The total number of documents returned by queries.
metrics.document.updated: The total number of documents updated.
metrics.executor: New in version 3.2. A document that reports various statistics for the replication executor.
metrics.getLastError: A document that reports on getLastError use.
metrics.getLastError.wtime: A document that reports counts of getLastError operations with a w argument greater than 1.
metrics.getLastError.wtime.num: The total number of getLastError operations with a specified write concern (i.e. w) that waited for one or more members of a replica set to acknowledge the write operation (i.e. a w value greater than 1).
metrics.getLastError.wtime.totalMillis: The total amount of time, in milliseconds, that the mongod has spent performing write operations with a specified write concern (i.e. w) that waited for one or more members of a replica set to acknowledge the write (i.e. a w value greater than 1).
metrics.getLastError.wtimeouts: The number of times that write concern operations have timed out as a result of the wtimeout threshold to getLastError.
metrics.operation: A document that holds counters for several types of update and query operations that MongoDB handles using special operation types.
metrics.operation.fastmod: Removed in 3.4. With the MMAPv1 storage engine, the number of update operations that neither cause documents to grow nor require updates to the index. For example, this counter records update operations that use the $inc operator to increment the value of a field that is not indexed.
metrics.operation.idhack: Removed in 3.4. The number of queries that contain the _id field. For these queries, MongoDB uses the default index on the _id field and skips all query plan analysis.
metrics.operation.scanAndOrder: The total number of queries that return sorted results but cannot perform the sort operation using an index.
metrics.operation.writeConflicts: The total number of queries that encountered write conflicts.
metrics.queryExecutor: A document that reports data from the query execution system.
metrics.queryExecutor.scanned: The total number of index items scanned during queries and query-plan evaluation. This counter is the same as totalKeysExamined in the output of explain().
metrics.queryExecutor.scannedObjects: The total number of documents scanned during queries and query-plan evaluation. This counter is the same as totalDocsExamined in the output of explain() (a ratio sketch follows at the end of this list).
metrics.record: A document that reports data related to record allocation in the on-disk storage files.
metrics.record.moves: For the MMAPv1 storage engine, metrics.record.moves reports the total number of times documents have moved within MongoDB's on-disk representation of the data set. Documents move because operations increase a document's size beyond its allocated record size.
metrics.repl: A document that reports metrics related to the replication process. The metrics.repl document appears on all mongod instances, including instances that are members of a replica set.
metrics.repl.apply: A document that reports on the application of operations from the replication oplog.
metrics.repl.apply.batchSize: New in version 4.0.6 (also available in 3.6.11+): The total number of oplog operations applied. metrics.repl.apply.batchSize is incremented with the number of operations in a batch at batch boundaries, rather than after each operation. For finer granularity, see metrics.repl.apply.ops.
metrics.repl.apply.batches: metrics.repl.apply.batches reports on the oplog application process on secondary members of a replica set. See Multithreaded Replication for more information on the oplog application process.
metrics.repl.apply.batches.num: The total number of batches applied across all databases.
metrics.repl.apply.batches.totalMillis
The total amount of time, in milliseconds, that the mongod has spent applying operations from the oplog.
metrics.repl.apply.ops: The total number of oplog operations applied. metrics.repl.apply.ops is incremented after each operation. See also metrics.repl.apply.batchSize.
metrics.repl.buffer: MongoDB buffers oplog operations from the replication sync source before applying the oplog entries in a batch. metrics.repl.buffer provides a way to track the oplog buffer. See Multithreaded Replication for more information on the oplog application process.
metrics.repl.buffer.count: The current number of operations in the oplog buffer.
metrics.repl.buffer.maxSizeBytes: The maximum size of the buffer. This value is a constant setting in the mongod and is not configurable.
metrics.repl.buffer.sizeBytes: The current size of the contents of the oplog buffer.
metrics.repl.network: metrics.repl.network reports network information for the replication process.
metrics.repl.network.bytes: metrics.repl.network.bytes reports the total amount of data read from the replication sync source.
metrics.repl.network.getmores: metrics.repl.network.getmores reports on getmore operations, which are requests for additional results from the oplog cursor as part of the oplog replication process.
metrics.repl.network.getmores.num: metrics.repl.network.getmores.num reports the total number of getmore operations, which are operations that request an additional set of operations from the replication sync source.
metrics.repl.network.getmores.totalMillis: Reports the total amount of time required to collect data from getmore operations. This number can be quite large, because MongoDB waits for more data even when the getmore operation does not initially return data.
metrics.repl.network.ops: metrics.repl.network.ops reports the total number of operations read from the replication source.
metrics.repl.network.readersCreated: metrics.repl.network.readersCreated reports the total number of oplog query processes created. MongoDB creates a new oplog query whenever an error occurs in the connection, including a timeout or a network operation. Furthermore, metrics.repl.network.readersCreated is incremented every time MongoDB selects a new replication source.
metrics.repl.preload: metrics.repl.preload reports on the "pre-fetch" stage, in which MongoDB loads documents and indexes into RAM to improve replication throughput. See Multithreaded Replication for details on the pre-fetch stage of the replication process.
metrics.repl.preload.docs: A document that reports on the documents loaded into memory during the pre-fetch stage.
metrics.repl.preload.docs.num: The total number of documents loaded during the pre-fetch stage of replication.
metrics.repl.preload.docs.totalMillis: The total amount of time spent loading documents as part of the pre-fetch stage of replication.
metrics.repl.preload.indexes: A document that reports on the index items loaded into memory during the pre-fetch stage of replication. See Multithreaded Replication for details on the pre-fetch stage of replication.
metrics.repl.preload.indexes.num: The total number of index entries loaded by members before updating documents as part of the pre-fetch stage of replication.
metrics.repl.preload.indexes.totalMillis: The total amount of time, in milliseconds, spent loading index entries as part of the pre-fetch stage of replication.
metrics.storage.freelist.search.bucketExhausted: The number of times that mongod has checked the free list without finding a suitably large record allocation.
metrics.storage.freelist.search.requests: The number of times mongod has searched for available record allocations.
metrics.storage.freelist.search.scanned: The number of available record allocations mongod has scanned.
metrics.ttl: A document that reports on the resource use of the ttl index process.
metrics.ttl.deletedDocuments: The total number of documents deleted from collections with a ttl index.
metrics.ttl.passes: The number of times the background process has removed documents from collections with a ttl index.
metrics.cursor: New in version 2.6. A document that contains data regarding cursor state and use.
metrics.cursor.timedOut: New in version 2.6. The total number of cursors that have timed out since the server process started. If this number is large or growing at a regular rate, it may indicate an application error.
metrics.cursor.open: New in version 2.6. A document that contains data regarding open cursors.
metrics.cursor.open.noTimeout: New in version 2.6. The number of open cursors with the option DBQuery.Option.noTimeout set to prevent timeout after a period of inactivity.
metrics.cursor.open.pinned: New in version 2.6. The number of "pinned" open cursors.
metrics.cursor.open.total: New in version 2.6. The number of cursors that MongoDB is maintaining for clients. Because MongoDB exhausts unused cursors, this value is typically small or zero. However, if there is a queue, stale tailable cursors, or a large number of operations, this value may rise.
metrics.cursor.open.singleTarget: New in version 3.0. The total number of cursors that target only a single shard. Only mongos instances report metrics.cursor.open.singleTarget values.
metrics.cursor.open.multiTarget: New in version 3.0. The total number of cursors that target more than one shard. Only mongos instances report metrics.cursor.open.multiTarget values.
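
As noted for metrics.queryExecutor.scannedObjects above, relating the examined counters to metrics.document.returned gives a rough "documents (or keys) examined per document returned" ratio, a common signal of how well queries are covered by indexes. A minimal mongo shell sketch; the division relies on the shell coercing NumberLong values to doubles, so treat the results as approximate:

var m = db.serverStatus().metrics;
print("docs examined per doc returned: " + (m.queryExecutor.scannedObjects / m.document.returned));
print("keys examined per doc returned: " + (m.queryExecutor.scanned / m.document.returned));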

wiredTiger

shard01:PRIMARY> db.serverStatus().wiredTiger
{
    "uri" : "statistics:",
    "async" : {
        "current work queue length" : 0,
        "maximum work queue length" : 0,
        "number of allocation state races" : 0,
        "number of flush calls" : 0,
        "number of operation slots viewed for allocation" : 0,
        "number of times operation allocation failed" : 0,
        "number of times worker found no work" : 0,
        "total allocations" : 0,
        "total compact calls" : 0,
        "total insert calls" : 0,
        "total remove calls" : 0,
        "total search calls" : 0,
        "total update calls" : 0
    },
    "block-manager" : {
        "blocks pre-loaded" : 67,
        "blocks read" : 4815,
        "blocks written" : 22114,
        "bytes read" : 19845120,
        "bytes read via memory map API" : 0,
        "bytes read via system call API" : 0,
        "bytes written" : 166707200,
        "bytes written for checkpoint" : 166703104,
        "bytes written via memory map API" : 0,
        "bytes written via system call API" : 0,
        "mapped blocks read" : 0,
        "mapped bytes read" : 0,
        "number of times the file was remapped because it changed size via fallocate or truncate" : 0,
        "number of times the region was remapped via write" : 0
    },
    "cache" : {
        "application threads page read from disk to cache count" : 30,
        "application threads page read from disk to cache time (usecs)" : 10875,
        "application threads page write from cache to disk count" : 11634,
        "application threads page write from cache to disk time (usecs)" : 2352569,
        "bytes allocated for updates" : 1584785,
        "bytes belonging to page images in the cache" : 663850,
        "bytes belonging to the history store table in the cache" : 2462,
        "bytes currently in the cache" : 2302335,
        "bytes dirty in the cache cumulative" : 1154502803,
        "bytes not belonging to page images in the cache" : 1638485,
        "bytes read into cache" : 615392,
        "bytes written from cache" : 187154920,
        "cache overflow score" : 0,
        "checkpoint blocked page eviction" : 0,
        "eviction calls to get a page" : 6958,
        "eviction calls to get a page found queue empty" : 5861,
        "eviction calls to get a page found queue empty after locking" : 19,
        "eviction currently operating in aggressive mode" : 0,
        "eviction empty score" : 0,
        "eviction passes of a file" : 0,
        "eviction server candidate queue empty when topping up" : 0,
        "eviction server candidate queue not empty when topping up" : 0,
        "eviction server evicting pages" : 0,
        "eviction server slept, because we did not make progress with eviction" : 1772,
        "eviction server unable to reach eviction goal" : 0,
        "eviction server waiting for a leaf page" : 1,
        "eviction state" : 64,
        "eviction walk target pages histogram - 0-9" : 0,
        "eviction walk target pages histogram - 10-31" : 0,
        "eviction walk target pages histogram - 128 and higher" : 0,
        "eviction walk target pages histogram - 32-63" : 0,
        "eviction walk target pages histogram - 64-128" : 0,
        "eviction walk target strategy both clean and dirty pages" : 0,
        "eviction walk target strategy only clean pages" : 0,
        "eviction walk target strategy only dirty pages" : 0,
        "eviction walks abandoned" : 0,
        "eviction walks gave up because they restarted their walk twice" : 0,
        "eviction walks gave up because they saw too many pages and found no candidates" : 0,
        "eviction walks gave up because they saw too many pages and found too few candidates" : 0,
        "eviction walks reached end of tree" : 0,
        "eviction walks started from root of tree" : 0,
        "eviction walks started from saved location in tree" : 0,
        "eviction worker thread active" : 4,
        "eviction worker thread created" : 0,
        "eviction worker thread evicting pages" : 1039,
        "eviction worker thread removed" : 0,
        "eviction worker thread stable number" : 0,
        "files with active eviction walks" : 0,
        "files with new eviction walks started" : 0,
        "force re-tuning of eviction workers once in a while" : 0,
        "forced eviction - history store pages failed to evict while session has history store cursor open" : 0,
        "forced eviction - history store pages selected while session has history store cursor open" : 0,
        "forced eviction - history store pages successfully evicted while session has history store cursor open" : 0,
        "forced eviction - pages evicted that were clean count" : 0,
        "forced eviction - pages evicted that were clean time (usecs)" : 0,
        "forced eviction - pages evicted that were dirty count" : 1,
        "forced eviction - pages evicted that were dirty time (usecs)" : 237,
        "forced eviction - pages selected because of too many deleted items count" : 5,
        "forced eviction - pages selected count" : 1,
        "forced eviction - pages selected unable to be evicted count" : 0,
        "forced eviction - pages selected unable to be evicted time" : 0,
        "forced eviction - session returned rollback error while force evicting due to being oldest" : 0,
        "hazard pointer blocked page eviction" : 2,
        "hazard pointer check calls" : 1040,
        "hazard pointer check entries walked" : 618,
        "hazard pointer maximum array length" : 1,
        "history store key truncation calls that returned restart" : 0,
        "history store key truncation due to mixed timestamps" : 0,
        "history store key truncation due to the key being removed from the data page" : 0,
        "history store score" : 0,
        "history store table insert calls" : 3,
        "history store table insert calls that returned restart" : 0,
        "history store table max on-disk size" : 0,
        "history store table on-disk size" : 36864,
        "history store table out-of-order resolved updates that lose their durable timestamp" : 0,
        "history store table out-of-order updates that were fixed up by moving existing records" : 0,
        "history store table out-of-order updates that were fixed up during insertion" : 0,
        "history store table reads" : 0,
        "history store table reads missed" : 0,
        "history store table reads requiring squashed modifies" : 0,
        "history store table remove calls due to key truncation" : 0,
        "history store table writes requiring squashed modifies" : 0,
        "in-memory page passed criteria to be split" : 0,
        "in-memory page splits" : 0,
        "internal pages evicted" : 0,
        "internal pages queued for eviction" : 0,
        "internal pages seen by eviction walk" : 0,
        "internal pages seen by eviction walk that are already queued" : 0,
        "internal pages split during eviction" : 0,
        "leaf pages split during eviction" : 0,
        "maximum bytes configured" : 1073741824,
        "maximum page size at eviction" : 376,
        "modified pages evicted" : 1038,
        "modified pages evicted by application threads" : 0,
        "operations timed out waiting for space in cache" : 0,
        "overflow pages read into cache" : 0,
        "page split during eviction deepened the tree" : 0,
        "page written requiring history store records" : 1168,
        "pages currently held in the cache" : 78,
        "pages evicted by application threads" : 0,
        "pages queued for eviction" : 0,
        "pages queued for eviction post lru sorting" : 0,
        "pages queued for urgent eviction" : 1040,
        "pages queued for urgent eviction during walk" : 0,
        "pages read into cache" : 74,
        "pages read into cache after truncate" : 1036,
        "pages read into cache after truncate in prepare state" : 0,
        "pages requested from the cache" : 2567797,
        "pages seen by eviction walk" : 0,
        "pages seen by eviction walk that are already queued" : 0,
        "pages selected for eviction unable to be evicted" : 2,
        "pages selected for eviction unable to be evicted as the parent page has overflow items" : 0,
        "pages selected for eviction unable to be evicted because of active children on an internal page" : 0,
        "pages selected for eviction unable to be evicted because of failure in reconciliation" : 0,
        "pages walked for eviction" : 0,
        "pages written from cache" : 11678,
        "pages written requiring in-memory restoration" : 1,
        "percentage overhead" : 8,
        "tracked bytes belonging to internal pages in the cache" : 24830,
        "tracked bytes belonging to leaf pages in the cache" : 2277505,
        "tracked dirty bytes in the cache" : 965,
        "tracked dirty pages in the cache" : 2,
        "unmodified pages evicted" : 0
    },
    "capacity" : {
        "background fsync file handles considered" : 0,
        "background fsync file handles synced" : 0,
        "background fsync time (msecs)" : 0,
        "bytes read" : 425984,
        "bytes written for checkpoint" : 92029428,
        "bytes written for eviction" : 77,
        "bytes written for log" : 825313536,
        "bytes written total" : 917343041,
        "threshold to call fsync" : 0,
        "time waiting due to total capacity (usecs)" : 0,
        "time waiting during checkpoint (usecs)" : 0,
        "time waiting during eviction (usecs)" : 0,
        "time waiting during logging (usecs)" : 0,
        "time waiting during read (usecs)" : 0
    },
    "checkpoint-cleanup" : {
        "pages added for eviction" : 1035,
        "pages removed" : 0,
        "pages skipped during tree walk" : 24192,
        "pages visited" : 34496
    },
    "connection" : {
        "auto adjusting condition resets" : 12656,
        "auto adjusting condition wait calls" : 435335,
        "auto adjusting condition wait raced to update timeout and skipped updating" : 0,
        "detected system time went backwards" : 0,
        "files currently open" : 47,
        "memory allocations" : 8880632,
        "memory frees" : 8864180,
        "memory re-allocations" : 1480705,
        "pthread mutex condition wait calls" : 1101371,
        "pthread mutex shared lock read-lock calls" : 5961333,
        "pthread mutex shared lock write-lock calls" : 271188,
        "total fsync I/Os" : 20536,
        "total read I/Os" : 6840,
        "total write I/Os" : 34031
    },
    "cursor" : {
        "Total number of entries skipped by cursor next calls" : 647,
        "Total number of entries skipped by cursor prev calls" : 97,
        "Total number of entries skipped to position the history store cursor" : 0,
        "cached cursor count" : 70,
        "cursor bulk loaded cursor insert calls" : 0,
        "cursor close calls that result in cache" : 2082208,
        "cursor create calls" : 208084,
        "cursor insert calls" : 15105,
        "cursor insert key and value bytes" : 7689438,
        "cursor modify calls" : 6929,
        "cursor modify key and value bytes affected" : 492073,
        "cursor modify value bytes modified" : 55543,
        "cursor next calls" : 135323,
        "cursor next calls that skip greater than or equal to 100 entries" : 0,
        "cursor next calls that skip less than 100 entries" : 134073,
        "cursor operation restarted" : 0,
        "cursor prev calls" : 653235,
        "cursor prev calls that skip due to a globally visible history store tombstone" : 0,
        "cursor prev calls that skip due to a globally visible history store tombstone in rollback to stable" : 0,
        "cursor prev calls that skip greater than or equal to 100 entries" : 0,
        "cursor prev calls that skip less than 100 entries" : 653235,
        "cursor remove calls" : 90,
        "cursor remove key bytes removed" : 2327,
        "cursor reserve calls" : 0,
        "cursor reset calls" : 5873856,
        "cursor search calls" : 1185916,
        "cursor search history store calls" : 0,
        "cursor search near calls" : 656174,
        "cursor sweep buckets" : 433926,
        "cursor sweep cursors closed" : 0,
        "cursor sweep cursors examined" : 14980,
        "cursor sweeps" : 72321,
        "cursor truncate calls" : 0,
        "cursor update calls" : 0,
        "cursor update key and value bytes" : 0,
        "cursor update value size change" : 0,
        "cursors reused from cache" : 2081876,
        "open cursor count" : 17
    },
    "data-handle" : {
        "connection data handle size" : 456,
        "connection data handles currently active" : 79,
            "connection sweep candidate became referenced" : 0,
        "connection sweep dhandles closed" : 0,
        "connection sweep dhandles removed from hash list" : 5729,
        "connection sweep time-of-death sets" : 23850,
        "connection sweeps" : 6917,
        "session dhandles swept" : 16015,
        "session sweep attempts" : 1246
    },
    "lock" : {
        "checkpoint lock acquisitions" : 1153,
        "checkpoint lock application thread wait time (usecs)" : 34,
        "checkpoint lock internal thread wait time (usecs)" : 0,
        "dhandle lock application thread time waiting (usecs)" : 0,
        "dhandle lock internal thread time waiting (usecs)" : 138,
        "dhandle read lock acquisitions" : 289023,
        "dhandle write lock acquisitions" : 11537,
        "durable timestamp queue lock application thread time waiting (usecs)" : 64,
        "durable timestamp queue lock internal thread time waiting (usecs)" : 0,
        "durable timestamp queue read lock acquisitions" : 2,
        "durable timestamp queue write lock acquisitions" : 6958,
        "metadata lock acquisitions" : 1153,
        "metadata lock application thread wait time (usecs)" : 100,
        "metadata lock internal thread wait time (usecs)" : 1,
        "read timestamp queue lock application thread time waiting (usecs)" : 0,
        "read timestamp queue lock internal thread time waiting (usecs)" : 0,
        "read timestamp queue read lock acquisitions" : 0,
        "read timestamp queue write lock acquisitions" : 1156,
        "schema lock acquisitions" : 1188,
        "schema lock application thread wait time (usecs)" : 10,
        "schema lock internal thread wait time (usecs)" : 7,
        "table lock application thread time waiting for the table lock (usecs)" : 26769,
        "table lock internal thread time waiting for the table lock (usecs)" : 0,
        "table read lock acquisitions" : 0,
        "table write lock acquisitions" : 138467,
        "txn global lock application thread time waiting (usecs)" : 45,
        "txn global lock internal thread time waiting (usecs)" : 106,
        "txn global read lock acquisitions" : 23561,
        "txn global write lock acquisitions" : 33856
    },
    "log" : {
        "busy returns attempting to switch slots" : 18,
        "force archive time sleeping (usecs)" : 0,
        "log bytes of payload data" : 3346940,
        "log bytes written" : 4892032,
        "log files manually zero-filled" : 0,
        "log flush operations" : 651105,
        "log force write operations" : 728222,
        "log force write operations skipped" : 718616,
        "log records compressed" : 1158,
        "log records not compressed" : 6969,
        "log records too small to compress" : 10370,
        "log release advances write LSN" : 1154,
        "log scan operations" : 4,
        "log scan records requiring two reads" : 3,
        "log server thread advances write LSN" : 9606,
        "log server thread write LSN walk skipped" : 163993,
        "log sync operations" : 10184,
        "log sync time duration (usecs)" : 83326246,
        "log sync_dir operations" : 1,
        "log sync_dir time duration (usecs)" : 6611,
        "log write operations" : 18497,
        "logging bytes consolidated" : 4891520,
        "maximum log file size" : 104857600,
        "number of pre-allocated log files to create" : 2,
        "pre-allocated log files not ready and missed" : 1,
        "pre-allocated log files prepared" : 2,
        "pre-allocated log files used" : 0,
        "records processed by log scan" : 14,
        "slot close lost race" : 0,
        "slot close unbuffered waits" : 0,
        "slot closures" : 10760,
        "slot join atomic update races" : 0,
        "slot join calls atomic updates raced" : 0,
        "slot join calls did not yield" : 18497,
        "slot join calls found active slot closed" : 0,
        "slot join calls slept" : 0,
        "slot join calls yielded" : 0,
        "slot join found active slot closed" : 0,
        "slot joins yield time (usecs)" : 0,
        "slot transitions unable to find free slot" : 0,
        "slot unbuffered writes" : 0,
        "total in-memory size of compressed records" : 6789223,
        "total log buffer size" : 33554432,
        "total size of compressed records" : 1751691,
        "written slots coalesced" : 0,
        "yields waiting for previous log file close" : 0
    },
    "perf" : {
        "file system read latency histogram (bucket 1) - 10-49ms" : 0,
        "file system read latency histogram (bucket 2) - 50-99ms" : 0,
        "file system read latency histogram (bucket 3) - 100-249ms" : 0,
        "file system read latency histogram (bucket 4) - 250-499ms" : 0,
        "file system read latency histogram (bucket 5) - 500-999ms" : 0,
        "file system read latency histogram (bucket 6) - 1000ms+" : 0,
        "file system write latency histogram (bucket 1) - 10-49ms" : 9,
        "file system write latency histogram (bucket 2) - 50-99ms" : 0,
        "file system write latency histogram (bucket 3) - 100-249ms" : 0,
        "file system write latency histogram (bucket 4) - 250-499ms" : 0,
        "file system write latency histogram (bucket 5) - 500-999ms" : 0,
        "file system write latency histogram (bucket 6) - 1000ms+" : 0,
        "operation read latency histogram (bucket 1) - 100-249us" : 983,
        "operation read latency histogram (bucket 2) - 250-499us" : 146,
        "operation read latency histogram (bucket 3) - 500-999us" : 40,
        "operation read latency histogram (bucket 4) - 1000-9999us" : 89,
        "operation read latency histogram (bucket 5) - 10000us+" : 0,
        "operation write latency histogram (bucket 1) - 100-249us" : 51,
        "operation write latency histogram (bucket 2) - 250-499us" : 9,
        "operation write latency histogram (bucket 3) - 500-999us" : 3,
        "operation write latency histogram (bucket 4) - 1000-9999us" : 1,
        "operation write latency histogram (bucket 5) - 10000us+" : 0
    },
    "reconciliation" : {
        "approximate byte size of timestamps in pages written" : 5760,
        "approximate byte size of transaction IDs in pages written" : 55856,
        "fast-path pages deleted" : 0,
        "maximum seconds spent in a reconciliation call" : 0,
        "page reconciliation calls" : 14773,
        "page reconciliation calls for eviction" : 1038,
        "page reconciliation calls that resulted in values with prepared transaction metadata" : 0,
        "page reconciliation calls that resulted in values with timestamps" : 214,
        "page reconciliation calls that resulted in values with transaction ids" : 3462,
        "pages deleted" : 3107,
        "pages written including an aggregated newest start durable timestamp " : 1185,
        "pages written including an aggregated newest stop durable timestamp " : 25,
        "pages written including an aggregated newest stop timestamp " : 9,
        "pages written including an aggregated newest stop transaction ID" : 9,
        "pages written including an aggregated oldest start timestamp " : 13,
        "pages written including an aggregated oldest start transaction ID " : 7,
        "pages written including an aggregated prepare" : 0,
        "pages written including at least one prepare state" : 0,
        "pages written including at least one start durable timestamp" : 216,
        "pages written including at least one start timestamp" : 216,
        "pages written including at least one start transaction ID" : 3464,
        "pages written including at least one stop durable timestamp" : 24,
        "pages written including at least one stop timestamp" : 24,
        "pages written including at least one stop transaction ID" : 24,
        "records written including a prepare state" : 0,
        "records written including a start durable timestamp" : 296,
        "records written including a start timestamp" : 296,
        "records written including a start transaction ID" : 6918,
        "records written including a stop durable timestamp" : 64,
        "records written including a stop timestamp" : 64,
        "records written including a stop transaction ID" : 64,
        "split bytes currently awaiting free" : 0,
        "split objects currently awaiting free" : 0
    },
    "session" : {
        "open session count" : 17,
        "session query timestamp calls" : 4,
        "table alter failed calls" : 0,
        "table alter successful calls" : 0,
        "table alter unchanged and skipped" : 0,
        "table compact failed calls" : 0,
        "table compact successful calls" : 0,
        "table create failed calls" : 0,
        "table create successful calls" : 1,
        "table drop failed calls" : 0,
        "table drop successful calls" : 0,
        "table import failed calls" : 0,
        "table import successful calls" : 0,
        "table rebalance failed calls" : 0,
        "table rebalance successful calls" : 0,
        "table rename failed calls" : 0,
        "table rename successful calls" : 0,
        "table salvage failed calls" : 0,
        "table salvage successful calls" : 0,
        "table truncate failed calls" : 0,
        "table truncate successful calls" : 0,
        "table verify failed calls" : 0,
        "table verify successful calls" : 0
    },
    "thread-state" : {
        "active filesystem fsync calls" : 0,
        "active filesystem read calls" : 0,
        "active filesystem write calls" : 0
    },
    "thread-yield" : {
        "application thread time evicting (usecs)" : 0,
        "application thread time waiting for cache (usecs)" : 0,
        "connection close blocked waiting for transaction state stabilization" : 0,
        "connection close yielded for lsm manager shutdown" : 0,
        "data handle lock yielded" : 0,
        "get reference for page index and slot time sleeping (usecs)" : 0,
        "log server sync yielded for log write" : 0,
        "page access yielded due to prepare state change" : 0,
        "page acquire busy blocked" : 0,
        "page acquire eviction blocked" : 0,
        "page acquire locked blocked" : 0,
        "page acquire read blocked" : 0,
        "page acquire time sleeping (usecs)" : 0,
        "page delete rollback time sleeping for state change (usecs)" : 0,
        "page reconciliation yielded due to child modification" : 0
    },
    "transaction" : {
        "Number of prepared updates" : 0,
        "durable timestamp queue entries walked" : 1466,
        "durable timestamp queue insert to empty" : 5492,
        "durable timestamp queue inserts to head" : 1466,
        "durable timestamp queue inserts total" : 6958,
        "durable timestamp queue length" : 1,
        "prepared transactions" : 0,
        "prepared transactions committed" : 0,
        "prepared transactions currently active" : 0,
        "prepared transactions rolled back" : 0,
        "query timestamp calls" : 802904,
        "read timestamp queue entries walked" : 676,
        "read timestamp queue insert to empty" : 480,
        "read timestamp queue inserts to head" : 676,
        "read timestamp queue inserts total" : 1156,
        "read timestamp queue length" : 1,
        "rollback to stable calls" : 0,
        "rollback to stable hs records with stop timestamps older than newer records" : 0,
        "rollback to stable keys removed" : 0,
        "rollback to stable keys restored" : 0,
        "rollback to stable pages visited" : 0,
        "rollback to stable restored tombstones from history store" : 0,
        "rollback to stable sweeping history store keys" : 0,
        "rollback to stable tree walk skipping pages" : 0,
        "rollback to stable updates aborted" : 0,
        "rollback to stable updates removed from history store" : 0,
        "set timestamp calls" : 13866,
        "set timestamp durable calls" : 0,
        "set timestamp durable updates" : 0,
        "set timestamp oldest calls" : 6933,
        "set timestamp oldest updates" : 6933,
        "set timestamp stable calls" : 6933,
        "set timestamp stable updates" : 6932,
        "transaction begins" : 1510630,
        "transaction checkpoint currently running" : 0,
        "transaction checkpoint generation" : 1154,
        "transaction checkpoint history store file duration (usecs)" : 81,
        "transaction checkpoint max time (msecs)" : 291,
        "transaction checkpoint min time (msecs)" : 37,
        "transaction checkpoint most recent time (msecs)" : 75,
        "transaction checkpoint prepare currently running" : 0,
        "transaction checkpoint prepare max time (msecs)" : 11,
        "transaction checkpoint prepare min time (msecs)" : 1,
        "transaction checkpoint prepare most recent time (msecs)" : 3,
        "transaction checkpoint prepare total time (msecs)" : 2887,
        "transaction checkpoint scrub dirty target" : 0,
        "transaction checkpoint scrub time (msecs)" : 0,
        "transaction checkpoint total time (msecs)" : 82776,
        "transaction checkpoints" : 1153,
        "transaction checkpoints skipped because database was clean" : 0,
        "transaction failures due to history store" : 0,
        "transaction fsync calls for checkpoint after allocating the transaction ID" : 1153,
        "transaction fsync duration for checkpoint after allocating the transaction ID (usecs)" : 25206,
        "transaction range of IDs currently pinned" : 0,
        "transaction range of IDs currently pinned by a checkpoint" : 0,
        "transaction range of timestamps currently pinned" : 21474836480,
        "transaction range of timestamps pinned by a checkpoint" : NumberLong("6907410767591505921"),
        "transaction range of timestamps pinned by the oldest active read timestamp" : 0,
        "transaction range of timestamps pinned by the oldest timestamp" : 21474836480,
        "transaction read timestamp of the oldest active reader" : 0,
        "transaction sync calls" : 0,
        "transactions committed" : 15040,
        "transactions rolled back" : 1496352,
        "update conflicts" : 0
    },
    "concurrentTransactions" : {
        "write" : {
            "out" : 0,
            "available" : 128,
            "totalTickets" : 128
        },
        "read" : {
            "out" : 1,
            "available" : 127,
            "totalTickets" : 128
        }
    },
    "snapshot-window-settings" : {
        "cache pressure percentage threshold" : 95,
        "current cache pressure percentage" : NumberLong(0),
        "total number of SnapshotTooOld errors" : NumberLong(0),
        "max target available snapshots window size in seconds" : 5,
        "target available snapshots window size in seconds" : 5,
        "current available snapshots window size in seconds" : 5,
        "latest majority snapshot timestamp available" : "Dec 18 10:01:35:1",
        "oldest majority snapshot timestamp available" : "Dec 18 10:01:30:1"
    },
    "oplog" : {
        "visibility timestamp" : Timestamp(1608256895, 1)
    }
}


1. wiredTiger.uri: New in version 3.0. A string. For internal use by MongoDB.
2. wiredTiger.LSM: New in version 3.0. A document that returns statistics on LSM (Log-Structured Merge) trees. These values reflect statistics for all LSM trees used by this server.
3. wiredTiger.async: New in version 3.0. A document that returns statistics related to the asynchronous operations API. This is unused by MongoDB.
4. wiredTiger.block-manager: New in version 3.0. A document that returns statistics on block-manager operations.
5. wiredTiger.cache: New in version 3.0: a document that returns statistics on the cache and on page evictions from the cache.
Some of the key wiredTiger.cache statistics are described below:
6. wiredTiger.cache.maximum bytes configured: The maximum cache size.
7. wiredTiger.cache.bytes currently in the cache: The size in bytes of the data currently in the cache. This value should not be larger than maximum bytes configured.
8. wiredTiger.cache.unmodified pages evicted: The main statistic for page eviction.
9. wiredTiger.cache.tracked dirty bytes in the cache: The size, in bytes, of dirty data in the cache. This value should be smaller than bytes currently in the cache (a cache-usage sketch follows this list).
10. wiredTiger.cache.pages read into cache: The number of pages read into the cache. wiredTiger.cache.pages read into cache together with wiredTiger.cache.pages written from cache can provide information on I/O.
11. wiredTiger.cache.pages written from cache: The number of pages written from the cache. wiredTiger.cache.pages written from cache together with wiredTiger.cache.pages read into cache can provide information on I/O. To adjust the size of the WiredTiger internal cache, see storage.wiredTiger.engineConfig.cacheSizeGB and --wiredTigerCacheSizeGB. Avoid increasing the WiredTiger internal cache size above its default value.
12. wiredTiger.connection: New in version 3.0. A document that returns statistics related to WiredTiger connections.
13. wiredTiger.cursor: New in version 3.0. A document that returns statistics on WiredTiger cursors.
14. wiredTiger.data-handle: New in version 3.0. A document that returns statistics on data handles and sweeps.
15. wiredTiger.log: New in version 3.0. A document that returns statistics on WiredTiger's write-ahead log. See also: Journaling and the WiredTiger Storage Engine.
16. wiredTiger.reconciliation: New in version 3.0. A document that returns statistics on the reconciliation process.
17. wiredTiger.session: New in version 3.0. A document that returns the open cursor count and open session count for the session.
18. wiredTiger.thread-yield: New in version 3.0. A document with statistics on page-acquisition yields.
19. wiredTiger.transaction: New in version 3.0. A document that returns statistics on transaction checkpoints and operations.
20. wiredTiger.transaction.transaction checkpoint most recent time: The amount of time, in milliseconds, taken to create the most recent checkpoint. An increase in this value under a stable write load may indicate that the I/O system is saturated.
21. wiredTiger.concurrentTransactions: New in version 3.0. A document that returns information on the number of concurrent read and write transactions allowed into the WiredTiger storage engine. These settings are MongoDB-specific. To change the settings for concurrent read and write transactions, see wiredTigerConcurrentReadTransactions and wiredTigerConcurrentWriteTransactions.
22. writeBacksQueued: A boolean that indicates whether there are operations from a mongos instance queued for retrying. Typically, this value is false. See also writeBacks.
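
As noted for the cache statistics above, the fill and dirty ratios of the WiredTiger cache, together with the concurrentTransactions tickets, are the values most often watched from this section. A minimal mongo shell sketch (the field names are the space-separated keys shown in the output above):

var wt    = db.serverStatus().wiredTiger;
var cache = wt.cache;
var used  = cache["bytes currently in the cache"];
var max   = cache["maximum bytes configured"];
var dirty = cache["tracked dirty bytes in the cache"];
print("cache used : " + (100 * used / max).toFixed(1) + "% of configured maximum");
print("cache dirty: " + (100 * dirty / max).toFixed(1) + "% of configured maximum");
var ct = wt.concurrentTransactions;
print("read tickets available : " + ct.read.available + "/" + ct.read.totalTickets);
print("write tickets available: " + ct.write.available + "/" + ct.write.totalTickets);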