Installation:
1. Prepare a Python 3 environment (with pip3)
Details omitted here; the quickest route is to first set up the IUS repository from step 3 below, then run:
yum install python3
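Once installed, a quick sanity check (a sketch; the exact version strings will vary by system):

```shell
# Both should resolve to the Python 3 that was just installed
python3 --version
python3 -m pip --version
```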
2. Install Java. The command below installs OpenJDK 8; it later turned out that some commands need a version-14 JDK, so prefer JDK 14 if it is available:
yum install java-1.8.0-openjdk-devel
Set the environment variables:
vim /etc/profile
# add the following lines
export JAVA_HOME=/usr/lib/jvm/java
export JRE_HOME=$JAVA_HOME/jre
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
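After saving, reload the profile so the variables take effect in the current shell (a quick sketch; the echoed path depends on where your JDK actually landed):

```shell
# Apply /etc/profile to the running shell and confirm the variable is set
source /etc/profile
echo "JAVA_HOME=$JAVA_HOME"
```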
3. Install git
Here we install git via the IUS repository, which is a bit quicker:
[root@hlet-prod-elastic-01 ycsb-0.17.0]# curl https://setup.ius.io | sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 604 100 604 0 0 73 0 0:00:08 0:00:08 --:--:-- 163
Loaded plugins: fastestmirror
Repository epel is listed more than once in the configuration
Repository epel-debuginfo is listed more than once in the configuration
Repository epel-source is listed more than once in the configuration
epel-release-latest-7.noarch.rpm | 15 kB 00:00:00
Examining /var/tmp/yum-root-lNV64Y/epel-release-latest-7.noarch.rpm: epel-release-7-12.noarch
Marking /var/tmp/yum-root-lNV64Y/epel-release-latest-7.noarch.rpm as an update to epel-release-7-11.noarch
ius-release-el7.rpm | 8.2 kB 00:00:00
Examining /var/tmp/yum-root-lNV64Y/ius-release-el7.rpm: ius-release-2-1.el7.ius.noarch
Marking /var/tmp/yum-root-lNV64Y/ius-release-el7.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package epel-release.noarch 0:7-11 will be updated
---> Package epel-release.noarch 0:7-12 will be an update
---> Package ius-release.noarch 0:2-1.el7.ius will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=======================================================================================================================================================
Package                  Arch                  Version                  Repository                              Size
=======================================================================================================================================================
Installing:
ius-release noarch 2-1.el7.ius /ius-release-el7 4.5 k
Updating:
epel-release noarch 7-12 /epel-release-latest-7.noarch 24 k
Transaction Summary
=======================================================================================================================================================
Install  1 Package
Upgrade  1 Package
Total size: 29 k
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating   : epel-release-7-12.noarch                1/3
warning: /etc/yum.repos.d/epel.repo created as /etc/yum.repos.d/epel.repo.rpmnew
Installing : ius-release-2-1.el7.ius.noarch          2/3
Cleanup    : epel-release-7-11.noarch                3/3
Verifying  : ius-release-2-1.el7.ius.noarch          1/3
Verifying  : epel-release-7-12.noarch                2/3
Verifying  : epel-release-7-11.noarch                3/3
Installed:
ius-release.noarch 0:2-1.el7.ius
Updated:
epel-release.noarch 0:7-12
Complete!
Try searching for git2:
[root@hlet-prod-elastic-01 ycsb-0.17.0]# yum search git2
Loaded plugins: fastestmirror
Repository epel is listed more than once in the configuration
Repository epel-debuginfo is listed more than once in the configuration
Repository epel-source is listed more than once in the configuration
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
================================================================== N/S matched: git2 ==================================================================
libgit2-devel.x86_64 : Development files for libgit2
libgit2-glib-devel.x86_64 : Development files for libgit2-glib
python-pygit2.x86_64 : Python bindings for libgit2
python-pygit2-doc.noarch : Documentation for python-pygit2
git222.x86_64 : Fast Version Control System
git222-all.noarch : Meta-package to pull in all git tools
git222-core.x86_64 : Core package of git with minimal functionality
git222-core-doc.noarch : Documentation files for git-core
git222-cvs.noarch : Git tools for importing CVS repositories
git222-daemon.x86_64 : Git protocol daemon
git222-email.noarch : Git tools for sending patches via email
git222-gitk.noarch : Git repository browser
git222-gitweb.noarch : Simple web interface to git repositories
git222-gui.noarch : Graphical interface to Git
git222-instaweb.noarch : Repository browser in gitweb
git222-p4.noarch : Git tools for working with Perforce depots
git222-perl-Git.noarch : Perl interface to Git
git222-perl-Git-SVN.noarch : Perl interface to Git::SVN
git222-subtree.x86_64 : Git tools to merge and split repositories
git222-svn.noarch : Git tools for interacting with Subversion repositories
git224.x86_64 : Fast Version Control System
git224-all.noarch : Meta-package to pull in all git tools
git224-core.x86_64 : Core package of git with minimal functionality
git224-core-doc.noarch : Documentation files for git-core
git224-cvs.noarch : Git tools for importing CVS repositories
git224-daemon.x86_64 : Git protocol daemon
git224-email.noarch : Git tools for sending patches via email
git224-gitk.noarch : Git repository browser
git224-gitweb.noarch : Simple web interface to git repositories
git224-gui.noarch : Graphical interface to Git
git224-instaweb.noarch : Repository browser in gitweb
git224-p4.noarch : Git tools for working with Perforce depots
git224-perl-Git.noarch : Perl interface to Git
git224-perl-Git-SVN.noarch : Perl interface to Git::SVN
git224-subtree.x86_64 : Git tools to merge and split repositories
git224-svn.noarch : Git tools for interacting with Subversion repositories
git2cl.noarch : Converts git logs to GNU style ChangeLog format
libgit2.x86_64 : C implementation of the Git core methods as a library with a solid API
libgit2-glib.x86_64 : Git library for GLib
Name and summary matches only, use "search all" for everything.
Remove the old git and install the git224 packages:
yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel asciidoc
yum remove git
yum install git224
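git224 installs as the regular `git` binary, so a quick check confirms the swap worked (a sketch; on this box it should report a 2.24.x build from IUS):

```shell
# Confirm which git is now on PATH
git --version
```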
Install esrally with pip:
pip3 install esrally -i https://mirrors.aliyun.com/pypi/simple/
After installation, a .rally folder is created under /root. We need to edit rally.ini to change where the benchmark data is stored:
[root@hlet-test-other ~]# pwd
/root
[root@hlet-test-other ~]# tree .rally -L 2
.rally
├── benchmarks
│ ├── data
│ ├── races
│ ├── teams
│ └── tracks
├── logging.json
├── logs
│ └── rally.log
└── rally.ini
6 directories, 3 files
Edit the .rally/rally.ini file and point the benchmark data at the new location:
[meta]
config.version = 17
[system]
env.name = local
[node]
root.dir = /root/.rally/benchmarks
src.root.dir = /root/.rally/benchmarks/src
[source]
remote.repo.url = https://github.com/elastic/elasticsearch.git
elasticsearch.src.subdir = elasticsearch
[benchmarks]
#local.dataset.cache = /root/.rally/benchmarks/data
local.dataset.cache = /home/rally/benchmarks/data # redirected to /home/rally here
[reporting]
datastore.type = in-memory
datastore.host =
datastore.port =
datastore.secure = False
datastore.user =
datastore.password =
[tracks]
default.url = https://github.com/elastic/rally-tracks
[teams]
default.url = https://github.com/elastic/rally-teams
[defaults]
preserve_benchmark_candidate = False
[distributions]
release.cache = true
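One caveat when relocating local.dataset.cache as above: create the target directory before the first run so Rally can download into it (a minimal sketch using the path chosen above):

```shell
# Create the relocated dataset cache ahead of the first download
mkdir -p /home/rally/benchmarks/data
```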
Usage:
Set a proxy to download the benchmark tracks (10.20.192.51 is my own machine; port 1082 is the proxy it exposes):
export http_proxy=http://10.20.192.51:1082;export https_proxy=http://10.20.192.51:1082;
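The proxy is only needed while Rally fetches tracks and data; once they are cached locally, it can be dropped so traffic to the benchmarked cluster stays direct (sketch):

```shell
# Remove the proxy settings once the downloads are done
unset http_proxy https_proxy
```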
List the available tracks:
[root@hlet-test-other rally-tracks]# esrally list tracks
____ ____
/ __ \____ _/ / /_ __
/ /_/ / __ `/ / / / / /
/ _, _/ /_/ / / / /_/ /
/_/ |_|\__,_/_/_/\__, /
/____/
Available tracks:
Name Description Documents Compressed Size Uncompressed Size Default Challenge All Challenges
------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------- ----------------- ------------------- ----------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------
eventdata This benchmark indexes HTTP access logs generated based sample logs from the elastic.co website using the generator available in https://github.com/elastic/rally-eventdata-track 20,000,000 756.0 MB 15.3 GB append-no-conflicts append-no-conflicts
geonames POIs from Geonames 11,396,503 252.9 MB 3.3 GB append-no-conflicts append-no-conflicts,append-no-conflicts-index-only,append-sorted-no-conflicts,append-fast-with-conflicts
geopoint Point coordinates from PlanetOSM 60,844,404 482.1 MB 2.3 GB append-no-conflicts append-no-conflicts,append-no-conflicts-index-only,append-fast-with-conflicts
geopointshape Point coordinates from PlanetOSM indexed as geoshapes 60,844,404 470.8 MB 2.6 GB append-no-conflicts append-no-conflicts,append-no-conflicts-index-only,append-fast-with-conflicts
geoshape Shapes from PlanetOSM 60,523,283 13.4 GB 45.4 GB append-no-conflicts append-no-conflicts
http_logs HTTP server log data 247,249,096 1.2 GB 31.1 GB append-no-conflicts append-no-conflicts,append-no-conflicts-index-only,append-sorted-no-conflicts,append-index-only-with-ingest-pipeline,update,append-no-conflicts-index-reindex-only
metricbeat Metricbeat data 1,079,600 87.7 MB 1.2 GB append-no-conflicts append-no-conflicts
nested StackOverflow Q&A stored as nested docs 11,203,029 663.3 MB 3.4 GB nested-search-challenge nested-search-challenge,index-only
noaa Global daily weather measurements from NOAA 33,659,481 949.4 MB 9.0 GB append-no-conflicts append-no-conflicts,append-no-conflicts-index-only,top_metrics,sub-bucket-aggs
nyc_taxis Taxi rides in New York in 2015 165,346,692 4.5 GB 74.3 GB append-no-conflicts append-no-conflicts,append-no-conflicts-index-only,append-sorted-no-conflicts-index-only,update,append-ml,date-histogram
percolator Percolator benchmark based on AOL queries 2,000,000 121.1 kB 104.9 MB append-no-conflicts append-no-conflicts
pmc Full text benchmark with academic papers from PMC 574,199 5.5 GB 21.7 GB append-no-conflicts append-no-conflicts,append-no-conflicts-index-only,append-sorted-no-conflicts,append-fast-with-conflicts
so Indexing benchmark using up to questions and answers from StackOverflow 36,062,278 8.9 GB 33.1 GB append-no-conflicts append-no-conflicts
--------------------------------
[INFO] SUCCESS (took 14 seconds)
--------------------------------
[root@hlet-test-other rally-tracks]# esrally list pipeline
usage: esrally list [-h] [--limit LIMIT] [--distribution-version DISTRIBUTION_VERSION] [--runtime-jdk RUNTIME_JDK]
[--track-repository TRACK_REPOSITORY | --track-path TRACK_PATH] [--track-revision TRACK_REVISION]
[--team-repository TEAM_REPOSITORY] [--team-revision TEAM_REVISION] [--offline] [--quiet]
configuration
esrally list: error: argument configuration: invalid choice: 'pipeline' (choose from 'telemetry', 'tracks', 'pipelines', 'races', 'cars', 'elasticsearch-plugins')
[root@hlet-test-other rally-tracks]# esrally list pipelines
____ ____
/ __ \____ _/ / /_ __
/ /_/ / __ `/ / / / / /
/ _, _/ /_/ / / / /_/ /
/_/ |_|\__,_/_/_/\__, /
/____/
Available pipelines:
Name Description
----------------------- ---------------------------------------------------------------------------------------------
from-sources-complete Builds and provisions Elasticsearch, runs a benchmark and reports results.
from-sources-skip-build Provisions Elasticsearch (skips the build), runs a benchmark and reports results.
from-distribution Downloads an Elasticsearch distribution, provisions it, runs a benchmark and reports results.
benchmark-only Assumes an already running Elasticsearch instance, runs a benchmark and reports results
-------------------------------
[INFO] SUCCESS (took 1 seconds)
-------------------------------
Run a benchmark:
[root@hlet-test-other rally-tracks]# /usr/local/bin/esrally --target-hosts=10.1.99.101:9200,10.1.99.102:9200,10.1.99.103:9200 --pipeline=benchmark-only --track=eventdata --challenge append-no-conflicts
____ ____
/ __ \____ _/ / /_ __
/ /_/ / __ `/ / / / / /
/ _, _/ /_/ / / / /_/ /
/_/ |_|\__,_/_/_/\__, /
/____/
************************************************************************
************** WARNING: A dark dungeon lies ahead of you **************
************************************************************************
Rally does not have control over the configuration of the benchmarked
Elasticsearch cluster.
Be aware that results may be misleading due to problems with the setup.
Rally is also not able to gather lots of metrics at all (like CPU usage
of the benchmarked cluster) or may even produce misleading metrics (like
the index size).
************************************************************************
****** Use this pipeline only if you are aware of the tradeoffs. ******
*************************** Watch your step! ***************************
************************************************************************
[INFO] Racing on track [eventdata], challenge [append-no-conflicts] and car ['external'] with version [7.8.0].
[WARNING] merges_total_time is 719577 ms indicating that the cluster is not in a defined clean state. Recorded index time metrics may be misleading.
[WARNING] merges_total_throttled_time is 270080 ms indicating that the cluster is not in a defined clean state. Recorded index time metrics may be misleading.
[WARNING] indexing_total_time is 3993082 ms indicating that the cluster is not in a defined clean state. Recorded index time metrics may be misleading.
[WARNING] refresh_total_time is 171012 ms indicating that the cluster is not in a defined clean state. Recorded index time metrics may be misleading.
[WARNING] flush_total_time is 162828 ms indicating that the cluster is not in a defined clean state. Recorded index time metrics may be misleading.
Running delete-index [100% done]
Running create-index [100% done]
Running check-cluster-health [100% done]
Running index-append [100% done]
Running force-merge [100% done]
Running wait-until-merges-finish [100% done]
------------------------------------------------------
_______ __ _____
/ ____(_)___ ____ _/ / / ___/_________ ________
/ /_ / / __ \/ __ `/ / \__ \/ ___/ __ \/ ___/ _ \
/ __/ / / / / / /_/ / / ___/ / /__/ /_/ / / / __/
/_/ /_/_/ /_/\__,_/_/ /____/\___/\____/_/ \___/
------------------------------------------------------
| Metric | Task | Value | Unit |
|---------------------------------------------------------------:|-------------:|----------:|-------:|
| Cumulative indexing time of primary shards | | 66.6795 | min |
| Min cumulative indexing time across primary shards | | 0 | min |
| Median cumulative indexing time across primary shards | | 3.4653 | min |
| Max cumulative indexing time across primary shards | | 9.45613 | min |
| Cumulative indexing throttle time of primary shards | | 0 | min |
| Min cumulative indexing throttle time across primary shards | | 0 | min |
| Median cumulative indexing throttle time across primary shards | | 0 | min |
| Max cumulative indexing throttle time across primary shards | | 0 | min |
| Cumulative merge time of primary shards | | 11.7906 | min |
| Cumulative merge count of primary shards | | 412 | |
| Min cumulative merge time across primary shards | | 0 | min |
| Median cumulative merge time across primary shards | | 0.557333 | min |
| Max cumulative merge time across primary shards | | 1.65365 | min |
| Cumulative merge throttle time of primary shards | | 4.3383 | min |
| Min cumulative merge throttle time across primary shards | | 0 | min |
| Median cumulative merge throttle time across primary shards | | 0.0624667 | min |
| Max cumulative merge throttle time across primary shards | | 0.84955 | min |
| Cumulative refresh time of primary shards | | 2.8777 | min |
| Cumulative refresh count of primary shards | | 3117 | |
| Min cumulative refresh time across primary shards | | 0 | min |
| Median cumulative refresh time across primary shards | | 0.0499667 | min |
| Max cumulative refresh time across primary shards | | 0.8428 | min |
| Cumulative flush time of primary shards | | 2.49692 | min |
| Cumulative flush count of primary shards | | 66 | |
| Min cumulative flush time across primary shards | | 0 | min |
| Median cumulative flush time across primary shards | | 0.128367 | min |
| Max cumulative flush time across primary shards | | 0.3846 | min |
| Total Young Gen GC | | 28.5 | s |
| Total Old Gen GC | | 0.471 | s |
| Store size | | 9.66638 | GB |
| Translog size | | 0.257318 | GB |
| Heap used for segments | | 2.15145 | MB |
| Heap used for doc values | | 0.361546 | MB |
| Heap used for terms | | 1.44589 | MB |
| Heap used for norms | | 0.081543 | MB |
| Heap used for points | | 0 | MB |
| Heap used for stored fields | | 0.262474 | MB |
| Segment count | | 357 | |
| Min Throughput | index-append | 82512.1 | docs/s |
| Median Throughput | index-append | 84339.9 | docs/s |
| Max Throughput | index-append | 85837.3 | docs/s |
| 50th percentile latency | index-append | 399.425 | ms |
| 90th percentile latency | index-append | 590.092 | ms |
| 99th percentile latency | index-append | 1499.79 | ms |
| 99.9th percentile latency | index-append | 2084.86 | ms |
| 100th percentile latency | index-append | 2283.27 | ms |
| 50th percentile service time | index-append | 399.425 | ms |
| 90th percentile service time | index-append | 590.092 | ms |
| 99th percentile service time | index-append | 1499.79 | ms |
| 99.9th percentile service time | index-append | 2084.86 | ms |
| 100th percentile service time | index-append | 2283.27 | ms |
| error rate | index-append | 0 | % |
---------------------------------
[INFO] SUCCESS (took 303 seconds)
---------------------------------
Compare benchmark results:
[root@hlet-test-other rally-tracks]# esrally list races
____ ____
/ __ \____ _/ / /_ __
/ /_/ / __ `/ / / / / /
/ _, _/ /_/ / / / /_/ /
/_/ |_|\__,_/_/_/\__, /
/____/
Recent races:
Race ID Race Timestamp Track Track Parameters Challenge Car User Tags Track Revision Team Revision
------------------------------------ ---------------- --------- ------------------ ------------------- -------- ----------- ---------------- ---------------
f57cac25-6082-4c11-9417-f77ea2b7adea 20200725T072252Z eventdata append-no-conflicts external 9a95395
fede9702-c7f7-4934-a4ad-e4e9ca7ee739 20200725T062459Z eventdata append-no-conflicts external 9a95395
e3d39018-f7a7-4a25-b313-92c81cbd0cb9 20200725T022921Z geonames append-no-conflicts external 9a95395
-------------------------------
[INFO] SUCCESS (took 1 seconds)
-------------------------------
[root@hlet-test-other rally-tracks]# esrally compare --baseline=f57cac25-6082-4c11-9417-f77ea2b7adea --contender=fede9702-c7f7-4934-a4ad-e4e9ca7ee739
____ ____
/ __ \____ _/ / /_ __
/ /_/ / __ `/ / / / / /
/ _, _/ /_/ / / / /_/ /
/_/ |_|\__,_/_/_/\__, /
/____/
Comparing baseline
Race ID: f57cac25-6082-4c11-9417-f77ea2b7adea
Race timestamp: 2020-07-25 07:22:52
Challenge: append-no-conflicts
Car: external
with contender
Race ID: fede9702-c7f7-4934-a4ad-e4e9ca7ee739
Race timestamp: 2020-07-25 06:24:59
Challenge: append-no-conflicts
Car: external
------------------------------------------------------
_______ __ _____
/ ____(_)___ ____ _/ / / ___/_________ ________
/ /_ / / __ \/ __ `/ / \__ \/ ___/ __ \/ ___/ _ \
/ __/ / / / / / /_/ / / ___/ / /__/ /_/ / / / __/
/_/ /_/_/ /_/\__,_/_/ /____/\___/\____/_/ \___/
------------------------------------------------------
| Metric | Task | Baseline | Contender | Diff | Unit |
|--------------------------------------------------------------:|-------------:|-----------:|------------:|---------:|-------:|
| Cumulative indexing time of primary shards | | 66.9842 | 66.7444 | -0.23982 | min |
| Min cumulative indexing time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative indexing time across primary shard | | 3.4653 | 3.4653 | 0 | min |
| Max cumulative indexing time across primary shard | | 9.5956 | 9.5006 | -0.095 | min |
| Cumulative indexing throttle time of primary shards | | 0 | 0 | 0 | min |
| Min cumulative indexing throttle time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative indexing throttle time across primary shard | | 0 | 0 | 0 | min |
| Max cumulative indexing throttle time across primary shard | | 0 | 0 | 0 | min |
| Cumulative merge time of primary shards | | 11.4208 | 11.0402 | -0.38055 | min |
| Cumulative merge count of primary shards | | 379 | 360 | -19 | |
| Min cumulative merge time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative merge time across primary shard | | 0.557333 | 0.557333 | 0 | min |
| Max cumulative merge time across primary shard | | 1.55348 | 1.48317 | -0.07032 | min |
| Cumulative merge throttle time of primary shards | | 4.35182 | 4.20278 | -0.14903 | min |
| Min cumulative merge throttle time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative merge throttle time across primary shard | | 0.0624667 | 0.0624667 | 0 | min |
| Max cumulative merge throttle time across primary shard | | 0.839967 | 0.831683 | -0.00828 | min |
| Cumulative refresh time of primary shards | | 2.76322 | 2.72313 | -0.04008 | min |
| Cumulative refresh count of primary shards | | 2820 | 2660 | -160 | |
| Min cumulative refresh time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative refresh time across primary shard | | 0.0538333 | 0.0571167 | 0.00328 | min |
| Max cumulative refresh time across primary shard | | 0.729217 | 0.6752 | -0.05402 | min |
| Cumulative flush time of primary shards | | 2.58798 | 2.6156 | 0.02762 | min |
| Cumulative flush count of primary shards | | 64 | 63 | -1 | |
| Min cumulative flush time across primary shard | | 0 | 0 | 0 | min |
| Median cumulative flush time across primary shard | | 0.128367 | 0.128367 | 0 | min |
| Max cumulative flush time across primary shard | | 0.4261 | 0.4313 | 0.0052 | min |
| Total Young Gen GC | | 27.184 | 29.881 | 2.697 | s |
| Total Old Gen GC | | 0.559 | 0.494 | -0.065 | s |
| Store size | | 9.38297 | 9.76534 | 0.38237 | GB |
| Translog size | | 0.234053 | 0.220702 | -0.01335 | GB |
| Heap used for segments | | 2.14906 | 2.27391 | 0.12485 | MB |
| Heap used for doc values | | 0.382111 | 0.472027 | 0.08992 | MB |
| Heap used for terms | | 1.42567 | 1.45815 | 0.03247 | MB |
| Heap used for norms | | 0.081543 | 0.081543 | 0 | MB |
| Heap used for points | | 0 | 0 | 0 | MB |
| Heap used for stored fields | | 0.259735 | 0.262199 | 0.00246 | MB |
| Segment count | | 352 | 359 | 7 | |
| Min Throughput | index-append | 82424.8 | 82763.8 | 338.984 | docs/s |
| Median Throughput | index-append | 83741.2 | 84054.4 | 313.162 | docs/s |
| Max Throughput | index-append | 84746.2 | 85094 | 347.78 | docs/s |
| 50th percentile latency | index-append | 393.631 | 392.123 | -1.5088 | ms |
| 90th percentile latency | index-append | 573.56 | 567.942 | -5.61797 | ms |
| 99th percentile latency | index-append | 1670.18 | 1342.7 | -327.479 | ms |
| 99.9th percentile latency | index-append | 2789.62 | 1842.49 | -947.132 | ms |
| 100th percentile latency | index-append | 2836.03 | 2848.85 | 12.8242 | ms |
| 50th percentile service time | index-append | 393.631 | 392.123 | -1.5088 | ms |
| 90th percentile service time | index-append | 573.56 | 567.942 | -5.61797 | ms |
| 99th percentile service time | index-append | 1670.18 | 1342.7 | -327.479 | ms |
| 99.9th percentile service time | index-append | 2789.62 | 1842.49 | -947.132 | ms |
| 100th percentile service time | index-append | 2836.03 | 2848.85 | 12.8242 | ms |
| error rate | index-append | 0 | 0 | 0 | % |
-------------------------------
[INFO] SUCCESS (took 1 seconds)
-------------------------------
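With several races recorded, the Race ID column is the only way to tell runs apart. Rally's `--user-tag` flag can label a race at launch (the hosts and track below are the ones used earlier; the tag text itself is arbitrary):

```
esrally --target-hosts=10.1.99.101:9200,10.1.99.102:9200,10.1.99.103:9200 \
        --pipeline=benchmark-only --track=eventdata \
        --challenge=append-no-conflicts \
        --user-tag="intention:baseline"
```

The tag then shows up in the User Tags column of `esrally list races`, which makes picking --baseline and --contender IDs less error-prone.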
Default benchmark run (no challenge specified):
[root@hlet-test-other rally-tracks]# /usr/local/bin/esrally --target-hosts=10.1.99.101:9200,10.1.99.102:9200,10.1.99.103:9200 --pipeline=benchmark-only --track=geonames
____ ____
/ __ \____ _/ / /_ __
/ /_/ / __ `/ / / / / /
/ _, _/ /_/ / / / /_/ /
/_/ |_|\__,_/_/_/\__, /
/____/
************************************************************************
************** WARNING: A dark dungeon lies ahead of you **************
************************************************************************
Rally does not have control over the configuration of the benchmarked
Elasticsearch cluster.
Be aware that results may be misleading due to problems with the setup.
Rally is also not able to gather lots of metrics at all (like CPU usage
of the benchmarked cluster) or may even produce misleading metrics (like
the index size).
************************************************************************
****** Use this pipeline only if you are aware of the tradeoffs. ******
*************************** Watch your step! ***************************
************************************************************************
[WARNING] Could not update tracks. Continuing with your locally available state.
[INFO] Downloading data for track geonames (252.9 MB total size) [100.0%]
[INFO] Decompressing track data from [/home/rally/benchmarks/data/geonames/documents-2.json.bz2] to [/home/rally/benchmarks/data/geonames/documents-2.json] (resulting size: 3.30 GB) ... [OK]
[INFO] Preparing file offset table for [/home/rally/benchmarks/data/geonames/documents-2.json] ... [OK]
[INFO] Racing on track [geonames], challenge [append-no-conflicts] and car ['external'] with version [7.8.0].
[WARNING] merges_total_time is 5346 ms indicating that the cluster is not in a defined clean state. Recorded index time metrics may be misleading.
[WARNING] indexing_total_time is 58481 ms indicating that the cluster is not in a defined clean state. Recorded index time metrics may be misleading.
[WARNING] refresh_total_time is 14775 ms indicating that the cluster is not in a defined clean state. Recorded index time metrics may be misleading.
[WARNING] flush_total_time is 329 ms indicating that the cluster is not in a defined clean state. Recorded index time metrics may be misleading.
Running delete-index [100% done]
Running create-index [100% done]
Running check-cluster-health [100% done]
Running index-append [100% done]
Running refresh-after-index [100% done]
Running force-merge [100% done]
Running refresh-after-force-merge [100% done]
Running wait-until-merges-finish [100% done]
Running index-stats [100% done]
Running node-stats [100% done]
Running default [100% done]
Running term [100% done]
Running phrase [100% done]
Running country_agg_uncached [100% done]
Running country_agg_cached [100% done]
Running scroll [100% done]
Running expression [100% done]
Running painless_static [100% done]
Running painless_dynamic [100% done]
Running decay_geo_gauss_function_score [100% done]
Running decay_geo_gauss_script_score [100% done]
Running field_value_function_score [100% done]
Running field_value_script_score [100% done]
Running random_function_score [100% done]
Running random_script_score [100% done]
Running large_terms [100% done]
Running large_filtered_terms [100% done]
Running large_prohibited_terms [100% done]
Running desc_sort_population [100% done]
Running asc_sort_population [100% done]
Running desc_sort_geonameid [100% done]
Running asc_sort_geonameid [100% done]
------------------------------------------------------
_______ __ _____
/ ____(_)___ ____ _/ / / ___/_________ ________
/ /_ / / __ \/ __ `/ / \__ \/ ___/ __ \/ ___/ _ \
/ __/ / / / / / /_/ / / ___/ / /__/ /_/ / / / __/
/_/ /_/_/ /_/\__,_/_/ /____/\___/\____/_/ \___/
------------------------------------------------------
| Metric | Task | Value | Unit |
|---------------------------------------------------------------:|-------------------------------:|-----------:|--------:|
| Cumulative indexing time of primary shards | | 19.853 | min |
| Min cumulative indexing time across primary shards | | 0 | min |
| Median cumulative indexing time across primary shards | | 0.163617 | min |
| Max cumulative indexing time across primary shards | | 3.86128 | min |
| Cumulative indexing throttle time of primary shards | | 0 | min |
| Min cumulative indexing throttle time across primary shards | | 0 | min |
| Median cumulative indexing throttle time across primary shards | | 0 | min |
| Max cumulative indexing throttle time across primary shards | | 0 | min |
| Cumulative merge time of primary shards | | 3.42978 | min |
| Cumulative merge count of primary shards | | 148 | |
| Min cumulative merge time across primary shards | | 0 | min |
| Median cumulative merge time across primary shards | | 0.018225 | min |
| Max cumulative merge time across primary shards | | 0.732933 | min |
| Cumulative merge throttle time of primary shards | | 0.497483 | min |
| Min cumulative merge throttle time across primary shards | | 0 | min |
| Median cumulative merge throttle time across primary shards | | 0 | min |
| Max cumulative merge throttle time across primary shards | | 0.132383 | min |
| Cumulative refresh time of primary shards | | 2.01323 | min |
| Cumulative refresh count of primary shards | | 1316 | |
| Min cumulative refresh time across primary shards | | 0 | min |
| Median cumulative refresh time across primary shards | | 0.0762167 | min |
| Max cumulative refresh time across primary shards | | 0.323067 | min |
| Cumulative flush time of primary shards | | 0.723967 | min |
| Cumulative flush count of primary shards | | 30 | |
| Min cumulative flush time across primary shards | | 0 | min |
| Median cumulative flush time across primary shards | | 0.00101667 | min |
| Max cumulative flush time across primary shards | | 0.157717 | min |
| Total Young Gen GC | | 37.485 | s |
| Total Old Gen GC | | 0 | s |
| Store size | | 3.27061 | GB |
| Translog size | | 0.114163 | GB |
| Heap used for segments | | 1.24924 | MB |
| Heap used for doc values | | 0.332607 | MB |
| Heap used for terms | | 0.756104 | MB |
| Heap used for norms | | 0.0812988 | MB |
| Heap used for points | | 0 | MB |
| Heap used for stored fields | | 0.0792313 | MB |
| Segment count | | 133 | |
| error rate | index-append | 0 | % |
| Min Throughput | index-stats | 90.03 | ops/s |
| Median Throughput | index-stats | 90.03 | ops/s |
| Max Throughput | index-stats | 90.06 | ops/s |
| 50th percentile latency | index-stats | 6.05789 | ms |
| 90th percentile latency | index-stats | 8.21848 | ms |
| 99th percentile latency | index-stats | 11.9599 | ms |
| 99.9th percentile latency | index-stats | 19.7866 | ms |
| 100th percentile latency | index-stats | 25.1448 | ms |
| 50th percentile service time | index-stats | 5.98573 | ms |
| 90th percentile service time | index-stats | 8.09021 | ms |
| 99th percentile service time | index-stats | 10.4001 | ms |
| 99.9th percentile service time | index-stats | 17.2934 | ms |
| 100th percentile service time | index-stats | 19.1273 | ms |
| error rate | index-stats | 0 | % |
| Min Throughput | node-stats | 90 | ops/s |
| Median Throughput | node-stats | 90.03 | ops/s |
| Max Throughput | node-stats | 90.09 | ops/s |
| 50th percentile latency | node-stats | 8.10563 | ms |
| 90th percentile latency | node-stats | 9.09376 | ms |
| 99th percentile latency | node-stats | 12.8489 | ms |
| 99.9th percentile latency | node-stats | 17.3221 | ms |
| 100th percentile latency | node-stats | 20.5821 | ms |
| 50th percentile service time | node-stats | 7.99756 | ms |
| 90th percentile service time | node-stats | 8.82627 | ms |
| 99th percentile service time | node-stats | 12.309 | ms |
| 99.9th percentile service time | node-stats | 14.1551 | ms |
| 100th percentile service time | node-stats | 20.478 | ms |
| error rate | node-stats | 0 | % |
| Min Throughput | default | 50.03 | ops/s |
| Median Throughput | default | 50.04 | ops/s |
| Max Throughput | default | 50.08 | ops/s |
| 50th percentile latency | default | 4.50129 | ms |
| 90th percentile latency | default | 5.04859 | ms |
| 99th percentile latency | default | 8.15288 | ms |
| 99.9th percentile latency | default | 34.6208 | ms |
| 100th percentile latency | default | 49.8767 | ms |
| 50th percentile service time | default | 4.39278 | ms |
| 90th percentile service time | default | 4.93608 | ms |
| 99th percentile service time | default | 6.66241 | ms |
| 99.9th percentile service time | default | 15.8646 | ms |
| 100th percentile service time | default | 49.7674 | ms |
| error rate | default | 0 | % |
| Min Throughput | term | 149.74 | ops/s |
| Median Throughput | term | 150.05 | ops/s |
| Max Throughput | term | 150.08 | ops/s |
| 50th percentile latency | term | 4.40562 | ms |
| 90th percentile latency | term | 5.3817 | ms |
| 99th percentile latency | term | 7.85092 | ms |
| 99.9th percentile latency | term | 14.0425 | ms |
| 100th percentile latency | term | 15.4412 | ms |
| 50th percentile service time | term | 4.29468 | ms |
| 90th percentile service time | term | 5.23876 | ms |
| 99th percentile service time | term | 6.1456 | ms |
| 99.9th percentile service time | term | 13.9712 | ms |
| 100th percentile service time | term | 15.369 | ms |
| error rate | term | 0 | % |
| Min Throughput | phrase | 149.8 | ops/s |
| Median Throughput | phrase | 150.04 | ops/s |
| Max Throughput | phrase | 150.06 | ops/s |
| 50th percentile latency | phrase | 4.85974 | ms |
| 90th percentile latency | phrase | 35.7331 | ms |
| 99th percentile latency | phrase | 81.9604 | ms |
| 99.9th percentile latency | phrase | 92.9983 | ms |
| 100th percentile latency | phrase | 94.2518 | ms |
| 50th percentile service time | phrase | 4.62955 | ms |
| 90th percentile service time | phrase | 5.35967 | ms |
| 99th percentile service time | phrase | 14.5701 | ms |
| 99.9th percentile service time | phrase | 63.9793 | ms |
| 100th percentile service time | phrase | 65.3739 | ms |
| error rate | phrase | 0 | % |
| Min Throughput | country_agg_uncached | 3.32 | ops/s |
| Median Throughput | country_agg_uncached | 3.33 | ops/s |
| Max Throughput | country_agg_uncached | 3.33 | ops/s |
| 50th percentile latency | country_agg_uncached | 6106.7 | ms |
| 90th percentile latency | country_agg_uncached | 6942.88 | ms |
| 99th percentile latency | country_agg_uncached | 7138.75 | ms |
| 100th percentile latency | country_agg_uncached | 7144.03 | ms |
| 50th percentile service time | country_agg_uncached | 290.809 | ms |
| 90th percentile service time | country_agg_uncached | 326.527 | ms |
| 99th percentile service time | country_agg_uncached | 370.942 | ms |
| 100th percentile service time | country_agg_uncached | 372.958 | ms |
| error rate | country_agg_uncached | 0 | % |
| Min Throughput | country_agg_cached | 99.83 | ops/s |
| Median Throughput | country_agg_cached | 100.04 | ops/s |
| Max Throughput | country_agg_cached | 100.06 | ops/s |
| 50th percentile latency | country_agg_cached | 4.03605 | ms |
| 90th percentile latency | country_agg_cached | 4.27843 | ms |
| 99th percentile latency | country_agg_cached | 5.28781 | ms |
| 99.9th percentile latency | country_agg_cached | 37.6212 | ms |
| 100th percentile latency | country_agg_cached | 43.7555 | ms |
| 50th percentile service time | country_agg_cached | 3.93219 | ms |
| 90th percentile service time | country_agg_cached | 4.16809 | ms |
| 99th percentile service time | country_agg_cached | 4.73316 | ms |
| 99.9th percentile service time | country_agg_cached | 5.21323 | ms |
| 100th percentile service time | country_agg_cached | 43.6568 | ms |
| error rate | country_agg_cached | 0 | % |
| Min Throughput | scroll | 20.01 | pages/s |
| Median Throughput | scroll | 20.02 | pages/s |
| Max Throughput | scroll | 20.03 | pages/s |
| 50th percentile latency | scroll | 929.962 | ms |
| 90th percentile latency | scroll | 952.058 | ms |
| 99th percentile latency | scroll | 981.561 | ms |
| 100th percentile latency | scroll | 991.399 | ms |
| 50th percentile service time | scroll | 929.58 | ms |
| 90th percentile service time | scroll | 951.682 | ms |
| 99th percentile service time | scroll | 981.221 | ms |
| 100th percentile service time | scroll | 990.97 | ms |
| error rate | scroll | 0 | % |
| Min Throughput | expression | 1.38 | ops/s |
| Median Throughput | expression | 1.39 | ops/s |
| Max Throughput | expression | 1.4 | ops/s |
| 50th percentile latency | expression | 55044.6 | ms |
| 90th percentile latency | expression | 62928.8 | ms |
| 99th percentile latency | expression | 64607.5 | ms |
| 100th percentile latency | expression | 64913.1 | ms |
| 50th percentile service time | expression | 666.456 | ms |
| 90th percentile service time | expression | 766.347 | ms |
| 99th percentile service time | expression | 924.74 | ms |
| 100th percentile service time | expression | 926.346 | ms |
| error rate | expression | 0 | % |
| Min Throughput | painless_static | 1.3 | ops/s |
| Median Throughput | painless_static | 1.31 | ops/s |
| Max Throughput | painless_static | 1.32 | ops/s |
| 50th percentile latency | painless_static | 24566.6 | ms |
| 90th percentile latency | painless_static | 29686.5 | ms |
| 99th percentile latency | painless_static | 30650.5 | ms |
| 100th percentile latency | painless_static | 30811.8 | ms |
| 50th percentile service time | painless_static | 758.755 | ms |
| 90th percentile service time | painless_static | 877.797 | ms |
| 99th percentile service time | painless_static | 921.769 | ms |
| 100th percentile service time | painless_static | 926.221 | ms |
| error rate | painless_static | 0 | % |
| Min Throughput | painless_dynamic | 1.28 | ops/s |
| Median Throughput | painless_dynamic | 1.28 | ops/s |
| Max Throughput | painless_dynamic | 1.28 | ops/s |
| 50th percentile latency | painless_dynamic | 29120 | ms |
| 90th percentile latency | painless_dynamic | 34257 | ms |
| 99th percentile latency | painless_dynamic | 35169.6 | ms |
| 100th percentile latency | painless_dynamic | 35261.5 | ms |
| 50th percentile service time | painless_dynamic | 757.207 | ms |
| 90th percentile service time | painless_dynamic | 862.991 | ms |
| 99th percentile service time | painless_dynamic | 972.241 | ms |
| 100th percentile service time | painless_dynamic | 978.997 | ms |
| error rate | painless_dynamic | 0 | % |
| Min Throughput | decay_geo_gauss_function_score | 1 | ops/s |
| Median Throughput | decay_geo_gauss_function_score | 1 | ops/s |
| Max Throughput | decay_geo_gauss_function_score | 1 | ops/s |
| 50th percentile latency | decay_geo_gauss_function_score | 738.547 | ms |
| 90th percentile latency | decay_geo_gauss_function_score | 845.996 | ms |
| 99th percentile latency | decay_geo_gauss_function_score | 970.167 | ms |
| 100th percentile latency | decay_geo_gauss_function_score | 971.008 | ms |
| 50th percentile service time | decay_geo_gauss_function_score | 738.228 | ms |
| 90th percentile service time | decay_geo_gauss_function_score | 845.877 | ms |
| 99th percentile service time | decay_geo_gauss_function_score | 969.865 | ms |
| 100th percentile service time | decay_geo_gauss_function_score | 970.68 | ms |
| error rate | decay_geo_gauss_function_score | 0 | % |
| Min Throughput | decay_geo_gauss_script_score | 1 | ops/s |
| Median Throughput | decay_geo_gauss_script_score | 1 | ops/s |
| Max Throughput | decay_geo_gauss_script_score | 1 | ops/s |
| 50th percentile latency | decay_geo_gauss_script_score | 676.368 | ms |
| 90th percentile latency | decay_geo_gauss_script_score | 707.341 | ms |
| 99th percentile latency | decay_geo_gauss_script_score | 748.275 | ms |
| 100th percentile latency | decay_geo_gauss_script_score | 770.331 | ms |
| 50th percentile service time | decay_geo_gauss_script_score | 676.007 | ms |
| 90th percentile service time | decay_geo_gauss_script_score | 706.969 | ms |
| 99th percentile service time | decay_geo_gauss_script_score | 747.884 | ms |
| 100th percentile service time | decay_geo_gauss_script_score | 769.956 | ms |
| error rate | decay_geo_gauss_script_score | 0 | % |
| Min Throughput | field_value_function_score | 1.5 | ops/s |
| Median Throughput | field_value_function_score | 1.5 | ops/s |
| Max Throughput | field_value_function_score | 1.5 | ops/s |
| 50th percentile latency | field_value_function_score | 339.512 | ms |
| 90th percentile latency | field_value_function_score | 346.959 | ms |
| 99th percentile latency | field_value_function_score | 425.24 | ms |
| 100th percentile latency | field_value_function_score | 434.581 | ms |
| 50th percentile service time | field_value_function_score | 339.121 | ms |
| 90th percentile service time | field_value_function_score | 346.569 | ms |
| 99th percentile service time | field_value_function_score | 424.861 | ms |
| 100th percentile service time | field_value_function_score | 434.259 | ms |
| error rate | field_value_function_score | 0 | % |
| Min Throughput | field_value_script_score | 1.5 | ops/s |
| Median Throughput | field_value_script_score | 1.5 | ops/s |
| Max Throughput | field_value_script_score | 1.5 | ops/s |
| 50th percentile latency | field_value_script_score | 387.726 | ms |
| 90th percentile latency | field_value_script_score | 431.669 | ms |
| 99th percentile latency | field_value_script_score | 484.714 | ms |
| 100th percentile latency | field_value_script_score | 491.88 | ms |
| 50th percentile service time | field_value_script_score | 387.401 | ms |
| 90th percentile service time | field_value_script_score | 431.355 | ms |
| 99th percentile service time | field_value_script_score | 484.385 | ms |
| 100th percentile service time | field_value_script_score | 491.558 | ms |
| error rate | field_value_script_score | 0 | % |
| Min Throughput | random_function_score | 1.5 | ops/s |
| Median Throughput | random_function_score | 1.5 | ops/s |
| Max Throughput | random_function_score | 1.5 | ops/s |
| 50th percentile latency | random_function_score | 619.357 | ms |
| 90th percentile latency | random_function_score | 909.796 | ms |
| 99th percentile latency | random_function_score | 1109.83 | ms |
| 100th percentile latency | random_function_score | 1128.9 | ms |
| 50th percentile service time | random_function_score | 548.137 | ms |
| 90th percentile service time | random_function_score | 846.115 | ms |
| 99th percentile service time | random_function_score | 879.938 | ms |
| 100th percentile service time | random_function_score | 887.702 | ms |
| error rate | random_function_score | 0 | % |
| Min Throughput | random_script_score | 1.5 | ops/s |
| Median Throughput | random_script_score | 1.5 | ops/s |
| Max Throughput | random_script_score | 1.5 | ops/s |
| 50th percentile latency | random_script_score | 498.98 | ms |
| 90th percentile latency | random_script_score | 552.149 | ms |
| 99th percentile latency | random_script_score | 648.419 | ms |
| 100th percentile latency | random_script_score | 681.576 | ms |
| 50th percentile service time | random_script_score | 497.295 | ms |
| 90th percentile service time | random_script_score | 551.946 | ms |
| 99th percentile service time | random_script_score | 648.218 | ms |
| 100th percentile service time | random_script_score | 681.36 | ms |
| error rate | random_script_score | 0 | % |
| Min Throughput | large_terms | 1.06 | ops/s |
| Median Throughput | large_terms | 1.06 | ops/s |
| Max Throughput | large_terms | 1.07 | ops/s |
| 50th percentile latency | large_terms | 8641.99 | ms |
| 90th percentile latency | large_terms | 11513 | ms |
| 99th percentile latency | large_terms | 12073.4 | ms |
| 100th percentile latency | large_terms | 12160.4 | ms |
| 50th percentile service time | large_terms | 958.44 | ms |
| 90th percentile service time | large_terms | 1013.72 | ms |
| 99th percentile service time | large_terms | 1076.5 | ms |
| 100th percentile service time | large_terms | 1079.32 | ms |
| error rate | large_terms | 0 | % |
| Min Throughput | large_filtered_terms | 1.02 | ops/s |
| Median Throughput | large_filtered_terms | 1.02 | ops/s |
| Max Throughput | large_filtered_terms | 1.02 | ops/s |
| 50th percentile latency | large_filtered_terms | 19141.9 | ms |
| 90th percentile latency | large_filtered_terms | 22428.1 | ms |
| 99th percentile latency | large_filtered_terms | 23133.4 | ms |
| 100th percentile latency | large_filtered_terms | 23224.7 | ms |
| 50th percentile service time | large_filtered_terms | 983.187 | ms |
| 90th percentile service time | large_filtered_terms | 1040.85 | ms |
| 99th percentile service time | large_filtered_terms | 1098.89 | ms |
| 100th percentile service time | large_filtered_terms | 1108.66 | ms |
| error rate | large_filtered_terms | 0 | % |
| Min Throughput | large_prohibited_terms | 1.01 | ops/s |
| Median Throughput | large_prohibited_terms | 1.01 | ops/s |
| Max Throughput | large_prohibited_terms | 1.02 | ops/s |
| 50th percentile latency | large_prohibited_terms | 20044.4 | ms |
| 90th percentile latency | large_prohibited_terms | 23465.6 | ms |
| 99th percentile latency | large_prohibited_terms | 24187.8 | ms |
| 100th percentile latency | large_prohibited_terms | 24298.1 | ms |
| 50th percentile service time | large_prohibited_terms | 980.416 | ms |
| 90th percentile service time | large_prohibited_terms | 1039.5 | ms |
| 99th percentile service time | large_prohibited_terms | 1103.34 | ms |
| 100th percentile service time | large_prohibited_terms | 1207.27 | ms |
| error rate | large_prohibited_terms | 0 | % |
| Min Throughput | desc_sort_population | 1.5 | ops/s |
| Median Throughput | desc_sort_population | 1.51 | ops/s |
| Max Throughput | desc_sort_population | 1.51 | ops/s |
| 50th percentile latency | desc_sort_population | 90.4621 | ms |
| 90th percentile latency | desc_sort_population | 95.5839 | ms |
| 99th percentile latency | desc_sort_population | 126.549 | ms |
| 100th percentile latency | desc_sort_population | 127.997 | ms |
| 50th percentile service time | desc_sort_population | 89.8768 | ms |
| 90th percentile service time | desc_sort_population | 94.9458 | ms |
| 99th percentile service time | desc_sort_population | 125.905 | ms |
| 100th percentile service time | desc_sort_population | 127.342 | ms |
| error rate | desc_sort_population | 0 | % |
| Min Throughput | asc_sort_population | 1.5 | ops/s |
| Median Throughput | asc_sort_population | 1.51 | ops/s |
| Max Throughput | asc_sort_population | 1.51 | ops/s |
| 50th percentile latency | asc_sort_population | 92.9115 | ms |
| 90th percentile latency | asc_sort_population | 99.0865 | ms |
| 99th percentile latency | asc_sort_population | 141.764 | ms |
| 100th percentile latency | asc_sort_population | 156.852 | ms |
| 50th percentile service time | asc_sort_population | 92.2562 | ms |
| 90th percentile service time | asc_sort_population | 98.4629 | ms |
| 99th percentile service time | asc_sort_population | 141.061 | ms |
| 100th percentile service time | asc_sort_population | 156.206 | ms |
| error rate | asc_sort_population | 0 | % |
| Min Throughput | desc_sort_geonameid | 6.02 | ops/s |
| Median Throughput | desc_sort_geonameid | 6.02 | ops/s |
| Max Throughput | desc_sort_geonameid | 6.03 | ops/s |
| 50th percentile latency | desc_sort_geonameid | 15.2863 | ms |
| 90th percentile latency | desc_sort_geonameid | 16.1081 | ms |
| 99th percentile latency | desc_sort_geonameid | 16.3204 | ms |
| 100th percentile latency | desc_sort_geonameid | 19.3265 | ms |
| 50th percentile service time | desc_sort_geonameid | 15.0864 | ms |
| 90th percentile service time | desc_sort_geonameid | 15.9155 | ms |
| 99th percentile service time | desc_sort_geonameid | 16.1203 | ms |
| 100th percentile service time | desc_sort_geonameid | 19.1145 | ms |
| error rate | desc_sort_geonameid | 0 | % |
| Min Throughput | asc_sort_geonameid | 6.02 | ops/s |
| Median Throughput | asc_sort_geonameid | 6.02 | ops/s |
| Max Throughput | asc_sort_geonameid | 6.03 | ops/s |
| 50th percentile latency | asc_sort_geonameid | 11.7612 | ms |
| 90th percentile latency | asc_sort_geonameid | 12.5419 | ms |
| 99th percentile latency | asc_sort_geonameid | 15.966 | ms |
| 100th percentile latency | asc_sort_geonameid | 62.4172 | ms |
| 50th percentile service time | asc_sort_geonameid | 11.555 | ms |
| 90th percentile service time | asc_sort_geonameid | 12.2999 | ms |
| 99th percentile service time | asc_sort_geonameid | 15.7352 | ms |
| 100th percentile service time | asc_sort_geonameid | 62.2381 | ms |
| error rate | asc_sort_geonameid | 0 | % |
[WARNING] No throughput metrics available for [index-append]. Likely cause: The benchmark ended already during warmup.
----------------------------------
[INFO] SUCCESS (took 4778 seconds)
----------------------------------
Annoyingly, the throughput figures for the data-import task (index-append) are missing: the bulk load completed so quickly that the race was still in its warmup phase when it ended, so Rally never recorded any throughput samples for it.