NEST Basic Usage
1. Establish a connection and create a client
// Requires the NEST package (ElasticClient, ConnectionSettings) and Elasticsearch.Net (StaticConnectionPool).
var nodes = new[]
{
    new Uri("http://localhost:9200")
};
var pool = new StaticConnectionPool(nodes); // fixed list of nodes, no sniffing
var settings = new ConnectionSettings(pool);
var client = new ElasticClient(settings);
2. Get all indices
var indices = client.Cat.Indices(); // cat API: one record per index
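Reading the cat response; a minimal sketch assuming NEST 7.x, where Cat.Indices() returns a CatResponse&lt;CatIndicesRecord&gt;:

// Each record describes one index; print its name and document count.
foreach (var record in indices.Records)
{
    Console.WriteLine($"{record.Index} (docs: {record.DocsCount})");
}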
3. Create an index
- You can AutoMap several subclasses into one index; the effect is that their properties are merged into one large "table".
- If subClass1 and subClass2 contain a field with the same name, the mapping applied first is kept; later mappings do not overwrite an earlier field of the same name. In the example below, Company is mapped first and Employee second, so the generated index keeps Company's definition of Name, i.e. a string field. (A sketch for verifying this with GetMapping follows the class definitions.)
client.Indices.Create("index-test-1", c => c // index names must be lowercase
    .Map(m => m
        .AutoMap<Company>()  // mapped first: its string Name wins
        .AutoMap<Employee>() // mapped second: its int Name does not overwrite Company's
    )
);
public abstract class Document
{
public JoinField Join { get; set; }
}
public class Company : Document
{
public string Name { get; set; }
public List<Employee> Employees { get; set; }
}
public class Employee : Document
{
public int Name { get; set; } // deliberately int here, to demonstrate the name conflict with Company.Name
public int Salary { get; set; }
public DateTime Birthday { get; set; }
public bool IsManager { get; set; }
public List<Employee> Employees { get; set; }
public TimeSpan Hours { get; set; }
}
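To check which definition actually ended up in the index, the generated mapping can be fetched back. A minimal sketch, assuming NEST 7.x and the index name used above; the exact way of navigating the response may differ between client versions:

var mappingResponse = client.Indices.GetMapping<Company>(m => m.Index("index-test-1"));
// Look up the "name" property in the returned mapping; it should be "text" (Company's string Name won).
var properties = mappingResponse.Indices["index-test-1"].Mappings.Properties;
Console.WriteLine(properties["name"].Type);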
4. Add mapping attributes to the ES document classes
- Much like a JsonProperty attribute: when creating an index or querying data, NEST can resolve the mapping from these attributes. The corresponding type documentation: https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/auto-map.html (a sketch of creating an index from these attributes follows the class definitions).
[ElasticsearchType(RelationName = "employee")]
public class Employee
{
[Text(Name = "first_name", Norms = false, Similarity = "LMDirichlet")]
public string FirstName { get; set; }
[Text(Name = "last_name")]
public string LastName { get; set; }
[Number(DocValues = false, IgnoreMalformed = true, Coerce = true)]
public int Salary { get; set; }
[Date(Format = "MMddyyyy")]
public DateTime Birthday { get; set; }
[Boolean(NullValue = false, Store = true)]
public bool IsManager { get; set; }
[Nested]
[PropertyName("empl")]
public List<Employee> Employees { get; set; }
[Text(Name = "office_hours")]
public TimeSpan? OfficeHours { get; set; }
[Object]
public List<Skill> Skills { get; set; }
}
public class Skill
{
[Text]
public string Name { get; set; }
[Number(NumberType.Byte, Name = "level")]
public int Proficiency { get; set; }
}
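To apply these attributes, call AutoMap on the attributed type when creating the index. A minimal sketch; the index name employees is an assumption for illustration, not from the original text:

// AutoMap() reads the attributes on Employee (and on the nested Skill type) to build the mapping.
var createResponse = client.Indices.Create("employees", c => c
    .Map<Employee>(m => m.AutoMap())
);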
5. Queries
- The simplest query example
var qr1 = client.Search<NodeLogSearchEntity>(s => s
.Index("log.test_mix-2021.01.18")
.Query(q => q
.MatchAll()
)
);
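Reading the response, using standard NEST response members:

// IsValid tells you whether the call succeeded; Documents holds the deserialized hits.
if (qr1.IsValid)
{
    Console.WriteLine($"total hits: {qr1.Total}");
    foreach (var doc in qr1.Documents)
    {
        // work with each NodeLogSearchEntity here
    }
}
else
{
    Console.WriteLine(qr1.DebugInformation); // details about why the call failed
}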
- A more complex query
var result = client.Search<VendorPriceInfo>(
    s => s
    .Explain() // ask ES to include scoring details for every hit
    .FielddataFields(fs => fs // return fielddata for the specified fields
        .Field(p => p.vendorFullName)
        .Field(p => p.cbName)
    )
    .From(0) // number of documents to skip
    .Size(50) // number of documents to return
    .Query(q =>
        q.Term(p => p.vendorID, 100) // term: exact match, mainly for numbers, dates, booleans or not_analyzed (unanalyzed) strings
        &&
        q.Term(p => p.vendorName.Suffix("temp"), "姓名") // query a custom sub-field (see MappingDemo for how it is defined)
        &&
        q.Bool( // bool query
            b => b
            .Must(mt => mt // every clause must match, like AND
                .TermRange(p => p.Field(f => f.priceID).GreaterThan("0").LessThan("1"))) // range query
            .Should(sd => sd // at least one clause must match, like OR
                .Term(p => p.priceID, 32915),
                sd => sd.Terms(t => t.Field(fd => fd.priceID).Terms(new[] {10, 20, 30})), // multiple values
                //||
                //sd.Term(p => p.priceID, 1001)
                //||
                //sd.Term(p => p.priceID, 1005)
                sd => sd.TermRange(tr => tr.GreaterThan("10").LessThan("12").Field(f => f.vendorPrice))
            )
            .MustNot(mn => mn // no clause may match, like NOT
                .Term(p => p.priceID, 1001)
                ,
                mn => mn.Bool(
                    bb => bb.Must(mt => mt
                        .Match(mc => mc.Field(fd => fd.carName).Query("至尊"))
                    ))
            )
        )
    ) // query conditions
    .Sort(st => st.Ascending(asc => asc.vendorPrice)) // sort
    .Source(sc => sc.Include(ic => ic
        .Fields(
            fd => fd.vendorName,
            fd => fd.vendorID,
            fd => fd.priceID,
            fd => fd.vendorPrice))) // return only these fields
);
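Because Explain() was requested, every hit also carries an explanation of its score. A small sketch of reading the hits (standard NEST response members; Explanation is null when Explain() is not used):

// Hits pair each document with its metadata such as id, score and the explanation.
foreach (var hit in result.Hits)
{
    Console.WriteLine($"{hit.Id} score={hit.Score} explanation={hit.Explanation?.Description}");
}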
二结蟋、Elasticsearch的文本的查詢
When a document is inserted, Elasticsearch analyzes (tokenizes) text fields by default and builds an inverted index from the resulting tokens, which is what later queries run against. When a text field is indexed, ES also keeps a sub-field, text.keyword, which holds the unanalyzed value of the field. (PS: analysis can also be disabled in the mapping.)
0. Inspect how a text is analyzed
- Run the following in the Kibana Dev Tools console (a NEST equivalent is sketched after the request)
POST _analyze
{
"analyzer": "standard",
"text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
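The same analysis can be requested through NEST. A minimal sketch, assuming NEST 7.x, where the analyze API lives under client.Indices.Analyze:

// Run the standard analyzer over a sample string and print the tokens it produces.
var analyzeResponse = client.Indices.Analyze(a => a
    .Analyzer("standard")
    .Text("The 2 QUICK Brown-Foxes jumped over the lazy dog's bone.")
);
foreach (var token in analyzeResponse.Tokens)
{
    Console.WriteLine(token.Token);
}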
1. Exact match
Data in ES | Query JSON where the query text is analyzed (match) | Query JSON where the query text is not analyzed (term) |
---|---|---|
data in ES => analyzed | { "query": { "match": { "key": "value" } } } | { "query": { "term": { "key": "value" } } } |
data in ES => not analyzed | { "query": { "match": { "key.keyword": "value" } } } | { "query": { "term": { "key.keyword": "value" } } } |
2. Fuzzy matching: Levenshtein distance on strings => fuzzy
A fuzzy query decides whether a value matches based on Levenshtein (edit) distance; the distance is usually 0, 1 or 2. Larger distances are not practical because they match far too many results.
- e.g. cat and kat differ by a single character, so Levenshtein distance("kat", "cat") = 1
Data in ES | Fuzzy query JSON on key |
---|---|
data in ES => analyzed | { "query": { "fuzzy": { "key": { "value": "cat", "fuzziness": "1" } } } } |
data in ES => not analyzed | { "query": { "fuzzy": { "key.keyword": { "value": "cat", "fuzziness": "1" } } } } |
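The NEST counterpart, as a sketch (same placeholder MyDoc and key as above):

// Fuzzy query with a maximum edit distance of 1; "kat" would match a stored "cat".
var fuzzy = client.Search<MyDoc>(s => s
    .Query(q => q
        .Fuzzy(f => f
            .Field("key.keyword")
            .Value("cat")
            .Fuzziness(Fuzziness.EditDistance(1))
        )
    )
);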
3. Fuzzy matching with wildcards => Wildcard
A wildcard query works much like LIKE in SQL, except that * and ? are the wildcard characters.
- e.g. kiy, kity, kimchy: suppose we want to match all three
Data in ES | Wildcard query JSON on key |
---|---|
data in ES => analyzed | { "query": { "wildcard": { "key": { "value": "ki*y" } } } } |
data in ES => not analyzed | { "query": { "wildcard": { "key.keyword": { "value": "ki*y" } } } } |
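And the NEST counterpart, again as a sketch with the same placeholder names:

// Wildcard query: * matches any run of characters, ? matches exactly one character.
var wildcard = client.Search<MyDoc>(s => s
    .Query(q => q
        .Wildcard(w => w
            .Field("key.keyword")
            .Value("ki*y")
        )
    )
);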