Logstash 6.5.3 on Windows: file input not taking effect, configuring elasticsea...
First, a digression. The logstash template file hello-world.json came up in the previous post too, but that was the 7.9.0 version. Note that there is no type under mappings there, because the default type is _doc:
{
  "index_patterns": ["hello-world-%{+YYYY.MM.dd}"],
  "order": 0,
  "settings": {
    "refresh_interval": "10s"
  },
  "mappings": {
    "properties": {
      "createTime": {
        "type": "long"
      },
      "sessionId": {
        "type": "text",
        "fielddata": true,
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "chart": {
        "type": "text",
        "analyzer": "ik_max_word",
        "search_analyzer": "ik_max_word"
      }
    }
  }
}
If we take this template and use it with logstash 6.5.3, it reports a mapper_parsing_exception, the reason being:
Root mapping definition has unsupported parameters:  [createTime : {type=long}] [sessionId : {fielddata=true, type=text, fields={keyword={ignore_above=256, type=keyword}}}] [chart : {search_analyzer=ik_max_word, analyzer=ik_max_word, type
Why is it unsupported? Because we did not provide the mapping type, so we have to add one. Let's use _doc, adding a _doc node to the template file:
{
  "index_patterns": [
    "hello-world-%{+YYYY.MM.dd}"
  ],
  "order": 0,
  "settings": {
    "refresh_interval": "10s"
  },
  "mappings": {
    "_doc": {
      "properties": {
        "createTime": {
          "type": "long"
        },
        "sessionId": {
          "type": "text",
          "fielddata": true,
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "chart": {
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_max_word"
        }
      }
    }
  }
}
Good, restart logstash; this time it starts without errors. Back to the main topic: the input we configured for logstash is:
input {
  file {
    path => "D:\wlf\logs\*"
    start_position => "beginning"
    type => "log"
  }
}
It starts without errors, but then: silence. The files are on the D: drive and they do contain data, yet nothing happens, and elasticsearch receives no data either. The quirk here is the directory separator in path: it must not be the backslash we are used to on windows, but the linux-style forward slash. So change path to:
path => "D:/wlf/logs/*"
Restart logstash, and at last there is movement after startup: data is being inserted into elasticsearch. Except the inserts fail, and we are greeted by a new error:
[2020-09-10T09:46:23,198][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"hello-world-2020.09.10", :_type=>"doc", :routing=>nil}, #<LogStash::Event:
The log says the elasticsearch index hello-world-2020.09.10 expects the mapping type _doc, while we supplied doc, so it refuses to update the mapping for this index. Let's confirm this in Kibana:
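A quick check from the Kibana Dev Tools console; a minimal sketch, assuming the template was registered under the name hello (the template_name used in the output config further down):

GET _template/hello

The response should show the _doc type under mappings.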
The template is correct. Let's try inserting a document:
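A sketch of such an insert, assuming the hello-world-2020.09.10 index already exists as in the log above; the field names come from the template, the values are made up for illustration:

POST hello-world-2020.09.10/_doc
{
  "createTime": 1599702383000,
  "sessionId": "abc123",
  "chart": "hello elasticsearch"
}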
No problem at all. Then, following the hint in the log, let's try the mapping type doc:
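The same document, this time indexed under the type doc (again a sketch):

POST hello-world-2020.09.10/doc
{
  "createTime": 1599702383000,
  "sessionId": "abc123",
  "chart": "hello elasticsearch"
}

In elasticsearch 6.x an index can only hold a single mapping type, so this request is rejected with an illegal_argument_exception along the lines of "Rejecting mapping update to [hello-world-2020.09.10] as the final mapping would have more than 1 type: [_doc, doc]".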
The problem reproduces. So why does logstash use the type doc here instead of _doc? Once again the default value is the culprit: the 6.x elasticsearch output indexes events with the type doc unless told otherwise (hence the :_type=>"doc" in the warning above). So let's not rely on the default, and specify the mapping type _doc in the output:
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "hello-world-%{+YYYY.MM.dd}"
    manage_template => true
    template_name => "hello"
    template_overwrite => true
    template => "D:\elk\logstash-6.5.3\config\hello-world.json"
    document_type => "_doc"
  }
}
Restart logstash once more. It starts fine, but the files on the D: drive have already been shipped once (even if that attempt failed), so elasticsearch stays quiet again. We can manually open one of the files, copy a line inside it and save, or simply copy a whole file; the new data then gets shipped to elasticsearch. This time it goes through without exceptions, and we can query the synced data in elasticsearch directly.
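Why does logstash treat those files as already shipped? The file input remembers how far it has read each file in a sincedb file. For local testing, a common trick is to disable that bookkeeping so every restart re-reads everything from the beginning; a minimal sketch, assuming duplicate re-reads are acceptable (NUL is the windows counterpart of /dev/null):

input {
  file {
    path => "D:/wlf/logs/*"
    start_position => "beginning"
    # discard the recorded read position so each restart starts from scratch
    sincedb_path => "NUL"
    type => "log"
  }
}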