Honestly, rsyslog, syslog-ng, and nxlog are all pretty much the same; pick any one of them and you'll be fine.
I happen to like nxlog's routes, its JSON handling, and the arrow syntax, which keep the configuration very concise, so I use it to push data into Elasticsearch.
Method 1: push with om_elasticsearch:
...
<Input in>
    Module          im_tcp
    Host            0.0.0.0
    Port            1514
    InputType       Binary
</Input>

<Output es>
    Module          om_elasticsearch
    URL             http://localhost:9200/_bulk
    FlushInterval   2
    FlushLimit      100
    # Create an index daily
    Index           strftime($EventTime, "nxlog-%Y%m%d")
    IndexType       "My logs"

    # Use the following if you don't have $EventTime set
    #Index          strftime(now(), "nxlog-%Y%m%d")
</Output>

<Route r>
    Path            in => es
</Route>
...
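As an aside, om_elasticsearch batches events and ships them through Elasticsearch's bulk API. If you are curious what such a request roughly looks like on the wire, here is a small Python sketch; the index name mirrors the Index directive above, while the sample event and the type name "logs" are made up for illustration:

import json
import urllib.request
from datetime import datetime

# Illustrative only: build a newline-delimited bulk body like the one
# om_elasticsearch POSTs to http://localhost:9200/_bulk.
index = datetime.now().strftime("nxlog-%Y%m%d")   # daily index, as in the config
events = [{"EventTime": "2016-01-02 14:04:07", "Message": "test event"}]

lines = []
for ev in events:
    lines.append(json.dumps({"index": {"_index": index, "_type": "logs"}}))  # action line
    lines.append(json.dumps(ev))                                             # document line
body = ("\n".join(lines) + "\n").encode("utf-8")

req = urllib.request.Request("http://localhost:9200/_bulk", data=body,
                             headers={"Content-Type": "application/json"})
print(urllib.request.urlopen(req).read().decode())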
Method 2: push with om_http:
...
<Output elasticsearch>
    Module      om_http
    URL         http://elasticsearch:9200
    ContentType application/json
    Exec        set_http_request_path(strftime($EventTime, "/nxlog-%Y%m%d/" + $SourceModuleName)); rename_field("timestamp", "@timestamp"); to_json();
</Output>
...
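om_http, on the other hand, posts one event per request to whatever path set_http_request_path() produced. A rough Python equivalent of a single such request, with an invented sample event and a hard-coded path standing in for the strftime() expression, looks like this:

import json
import urllib.request

# Illustrative only: one document POSTed to a per-day index path,
# mirroring set_http_request_path() in the output above.
event = {"@timestamp": "2016-01-02T14:04:07+0800", "Message": "test event"}
path = "/nxlog-20160102/in_udp"   # strftime($EventTime, "/nxlog-%Y%m%d/") + $SourceModuleName

req = urllib.request.Request("http://elasticsearch:9200" + path,
                             data=json.dumps(event).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
print(urllib.request.urlopen(req).read().decode())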
In production we ship logs from each machine to nxlog via rsyslog, nxlog then loads them into Elasticsearch, and we look at them in Kibana.
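If you want to exercise the rsyslog-to-nxlog leg without touching the real senders, a hand-rolled syslog datagram is enough. A tiny Python sketch (the collector hostname is a placeholder; port 514 matches the im_udp input shown further down):

import socket

# <182> = facility local6 (22) * 8 + severity info (6), like the F5 sample below.
msg = "<182>Jan  2 14:04:07 www info logger: test message"
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg.encode("utf-8"), ("nxlog-host", 514))  # replace nxlog-host with your collector
sock.close()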
A JSON-encoded F5 log record looks like this:
{
    "MessageSourceAddress": "172.1.2.2",
    "EventReceivedTime": "2016-01-02 14:04:07",
    "SourceModuleName": "in_udp",
    "SourceModuleType": "im_udp",
    "SyslogFacilityValue": 22,
    "SyslogFacility": "LOCAL6",
    "SyslogSeverityValue": 6,
    "SyslogSeverity": "INFO",
    "SeverityValue": 2,
    "Severity": "INFO",
    "Hostname": "www",
    "EventTime": "2016-01-02 14:04:07",
    "Message": "info logger: [ssl_req][02/Jun/2016:14:04:07 +0800] 127.0.0.1 TLSv1 AES256-SHA \"/iControl/iControlPortal.cgi\" 656"
}
The nxlog configuration is as follows:
...
<Extension json>
    Module      xm_json
</Extension>

<Input in_udp>
    Module      im_udp
    Host        0.0.0.0
    Port        514
    Exec        parse_syslog(); to_json();
</Input>

<Output nxlog_out>
    Module      om_file
    File        "/var/log/nxlog/nxlog.out"
</Output>

<Processor buffer_udp>
    Module      pm_buffer
    # 1Mb buffer
    MaxSize     1024
    Type        Mem
    # warn at 512k
    WarnLimit   512
</Processor>
...
<Output elasticsearch>
    Module      om_http
    URL         http://localhost:9200
    ContentType application/json
    Exec        set_http_request_path(strftime($EventTime, "/logstash-%Y.%m.%d/F5-Log"));
    Exec        delete($EventReceivedTime);
    Exec        delete($SourceModuleName);
    Exec        delete($SourceModuleType);
    Exec        delete($SyslogFacilityValue);
    Exec        delete($SyslogFacility);
    Exec        delete($SyslogSeverityValue);
    #Exec       delete($SyslogSeverity);
    Exec        delete($SeverityValue);
    Exec        delete($Severity);
    Exec        $type = "F5-Log";
    Exec        $t = strftime($EventTime, "%Y-%m-%dT%H:%M:%S%z");
    Exec        rename_field("t", "@timestamp");
    Exec        to_json();
</Output>

<Route udp>
    Path        in_udp => buffer_udp => nxlog_out => elasticsearch
</Route>
Note that we already turn the data into JSON in in_udp; it then goes through buffer_udp, a memory buffer. The buffer is there so that when Elasticsearch is down and needs a restart, events are held in the buffer and re-sent once it is back up.
nxlog_out exists so we can see exactly which fields come through, which makes debugging easier. In production, seeing one record is enough, after which nxlog_out can be dropped from the route.
In the elasticsearch output we delete the useless junk fields and add type and @timestamp; without them, Kibana cannot tell the document type or which field is the timestamp.
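To check that documents really arrive with @timestamp and type set, you can pull one back from the daily index. A quick Python sketch; the index name follows the logstash-%Y.%m.%d pattern from the config, and the URL assumes Elasticsearch is local:

import urllib.request
from datetime import datetime

# Fetch one document from today's index and print it, so you can eyeball
# the @timestamp and type fields that Kibana relies on.
index = datetime.now().strftime("logstash-%Y.%m.%d")
url = "http://localhost:9200/" + index + "/_search?size=1"
print(urllib.request.urlopen(url).read().decode())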
So, as shown above, we can feed JSON data straight into Elasticsearch and then visualize it in Grafana.