Requirement: name the files that fluentd writes with a "year-month-day-hour-minute" timestamp and slice them into fixed time intervals; concretely, one log file should be produced every 3 minutes.
The configuration files and sample output are shown below.
time_slice_format

I set this option to %Y%m%dT%H%M, and set flush_interval (the interval at which chunks are flushed) to 3m, expecting one file to be generated every 3 minutes. But that is not what happens:
In fact, one time slice is opened every minute, and each slice sits in the buffer: a file is only written out when buffer_chunk_limit fills up or flush_interval elapses.
So to get one file every 3 minutes, set time_slice_format to %Y%m%dT%H and set flush_interval (the interval at which chunks are flushed) to 3m. The result looks like this:
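The working combination above can be written as a minimal out_file match. This is a sketch, not the full setup used later: the match pattern, tag, and path here are placeholders.

[ini title="Minimal sketch of the working combination"]
<match my.tag.**>
  # Hourly time slices in the file name; placeholder path
  type file
  path /var/log/fluentd/sample
  time_slice_format %Y%m%dT%H
  # Flush buffered chunks to disk every 3 minutes
  flush_interval 3m
</match>
[/ini]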
time_slice_wait
The purpose of this option: What if new logs come after the time corresponding to the current chunk? For example, what happens to an event, timestamped at 2013-01-01 02:59:45 UTC, that comes in at 2013-01-01 03:00:15 UTC? Would it make it into the 2013-01-01 02:00:00-02:59:59 chunk?
This issue is addressed by setting the time_slice_wait parameter. time_slice_wait sets, in seconds, how long fluentd waits to accept "late" events into the chunk past the max time corresponding to that chunk. The default value is 600, which means it waits for 10 minutes before moving on. So, in the current example, as long as the events come in before 2013-01-01 03:10:00, they will make it into the 2013-01-01 02:00:00-02:59:59 chunk.
Alternatively, you can also flush the chunks regularly using flush_interval. Note that flush_interval and time_slice_wait are mutually exclusive. If you set flush_interval, time_slice_wait will be ignored and fluentd will issue a warning.
In short: if a record is generated at 2013-01-01 02:59:45 but arrives at 2013-01-01 03:00:15, then according to time_slice_wait (600 seconds by default) it is still accepted into the 02:00:00-02:59:59 chunk, as long as it arrives before 03:10:00.
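The rule above can be modeled in a few lines of Python. This is not fluentd's actual implementation, only a sketch of the described behavior for hourly slices (time_slice_format %Y%m%dT%H); the function names are my own.

```python
from datetime import datetime, timedelta

TIME_SLICE_FORMAT = "%Y%m%dT%H"           # one chunk per hour
TIME_SLICE_WAIT = timedelta(seconds=600)  # fluentd default: 10 minutes

def chunk_key(event_time: datetime) -> str:
    # The chunk an event belongs to is decided by the event's own
    # timestamp, not by when it arrives.
    return event_time.strftime(TIME_SLICE_FORMAT)

def accepted_into_chunk(event_time: datetime, arrival_time: datetime) -> bool:
    # An hourly chunk closes at the top of the next hour; late events
    # are still accepted until chunk close + time_slice_wait.
    chunk_end = event_time.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
    return arrival_time <= chunk_end + TIME_SLICE_WAIT

event = datetime(2013, 1, 1, 2, 59, 45)
print(chunk_key(event))                                          # 20130101T02
print(accepted_into_chunk(event, datetime(2013, 1, 1, 3, 0, 15)))   # True
print(accepted_into_chunk(event, datetime(2013, 1, 1, 3, 10, 1)))   # False
```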
The configuration files and the resulting log layout:
[ini title="Command run on the client machine"]
root@localhost:temp# docker run --name test01 \
> --log-driver=fluentd \
> --log-opt tag="docker.{{.Name}}" \
> --log-opt fluentd-async-connect=true \
> -d -p 8001:8000 imekaku/simple-web python /work/simple.py
7a57a12a3e48a553fb94b909adb99679ff96a3b4e7e26607125288ef3cf89101
[/ini]
[ini title="Client-side fluentd configuration"]
<source>
  type forward
  port 24224
  bind 0.0.0.0
</source>

<match docker.**>
  type rewrite_tag_filter
  rewriterule1 source stdout system_out.${tag}
  rewriterule2 source stderr system_err.${tag}
</match>

<match system_err.**>
  type copy
  <store>
    type grep
    regexp1 log \s+200\s+
    add_tag_prefix program_200
  </store>
  <store>
    type grep
    regexp1 log \s+304\s+
    add_tag_prefix program_304
  </store>
  <store>
    type grep
    regexp1 log \s+404\s+
    add_tag_prefix program_404
  </store>
</match>

<match **>
  type forward
  <server>
    host 192.168.126.136
    port 24224
  </server>
  flush_interval 5s
</match>
[/ini]
[ini title="Server-side fluentd configuration"]
<source>
  type forward
  port 24224
  bind 0.0.0.0
</source>

<match system_out.docker.*.**>
  type forest
  subtype file
  <template>
    time_slice_format %Y%m%dT%H
    path /home/lee/fluentd-log/${tag_parts[0]}/${tag_parts[2]}/t3
    buffer_chunk_limit 256m
    buffer_queue_limit 128
    flush_interval 3m
    disable_retry_limit false
    retry_limit 17
    retry_wait 1s
  </template>
</match>

<match program_200.system_err.docker.*.**>
  type forest
  subtype file
  <template>
    time_slice_format %Y%m%dT%H
    path /home/lee/fluentd-log/${tag_parts[0]}/${tag_parts[3]}/t3
    buffer_chunk_limit 256m
    buffer_queue_limit 128
    flush_interval 3m
    disable_retry_limit false
    retry_limit 17
    retry_wait 1s
  </template>
</match>

<match program_304.system_err.docker.*.**>
  type forest
  subtype file
  <template>
    time_slice_format %Y%m%dT%H
    path /home/lee/fluentd-log/${tag_parts[0]}/${tag_parts[3]}/t3
    buffer_chunk_limit 256m
    buffer_queue_limit 128
    flush_interval 3m
    disable_retry_limit false
    retry_limit 17
    retry_wait 1s
  </template>
</match>

<match program_404.system_err.docker.*.**>
  type forest
  subtype file
  <template>
    time_slice_format %Y%m%dT%H
    path /home/lee/fluentd-log/${tag_parts[0]}/${tag_parts[3]}/t3
    buffer_chunk_limit 256m
    buffer_queue_limit 128
    flush_interval 3m
    disable_retry_limit false
    retry_limit 17
    retry_wait 1s
  </template>
</match>
[/ini]
[ini title="Output on the log server"]
lee@lee-PC:fluentd-log$ pwd
/home/lee/fluentd-log
lee@lee-PC:fluentd-log$ tree ./
./
├── program_200
│   └── test01
│       └── t3.20160922T21_0.log
├── program_304
│   └── test01
│       ├── t3.20160922T21_0.log
│       └── t3.20160922T22_0.log
└── program_404
    └── test01
        └── t3.20160922T21_0.log

6 directories, 4 files
lee@lee-PC:fluentd-log$
[/ini]