What's a good way to collect logs from Amazon EC2 instances?

2023-09-11 08:15:38 · Author: 造梦



My app is hosted on an Amazon EC2 cluster. Each instance writes events to log files. I need to collect (and data-mine) these logs at the end of each day. What's a recommended way to collect these logs in a central location? I have thought of several options, but I'm not sure which way to go:

- scp them to a central instance using a cron job
- Log all events over TCP/IP to a central instance
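For reference, the first option could look something like the sketch below. It builds yesterday's log path and the destination on a central collector, and prints the scp command it would run (swap the `echo` for the real `scp` in production). All host names, paths, and file naming are assumptions for illustration, not from the original post:

```shell
#!/bin/sh
# Sketch of the "cron + scp" option: each instance copies yesterday's
# log file to a central collector host once per day.
LOG_DIR=/var/log/myapp                 # assumed log directory
COLLECTOR=logs@collector.internal      # assumed central host
DAY=$(date -d yesterday +%F)           # yesterday's date, e.g. 2023-09-10 (GNU date)
SRC="$LOG_DIR/app-$DAY.log"
DST="$COLLECTOR:/data/incoming/$(hostname)-$DAY.log"
echo "scp $SRC $DST"                   # replace echo with scp for the real job
# crontab entry to run this script at 00:05 every day:
# 5 0 * * * /usr/local/bin/ship-logs.sh
```

Prefixing the destination file with `$(hostname)` keeps logs from different instances from overwriting each other on the collector.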

Solution

We use Logstash on each host (deployed via Puppet) to gather and ship log events to a message queue (RabbitMQ, but could be Redis) on a central host. Another Logstash instance retrieves the events, processes them and stuffs the result into ElasticSearch. A Kibana web interface is used to search through this database.
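The pipeline above (shipper on each host → message queue → indexer → ElasticSearch) could be sketched with two minimal Logstash configuration files. These are illustrative assumptions based on 1.1.x-era plugin names (the `amqp` plugin was later renamed `rabbitmq`; exact option names vary by version), not the author's actual configs:

```
# shipper.conf — runs on every EC2 instance (hypothetical example)
input {
  file { path => "/var/log/myapp/*.log" type => "myapp" }
}
output {
  # ship events to the RabbitMQ broker on the central host
  amqp { host => "broker.internal" exchange_type => "fanout" name => "logstash" }
}

# indexer.conf — runs on the central host
input {
  amqp { host => "broker.internal" exchange_type => "fanout" name => "logstash" type => "myapp" }
}
output {
  elasticsearch { host => "localhost" }
}
```

The queue in the middle decouples the shippers from the indexer, so a slow or restarting indexer doesn't lose events from the instances.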

It's very capable, scales easily and is very flexible. Logstash has tons of filters to process events from various inputs, and can output to lots of services, ElasticSearch being one of them. We currently ship about 1.2 million log events per day from our EC2 instances, on light hardware. The latency from an event occurring to it being searchable is about 1 second in our setup.

Here's some documentation on this kind of setup: http://www.logstash.net/docs/1.1.9/tutorials/getting-started-centralized, and a demo of the Kibana search interface with some live data.