From owner-svn-ports-all@FreeBSD.ORG Wed Jul 17 23:20:23 2013
Delivered-To: svn-ports-all@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115])
	by hub.freebsd.org (Postfix) with ESMTP id CE9177F5;
	Wed, 17 Jul 2013 23:20:23 +0000 (UTC)
	(envelope-from madpilot@FreeBSD.org)
Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0])
	by mx1.freebsd.org (Postfix) with ESMTP id AF5C9293;
	Wed, 17 Jul 2013 23:20:23 +0000 (UTC)
Received: from svn.freebsd.org ([127.0.1.70])
	by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r6HNKNDt074331;
	Wed, 17 Jul 2013 23:20:23 GMT
	(envelope-from madpilot@svn.freebsd.org)
Received: (from madpilot@localhost)
	by svn.freebsd.org (8.14.7/8.14.5/Submit) id r6HNKMX1074320;
	Wed, 17 Jul 2013 23:20:22 GMT
	(envelope-from madpilot@svn.freebsd.org)
Message-Id: <201307172320.r6HNKMX1074320@svn.freebsd.org>
From: Guido Falsi <madpilot@FreeBSD.org>
Date: Wed, 17 Jul 2013 23:20:22 +0000 (UTC)
To: ports-committers@freebsd.org, svn-ports-all@freebsd.org,
	svn-ports-head@freebsd.org
Subject: svn commit: r323192 - in head/sysutils: . logstash logstash/files
X-SVN-Group: ports-head
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: list
List-Id: SVN commit messages for the ports tree

Author: madpilot
Date: Wed Jul 17 23:20:21 2013
New Revision: 323192
URL: http://svnweb.freebsd.org/changeset/ports/323192

Log:
  Logstash is a tool for managing events and logs. You can use it to
  collect logs, parse them, and store them for later use (like, for
  searching). Speaking of searching, logstash comes with a web interface
  for searching and drilling into all of your logs.

  WWW: http://logstash.net/

  PR:		ports/168266
  Submitted by:	Daniel Solsona, Regis A. Despres

Added:
  head/sysutils/logstash/
  head/sysutils/logstash/Makefile   (contents, props changed)
  head/sysutils/logstash/distinfo   (contents, props changed)
  head/sysutils/logstash/files/
  head/sysutils/logstash/files/elasticsearch.yml.sample   (contents, props changed)
  head/sysutils/logstash/files/logstash.conf.sample   (contents, props changed)
  head/sysutils/logstash/files/logstash.in   (contents, props changed)
  head/sysutils/logstash/pkg-descr   (contents, props changed)
  head/sysutils/logstash/pkg-plist   (contents, props changed)
Modified:
  head/sysutils/Makefile

Modified: head/sysutils/Makefile
==============================================================================
--- head/sysutils/Makefile	Wed Jul 17 22:12:15 2013	(r323191)
+++ head/sysutils/Makefile	Wed Jul 17 23:20:21 2013	(r323192)
@@ -498,6 +498,7 @@
     SUBDIR += logmon
     SUBDIR += logrotate
     SUBDIR += logstalgia
+    SUBDIR += logstash
     SUBDIR += logtool
     SUBDIR += logwatch
     SUBDIR += lookat

Added: head/sysutils/logstash/Makefile
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/logstash/Makefile	Wed Jul 17 23:20:21 2013	(r323192)
@@ -0,0 +1,50 @@
+# Created by: Daniel Solsona, Guido Falsi
+# $FreeBSD$
+
+PORTNAME=	logstash
+PORTVERSION=	1.1.13
+CATEGORIES=	sysutils java
+MASTER_SITES=	https://logstash.objects.dreamhost.com/release/ \
+		http://semicomplete.com/files/logstash/
+DISTNAME=	${PORTNAME}-${PORTVERSION}-flatjar
+EXTRACT_SUFX=	.jar
+EXTRACT_ONLY=
+
+MAINTAINER=	regis.despres@gmail.com
+COMMENT=	Tool for managing events and logs
+
+USE_JAVA=	yes
+JAVA_VERSION=	1.5+
+
+NO_BUILD=	yes
+
+USE_RC_SUBR=	logstash
+
+LOGSTASH_HOME?=		${PREFIX}/${PORTNAME}
+LOGSTASH_HOME_REL?=	${LOGSTASH_HOME:S,^${PREFIX}/,,}
+LOGSTASH_JAR?=		${DISTNAME}${EXTRACT_SUFX}
+LOGSTASH_RUN?=		/var/run/${PORTNAME}
+LOGSTASH_DATA_DIR?=	/var/db/${PORTNAME}
+
+SUB_LIST=	LOGSTASH_DATA_DIR=${LOGSTASH_DATA_DIR} JAVA_HOME=${JAVA_HOME} \
+		LOGSTASH_HOME=${LOGSTASH_HOME} LOGSTASH_JAR=${LOGSTASH_JAR}
+PLIST_SUB+=	LOGSTASH_HOME=${LOGSTASH_HOME_REL} LOGSTASH_JAR=${LOGSTASH_JAR} \
+		LOGSTASH_RUN=${LOGSTASH_RUN} \
+		LOGSTASH_DATA_DIR=${LOGSTASH_DATA_DIR}
+
+do-install:
+	${MKDIR} ${LOGSTASH_RUN}
+	${MKDIR} ${ETCDIR}
+	${MKDIR} ${LOGSTASH_HOME}
+	${MKDIR} ${LOGSTASH_DATA_DIR}
+	${INSTALL_DATA} ${DISTDIR}/${DIST_SUBDIR}/${LOGSTASH_JAR} ${LOGSTASH_HOME}
+	${INSTALL_DATA} ${FILESDIR}/logstash.conf.sample ${ETCDIR}
+	@if [ ! -f ${ETCDIR}/logstash.conf ]; then \
+		${CP} -p ${ETCDIR}/logstash.conf.sample ${ETCDIR}/logstash.conf ; \
+	fi
+	${INSTALL_DATA} ${FILESDIR}/elasticsearch.yml.sample ${ETCDIR}
+	@if [ ! -f ${ETCDIR}/elasticsearch.yml ]; then \
+		${CP} -p ${ETCDIR}/elasticsearch.yml.sample ${ETCDIR}/elasticsearch.yml ; \
+	fi
+
+.include <bsd.port.mk>

Added: head/sysutils/logstash/distinfo
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/logstash/distinfo	Wed Jul 17 23:20:21 2013	(r323192)
@@ -0,0 +1,2 @@
+SHA256 (logstash-1.1.13-flatjar.jar) = 5ba0639ff4da064c2a4f6a04bd7006b1997a6573859d3691e210b6855e1e47f1
+SIZE (logstash-1.1.13-flatjar.jar) = 69485313

Added: head/sysutils/logstash/files/elasticsearch.yml.sample
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/logstash/files/elasticsearch.yml.sample	Wed Jul 17 23:20:21 2013	(r323192)
@@ -0,0 +1,337 @@
+##################### ElasticSearch Configuration Example #####################
+
+# This file contains an overview of various configuration settings,
+# targeted at operations staff. Application developers should
+# consult the guide at .
+#
+# The installation procedure is covered at
+# .
+#
+# ElasticSearch comes with reasonable defaults for most settings,
+# so you can try it out without bothering with configuration.
+#
+# Most of the time, these defaults are just fine for running a production
+# cluster. If you're fine-tuning your cluster, or wondering about the
+# effect of certain configuration options, please _do ask_ on the
+# mailing list or IRC channel [http://elasticsearch.org/community].

+# Any element in the configuration can be replaced with environment variables
+# by placing them in ${...} notation. For example:
+#
+# node.rack: ${RACK_ENV_VAR}

+# See
+# for information on supported formats and syntax for the configuration file.


+################################### Cluster ###################################

+# Cluster name identifies your cluster for auto-discovery. If you're running
+# multiple clusters on the same network, make sure you're using unique names.
+#
+# cluster.name: elasticsearch


+#################################### Node #####################################

+# Node names are generated dynamically on startup, so you're relieved
+# from configuring them manually. You can tie this node to a specific name:
+#
+# node.name: "Franz Kafka"

+# Every node can be configured to allow or deny being eligible as the master,
+# and to allow or deny storing data.
+#
+# Allow this node to be eligible as a master node (enabled by default):
+#
+# node.master: true
+#
+# Allow this node to store data (enabled by default):
+#
+# node.data: true

+# You can exploit these settings to design advanced cluster topologies.
+#
+# 1. You want this node to never become a master node, only to hold data.
+#    This will be the "workhorse" of your cluster.
+#
+# node.master: false
+# node.data: true
+#
+# 2. You want this node to only serve as a master: to not store any data and
+#    to have free resources. This will be the "coordinator" of your cluster.
+#
+# node.master: true
+# node.data: false
+#
+# 3. You want this node to be neither master nor data node, but
+#    to act as a "search load balancer" (fetching data from nodes,
+#    aggregating results, etc.)
+#
+# node.master: false
+# node.data: false

+# Use the Cluster Health API [http://localhost:9200/_cluster/health], the
+# Node Info API [http://localhost:9200/_cluster/nodes] or GUI tools
+# such as and
+# to inspect the cluster state.

+# A node can have generic attributes associated with it, which can later be used
+# for customized shard allocation filtering, or allocation awareness. An attribute
+# is a simple key-value pair, similar to node.key: value. Here is an example:
+#
+# node.rack: rack314


+#################################### Index ####################################

+# You can set a number of options (such as shard/replica options, mapping
+# or analyzer definitions, translog settings, ...) for indices globally,
+# in this file.
+#
+# Note that it makes more sense to configure index settings specifically for
+# a certain index, either when creating it or by using the index templates API.
+#
+# See and
+#
+# for more information.

+# Set the number of shards (splits) of an index (5 by default):
+#
+# index.number_of_shards: 5

+# Set the number of replicas (additional copies) of an index (1 by default):
+#
+# index.number_of_replicas: 1

+# Note that for development on a local machine, with small indices, it usually
+# makes sense to "disable" the distributed features:
+#
+# index.number_of_shards: 1
+# index.number_of_replicas: 0

+# These settings directly affect the performance of index and search operations
+# in your cluster. Assuming you have enough machines to hold shards and
+# replicas, the rule of thumb is:
+#
+# 1. Having more *shards* enhances _indexing_ performance and allows a big
+#    index to be _distributed_ across machines.
+# 2. Having more *replicas* enhances _search_ performance and improves
+#    cluster _availability_.
+#
+# The "number_of_shards" is a one-time setting for an index.
+#
+# The "number_of_replicas" can be increased or decreased at any time
+# by using the Index Update Settings API.
+#
+# ElasticSearch takes care of load balancing, relocating, gathering the
+# results from nodes, etc. Experiment with different settings to fine-tune
+# your setup.

+# Use the Index Status API () to inspect
+# the index status.


+#################################### Paths ####################################

+# Path to directory containing configuration (this file and logging.yml):
+#
+# path.conf: /path/to/conf

+# Path to directory where to store index data allocated for this node.
+#
+# path.data: /path/to/data
+#
+# Can optionally include more than one location, causing data to be striped across
+# the locations on a file level, favouring locations with the most free
+# space on creation. For example:
+#
+# path.data: /path/to/data1,/path/to/data2

+# Path to temporary files:
+#
+# path.work: /path/to/work

+# Path to log files:
+#
+# path.logs: /path/to/logs

+# Path to where plugins are installed:
+#
+# path.plugins: /path/to/plugins


+################################### Memory ####################################

+# ElasticSearch performs poorly when the JVM starts swapping: you should
+# ensure that it _never_ swaps.
+#
+# Set this property to true to lock the memory:
+#
+# bootstrap.mlockall: true

+# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set
+# to the same value, and that the machine has enough memory to allocate
+# for ElasticSearch, leaving enough memory for the operating system itself.
+#
+# You should also make sure that the ElasticSearch process is allowed to lock
+# the memory, e.g. by using `ulimit -l unlimited`.


+############################## Network And HTTP ###############################

+# ElasticSearch, by default, binds itself to the 0.0.0.0 address, and listens
+# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
+# communication. (The range means that if a port is busy, it will automatically
+# try the next one.)
+
+# Set the bind address specifically (IPv4 or IPv6):
+#
+# network.bind_host: 192.168.0.1

+# Set the address other nodes will use to communicate with this node. If not
+# set, it is automatically derived. It must point to an actual IP address.
+#
+# network.publish_host: 192.168.0.1

+# Set both 'bind_host' and 'publish_host':
+#
+# network.host: 192.168.0.1

+# Set a custom port for node-to-node communication (9300 by default):
+#
+# transport.port: 9300

+# Enable compression for all communication between nodes (disabled by default):
+#
+# transport.tcp.compress: true

+# Set a custom port to listen for HTTP traffic:
+#
+# http.port: 9200

+# Set a custom allowed content length:
+#
+# http.max_content_length: 100mb

+# Disable HTTP completely:
+#
+# http.enabled: false


+################################### Gateway ###################################

+# The gateway allows for persisting the cluster state between full cluster
+# restarts. Every change to the state (such as adding an index) will be stored
+# in the gateway, and when the cluster starts up for the first time,
+# it will read its state from the gateway.

+# There are several types of gateway implementations. For more information,
+# see .

+# The default gateway type is the "local" gateway (recommended):
+#
+# gateway.type: local

+# The settings below control how and when to start the initial recovery process
+# on a full cluster restart (so as to reuse as much local data as possible).

+# Allow the recovery process to start after N nodes in a cluster are up:
+#
+# gateway.recover_after_nodes: 1

+# Set the timeout to initiate the recovery process, once the N nodes
+# from the previous setting are up (accepts a time value):
+#
+# gateway.recover_after_time: 5m

+# Set how many nodes are expected in this cluster. Once these N nodes
+# are up, begin the recovery process immediately:
+#
+# gateway.expected_nodes: 2


+############################# Recovery Throttling #############################

+# These settings let you control the process of shard allocation between
+# nodes during initial recovery, replica allocation, rebalancing,
+# or when adding and removing nodes.

+# Set the number of concurrent recoveries happening on a node:
+#
+# 1. During the initial recovery:
+#
+# cluster.routing.allocation.node_initial_primaries_recoveries: 4
+#
+# 2. While adding/removing nodes, rebalancing, etc.:
+#
+# cluster.routing.allocation.node_concurrent_recoveries: 2

+# Set to throttle throughput when recovering (e.g. 100mb; unlimited by default):
+#
+# indices.recovery.max_size_per_sec: 0

+# Set to limit the number of open concurrent streams when
+# recovering a shard from a peer:
+#
+# indices.recovery.concurrent_streams: 5


+################################## Discovery ##################################

+# The discovery infrastructure ensures that nodes can be found within a cluster
+# and that a master node is elected. Multicast discovery is the default.

+# Set to ensure a node sees N other master-eligible nodes to be considered
+# operational within the cluster. Set this option to a higher value (2-4)
+# for large clusters:
+#
+# discovery.zen.minimum_master_nodes: 1

+# Set the time to wait for ping responses from other nodes when discovering.
+# Set this option to a higher value on a slow or congested network
+# to minimize discovery failures:
+#
+# discovery.zen.ping.timeout: 3s

+# See
+# for more information.

+# Unicast discovery lets you explicitly control which nodes will be used
+# to discover the cluster. It can be used when multicast is not present,
+# or to restrict the cluster communication-wise.
+#
+# 1. Disable multicast discovery (enabled by default):
+#
+# discovery.zen.ping.multicast.enabled: false
+#
+# 2. Configure an initial list of master nodes in the cluster
+#    to perform discovery when new nodes (master or data) are started:
+#
+# discovery.zen.ping.unicast.hosts: ["host1", "host2:port", "host3[portX-portY]"]

+# EC2 discovery lets you use the AWS EC2 API to perform discovery.
+#
+# You have to install the cloud-aws plugin to enable EC2 discovery.
+#
+# See
+# for more information.
+#
+# See
+# for a step-by-step tutorial.


+################################## Slow Log ##################################

+# Shard-level query and fetch threshold logging.

+#index.search.slowlog.level: TRACE
+#index.search.slowlog.threshold.query.warn: 10s
+#index.search.slowlog.threshold.query.info: 5s
+#index.search.slowlog.threshold.query.debug: 2s
+#index.search.slowlog.threshold.query.trace: 500ms

+#index.search.slowlog.threshold.fetch.warn: 1s
+#index.search.slowlog.threshold.fetch.info: 800ms
+#index.search.slowlog.threshold.fetch.debug: 500ms
+#index.search.slowlog.threshold.fetch.trace: 200ms

Added: head/sysutils/logstash/files/logstash.conf.sample
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/logstash/files/logstash.conf.sample	Wed Jul 17 23:20:21 2013	(r323192)
@@ -0,0 +1,38 @@
+input {
+  file {
+    type => "system logs"
+
+    # # Wildcards work here :)
+    # path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
+    path => [ "/var/log/messages" ]
+  }
+
+  #file {
+  #  type => "Hudson-access"
+  #  path => "/var/log/www/hudson.ish.com.au-access_log"
+  #}
+
+  #file {
+  #  type => "Syslog"
+  #  path => "/var/log/messages"
+  #}
+}
+
+output {
+  # Emit events to stdout for easy debugging of what is going through
+  # logstash.
+  #stdout { }
+
+  # This will use elasticsearch to store your logs.
+  # The 'embedded' option will cause logstash to run the elasticsearch
+  # server in the same process, so you don't have to worry about
+  # how to download, configure, or run elasticsearch!
+  elasticsearch {
+    embedded => true
+    #embedded_http_port => 9200
+    #cluster => elasticsearch
+    #host => host
+    #port => port
+
+  }
+}

Added: head/sysutils/logstash/files/logstash.in
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/logstash/files/logstash.in	Wed Jul 17 23:20:21 2013	(r323192)
@@ -0,0 +1,81 @@
+#!/bin/sh

+# $FreeBSD$
+#
+# PROVIDE: logstash
+# REQUIRE: LOGIN
+# KEYWORD: shutdown
+#
+#
+# Configuration settings for logstash in /etc/rc.conf:
+#
+# logstash_enable (bool):
+#   Set to "NO" by default.
+#   Set it to "YES" to enable logstash.
+#
+# logstash_mode:
+#   Set to "standalone" by default.
+#   Valid options:
+#     "standalone": agent, web & elasticsearch
+#     "web": starts logstash as a web UI
+#     "agent": just works as a log shipper
+#
+# logstash_log (bool):
+#   Set to "NO" by default.
+#   Set it to "YES" to enable logstash logging to a file.
+#   Output goes to /var/log/logstash.log by default.
+#

+. /etc/rc.subr

+name=logstash
+rcvar=logstash_enable

+load_rc_config ${name}

+logdir="/var/log"

+: ${logstash_enable="NO"}
+: ${logstash_home="%%LOGSTASH_HOME%%"}
+: ${logstash_config="%%PREFIX%%/etc/${name}/${name}.conf"}
+: ${logstash_jar="%%LOGSTASH_HOME%%/%%LOGSTASH_JAR%%"}
+: ${logstash_java_home="%%JAVA_HOME%%"}
+: ${logstash_log="NO"}
+: ${logstash_mode="standalone"}
+: ${logstash_port="9292"}
+: ${logstash_elastic_backend=""}
+: ${logstash_log_file="${logdir}/${name}.log"}
+: ${logstash_elastic_datadir="%%LOGSTASH_DATA_DIR%%"}

+piddir=/var/run/${name}
+pidfile=${piddir}/${name}.pid

+if [ ! -d $piddir ]; then
+	mkdir -p $piddir
+fi

+command="/usr/sbin/daemon"

+java_cmd="${logstash_java_home}/bin/java"
+procname="${java_cmd}"

+logstash_chdir=${logstash_home}
+logstash_log_options=""
+logstash_elastic_options=""

+if checkyesno logstash_log; then
+	logstash_log_options=" --log ${logstash_log_file}"
+fi

+if [ ${logstash_mode} = "standalone" ]; then
+	logstash_args="agent -f ${logstash_config} -- web --port ${logstash_port} --backend elasticsearch:///?local ${logstash_log_options}"
+	logstash_elastic_options="-Des.path.data=${logstash_elastic_datadir}"
+elif [ ${logstash_mode} = "agent" ]; then
+	logstash_args="agent -f ${logstash_config} ${logstash_log_options}"
+elif [ ${logstash_mode} = "web" ]; then
+	logstash_args="web --port ${logstash_port} --backend elasticsearch://${logstash_elastic_backend}/ ${logstash_log_options}"
+fi

+command_args="-f -p ${pidfile} ${java_cmd} ${logstash_elastic_options} -jar ${logstash_jar} ${logstash_args}"
+required_files="${java_cmd} ${logstash_config}"

+run_rc_command "$1"

Added: head/sysutils/logstash/pkg-descr
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/logstash/pkg-descr	Wed Jul 17 23:20:21 2013	(r323192)
@@ -0,0 +1,6 @@
+Logstash is a tool for managing events and logs. You can use it to
+collect logs, parse them, and store them for later use (like, for
+searching). Speaking of searching, logstash comes with a web interface
+for searching and drilling into all of your logs.
+
+WWW: http://logstash.net/

Added: head/sysutils/logstash/pkg-plist
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sysutils/logstash/pkg-plist	Wed Jul 17 23:20:21 2013	(r323192)
@@ -0,0 +1,13 @@
+%%LOGSTASH_HOME%%/%%LOGSTASH_JAR%%
+@exec mkdir -p %%LOGSTASH_RUN%%
+@exec mkdir -p %%LOGSTASH_DATA_DIR%%
+@unexec if cmp -s %D/%%ETCDIR%%/logstash.conf.sample %D/%%ETCDIR%%/logstash.conf; then rm -f %D/%%ETCDIR%%/logstash.conf; fi
+%%ETCDIR%%/logstash.conf.sample
+@exec if [ ! -f %D/%%ETCDIR%%/logstash.conf ] ; then cp -p %D/%F %B/logstash.conf; fi
+@unexec if cmp -s %D/%%ETCDIR%%/elasticsearch.yml.sample %D/%%ETCDIR%%/elasticsearch.yml; then rm -f %D/%%ETCDIR%%/elasticsearch.yml; fi
+%%ETCDIR%%/elasticsearch.yml.sample
+@exec if [ ! -f %D/%%ETCDIR%%/elasticsearch.yml ] ; then cp -p %D/%F %B/elasticsearch.yml; fi
+@dirrmtry %%LOGSTASH_DATA_DIR%%
+@dirrmtry %%LOGSTASH_HOME%%
+@dirrmtry %%ETCDIR%%
+@dirrmtry %%LOGSTASH_RUN%%
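For reviewers following the rc script above: the core of logstash.in is the mapping from logstash_mode to the argument string handed to the jar via daemon(8). The following is a minimal POSIX sh sketch of that dispatch, runnable on its own for inspection; the config path and port are the port's assumed defaults (%%PREFIX%% expanded to /usr/local), not values read from a real install, and build_args is a hypothetical helper, not a function in the script itself.

```shell
#!/bin/sh
# Sketch of the logstash rc script's mode dispatch (illustrative only).
# Given a mode, print the argument string that would follow "-jar logstash.jar".
build_args() {
	mode=$1
	config=/usr/local/etc/logstash/logstash.conf	# assumed default location
	port=9292					# script's default web port

	case $mode in
	standalone)
		# agent + web UI + embedded elasticsearch in one JVM
		echo "agent -f ${config} -- web --port ${port} --backend elasticsearch:///?local" ;;
	agent)
		# log shipper only
		echo "agent -f ${config}" ;;
	web)
		# web UI only, pointing at an external elasticsearch backend
		echo "web --port ${port} --backend elasticsearch:///" ;;
	*)
		echo "unknown mode: ${mode}" >&2
		return 1 ;;
	esac
}

build_args agent	# prints: agent -f /usr/local/etc/logstash/logstash.conf
```

In use, the mode is chosen in /etc/rc.conf (logstash_enable="YES", optionally logstash_mode="agent"), and only the standalone mode adds the -Des.path.data JVM option, since that is the only mode running the embedded elasticsearch.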