From: Sunpoet Po-Chuan Hsieh <sunpoet@FreeBSD.org>
Date: Fri, 21 Jun 2019 23:08:45 +0000 (UTC)
To: ports-committers@freebsd.org, svn-ports-all@freebsd.org, svn-ports-head@freebsd.org
Subject: svn commit: r504818 - in head/math: . py-gym
Message-Id: <201906212308.x5LN8jtm061852@repo.freebsd.org>

Author: sunpoet
Date: Fri Jun 21 23:08:45 2019
New Revision: 504818
URL: https://svnweb.freebsd.org/changeset/ports/504818

Log:
  Add py-gym 0.12.5

  OpenAI Gym is a toolkit for developing and comparing reinforcement learning
  algorithms. This is the gym open-source library, which gives you access to a
  standardized set of environments.

  gym makes no assumptions about the structure of your agent, and is compatible
  with any numerical computation library, such as TensorFlow or Theano. You can
  use it from Python code, and soon from other languages.

  There are two basic concepts in reinforcement learning: the environment
  (namely, the outside world) and the agent (namely, the algorithm you are
  writing). The agent sends actions to the environment, and the environment
  replies with observations and rewards (that is, a score).

  The core gym interface is Env, which is the unified environment interface.
  There is no interface for agents; that part is left to you. The following are
  the Env methods you should know:
  - reset(self): Reset the environment's state. Returns observation.
  - step(self, action): Step the environment by one timestep. Returns
    observation, reward, done, info.
  - render(self, mode='human'): Render one frame of the environment. The default
    mode will do something human friendly, such as pop up a window.

  WWW: https://gym.openai.com/
  WWW: https://github.com/openai/gym

Added:
  head/math/py-gym/
  head/math/py-gym/Makefile   (contents, props changed)
  head/math/py-gym/distinfo   (contents, props changed)
  head/math/py-gym/pkg-descr   (contents, props changed)
Modified:
  head/math/Makefile

Modified: head/math/Makefile
==============================================================================
--- head/math/Makefile	Fri Jun 21 23:08:38 2019	(r504817)
+++ head/math/Makefile	Fri Jun 21 23:08:45 2019	(r504818)
@@ -707,6 +707,7 @@
     SUBDIR += py-gnuplot
     SUBDIR += py-grandalf
     SUBDIR += py-graphillion
+    SUBDIR += py-gym
     SUBDIR += py-igakit
     SUBDIR += py-igraph
     SUBDIR += py-intspan

Added: head/math/py-gym/Makefile
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/math/py-gym/Makefile	Fri Jun 21 23:08:45 2019	(r504818)
@@ -0,0 +1,27 @@
+# Created by: Po-Chuan Hsieh
+# $FreeBSD$
+
+PORTNAME=	gym
+PORTVERSION=	0.12.5
+CATEGORIES=	math python
+MASTER_SITES=	CHEESESHOP
+PKGNAMEPREFIX=	${PYTHON_PKGNAMEPREFIX}
+
+MAINTAINER=	sunpoet@FreeBSD.org
+COMMENT=	OpenAI toolkit for developing and comparing your reinforcement learning agents
+
+LICENSE=	MIT
+
+RUN_DEPENDS=	${PYTHON_PKGNAMEPREFIX}numpy>=1.10.4:math/py-numpy@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}pyglet>=0:graphics/py-pyglet@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}scipy>=0:science/py-scipy@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}six>=0:devel/py-six@${PY_FLAVOR}
+TEST_DEPENDS=	${PYTHON_PKGNAMEPREFIX}mock>=0:devel/py-mock@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}pytest>=0:devel/py-pytest@${PY_FLAVOR}
+
+USES=		python
+USE_PYTHON=	autoplist concurrent distutils
+
+NO_ARCH=	yes
+
+.include <bsd.port.mk>

Added: head/math/py-gym/distinfo
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/math/py-gym/distinfo	Fri Jun 21 23:08:45 2019	(r504818)
@@ -0,0 +1,3 @@
+TIMESTAMP = 1561148961
+SHA256 (gym-0.12.5.tar.gz) = 027422f59b662748eae3420b804e35bbf953f62d40cd96d2de9f842c08de822e
+SIZE (gym-0.12.5.tar.gz) = 1544308

Added: head/math/py-gym/pkg-descr
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/math/py-gym/pkg-descr	Fri Jun 21 23:08:45 2019	(r504818)
@@ -0,0 +1,24 @@
+OpenAI Gym is a toolkit for developing and comparing reinforcement learning
+algorithms. This is the gym open-source library, which gives you access to a
+standardized set of environments.
+
+gym makes no assumptions about the structure of your agent, and is compatible
+with any numerical computation library, such as TensorFlow or Theano. You can
+use it from Python code, and soon from other languages.
+
+There are two basic concepts in reinforcement learning: the environment (namely,
+the outside world) and the agent (namely, the algorithm you are writing). The
+agent sends actions to the environment, and the environment replies with
+observations and rewards (that is, a score).
+
+The core gym interface is Env, which is the unified environment interface. There
+is no interface for agents; that part is left to you. The following are the Env
+methods you should know:
+- reset(self): Reset the environment's state. Returns observation.
+- step(self, action): Step the environment by one timestep. Returns observation,
+  reward, done, info.
+- render(self, mode='human'): Render one frame of the environment. The default
+  mode will do something human friendly, such as pop up a window.
+
+WWW: https://gym.openai.com/
+WWW: https://github.com/openai/gym
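
For reference, a minimal agent-environment loop against the Env interface
described in pkg-descr could look like the sketch below. This is not part of
the commit; it assumes the gym 0.12 API (reset() returning an observation,
step() returning observation, reward, done, info) and the CartPole-v1
environment bundled with gym. render() needs the py-pyglet run dependency
listed in the Makefile.

    # Minimal sketch of the reset()/step()/render() loop (gym 0.12.x API,
    # bundled CartPole-v1 environment assumed).
    import gym

    env = gym.make('CartPole-v1')
    observation = env.reset()               # reset(): returns the first observation

    for _ in range(1000):
        env.render()                        # render(): pops up a window by default
        action = env.action_space.sample()  # stand-in "agent": pick a random action
        # step(): advance one timestep; returns observation, reward, done, info
        observation, reward, done, info = env.step(action)
        if done:                            # episode over; start a new one
            observation = env.reset()

    env.close()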