From owner-svn-ports-all@FreeBSD.ORG Sun Mar 10 19:04:02 2013 Return-Path: Delivered-To: svn-ports-all@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 989C0F2C; Sun, 10 Mar 2013 19:04:02 +0000 (UTC) (envelope-from rea@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) by mx1.freebsd.org (Postfix) with ESMTP id 89A06B14; Sun, 10 Mar 2013 19:04:02 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.6/8.14.6) with ESMTP id r2AJ428I011675; Sun, 10 Mar 2013 19:04:02 GMT (envelope-from rea@svn.freebsd.org) Received: (from rea@localhost) by svn.freebsd.org (8.14.6/8.14.5/Submit) id r2AJ41CI011657; Sun, 10 Mar 2013 19:04:01 GMT (envelope-from rea@svn.freebsd.org) Message-Id: <201303101904.r2AJ41CI011657@svn.freebsd.org> From: Eygene Ryabinkin Date: Sun, 10 Mar 2013 19:04:01 +0000 (UTC) To: ports-committers@freebsd.org, svn-ports-all@freebsd.org, svn-ports-head@freebsd.org Subject: svn commit: r313838 - in head: lang/perl5.12 lang/perl5.12/files lang/perl5.14 lang/perl5.14/files lang/perl5.16 lang/perl5.16/files security/vuxml X-SVN-Group: ports-head MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-ports-all@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for the ports tree List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 10 Mar 2013 19:04:02 -0000 Author: rea Date: Sun Mar 10 19:04:00 2013 New Revision: 313838 URL: http://svnweb.freebsd.org/changeset/ports/313838 Log: Perl 5.x: fix CVE-2013-1667 Feature safe: wholeheartedly hope so Added: head/lang/perl5.12/files/patch-cve-2013-1667 (contents, props changed) head/lang/perl5.14/files/patch-cve-2013-1667 (contents, props changed) head/lang/perl5.16/files/patch-cve-2013-1667 (contents, props changed) Modified: head/lang/perl5.12/Makefile head/lang/perl5.14/Makefile head/lang/perl5.16/Makefile head/security/vuxml/vuln.xml Modified: head/lang/perl5.12/Makefile ============================================================================== --- head/lang/perl5.12/Makefile Sun Mar 10 18:40:26 2013 (r313837) +++ head/lang/perl5.12/Makefile Sun Mar 10 19:04:00 2013 (r313838) @@ -7,7 +7,7 @@ PORTNAME= perl PORTVERSION= ${PERL_VERSION} -PORTREVISION= 4 +PORTREVISION= 5 CATEGORIES= lang devel perl5 MASTER_SITES= CPAN \ ${MASTER_SITE_LOCAL:S/$/:local/} \ Added: head/lang/perl5.12/files/patch-cve-2013-1667 ============================================================================== --- /dev/null 00:00:00 1970 (empty, because file is newly added) +++ head/lang/perl5.12/files/patch-cve-2013-1667 Sun Mar 10 19:04:00 2013 (r313838) @@ -0,0 +1,164 @@ +From f2a571dae7d70f7e3b59022834d8003ecd2df884 Mon Sep 17 00:00:00 2001 +From: Yves Orton +Date: Tue, 12 Feb 2013 10:53:05 +0100 +Subject: [PATCH] Prevent premature hsplit() calls, and only trigger REHASH + after hsplit() + +Triggering a hsplit due to long chain length allows an attacker +to create a carefully chosen set of keys which can cause the hash +to use 2 * (2**32) * sizeof(void *) bytes ram. AKA a DOS via memory +exhaustion. Doing so also takes non trivial time. + +Eliminating this check, and only inspecting chain length after a +normal hsplit() (triggered when keys>buckets) prevents the attack +entirely, and makes such attacks relatively benign. 
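For scale, and assuming sizeof(void *) == 8 on a 64-bit build (an illustrative figure, not part of the upstream patch text), that worst case works out to 2 * 2**32 * 8 bytes, i.e. 64 GiB:

    perl -e 'printf "%.0f GiB\n", 2 * 2**32 * 8 / 2**30'    # prints: 64 GiB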
+ +(cherry picked from commit f1220d61455253b170e81427c9d0357831ca0fac) +--- + ext/Hash-Util-FieldHash/t/10_hash.t | 18 ++++++++++++++++-- + hv.c | 26 ++++++-------------------- + t/op/hash.t | 20 +++++++++++++++++--- + 3 files changed, 39 insertions(+), 25 deletions(-) + +diff --git a/ext/Hash-Util-FieldHash/t/10_hash.t b/ext/Hash-Util-FieldHash/t/10_hash.t +index 2cfb4e8..d58f053 100644 +--- ext/Hash-Util-FieldHash/t/10_hash.t ++++ ext/Hash-Util-FieldHash/t/10_hash.t +@@ -38,15 +38,29 @@ use constant START => "a"; + + # some initial hash data + fieldhash my %h2; +-%h2 = map {$_ => 1} 'a'..'cc'; ++my $counter= "a"; ++$h2{$counter++}++ while $counter ne 'cd'; + + ok (!Internals::HvREHASH(%h2), + "starting with pre-populated non-pathological hash (rehash flag if off)"); + + my @keys = get_keys(\%h2); ++my $buckets= buckets(\%h2); + $h2{$_}++ for @keys; ++$h2{$counter++}++ while buckets(\%h2) == $buckets; # force a split + ok (Internals::HvREHASH(%h2), +- scalar(@keys) . " colliding into the same bucket keys are triggering rehash"); ++ scalar(@keys) . " colliding into the same bucket keys are triggering rehash after split"); ++ ++# returns the number of buckets in a hash ++sub buckets { ++ my $hr = shift; ++ my $keys_buckets= scalar(%$hr); ++ if ($keys_buckets=~m!/([0-9]+)\z!) { ++ return 0+$1; ++ } else { ++ return 8; ++ } ++} + + sub get_keys { + my $hr = shift; +diff --git a/hv.c b/hv.c +index 89c6456..8659678 100644 +--- hv.c ++++ hv.c +@@ -35,7 +35,8 @@ holds the key and hash value. + #define PERL_HASH_INTERNAL_ACCESS + #include "perl.h" + +-#define HV_MAX_LENGTH_BEFORE_SPLIT 14 ++#define HV_MAX_LENGTH_BEFORE_REHASH 14 ++#define SHOULD_DO_HSPLIT(xhv) ((xhv)->xhv_keys > (xhv)->xhv_max) /* HvTOTALKEYS(hv) > HvMAX(hv) */ + + static const char S_strtab_error[] + = "Cannot modify shared string table in hv_%s"; +@@ -818,23 +819,8 @@ Perl_hv_common(pTHX_ HV *hv, SV *keysv, const char *key, STRLEN klen, + xhv->xhv_keys++; /* HvTOTALKEYS(hv)++ */ + if (!counter) { /* initial entry? */ + xhv->xhv_fill++; /* HvFILL(hv)++ */ +- } else if (xhv->xhv_keys > (IV)xhv->xhv_max) { ++ } else if ( SHOULD_DO_HSPLIT(xhv) ) { + hsplit(hv); +- } else if(!HvREHASH(hv)) { +- U32 n_links = 1; +- +- while ((counter = HeNEXT(counter))) +- n_links++; +- +- if (n_links > HV_MAX_LENGTH_BEFORE_SPLIT) { +- /* Use only the old HvKEYS(hv) > HvMAX(hv) condition to limit +- bucket splits on a rehashed hash, as we're not going to +- split it again, and if someone is lucky (evil) enough to +- get all the keys in one list they could exhaust our memory +- as we repeatedly double the number of buckets on every +- entry. Linear search feels a less worse thing to do. */ +- hsplit(hv); +- } + } + } + +@@ -1180,7 +1166,7 @@ S_hsplit(pTHX_ HV *hv) + + + /* Pick your policy for "hashing isn't working" here: */ +- if (longest_chain <= HV_MAX_LENGTH_BEFORE_SPLIT /* split worked? */ ++ if (longest_chain <= HV_MAX_LENGTH_BEFORE_REHASH /* split worked? */ + || HvREHASH(hv)) { + return; + } +@@ -2551,8 +2537,8 @@ S_share_hek_flags(pTHX_ const char *str, I32 len, register U32 hash, int flags) + xhv->xhv_keys++; /* HvTOTALKEYS(hv)++ */ + if (!next) { /* initial entry? 
*/ + xhv->xhv_fill++; /* HvFILL(hv)++ */ +- } else if (xhv->xhv_keys > (IV)xhv->xhv_max /* HvKEYS(hv) > HvMAX(hv) */) { +- hsplit(PL_strtab); ++ } else if ( SHOULD_DO_HSPLIT(xhv) ) { ++ hsplit(PL_strtab); + } + } + +diff --git a/t/op/hash.t b/t/op/hash.t +index 9bde518..45eb782 100644 +--- t/op/hash.t ++++ t/op/hash.t +@@ -39,22 +39,36 @@ use constant THRESHOLD => 14; + use constant START => "a"; + + # some initial hash data +-my %h2 = map {$_ => 1} 'a'..'cc'; ++my %h2; ++my $counter= "a"; ++$h2{$counter++}++ while $counter ne 'cd'; + + ok (!Internals::HvREHASH(%h2), + "starting with pre-populated non-pathological hash (rehash flag if off)"); + + my @keys = get_keys(\%h2); ++my $buckets= buckets(\%h2); + $h2{$_}++ for @keys; ++$h2{$counter++}++ while buckets(\%h2) == $buckets; # force a split + ok (Internals::HvREHASH(%h2), +- scalar(@keys) . " colliding into the same bucket keys are triggering rehash"); ++ scalar(@keys) . " colliding into the same bucket keys are triggering rehash after split"); ++ ++# returns the number of buckets in a hash ++sub buckets { ++ my $hr = shift; ++ my $keys_buckets= scalar(%$hr); ++ if ($keys_buckets=~m!/([0-9]+)\z!) { ++ return 0+$1; ++ } else { ++ return 8; ++ } ++} + + sub get_keys { + my $hr = shift; + + # the minimum of bits required to mount the attack on a hash + my $min_bits = log(THRESHOLD)/log(2); +- + # if the hash has already been populated with a significant amount + # of entries the number of mask bits can be higher + my $keys = scalar keys %$hr; +-- +1.8.1.3 + Modified: head/lang/perl5.14/Makefile ============================================================================== --- head/lang/perl5.14/Makefile Sun Mar 10 18:40:26 2013 (r313837) +++ head/lang/perl5.14/Makefile Sun Mar 10 19:04:00 2013 (r313838) @@ -7,7 +7,7 @@ PORTNAME= perl PORTVERSION= ${PERL_VERSION} -PORTREVISION= 2 +PORTREVISION= 3 CATEGORIES= lang devel perl5 MASTER_SITES= CPAN \ ${MASTER_SITE_LOCAL:S/$/:local/} \ Added: head/lang/perl5.14/files/patch-cve-2013-1667 ============================================================================== --- /dev/null 00:00:00 1970 (empty, because file is newly added) +++ head/lang/perl5.14/files/patch-cve-2013-1667 Sun Mar 10 19:04:00 2013 (r313838) @@ -0,0 +1,172 @@ +From 57629630785036482da04228e9bf767b3dac66b6 Mon Sep 17 00:00:00 2001 +From: Yves Orton +Date: Tue, 12 Feb 2013 10:53:05 +0100 +Subject: [PATCH] Prevent premature hsplit() calls, and only trigger REHASH + after hsplit() + +Triggering a hsplit due to long chain length allows an attacker +to create a carefully chosen set of keys which can cause the hash +to use 2 * (2**32) * sizeof(void *) bytes ram. AKA a DOS via memory +exhaustion. Doing so also takes non trivial time. + +Eliminating this check, and only inspecting chain length after a +normal hsplit() (triggered when keys>buckets) prevents the attack +entirely, and makes such attacks relatively benign. 
+ +(cherry picked from commit f1220d61455253b170e81427c9d0357831ca0fac) +--- + ext/Hash-Util-FieldHash/t/10_hash.t | 18 ++++++++++++++++-- + hv.c | 35 ++++++++--------------------------- + t/op/hash.t | 20 +++++++++++++++++--- + 3 files changed, 41 insertions(+), 32 deletions(-) + +diff --git a/ext/Hash-Util-FieldHash/t/10_hash.t b/ext/Hash-Util-FieldHash/t/10_hash.t +index 2cfb4e8..d58f053 100644 +--- ext/Hash-Util-FieldHash/t/10_hash.t ++++ ext/Hash-Util-FieldHash/t/10_hash.t +@@ -38,15 +38,29 @@ use constant START => "a"; + + # some initial hash data + fieldhash my %h2; +-%h2 = map {$_ => 1} 'a'..'cc'; ++my $counter= "a"; ++$h2{$counter++}++ while $counter ne 'cd'; + + ok (!Internals::HvREHASH(%h2), + "starting with pre-populated non-pathological hash (rehash flag if off)"); + + my @keys = get_keys(\%h2); ++my $buckets= buckets(\%h2); + $h2{$_}++ for @keys; ++$h2{$counter++}++ while buckets(\%h2) == $buckets; # force a split + ok (Internals::HvREHASH(%h2), +- scalar(@keys) . " colliding into the same bucket keys are triggering rehash"); ++ scalar(@keys) . " colliding into the same bucket keys are triggering rehash after split"); ++ ++# returns the number of buckets in a hash ++sub buckets { ++ my $hr = shift; ++ my $keys_buckets= scalar(%$hr); ++ if ($keys_buckets=~m!/([0-9]+)\z!) { ++ return 0+$1; ++ } else { ++ return 8; ++ } ++} + + sub get_keys { + my $hr = shift; +diff --git a/hv.c b/hv.c +index 2be1feb..abb9d76 100644 +--- hv.c ++++ hv.c +@@ -35,7 +35,8 @@ holds the key and hash value. + #define PERL_HASH_INTERNAL_ACCESS + #include "perl.h" + +-#define HV_MAX_LENGTH_BEFORE_SPLIT 14 ++#define HV_MAX_LENGTH_BEFORE_REHASH 14 ++#define SHOULD_DO_HSPLIT(xhv) ((xhv)->xhv_keys > (xhv)->xhv_max) /* HvTOTALKEYS(hv) > HvMAX(hv) */ + + static const char S_strtab_error[] + = "Cannot modify shared string table in hv_%s"; +@@ -794,29 +795,9 @@ Perl_hv_common(pTHX_ HV *hv, SV *keysv, const char *key, STRLEN klen, + if (masked_flags & HVhek_ENABLEHVKFLAGS) + HvHASKFLAGS_on(hv); + +- { +- const HE *counter = HeNEXT(entry); +- +- xhv->xhv_keys++; /* HvTOTALKEYS(hv)++ */ +- if (!counter) { /* initial entry? */ +- } else if (xhv->xhv_keys > xhv->xhv_max) { +- /* Use only the old HvKEYS(hv) > HvMAX(hv) condition to limit +- bucket splits on a rehashed hash, as we're not going to +- split it again, and if someone is lucky (evil) enough to +- get all the keys in one list they could exhaust our memory +- as we repeatedly double the number of buckets on every +- entry. Linear search feels a less worse thing to do. */ +- hsplit(hv); +- } else if(!HvREHASH(hv)) { +- U32 n_links = 1; +- +- while ((counter = HeNEXT(counter))) +- n_links++; +- +- if (n_links > HV_MAX_LENGTH_BEFORE_SPLIT) { +- hsplit(hv); +- } +- } ++ xhv->xhv_keys++; /* HvTOTALKEYS(hv)++ */ ++ if ( SHOULD_DO_HSPLIT(xhv) ) { ++ hsplit(hv); + } + + if (return_svp) { +@@ -1192,7 +1173,7 @@ S_hsplit(pTHX_ HV *hv) + + + /* Pick your policy for "hashing isn't working" here: */ +- if (longest_chain <= HV_MAX_LENGTH_BEFORE_SPLIT /* split worked? */ ++ if (longest_chain <= HV_MAX_LENGTH_BEFORE_REHASH /* split worked? */ + || HvREHASH(hv)) { + return; + } +@@ -2831,8 +2812,8 @@ S_share_hek_flags(pTHX_ const char *str, I32 len, register U32 hash, int flags) + + xhv->xhv_keys++; /* HvTOTALKEYS(hv)++ */ + if (!next) { /* initial entry? 
*/ +- } else if (xhv->xhv_keys > xhv->xhv_max /* HvKEYS(hv) > HvMAX(hv) */) { +- hsplit(PL_strtab); ++ } else if ( SHOULD_DO_HSPLIT(xhv) ) { ++ hsplit(PL_strtab); + } + } + +diff --git a/t/op/hash.t b/t/op/hash.t +index 278bea7..201260a 100644 +--- t/op/hash.t ++++ t/op/hash.t +@@ -39,22 +39,36 @@ use constant THRESHOLD => 14; + use constant START => "a"; + + # some initial hash data +-my %h2 = map {$_ => 1} 'a'..'cc'; ++my %h2; ++my $counter= "a"; ++$h2{$counter++}++ while $counter ne 'cd'; + + ok (!Internals::HvREHASH(%h2), + "starting with pre-populated non-pathological hash (rehash flag if off)"); + + my @keys = get_keys(\%h2); ++my $buckets= buckets(\%h2); + $h2{$_}++ for @keys; ++$h2{$counter++}++ while buckets(\%h2) == $buckets; # force a split + ok (Internals::HvREHASH(%h2), +- scalar(@keys) . " colliding into the same bucket keys are triggering rehash"); ++ scalar(@keys) . " colliding into the same bucket keys are triggering rehash after split"); ++ ++# returns the number of buckets in a hash ++sub buckets { ++ my $hr = shift; ++ my $keys_buckets= scalar(%$hr); ++ if ($keys_buckets=~m!/([0-9]+)\z!) { ++ return 0+$1; ++ } else { ++ return 8; ++ } ++} + + sub get_keys { + my $hr = shift; + + # the minimum of bits required to mount the attack on a hash + my $min_bits = log(THRESHOLD)/log(2); +- + # if the hash has already been populated with a significant amount + # of entries the number of mask bits can be higher + my $keys = scalar keys %$hr; +-- +1.8.1.3 + Modified: head/lang/perl5.16/Makefile ============================================================================== --- head/lang/perl5.16/Makefile Sun Mar 10 18:40:26 2013 (r313837) +++ head/lang/perl5.16/Makefile Sun Mar 10 19:04:00 2013 (r313838) @@ -7,6 +7,7 @@ PORTNAME= perl PORTVERSION= ${PERL_VERSION} +PORTREVISION= 1 CATEGORIES= lang devel perl5 MASTER_SITES= CPAN \ ${MASTER_SITE_LOCAL:S/$/:local/} \ Added: head/lang/perl5.16/files/patch-cve-2013-1667 ============================================================================== --- /dev/null 00:00:00 1970 (empty, because file is newly added) +++ head/lang/perl5.16/files/patch-cve-2013-1667 Sun Mar 10 19:04:00 2013 (r313838) @@ -0,0 +1,170 @@ +From f1220d61455253b170e81427c9d0357831ca0fac Mon Sep 17 00:00:00 2001 +From: Yves Orton +Date: Tue, 12 Feb 2013 10:53:05 +0100 +Subject: [PATCH] Prevent premature hsplit() calls, and only trigger REHASH + after hsplit() + +Triggering a hsplit due to long chain length allows an attacker +to create a carefully chosen set of keys which can cause the hash +to use 2 * (2**32) * sizeof(void *) bytes ram. AKA a DOS via memory +exhaustion. Doing so also takes non trivial time. + +Eliminating this check, and only inspecting chain length after a +normal hsplit() (triggered when keys>buckets) prevents the attack +entirely, and makes such attacks relatively benign. 
+--- + ext/Hash-Util-FieldHash/t/10_hash.t | 18 ++++++++++++++++-- + hv.c | 35 ++++++++--------------------------- + t/op/hash.t | 20 +++++++++++++++++--- + 3 files changed, 41 insertions(+), 32 deletions(-) + +diff --git a/ext/Hash-Util-FieldHash/t/10_hash.t b/ext/Hash-Util-FieldHash/t/10_hash.t +index 2cfb4e8..d58f053 100644 +--- ext/Hash-Util-FieldHash/t/10_hash.t ++++ ext/Hash-Util-FieldHash/t/10_hash.t +@@ -38,15 +38,29 @@ use constant START => "a"; + + # some initial hash data + fieldhash my %h2; +-%h2 = map {$_ => 1} 'a'..'cc'; ++my $counter= "a"; ++$h2{$counter++}++ while $counter ne 'cd'; + + ok (!Internals::HvREHASH(%h2), + "starting with pre-populated non-pathological hash (rehash flag if off)"); + + my @keys = get_keys(\%h2); ++my $buckets= buckets(\%h2); + $h2{$_}++ for @keys; ++$h2{$counter++}++ while buckets(\%h2) == $buckets; # force a split + ok (Internals::HvREHASH(%h2), +- scalar(@keys) . " colliding into the same bucket keys are triggering rehash"); ++ scalar(@keys) . " colliding into the same bucket keys are triggering rehash after split"); ++ ++# returns the number of buckets in a hash ++sub buckets { ++ my $hr = shift; ++ my $keys_buckets= scalar(%$hr); ++ if ($keys_buckets=~m!/([0-9]+)\z!) { ++ return 0+$1; ++ } else { ++ return 8; ++ } ++} + + sub get_keys { + my $hr = shift; +diff --git a/hv.c b/hv.c +index 6b66251..a031703 100644 +--- hv.c ++++ hv.c +@@ -35,7 +35,8 @@ holds the key and hash value. + #define PERL_HASH_INTERNAL_ACCESS + #include "perl.h" + +-#define HV_MAX_LENGTH_BEFORE_SPLIT 14 ++#define HV_MAX_LENGTH_BEFORE_REHASH 14 ++#define SHOULD_DO_HSPLIT(xhv) ((xhv)->xhv_keys > (xhv)->xhv_max) /* HvTOTALKEYS(hv) > HvMAX(hv) */ + + static const char S_strtab_error[] + = "Cannot modify shared string table in hv_%s"; +@@ -798,29 +799,9 @@ Perl_hv_common(pTHX_ HV *hv, SV *keysv, const char *key, STRLEN klen, + if (masked_flags & HVhek_ENABLEHVKFLAGS) + HvHASKFLAGS_on(hv); + +- { +- const HE *counter = HeNEXT(entry); +- +- xhv->xhv_keys++; /* HvTOTALKEYS(hv)++ */ +- if (!counter) { /* initial entry? */ +- } else if (xhv->xhv_keys > xhv->xhv_max) { +- /* Use only the old HvUSEDKEYS(hv) > HvMAX(hv) condition to limit +- bucket splits on a rehashed hash, as we're not going to +- split it again, and if someone is lucky (evil) enough to +- get all the keys in one list they could exhaust our memory +- as we repeatedly double the number of buckets on every +- entry. Linear search feels a less worse thing to do. */ +- hsplit(hv); +- } else if(!HvREHASH(hv)) { +- U32 n_links = 1; +- +- while ((counter = HeNEXT(counter))) +- n_links++; +- +- if (n_links > HV_MAX_LENGTH_BEFORE_SPLIT) { +- hsplit(hv); +- } +- } ++ xhv->xhv_keys++; /* HvTOTALKEYS(hv)++ */ ++ if ( SHOULD_DO_HSPLIT(xhv) ) { ++ hsplit(hv); + } + + if (return_svp) { +@@ -1197,7 +1178,7 @@ S_hsplit(pTHX_ HV *hv) + + + /* Pick your policy for "hashing isn't working" here: */ +- if (longest_chain <= HV_MAX_LENGTH_BEFORE_SPLIT /* split worked? */ ++ if (longest_chain <= HV_MAX_LENGTH_BEFORE_REHASH /* split worked? */ + || HvREHASH(hv)) { + return; + } +@@ -2782,8 +2763,8 @@ S_share_hek_flags(pTHX_ const char *str, I32 len, register U32 hash, int flags) + + xhv->xhv_keys++; /* HvTOTALKEYS(hv)++ */ + if (!next) { /* initial entry? 
*/ +- } else if (xhv->xhv_keys > xhv->xhv_max /* HvUSEDKEYS(hv) > HvMAX(hv) */) { +- hsplit(PL_strtab); ++ } else if ( SHOULD_DO_HSPLIT(xhv) ) { ++ hsplit(PL_strtab); + } + } + +diff --git a/t/op/hash.t b/t/op/hash.t +index ef757a3..97eb81b 100644 +--- t/op/hash.t ++++ t/op/hash.t +@@ -39,22 +39,36 @@ use constant THRESHOLD => 14; + use constant START => "a"; + + # some initial hash data +-my %h2 = map {$_ => 1} 'a'..'cc'; ++my %h2; ++my $counter= "a"; ++$h2{$counter++}++ while $counter ne 'cd'; + + ok (!Internals::HvREHASH(%h2), + "starting with pre-populated non-pathological hash (rehash flag if off)"); + + my @keys = get_keys(\%h2); ++my $buckets= buckets(\%h2); + $h2{$_}++ for @keys; ++$h2{$counter++}++ while buckets(\%h2) == $buckets; # force a split + ok (Internals::HvREHASH(%h2), +- scalar(@keys) . " colliding into the same bucket keys are triggering rehash"); ++ scalar(@keys) . " colliding into the same bucket keys are triggering rehash after split"); ++ ++# returns the number of buckets in a hash ++sub buckets { ++ my $hr = shift; ++ my $keys_buckets= scalar(%$hr); ++ if ($keys_buckets=~m!/([0-9]+)\z!) { ++ return 0+$1; ++ } else { ++ return 8; ++ } ++} + + sub get_keys { + my $hr = shift; + + # the minimum of bits required to mount the attack on a hash + my $min_bits = log(THRESHOLD)/log(2); +- + # if the hash has already been populated with a significant amount + # of entries the number of mask bits can be higher + my $keys = scalar keys %$hr; +-- +1.8.1.3 + Modified: head/security/vuxml/vuln.xml ============================================================================== --- head/security/vuxml/vuln.xml Sun Mar 10 18:40:26 2013 (r313837) +++ head/security/vuxml/vuln.xml Sun Mar 10 19:04:00 2013 (r313838) @@ -51,6 +51,46 @@ Note: Please add new entries to the beg --> + + perl -- denial of service via algorithmic complexity attack on hashing routines + + + perl + 5.12.4_5 + 5.14.05.14.2_3 + 5.16.05.16.2_1 + + + + +

+	<p>Perl developers report:</p>
+	<blockquote
+	  cite="http://www.nntp.perl.org/group/perl.perl5.porters/2013/03/msg199755.html">
+	  <p>In order to prevent an algorithmic complexity attack
+	  against its hashing mechanism, perl will sometimes
+	  recalculate keys and redistribute the contents of a hash.
+	  This mechanism has made perl robust against attacks that
+	  have been demonstrated against other systems.</p>
+
+	  <p>Research by Yves Orton has recently uncovered a flaw in
+	  the rehashing code which can result in pathological
+	  behavior.  This flaw could be exploited to carry out a
+	  denial of service attack against code that uses arbitrary
+	  user input as hash keys.</p>
+
+	  <p>Because using user-provided strings as hash keys is a
+	  very common operation, we urge users of perl to update
+	  their perl executable as soon as possible.</p>
+	</blockquote>
+      </body>
+    </description>
+    <references>
+      <cvename>CVE-2013-1667</cvename>
+      <url>http://www.nntp.perl.org/group/perl.perl5.porters/2013/03/msg199755.html</url>
+    </references>
+    <dates>
+      <discovery>2013-03-04</discovery>
+      <entry>2013-03-10</entry>
+    </dates>
+  </vuln>
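
The tests in the patches above detect the fixed behaviour with two introspection tricks that also make for a quick manual check on an affected perl: on 5.12-5.16, scalar(%hash) reports "used/total" buckets, and Internals::HvREHASH(%hash) reports whether the hash has been switched to the randomized hash function.  A minimal sketch along those lines (not part of the commit; it assumes one of the affected perl versions, since both internals are gone in later releases):

    #!/usr/bin/env perl
    # Sketch only: watch the bucket count grow and the rehash flag stay off
    # for an ordinary key sequence.  Assumes perl 5.12-5.16.
    use strict;
    use warnings;

    # Allocated buckets, as in the buckets() helper the patch adds to the
    # tests: scalar(%$hr) yields a string like "153/256" on these perls.
    sub buckets {
        my $hr = shift;
        my $kb = scalar(%$hr);
        return $kb =~ m{/([0-9]+)\z} ? 0 + $1 : 8;
    }

    my %h;
    my $key = 'a';
    for my $n (1 .. 2048) {
        $h{$key++} = 1;
        printf "%4d keys => %4d buckets, rehash=%d\n",
            $n, buckets(\%h), Internals::HvREHASH(%h) ? 1 : 0
            if $n % 256 == 0;
    }

For an ordinary key sequence like this the rehash flag should remain 0; only key sets engineered to collide into a single bucket trigger it, and with the patch applied that can happen only after a normal hsplit().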
+ libpurple -- multiple vulnerabilities