Date:      Wed, 28 Jun 2017 05:28:16 +0000 (UTC)
From:      Alan Cox <alc@FreeBSD.org>
To:        src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-stable@freebsd.org, svn-src-stable-11@freebsd.org
Subject:   svn commit: r320438 - stable/11/sys/kern
Message-ID:  <201706280528.v5S5SGkP022196@repo.freebsd.org>

Author: alc
Date: Wed Jun 28 05:28:15 2017
New Revision: 320438
URL: https://svnweb.freebsd.org/changeset/base/320438

Log:
  MFC r315518
    Avoid unnecessary calls to vm_map_protect() in elf_load_section().
  
    Typically, when elf_load_section() unconditionally passed VM_PROT_ALL to
    elf_map_insert(), it was needlessly enabling execute access on the
    mapping, and it would later have to call vm_map_protect() to correct the
    mapping's access rights.  Now, instead, elf_load_section() always passes
    its parameter "prot" to elf_map_insert().  So, elf_load_section() must
    only call vm_map_protect() if it needs to remove the write access that
    was temporarily granted to perform a copyout().
  
  Approved by:	re (kib)
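The core of the change is a simple flag test: write access only needs to be revoked after the copyout if the segment's own protection does not already include it. A minimal userspace sketch of that decision, using hypothetical stand-in constants for the kernel's vm_prot_t flags (the names mirror FreeBSD's, but the values here are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel's vm_prot_t flag bits. */
#define VM_PROT_READ    0x01
#define VM_PROT_WRITE   0x02
#define VM_PROT_EXECUTE 0x04
#define VM_PROT_ALL     (VM_PROT_READ | VM_PROT_WRITE | VM_PROT_EXECUTE)

/*
 * Model of the test the patch introduces: after a segment is mapped
 * with its final protection "prot" (write access having been granted
 * temporarily for the copyout), a vm_map_protect() call is needed only
 * when the segment's own protection lacks VM_PROT_WRITE.
 */
static bool
needs_protect_call(int prot)
{
	return ((prot & VM_PROT_WRITE) == 0);
}
```

So a read/execute text segment still gets its write access stripped, while a writable data segment skips the vm_map_protect() call entirely, which is the savings the log describes.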

Modified:
  stable/11/sys/kern/imgact_elf.c
Directory Properties:
  stable/11/   (props changed)

Modified: stable/11/sys/kern/imgact_elf.c
==============================================================================
--- stable/11/sys/kern/imgact_elf.c	Wed Jun 28 05:21:00 2017	(r320437)
+++ stable/11/sys/kern/imgact_elf.c	Wed Jun 28 05:28:15 2017	(r320438)
@@ -596,7 +596,7 @@ __elfN(load_section)(struct image_params *imgp, vm_oof
 	/* This had damn well better be true! */
 	if (map_len != 0) {
 		rv = __elfN(map_insert)(imgp, map, NULL, 0, map_addr,
-		    map_addr + map_len, VM_PROT_ALL, 0);
+		    map_addr + map_len, prot, 0);
 		if (rv != KERN_SUCCESS)
 			return (EINVAL);
 	}
@@ -617,10 +617,12 @@ __elfN(load_section)(struct image_params *imgp, vm_oof
 	}
 
 	/*
-	 * set it to the specified protection.
+	 * Remove write access to the page if it was only granted by map_insert
+	 * to allow copyout.
 	 */
-	vm_map_protect(map, trunc_page(map_addr), round_page(map_addr +
-	    map_len), prot, FALSE);
+	if ((prot & VM_PROT_WRITE) == 0)
+		vm_map_protect(map, trunc_page(map_addr), round_page(map_addr +
+		    map_len), prot, FALSE);
 
 	return (0);
 }
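The retained vm_map_protect() call operates on whole pages, so the diff brackets the segment with trunc_page() and round_page(). A sketch of that page-boundary arithmetic, assuming a 4 KiB page for illustration (the kernel's PAGE_SIZE is per-architecture):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed 4 KiB page for illustration; FreeBSD's PAGE_SIZE varies by arch. */
#define PAGE_SIZE 4096UL
#define PAGE_MASK (PAGE_SIZE - 1)

/* Userspace sketches of the kernel's trunc_page()/round_page() macros:
 * round an address down to, or up to, the nearest page boundary. */
static uintptr_t
trunc_page_addr(uintptr_t addr)
{
	return (addr & ~PAGE_MASK);
}

static uintptr_t
round_page_addr(uintptr_t addr)
{
	return ((addr + PAGE_MASK) & ~PAGE_MASK);
}
```

Applied as in the diff, trunc_page(map_addr) and round_page(map_addr + map_len) expand the protected range outward so it covers every page the segment touches.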


