From owner-freebsd-ppc@freebsd.org Wed May 1 03:59:03 2019
From: Mark Millard <marklmi@yahoo.com>
Subject: How many segments does it take to span from VM_MIN_KERNEL_ADDRESS through VM_MAX_SAFE_KERNEL_ADDRESS? 128 in moea64_late_bootstrap
Message-Id: <3C69CF7C-7F33-4C79-92C0-3493A1294996@yahoo.com>
Date: Tue, 30 Apr 2019 20:58:49 -0700
To: Justin Hibbits, FreeBSD PowerPC ML
[At the end this note shows why the old VM_MAX_KERNEL_ADDRESS led to no slb-miss exceptions in cpudep_ap_bootstrap.]

There is code in moea64_late_bootstrap that looks like:

	virtual_avail = VM_MIN_KERNEL_ADDRESS;
	virtual_end = VM_MAX_SAFE_KERNEL_ADDRESS;

	/*
	 * Map the entire KVA range into the SLB. We must not fault there.
	 */
	#ifdef __powerpc64__
	for (va = virtual_avail; va < virtual_end; va += SEGMENT_LENGTH)
		moea64_bootstrap_slb_prefault(va, 0);
	#endif

where (modern):

	#define VM_MIN_KERNEL_ADDRESS		0xe000000000000000UL
	#define VM_MAX_SAFE_KERNEL_ADDRESS	VM_MAX_KERNEL_ADDRESS
	#define VM_MAX_KERNEL_ADDRESS		0xe0000007ffffffffUL
	#define SEGMENT_LENGTH			0x10000000UL

So:

0xe000000000000000UL: VM_MIN_KERNEL_ADDRESS
0x0000000010000000UL: SEGMENT_LENGTH
0xe0000007ffffffffUL: VM_MAX_KERNEL_ADDRESS

So I see the loop as doing moea64_bootstrap_slb_prefault 128 times (decimal, 0x00..0x7f at the appropriate byte in va).

(I do not see why this loop keeps going once the slb kernel slots are all full. Nor is it obvious to me why the larger va values should be the ones more likely to still be covered. But I'm going a different direction below.)

That also means that the code does random replacement (based on mftb()%n_slbs, but avoiding USER_SLB_SLOT) 128-(64-1), or 65 times.
The slb_insert_kernel use in moea64_bootstrap_slb_prefault does that:

	moea64_bootstrap_slb_prefault(vm_offset_t va, int large)
	{
		struct slb *cache;
		struct slb entry;
		uint64_t esid, slbe;
		uint64_t i;

		cache = PCPU_GET(aim.slb);
		esid = va >> ADDR_SR_SHFT;
		slbe = (esid << SLBE_ESID_SHIFT) | SLBE_VALID;

		for (i = 0; i < 64; i++) {
			if (cache[i].slbe == (slbe | i))
				return;
		}

		entry.slbe = slbe;
		entry.slbv = KERNEL_VSID(esid) << SLBV_VSID_SHIFT;
		if (large)
			entry.slbv |= SLBV_L;

		slb_insert_kernel(entry.slbe, entry.slbv);
	}

where slb_insert_kernel in turn has the code that will do replacements:

	void
	slb_insert_kernel(uint64_t slbe, uint64_t slbv)
	{
		struct slb *slbcache;
		int i;

		/* We don't want to be preempted while modifying the kernel map */
		critical_enter();

		slbcache = PCPU_GET(aim.slb);

		/* Check for an unused slot, abusing the user slot as a full flag */
		if (slbcache[USER_SLB_SLOT].slbe == 0) {
			for (i = 0; i < n_slbs; i++) {
				if (i == USER_SLB_SLOT)
					continue;
				if (!(slbcache[i].slbe & SLBE_VALID))
					goto fillkernslb;
			}

			if (i == n_slbs)
				slbcache[USER_SLB_SLOT].slbe = 1;
		}

		i = mftb() % n_slbs;
		if (i == USER_SLB_SLOT)
			i = (i+1) % n_slbs;

	fillkernslb:
		KASSERT(i != USER_SLB_SLOT,
		    ("Filling user SLB slot with a kernel mapping"));
		slbcache[i].slbv = slbv;
		slbcache[i].slbe = slbe | (uint64_t)i;

		/* If it is for this CPU, put it in the SLB right away */
		if (pmap_bootstrapped) {
			/* slbie not required */
			__asm __volatile ("slbmte %0, %1" ::
			    "r"(slbcache[i].slbv), "r"(slbcache[i].slbe));
		}

		critical_exit();
	}

[The USER_SLB_SLOT handling makes selection of slot USER_SLB_SLOT+1 for what to replace more likely than the other kernel slots.]

I expect that the above explains the variability in whether cpudep_ap_bootstrap's:

	sp = pcpup->pc_curpcb->pcb_sp

gets an slb fault for the pc_curpcb dereference stage of that vs. not.
I also expect that the old VM_MAX_KERNEL_ADDRESS value explains the lack of slb-misses in old times:

0xe000000000000000UL: VM_MIN_KERNEL_ADDRESS
0x0000000010000000UL: SEGMENT_LENGTH
0xe0000001c7ffffffUL: VM_MAX_KERNEL_ADDRESS

So 0x00..0x1c is 29 alternatives (decimal). That fits in 64-1 slots, or even 32-1 slots: no random replacements happened above or elsewhere. That, in turn, meant no testing of the handling of any slb-misses back then.

[Other list messages suggest missing context synchronizing instructions for slbmte and related instructions. The history is not evidence about that, given the lack of slb-misses.]

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went away in early 2018-Mar)