LCOV - code coverage report
Current view: top level - js/src/gc - GC.cpp (source / functions)
Test: output.info
Date: 2018-08-07 16:42:27
Lines: 334 / 3793 hit (8.8 %)    Functions: 0 / 0
Legend: Lines: hit | not hit

          Line data    Source code
       1             : /* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*-
       2             :  * vim: set ts=8 sts=4 et sw=4 tw=99:
       3             :  * This Source Code Form is subject to the terms of the Mozilla Public
       4             :  * License, v. 2.0. If a copy of the MPL was not distributed with this
       5             :  * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
       6             : 
       7             : /*
       8             :  * This code implements an incremental mark-and-sweep garbage collector, with
       9             :  * most sweeping carried out in the background on a parallel thread.
      10             :  *
      11             :  * Full vs. zone GC
      12             :  * ----------------
      13             :  *
      14             :  * The collector can collect all zones at once, or a subset. These types of
      15             :  * collection are referred to as a full GC and a zone GC respectively.
      16             :  *
      17             :  * It is possible for an incremental collection that started out as a full GC to
      18             :  * become a zone GC if new zones are created during the course of the
      19             :  * collection.
      20             :  *
      21             :  * Incremental collection
      22             :  * ----------------------
      23             :  *
      24             :  * For a collection to be carried out incrementally the following conditions
      25             :  * must be met:
      26             :  *  - the collection must be run by calling js::GCSlice() rather than js::GC()
      27             :  *  - the GC mode must have been set to JSGC_MODE_INCREMENTAL with
      28             :  *    JS_SetGCParameter()
      29             :  *  - no thread may have an AutoKeepAtoms instance on the stack
      30             :  *
      31             :  * The last condition is an engine-internal mechanism to ensure that incremental
      32             :  * collection is not carried out without the correct barriers being implemented.
      33             :  * For more information see 'Incremental marking' below.
      34             :  *
      35             :  * If the collection is not incremental, all foreground activity happens inside
       36             :  * a single call to GC() or GCSlice(). However, the collection is not complete
      37             :  * until the background sweeping activity has finished.
      38             :  *
      39             :  * An incremental collection proceeds as a series of slices, interleaved with
      40             :  * mutator activity, i.e. running JavaScript code. Slices are limited by a time
      41             :  * budget. The slice finishes as soon as possible after the requested time has
      42             :  * passed.
      43             :  *
      44             :  * Collector states
      45             :  * ----------------
      46             :  *
      47             :  * The collector proceeds through the following states, the current state being
      48             :  * held in JSRuntime::gcIncrementalState:
      49             :  *
      50             :  *  - MarkRoots  - marks the stack and other roots
      51             :  *  - Mark       - incrementally marks reachable things
      52             :  *  - Sweep      - sweeps zones in groups and continues marking unswept zones
      53             :  *  - Finalize   - performs background finalization, concurrent with mutator
      54             :  *  - Compact    - incrementally compacts by zone
      55             :  *  - Decommit   - performs background decommit and chunk removal
      56             :  *
      57             :  * The MarkRoots activity always takes place in the first slice. The next two
      58             :  * states can take place over one or more slices.
      59             :  *
      60             :  * In other words an incremental collection proceeds like this:
      61             :  *
      62             :  * Slice 1:   MarkRoots:  Roots pushed onto the mark stack.
      63             :  *            Mark:       The mark stack is processed by popping an element,
      64             :  *                        marking it, and pushing its children.
      65             :  *
      66             :  *          ... JS code runs ...
      67             :  *
      68             :  * Slice 2:   Mark:       More mark stack processing.
      69             :  *
      70             :  *          ... JS code runs ...
      71             :  *
      72             :  * Slice n-1: Mark:       More mark stack processing.
      73             :  *
      74             :  *          ... JS code runs ...
      75             :  *
      76             :  * Slice n:   Mark:       Mark stack is completely drained.
      77             :  *            Sweep:      Select first group of zones to sweep and sweep them.
      78             :  *
      79             :  *          ... JS code runs ...
      80             :  *
      81             :  * Slice n+1: Sweep:      Mark objects in unswept zones that were newly
      82             :  *                        identified as alive (see below). Then sweep more zone
      83             :  *                        sweep groups.
      84             :  *
      85             :  *          ... JS code runs ...
      86             :  *
      87             :  * Slice n+2: Sweep:      Mark objects in unswept zones that were newly
      88             :  *                        identified as alive. Then sweep more zones.
      89             :  *
      90             :  *          ... JS code runs ...
      91             :  *
      92             :  * Slice m:   Sweep:      Sweeping is finished, and background sweeping
      93             :  *                        started on the helper thread.
      94             :  *
      95             :  *          ... JS code runs, remaining sweeping done on background thread ...
      96             :  *
      97             :  * When background sweeping finishes the GC is complete.
      98             :  *
      99             :  * Incremental marking
     100             :  * -------------------
     101             :  *
     102             :  * Incremental collection requires close collaboration with the mutator (i.e.,
     103             :  * JS code) to guarantee correctness.
     104             :  *
     105             :  *  - During an incremental GC, if a memory location (except a root) is written
     106             :  *    to, then the value it previously held must be marked. Write barriers
     107             :  *    ensure this.
     108             :  *
     109             :  *  - Any object that is allocated during incremental GC must start out marked.
     110             :  *
     111             :  *  - Roots are marked in the first slice and hence don't need write barriers.
     112             :  *    Roots are things like the C stack and the VM stack.
     113             :  *
     114             :  * The problem that write barriers solve is that between slices the mutator can
     115             :  * change the object graph. We must ensure that it cannot do this in such a way
     116             :  * that makes us fail to mark a reachable object (marking an unreachable object
     117             :  * is tolerable).
     118             :  *
     119             :  * We use a snapshot-at-the-beginning algorithm to do this. This means that we
     120             :  * promise to mark at least everything that is reachable at the beginning of
     121             :  * collection. To implement it we mark the old contents of every non-root memory
     122             :  * location written to by the mutator while the collection is in progress, using
     123             :  * write barriers. This is described in gc/Barrier.h.
     124             :  *
     125             :  * Incremental sweeping
     126             :  * --------------------
     127             :  *
     128             :  * Sweeping is difficult to do incrementally because object finalizers must be
     129             :  * run at the start of sweeping, before any mutator code runs. The reason is
     130             :  * that some objects use their finalizers to remove themselves from caches. If
     131             :  * mutator code was allowed to run after the start of sweeping, it could observe
     132             :  * the state of the cache and create a new reference to an object that was just
     133             :  * about to be destroyed.
     134             :  *
     135             :  * Sweeping all finalizable objects in one go would introduce long pauses, so
      136             :  * instead sweeping is broken up into groups of zones. Zones which are not yet
     137             :  * being swept are still marked, so the issue above does not apply.
     138             :  *
     139             :  * The order of sweeping is restricted by cross compartment pointers - for
     140             :  * example say that object |a| from zone A points to object |b| in zone B and
     141             :  * neither object was marked when we transitioned to the Sweep phase. Imagine we
     142             :  * sweep B first and then return to the mutator. It's possible that the mutator
     143             :  * could cause |a| to become alive through a read barrier (perhaps it was a
     144             :  * shape that was accessed via a shape table). Then we would need to mark |b|,
     145             :  * which |a| points to, but |b| has already been swept.
     146             :  *
     147             :  * So if there is such a pointer then marking of zone B must not finish before
     148             :  * marking of zone A.  Pointers which form a cycle between zones therefore
     149             :  * restrict those zones to being swept at the same time, and these are found
     150             :  * using Tarjan's algorithm for finding the strongly connected components of a
     151             :  * graph.
     152             :  *
     153             :  * GC things without finalizers, and things with finalizers that are able to run
     154             :  * in the background, are swept on the background thread. This accounts for most
     155             :  * of the sweeping work.
     156             :  *
     157             :  * Reset
     158             :  * -----
     159             :  *
     160             :  * During incremental collection it is possible, although unlikely, for
     161             :  * conditions to change such that incremental collection is no longer safe. In
     162             :  * this case, the collection is 'reset' by ResetIncrementalGC(). If we are in
     163             :  * the mark state, this just stops marking, but if we have started sweeping
     164             :  * already, we continue until we have swept the current sweep group. Following a
     165             :  * reset, a new non-incremental collection is started.
     166             :  *
     167             :  * Compacting GC
     168             :  * -------------
     169             :  *
     170             :  * Compacting GC happens at the end of a major GC as part of the last slice.
     171             :  * There are three parts:
     172             :  *
     173             :  *  - Arenas are selected for compaction.
     174             :  *  - The contents of those arenas are moved to new arenas.
     175             :  *  - All references to moved things are updated.
     176             :  *
     177             :  * Collecting Atoms
     178             :  * ----------------
     179             :  *
     180             :  * Atoms are collected differently from other GC things. They are contained in
     181             :  * a special zone and things in other zones may have pointers to them that are
     182             :  * not recorded in the cross compartment pointer map. Each zone holds a bitmap
     183             :  * with the atoms it might be keeping alive, and atoms are only collected if
     184             :  * they are not included in any zone's atom bitmap. See AtomMarking.cpp for how
     185             :  * this bitmap is managed.
     186             :  */
     187             : 
     188             : #include "gc/GC-inl.h"
     189             : 
     190             : #include "mozilla/ArrayUtils.h"
     191             : #include "mozilla/DebugOnly.h"
     192             : #include "mozilla/MacroForEach.h"
     193             : #include "mozilla/MemoryReporting.h"
     194             : #include "mozilla/Move.h"
     195             : #include "mozilla/Range.h"
     196             : #include "mozilla/ScopeExit.h"
     197             : #include "mozilla/TimeStamp.h"
     198             : #include "mozilla/TypeTraits.h"
     199             : #include "mozilla/Unused.h"
     200             : 
     201             : #include <ctype.h>
     202             : #include <initializer_list>
     203             : #include <string.h>
     204             : #ifndef XP_WIN
     205             : # include <sys/mman.h>
     206             : # include <unistd.h>
     207             : #endif
     208             : 
     209             : #include "jsapi.h"
     210             : #include "jsfriendapi.h"
     211             : #include "jstypes.h"
     212             : #include "jsutil.h"
     213             : 
     214             : #include "gc/FindSCCs.h"
     215             : #include "gc/FreeOp.h"
     216             : #include "gc/GCInternals.h"
     217             : #include "gc/GCTrace.h"
     218             : #include "gc/Memory.h"
     219             : #include "gc/Policy.h"
     220             : #include "gc/WeakMap.h"
     221             : #include "jit/BaselineJIT.h"
     222             : #include "jit/IonCode.h"
     223             : #include "jit/JitcodeMap.h"
     224             : #include "js/SliceBudget.h"
     225             : #include "proxy/DeadObjectProxy.h"
     226             : #include "util/Windows.h"
     227             : #ifdef ENABLE_BIGINT
     228             : #include "vm/BigIntType.h"
     229             : #endif
     230             : #include "vm/Debugger.h"
     231             : #include "vm/GeckoProfiler.h"
     232             : #include "vm/JSAtom.h"
     233             : #include "vm/JSContext.h"
     234             : #include "vm/JSObject.h"
     235             : #include "vm/JSScript.h"
     236             : #include "vm/Printer.h"
     237             : #include "vm/ProxyObject.h"
     238             : #include "vm/Realm.h"
     239             : #include "vm/Shape.h"
     240             : #include "vm/StringType.h"
     241             : #include "vm/SymbolType.h"
     242             : #include "vm/Time.h"
     243             : #include "vm/TraceLogging.h"
     244             : #include "vm/WrapperObject.h"
     245             : 
     246             : #include "gc/Heap-inl.h"
     247             : #include "gc/Marking-inl.h"
     248             : #include "gc/Nursery-inl.h"
     249             : #include "gc/PrivateIterators-inl.h"
     250             : #include "vm/GeckoProfiler-inl.h"
     251             : #include "vm/JSObject-inl.h"
     252             : #include "vm/JSScript-inl.h"
     253             : #include "vm/Stack-inl.h"
     254             : #include "vm/StringType-inl.h"
     255             : 
     256             : using namespace js;
     257             : using namespace js::gc;
     258             : 
     259             : using mozilla::ArrayLength;
     260             : using mozilla::Maybe;
     261             : using mozilla::Swap;
     262             : using mozilla::TimeStamp;
     263             : 
     264             : using JS::AutoGCRooter;
     265             : 
     266             : /*
      267             :  * Default settings for tuning the GC. Some of these can be set at runtime;
      268             :  * this list is not complete, as some tuning parameters are not listed here.
     269             :  *
     270             :  * If you change the values here, please also consider changing them in
     271             :  * modules/libpref/init/all.js where they are duplicated for the Firefox
     272             :  * preferences.
     273             :  */
     274             : namespace js {
     275             : namespace gc {
     276             : namespace TuningDefaults {
     277             : 
     278             :     /* JSGC_ALLOCATION_THRESHOLD */
     279             :     static const size_t GCZoneAllocThresholdBase = 30 * 1024 * 1024;
     280             : 
     281             :     /* JSGC_MAX_MALLOC_BYTES */
     282             :     static const size_t MaxMallocBytes = 128 * 1024 * 1024;
     283             : 
     284             :     /* JSGC_ALLOCATION_THRESHOLD_FACTOR */
     285             :     static const double AllocThresholdFactor = 0.9;
     286             : 
     287             :     /* JSGC_ALLOCATION_THRESHOLD_FACTOR_AVOID_INTERRUPT */
     288             :     static const double AllocThresholdFactorAvoidInterrupt = 0.9;
     289             : 
     290             :     /* no parameter */
     291             :     static const double MallocThresholdGrowFactor = 1.5;
     292             : 
     293             :     /* no parameter */
     294             :     static const double MallocThresholdShrinkFactor = 0.9;
     295             : 
     296             :     /* no parameter */
     297             :     static const size_t MallocThresholdLimit = 1024 * 1024 * 1024;
     298             : 
     299             :     /* no parameter */
     300             :     static const size_t ZoneAllocDelayBytes = 1024 * 1024;
     301             : 
     302             :     /* JSGC_DYNAMIC_HEAP_GROWTH */
     303             :     static const bool DynamicHeapGrowthEnabled = false;
     304             : 
     305             :     /* JSGC_HIGH_FREQUENCY_TIME_LIMIT */
     306             :     static const uint64_t HighFrequencyThresholdUsec = 1000000;
     307             : 
     308             :     /* JSGC_HIGH_FREQUENCY_LOW_LIMIT */
     309             :     static const uint64_t HighFrequencyLowLimitBytes = 100 * 1024 * 1024;
     310             : 
     311             :     /* JSGC_HIGH_FREQUENCY_HIGH_LIMIT */
     312             :     static const uint64_t HighFrequencyHighLimitBytes = 500 * 1024 * 1024;
     313             : 
     314             :     /* JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX */
     315             :     static const double HighFrequencyHeapGrowthMax = 3.0;
     316             : 
     317             :     /* JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN */
     318             :     static const double HighFrequencyHeapGrowthMin = 1.5;
     319             : 
     320             :     /* JSGC_LOW_FREQUENCY_HEAP_GROWTH */
     321             :     static const double LowFrequencyHeapGrowth = 1.5;
     322             : 
     323             :     /* JSGC_DYNAMIC_MARK_SLICE */
     324             :     static const bool DynamicMarkSliceEnabled = false;
     325             : 
     326             :     /* JSGC_MIN_EMPTY_CHUNK_COUNT */
     327             :     static const uint32_t MinEmptyChunkCount = 1;
     328             : 
     329             :     /* JSGC_MAX_EMPTY_CHUNK_COUNT */
     330             :     static const uint32_t MaxEmptyChunkCount = 30;
     331             : 
     332             :     /* JSGC_SLICE_TIME_BUDGET */
     333             :     static const int64_t DefaultTimeBudget = SliceBudget::UnlimitedTimeBudget;
     334             : 
     335             :     /* JSGC_MODE */
     336             :     static const JSGCMode Mode = JSGC_MODE_INCREMENTAL;
     337             : 
     338             :     /* JSGC_COMPACTING_ENABLED */
     339             :     static const bool CompactingEnabled = true;
     340             : 
     341             :     /* JSGC_NURSERY_FREE_THRESHOLD_FOR_IDLE_COLLECTION */
     342             :     static const uint32_t NurseryFreeThresholdForIdleCollection =
     343             :         Nursery::NurseryChunkUsableSize / 4;
     344             : 
     345             : }}} // namespace js::gc::TuningDefaults
     346             : 
     347             : /*
      348             :  * We start an incremental collection for a zone when a proportion of its
     349             :  * threshold is reached. This is configured by the
     350             :  * JSGC_ALLOCATION_THRESHOLD_FACTOR and
     351             :  * JSGC_ALLOCATION_THRESHOLD_FACTOR_AVOID_INTERRUPT parameters.
     352             :  */
     353             : static const double MinAllocationThresholdFactor = 0.9;
     354             : 
     355             : /*
     356             :  * We may start to collect a zone before its trigger threshold is reached if
     357             :  * GCRuntime::maybeGC() is called for that zone or we start collecting other
     358             :  * zones. These eager threshold factors are not configurable.
     359             :  */
     360             : static const double HighFrequencyEagerAllocTriggerFactor = 0.85;
     361             : static const double LowFrequencyEagerAllocTriggerFactor = 0.9;
     362             : 
     363             : /*
     364             :  * Don't allow heap growth factors to be set so low that collections could
     365             :  * reduce the trigger threshold.
     366             :  */
     367           0 : static const double MinHighFrequencyHeapGrowthFactor =
     368           0 :     1.0 / Min(HighFrequencyEagerAllocTriggerFactor, MinAllocationThresholdFactor);
     369           0 : static const double MinLowFrequencyHeapGrowthFactor =
     370           0 :     1.0 / Min(LowFrequencyEagerAllocTriggerFactor, MinAllocationThresholdFactor);
     371             : 
     372             : /* Increase the IGC marking slice time if we are in highFrequencyGC mode. */
     373             : static const int IGC_MARK_SLICE_MULTIPLIER = 2;
     374             : 
     375             : const AllocKind gc::slotsToThingKind[] = {
     376             :     /*  0 */ AllocKind::OBJECT0,  AllocKind::OBJECT2,  AllocKind::OBJECT2,  AllocKind::OBJECT4,
     377             :     /*  4 */ AllocKind::OBJECT4,  AllocKind::OBJECT8,  AllocKind::OBJECT8,  AllocKind::OBJECT8,
     378             :     /*  8 */ AllocKind::OBJECT8,  AllocKind::OBJECT12, AllocKind::OBJECT12, AllocKind::OBJECT12,
     379             :     /* 12 */ AllocKind::OBJECT12, AllocKind::OBJECT16, AllocKind::OBJECT16, AllocKind::OBJECT16,
     380             :     /* 16 */ AllocKind::OBJECT16
     381             : };
     382             : 
     383             : static_assert(mozilla::ArrayLength(slotsToThingKind) == SLOTS_TO_THING_KIND_LIMIT,
     384             :               "We have defined a slot count for each kind.");
     385             : 
     386             : #define CHECK_THING_SIZE(allocKind, traceKind, type, sizedType, bgFinal, nursery) \
     387             :     static_assert(sizeof(sizedType) >= SortedArenaList::MinThingSize, \
     388             :                   #sizedType " is smaller than SortedArenaList::MinThingSize!"); \
     389             :     static_assert(sizeof(sizedType) >= sizeof(FreeSpan), \
     390             :                   #sizedType " is smaller than FreeSpan"); \
     391             :     static_assert(sizeof(sizedType) % CellAlignBytes == 0, \
     392             :                   "Size of " #sizedType " is not a multiple of CellAlignBytes"); \
     393             :     static_assert(sizeof(sizedType) >= MinCellSize, \
     394             :                   "Size of " #sizedType " is smaller than the minimum size");
     395             : FOR_EACH_ALLOCKIND(CHECK_THING_SIZE);
     396             : #undef CHECK_THING_SIZE
     397             : 
     398             : const uint32_t Arena::ThingSizes[] = {
     399             : #define EXPAND_THING_SIZE(allocKind, traceKind, type, sizedType, bgFinal, nursery) \
     400             :     sizeof(sizedType),
     401             : FOR_EACH_ALLOCKIND(EXPAND_THING_SIZE)
     402             : #undef EXPAND_THING_SIZE
     403             : };
     404             : 
     405             : FreeSpan ArenaLists::emptySentinel;
     406             : 
     407             : #undef CHECK_THING_SIZE_INNER
     408             : #undef CHECK_THING_SIZE
     409             : 
     410             : #define OFFSET(type) uint32_t(ArenaHeaderSize + (ArenaSize - ArenaHeaderSize) % sizeof(type))
     411             : 
     412             : const uint32_t Arena::FirstThingOffsets[] = {
     413             : #define EXPAND_FIRST_THING_OFFSET(allocKind, traceKind, type, sizedType, bgFinal, nursery) \
     414             :     OFFSET(sizedType),
     415             : FOR_EACH_ALLOCKIND(EXPAND_FIRST_THING_OFFSET)
     416             : #undef EXPAND_FIRST_THING_OFFSET
     417             : };
     418             : 
     419             : #undef OFFSET
     420             : 
     421             : #define COUNT(type) uint32_t((ArenaSize - ArenaHeaderSize) / sizeof(type))
     422             : 
     423             : const uint32_t Arena::ThingsPerArena[] = {
     424             : #define EXPAND_THINGS_PER_ARENA(allocKind, traceKind, type, sizedType, bgFinal, nursery) \
     425             :     COUNT(sizedType),
     426             : FOR_EACH_ALLOCKIND(EXPAND_THINGS_PER_ARENA)
     427             : #undef EXPAND_THINGS_PER_ARENA
     428             : };
     429             : 
     430             : #undef COUNT
     431             : 
     432           0 : struct js::gc::FinalizePhase
     433             : {
     434             :     gcstats::PhaseKind statsPhase;
     435             :     AllocKinds kinds;
     436             : };
     437             : 
     438             : /*
     439             :  * Finalization order for objects swept incrementally on the main thread.
     440             :  */
     441             : static const FinalizePhase ForegroundObjectFinalizePhase = {
     442             :     gcstats::PhaseKind::SWEEP_OBJECT, {
     443             :         AllocKind::OBJECT0,
     444             :         AllocKind::OBJECT2,
     445             :         AllocKind::OBJECT4,
     446             :         AllocKind::OBJECT8,
     447             :         AllocKind::OBJECT12,
     448             :         AllocKind::OBJECT16
     449             :     }
     450           0 : };
     451             : 
     452             : /*
     453             :  * Finalization order for GC things swept incrementally on the main thread.
     454             :  */
     455             : static const FinalizePhase ForegroundNonObjectFinalizePhase = {
     456             :     gcstats::PhaseKind::SWEEP_SCRIPT, {
     457             :         AllocKind::SCRIPT,
     458             :         AllocKind::JITCODE
     459             :     }
     460           0 : };
     461             : 
     462             : /*
     463             :  * Finalization order for GC things swept on the background thread.
     464             :  */
     465             : static const FinalizePhase BackgroundFinalizePhases[] = {
     466             :     {
     467             :         gcstats::PhaseKind::SWEEP_SCRIPT, {
     468             :             AllocKind::LAZY_SCRIPT
     469             :         }
     470             :     },
     471             :     {
     472             :         gcstats::PhaseKind::SWEEP_OBJECT, {
     473             :             AllocKind::FUNCTION,
     474             :             AllocKind::FUNCTION_EXTENDED,
     475             :             AllocKind::OBJECT0_BACKGROUND,
     476             :             AllocKind::OBJECT2_BACKGROUND,
     477             :             AllocKind::OBJECT4_BACKGROUND,
     478             :             AllocKind::OBJECT8_BACKGROUND,
     479             :             AllocKind::OBJECT12_BACKGROUND,
     480             :             AllocKind::OBJECT16_BACKGROUND
     481             :         }
     482             :     },
     483             :     {
     484             :         gcstats::PhaseKind::SWEEP_SCOPE, {
     485             :             AllocKind::SCOPE,
     486             :         }
     487             :     },
     488             :     {
     489             :         gcstats::PhaseKind::SWEEP_REGEXP_SHARED, {
     490             :             AllocKind::REGEXP_SHARED,
     491             :         }
     492             :     },
     493             :     {
     494             :         gcstats::PhaseKind::SWEEP_STRING, {
     495             :             AllocKind::FAT_INLINE_STRING,
     496             :             AllocKind::STRING,
     497             :             AllocKind::EXTERNAL_STRING,
     498             :             AllocKind::FAT_INLINE_ATOM,
     499             :             AllocKind::ATOM,
     500             :             AllocKind::SYMBOL,
     501             : #ifdef ENABLE_BIGINT
     502             :             AllocKind::BIGINT
     503             : #endif
     504             :         }
     505             :     },
     506             :     {
     507             :         gcstats::PhaseKind::SWEEP_SHAPE, {
     508             :             AllocKind::SHAPE,
     509             :             AllocKind::ACCESSOR_SHAPE,
     510             :             AllocKind::BASE_SHAPE,
     511             :             AllocKind::OBJECT_GROUP
     512             :         }
     513             :     }
     514           0 : };
     515             : 
     516             : template<>
     517             : JSObject*
     518           0 : ArenaCellIterImpl::get<JSObject>() const
     519             : {
     520           0 :     MOZ_ASSERT(!done());
     521           0 :     return reinterpret_cast<JSObject*>(getCell());
     522             : }
     523             : 
     524             : void
     525           0 : Arena::unmarkAll()
     526             : {
     527           0 :     uintptr_t* word = chunk()->bitmap.arenaBits(this);
     528           0 :     memset(word, 0, ArenaBitmapWords * sizeof(uintptr_t));
     529           0 : }
     530             : 
     531             : void
     532           0 : Arena::unmarkPreMarkedFreeCells()
     533             : {
     534           0 :     for (ArenaFreeCellIter iter(this); !iter.done(); iter.next()) {
     535           0 :         TenuredCell* cell = iter.getCell();
     536           0 :         MOZ_ASSERT(cell->isMarkedBlack());
     537           0 :         cell->unmark();
     538             :     }
     539           0 : }
     540             : 
     541             : #ifdef DEBUG
     542             : void
     543           0 : Arena::checkNoMarkedFreeCells()
     544             : {
     545           0 :     for (ArenaFreeCellIter iter(this); !iter.done(); iter.next())
     546           0 :         MOZ_ASSERT(!iter.getCell()->isMarkedAny());
     547           0 : }
     548             : #endif
     549             : 
     550             : /* static */ void
     551           0 : Arena::staticAsserts()
     552             : {
     553             :     static_assert(size_t(AllocKind::LIMIT) <= 255,
     554             :                   "We must be able to fit the allockind into uint8_t.");
     555             :     static_assert(mozilla::ArrayLength(ThingSizes) == size_t(AllocKind::LIMIT),
     556             :                   "We haven't defined all thing sizes.");
     557             :     static_assert(mozilla::ArrayLength(FirstThingOffsets) == size_t(AllocKind::LIMIT),
     558             :                   "We haven't defined all offsets.");
     559             :     static_assert(mozilla::ArrayLength(ThingsPerArena) == size_t(AllocKind::LIMIT),
     560             :                   "We haven't defined all counts.");
     561           0 : }
     562             : 
     563             : template<typename T>
     564             : inline size_t
     565           0 : Arena::finalize(FreeOp* fop, AllocKind thingKind, size_t thingSize)
     566             : {
     567             :     /* Enforce requirements on size of T. */
     568           0 :     MOZ_ASSERT(thingSize % CellAlignBytes == 0);
     569           0 :     MOZ_ASSERT(thingSize >= MinCellSize);
     570           0 :     MOZ_ASSERT(thingSize <= 255);
     571             : 
     572           0 :     MOZ_ASSERT(allocated());
     573           0 :     MOZ_ASSERT(thingKind == getAllocKind());
     574           0 :     MOZ_ASSERT(thingSize == getThingSize());
     575           0 :     MOZ_ASSERT(!hasDelayedMarking);
     576           0 :     MOZ_ASSERT(!markOverflow);
     577             : 
     578           0 :     uint_fast16_t firstThing = firstThingOffset(thingKind);
     579           0 :     uint_fast16_t firstThingOrSuccessorOfLastMarkedThing = firstThing;
     580           0 :     uint_fast16_t lastThing = ArenaSize - thingSize;
     581             : 
     582             :     FreeSpan newListHead;
     583           0 :     FreeSpan* newListTail = &newListHead;
     584           0 :     size_t nmarked = 0;
     585             : 
     586           0 :     for (ArenaCellIterUnderFinalize i(this); !i.done(); i.next()) {
     587           0 :         T* t = i.get<T>();
     588           0 :         if (t->asTenured().isMarkedAny()) {
     589           0 :             uint_fast16_t thing = uintptr_t(t) & ArenaMask;
     590           0 :             if (thing != firstThingOrSuccessorOfLastMarkedThing) {
     591             :                 // We just finished passing over one or more free things,
     592             :                 // so record a new FreeSpan.
     593           0 :                 newListTail->initBounds(firstThingOrSuccessorOfLastMarkedThing,
     594             :                                         thing - thingSize, this);
     595           0 :                 newListTail = newListTail->nextSpanUnchecked(this);
     596             :             }
     597           0 :             firstThingOrSuccessorOfLastMarkedThing = thing + thingSize;
     598           0 :             nmarked++;
     599             :         } else {
     600           0 :             t->finalize(fop);
     601           0 :             JS_POISON(t, JS_SWEPT_TENURED_PATTERN, thingSize, MemCheckKind::MakeUndefined);
     602           0 :             gcTracer.traceTenuredFinalize(t);
     603             :         }
     604             :     }
     605             : 
     606           0 :     if (nmarked == 0) {
     607             :         // Do nothing. The caller will update the arena appropriately.
     608           0 :         MOZ_ASSERT(newListTail == &newListHead);
     609           0 :         JS_EXTRA_POISON(data, JS_SWEPT_TENURED_PATTERN, sizeof(data), MemCheckKind::MakeUndefined);
     610           0 :         return nmarked;
     611             :     }
     612             : 
     613           0 :     MOZ_ASSERT(firstThingOrSuccessorOfLastMarkedThing != firstThing);
     614           0 :     uint_fast16_t lastMarkedThing = firstThingOrSuccessorOfLastMarkedThing - thingSize;
     615           0 :     if (lastThing == lastMarkedThing) {
     616             :         // If the last thing was marked, we will have already set the bounds of
     617             :         // the final span, and we just need to terminate the list.
     618             :         newListTail->initAsEmpty();
     619             :     } else {
     620             :         // Otherwise, end the list with a span that covers the final stretch of free things.
     621           0 :         newListTail->initFinal(firstThingOrSuccessorOfLastMarkedThing, lastThing, this);
     622             :     }
     623             : 
     624           0 :     firstFreeSpan = newListHead;
     625             : #ifdef DEBUG
     626           0 :     size_t nfree = numFreeThings(thingSize);
     627           0 :     MOZ_ASSERT(nfree + nmarked == thingsPerArena(thingKind));
     628             : #endif
     629             :     return nmarked;
     630             : }
     631             : 
      632             : // Finalize arenas from the |src| list, releasing empty arenas unless
      633             : // |keepArenas| is specified, and inserting the others into the appropriate
      634             : // destination size bins.
     635             : template<typename T>
     636             : static inline bool
     637           0 : FinalizeTypedArenas(FreeOp* fop,
     638             :                     Arena** src,
     639             :                     SortedArenaList& dest,
     640             :                     AllocKind thingKind,
     641             :                     SliceBudget& budget,
     642             :                     ArenaLists::KeepArenasEnum keepArenas)
     643             : {
     644             :     // When operating in the foreground, take the lock at the top.
     645           0 :     Maybe<AutoLockGC> maybeLock;
     646           0 :     if (fop->onMainThread())
     647           0 :         maybeLock.emplace(fop->runtime());
     648             : 
     649             :     // During background sweeping free arenas are released later on in
     650             :     // sweepBackgroundThings().
     651           0 :     MOZ_ASSERT_IF(!fop->onMainThread(), keepArenas == ArenaLists::KEEP_ARENAS);
     652             : 
     653           0 :     size_t thingSize = Arena::thingSize(thingKind);
     654           0 :     size_t thingsPerArena = Arena::thingsPerArena(thingKind);
     655             : 
     656           0 :     while (Arena* arena = *src) {
     657           0 :         *src = arena->next;
     658           0 :         size_t nmarked = arena->finalize<T>(fop, thingKind, thingSize);
     659           0 :         size_t nfree = thingsPerArena - nmarked;
     660             : 
     661           0 :         if (nmarked)
     662           0 :             dest.insertAt(arena, nfree);
     663           0 :         else if (keepArenas == ArenaLists::KEEP_ARENAS)
     664           0 :             arena->chunk()->recycleArena(arena, dest, thingsPerArena);
     665             :         else
     666           0 :             fop->runtime()->gc.releaseArena(arena, maybeLock.ref());
     667             : 
     668           0 :         budget.step(thingsPerArena);
     669           0 :         if (budget.isOverBudget())
     670             :             return false;
     671             :     }
     672             : 
     673             :     return true;
     674             : }
     675             : 
      676             : /*
      677             :  * Finalize the arenas in |src|, inserting survivors into the |dest| size bins.
      678             :  * Returns false if the slice budget was exhausted before the list was drained.
      679             :  */
     680             : static bool
     681           0 : FinalizeArenas(FreeOp* fop,
     682             :                Arena** src,
     683             :                SortedArenaList& dest,
     684             :                AllocKind thingKind,
     685             :                SliceBudget& budget,
     686             :                ArenaLists::KeepArenasEnum keepArenas)
     687             : {
     688           0 :     switch (thingKind) {
     689             : #define EXPAND_CASE(allocKind, traceKind, type, sizedType, bgFinal, nursery) \
     690             :       case AllocKind::allocKind: \
     691             :         return FinalizeTypedArenas<type>(fop, src, dest, thingKind, budget, keepArenas);
     692           0 : FOR_EACH_ALLOCKIND(EXPAND_CASE)
     693             : #undef EXPAND_CASE
     694             : 
     695             :       default:
     696           0 :         MOZ_CRASH("Invalid alloc kind");
     697             :     }
     698             : }
     699             : 
     700             : Chunk*
     701           0 : ChunkPool::pop()
     702             : {
     703           0 :     MOZ_ASSERT(bool(head_) == bool(count_));
     704           0 :     if (!count_)
     705             :         return nullptr;
     706           0 :     return remove(head_);
     707             : }
     708             : 
     709             : void
     710           0 : ChunkPool::push(Chunk* chunk)
     711             : {
     712           0 :     MOZ_ASSERT(!chunk->info.next);
     713           0 :     MOZ_ASSERT(!chunk->info.prev);
     714             : 
     715           0 :     chunk->info.next = head_;
     716           0 :     if (head_)
     717           0 :         head_->info.prev = chunk;
     718           0 :     head_ = chunk;
     719           0 :     ++count_;
     720             : 
     721           0 :     MOZ_ASSERT(verify());
     722           0 : }
     723             : 
     724             : Chunk*
     725           0 : ChunkPool::remove(Chunk* chunk)
     726             : {
     727           0 :     MOZ_ASSERT(count_ > 0);
     728           0 :     MOZ_ASSERT(contains(chunk));
     729             : 
     730           0 :     if (head_ == chunk)
     731           0 :         head_ = chunk->info.next;
     732           0 :     if (chunk->info.prev)
     733           0 :         chunk->info.prev->info.next = chunk->info.next;
     734           0 :     if (chunk->info.next)
     735           0 :         chunk->info.next->info.prev = chunk->info.prev;
     736           0 :     chunk->info.next = chunk->info.prev = nullptr;
     737           0 :     --count_;
     738             : 
     739           0 :     MOZ_ASSERT(verify());
     740           0 :     return chunk;
     741             : }
     742             : 
     743             : #ifdef DEBUG
     744             : bool
     745           0 : ChunkPool::contains(Chunk* chunk) const
     746             : {
     747           0 :     verify();
     748           0 :     for (Chunk* cursor = head_; cursor; cursor = cursor->info.next) {
     749           0 :         if (cursor == chunk)
     750             :             return true;
     751             :     }
     752             :     return false;
     753             : }
     754             : 
     755             : bool
     756           0 : ChunkPool::verify() const
     757             : {
     758           0 :     MOZ_ASSERT(bool(head_) == bool(count_));
     759             :     uint32_t count = 0;
     760           0 :     for (Chunk* cursor = head_; cursor; cursor = cursor->info.next, ++count) {
     761           0 :         MOZ_ASSERT_IF(cursor->info.prev, cursor->info.prev->info.next == cursor);
     762           0 :         MOZ_ASSERT_IF(cursor->info.next, cursor->info.next->info.prev == cursor);
     763             :     }
     764           0 :     MOZ_ASSERT(count_ == count);
     765           0 :     return true;
     766             : }
     767             : #endif
     768             : 
     769             : void
     770           0 : ChunkPool::Iter::next()
     771             : {
     772           0 :     MOZ_ASSERT(!done());
     773           0 :     current_ = current_->info.next;
     774           0 : }
     775             : 
     776             : ChunkPool
     777           0 : GCRuntime::expireEmptyChunkPool(const AutoLockGC& lock)
     778             : {
     779           0 :     MOZ_ASSERT(emptyChunks(lock).verify());
     780           0 :     MOZ_ASSERT(tunables.minEmptyChunkCount(lock) <= tunables.maxEmptyChunkCount());
     781             : 
     782             :     ChunkPool expired;
     783           0 :     while (emptyChunks(lock).count() > tunables.minEmptyChunkCount(lock)) {
     784           0 :         Chunk* chunk = emptyChunks(lock).pop();
     785           0 :         prepareToFreeChunk(chunk->info);
     786           0 :         expired.push(chunk);
     787             :     }
     788             : 
     789           0 :     MOZ_ASSERT(expired.verify());
     790           0 :     MOZ_ASSERT(emptyChunks(lock).verify());
     791           0 :     MOZ_ASSERT(emptyChunks(lock).count() <= tunables.maxEmptyChunkCount());
     792           0 :     MOZ_ASSERT(emptyChunks(lock).count() <= tunables.minEmptyChunkCount(lock));
     793           0 :     return expired;
     794             : }
     795             : 
     796             : static void
     797           0 : FreeChunkPool(ChunkPool& pool)
     798             : {
     799           0 :     for (ChunkPool::Iter iter(pool); !iter.done();) {
     800           0 :         Chunk* chunk = iter.get();
     801           0 :         iter.next();
     802           0 :         pool.remove(chunk);
     803           0 :         MOZ_ASSERT(!chunk->info.numArenasFreeCommitted);
     804           0 :         UnmapPages(static_cast<void*>(chunk), ChunkSize);
     805             :     }
     806           0 :     MOZ_ASSERT(pool.count() == 0);
     807           0 : }
     808             : 
     809             : void
     810           0 : GCRuntime::freeEmptyChunks(const AutoLockGC& lock)
     811             : {
     812           0 :     FreeChunkPool(emptyChunks(lock));
     813           0 : }
     814             : 
     815             : inline void
     816           0 : GCRuntime::prepareToFreeChunk(ChunkInfo& info)
     817             : {
     818           0 :     MOZ_ASSERT(numArenasFreeCommitted >= info.numArenasFreeCommitted);
     819           0 :     numArenasFreeCommitted -= info.numArenasFreeCommitted;
     820           0 :     stats().count(gcstats::STAT_DESTROY_CHUNK);
     821             : #ifdef DEBUG
     822             :     /*
     823             :      * Let FreeChunkPool detect a missing prepareToFreeChunk call before it
     824             :      * frees chunk.
      825             :      * frees the chunk.
     826           0 :     info.numArenasFreeCommitted = 0;
     827             : #endif
     828           0 : }
     829             : 
     830             : inline void
     831             : GCRuntime::updateOnArenaFree()
     832             : {
     833           0 :     ++numArenasFreeCommitted;
     834             : }
     835             : 
     836             : void
     837           0 : Chunk::addArenaToFreeList(JSRuntime* rt, Arena* arena)
     838             : {
     839           0 :     MOZ_ASSERT(!arena->allocated());
     840           0 :     arena->next = info.freeArenasHead;
     841           0 :     info.freeArenasHead = arena;
     842           0 :     ++info.numArenasFreeCommitted;
     843           0 :     ++info.numArenasFree;
     844           0 :     rt->gc.updateOnArenaFree();
     845           0 : }
     846             : 
     847             : void
     848           0 : Chunk::addArenaToDecommittedList(const Arena* arena)
     849             : {
     850           0 :     ++info.numArenasFree;
     851           0 :     decommittedArenas.set(Chunk::arenaIndex(arena->address()));
     852           0 : }
     853             : 
     854             : void
     855           0 : Chunk::recycleArena(Arena* arena, SortedArenaList& dest, size_t thingsPerArena)
     856             : {
     857           0 :     arena->setAsFullyUnused();
     858           0 :     dest.insertAt(arena, thingsPerArena);
     859           0 : }
     860             : 
     861             : void
     862           0 : Chunk::releaseArena(JSRuntime* rt, Arena* arena, const AutoLockGC& lock)
     863             : {
     864           0 :     MOZ_ASSERT(arena->allocated());
     865           0 :     MOZ_ASSERT(!arena->hasDelayedMarking);
     866             : 
     867           0 :     arena->release(lock);
     868           0 :     addArenaToFreeList(rt, arena);
     869           0 :     updateChunkListAfterFree(rt, lock);
     870           0 : }
     871             : 
     872             : bool
     873           0 : Chunk::decommitOneFreeArena(JSRuntime* rt, AutoLockGC& lock)
     874             : {
     875           0 :     MOZ_ASSERT(info.numArenasFreeCommitted > 0);
     876           0 :     Arena* arena = fetchNextFreeArena(rt);
     877           0 :     updateChunkListAfterAlloc(rt, lock);
     878             : 
     879             :     bool ok;
     880             :     {
     881           0 :         AutoUnlockGC unlock(lock);
     882           0 :         ok = MarkPagesUnused(arena, ArenaSize);
     883             :     }
     884             : 
     885           0 :     if (ok)
     886           0 :         addArenaToDecommittedList(arena);
     887             :     else
     888           0 :         addArenaToFreeList(rt, arena);
     889           0 :     updateChunkListAfterFree(rt, lock);
     890             : 
     891           0 :     return ok;
     892             : }
     893             : 
     894             : void
     895           0 : Chunk::decommitAllArenasWithoutUnlocking(const AutoLockGC& lock)
     896             : {
     897           0 :     for (size_t i = 0; i < ArenasPerChunk; ++i) {
     898           0 :         if (decommittedArenas.get(i) || arenas[i].allocated())
     899             :             continue;
     900             : 
     901           0 :         if (MarkPagesUnused(&arenas[i], ArenaSize)) {
     902           0 :             info.numArenasFreeCommitted--;
     903           0 :             decommittedArenas.set(i);
     904             :         }
     905             :     }
     906           0 : }
     907             : 
     908             : void
     909           0 : Chunk::updateChunkListAfterAlloc(JSRuntime* rt, const AutoLockGC& lock)
     910             : {
     911           0 :     if (MOZ_UNLIKELY(!hasAvailableArenas())) {
     912           0 :         rt->gc.availableChunks(lock).remove(this);
     913           0 :         rt->gc.fullChunks(lock).push(this);
     914             :     }
     915           0 : }
     916             : 
     917             : void
     918           0 : Chunk::updateChunkListAfterFree(JSRuntime* rt, const AutoLockGC& lock)
     919             : {
     920           0 :     if (info.numArenasFree == 1) {
     921           0 :         rt->gc.fullChunks(lock).remove(this);
     922           0 :         rt->gc.availableChunks(lock).push(this);
     923           0 :     } else if (!unused()) {
     924           0 :         MOZ_ASSERT(!rt->gc.fullChunks(lock).contains(this));
     925           0 :         MOZ_ASSERT(rt->gc.availableChunks(lock).contains(this));
     926           0 :         MOZ_ASSERT(!rt->gc.emptyChunks(lock).contains(this));
     927             :     } else {
     928           0 :         MOZ_ASSERT(unused());
     929           0 :         rt->gc.availableChunks(lock).remove(this);
     930           0 :         decommitAllArenas();
     931           0 :         MOZ_ASSERT(info.numArenasFreeCommitted == 0);
     932           0 :         rt->gc.recycleChunk(this, lock);
     933             :     }
     934           0 : }
     935             : 
     936             : void
     937           0 : GCRuntime::releaseArena(Arena* arena, const AutoLockGC& lock)
     938             : {
     939           0 :     arena->zone->usage.removeGCArena();
     940           0 :     if (isBackgroundSweeping())
     941           0 :         arena->zone->threshold.updateForRemovedArena(tunables);
     942           0 :     return arena->chunk()->releaseArena(rt, arena, lock);
     943             : }
     944             : 
     945           0 : GCRuntime::GCRuntime(JSRuntime* rt) :
     946             :     rt(rt),
     947             :     systemZone(nullptr),
     948             :     atomsZone(nullptr),
     949             :     stats_(rt),
     950             :     marker(rt),
     951             :     usage(nullptr),
     952             :     nextCellUniqueId_(LargestTaggedNullCellPointer + 1), // Ensure disjoint from null tagged pointers.
     953             :     numArenasFreeCommitted(0),
     954             :     verifyPreData(nullptr),
     955             :     chunkAllocationSinceLastGC(false),
     956           0 :     lastGCTime(PRMJ_Now()),
     957             :     mode(TuningDefaults::Mode),
     958             :     numActiveZoneIters(0),
     959             :     cleanUpEverything(false),
     960             :     grayBufferState(GCRuntime::GrayBufferState::Unused),
     961             :     grayBitsValid(false),
     962             :     majorGCTriggerReason(JS::gcreason::NO_REASON),
     963             :     fullGCForAtomsRequested_(false),
     964             :     minorGCNumber(0),
     965             :     majorGCNumber(0),
     966             :     jitReleaseNumber(0),
     967             :     number(0),
     968             :     isFull(false),
     969             :     incrementalState(gc::State::NotActive),
     970             :     initialState(gc::State::NotActive),
     971             : #ifdef JS_GC_ZEAL
     972             :     useZeal(false),
     973             : #endif
     974             :     lastMarkSlice(false),
     975             :     safeToYield(true),
     976             :     sweepOnBackgroundThread(false),
     977             :     blocksToFreeAfterSweeping((size_t) JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
     978             :     sweepGroupIndex(0),
     979             :     sweepGroups(nullptr),
     980             :     currentSweepGroup(nullptr),
     981             :     sweepZone(nullptr),
     982             :     abortSweepAfterCurrentGroup(false),
     983             :     startedCompacting(false),
     984             :     relocatedArenasToRelease(nullptr),
     985             : #ifdef JS_GC_ZEAL
     986             :     markingValidator(nullptr),
     987             : #endif
     988             :     defaultTimeBudget_(TuningDefaults::DefaultTimeBudget),
     989             :     incrementalAllowed(true),
     990             :     compactingEnabled(TuningDefaults::CompactingEnabled),
     991             :     rootsRemoved(false),
     992             : #ifdef JS_GC_ZEAL
     993             :     zealModeBits(0),
     994             :     zealFrequency(0),
     995             :     nextScheduled(0),
     996             :     deterministicOnly(false),
     997             :     incrementalLimit(0),
     998             : #endif
     999             :     fullCompartmentChecks(false),
    1000             :     gcCallbackDepth(0),
    1001             :     alwaysPreserveCode(false),
    1002             : #ifdef DEBUG
    1003             :     arenasEmptyAtShutdown(true),
    1004             : #endif
    1005             :     lock(mutexid::GCLock),
    1006             :     allocTask(rt, emptyChunks_.ref()),
    1007             :     decommitTask(rt),
    1008             :     helperState(rt),
    1009             :     nursery_(rt),
    1010             :     storeBuffer_(rt, nursery()),
    1011           0 :     blocksToFreeAfterMinorGC((size_t) JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE)
    1012             : {
    1013           0 :     setGCMode(JSGC_MODE_GLOBAL);
    1014           0 : }
    1015             : 
    1016             : #ifdef JS_GC_ZEAL
    1017             : 
    1018             : void
    1019           0 : GCRuntime::getZealBits(uint32_t* zealBits, uint32_t* frequency, uint32_t* scheduled)
    1020             : {
    1021           0 :     *zealBits = zealModeBits;
    1022           0 :     *frequency = zealFrequency;
    1023           0 :     *scheduled = nextScheduled;
    1024           0 : }
    1025             : 
    1026             : const char* gc::ZealModeHelpText =
    1027             :     "  Specifies how zealous the garbage collector should be. Some of these modes can\n"
    1028             :     "  be set simultaneously, by passing multiple level options, e.g. \"2;4\" will activate\n"
    1029             :     "  both modes 2 and 4. Modes can be specified by name or number.\n"
    1030             :     "  \n"
    1031             :     "  Values:\n"
    1032             :     "    0:  (None) Normal amount of collection (resets all modes)\n"
    1033             :     "    1:  (RootsChange) Collect when roots are added or removed\n"
     1034             :     "    2:  (Alloc) Collect every N allocations (default: 100)\n"
    1035             :     "    4:  (VerifierPre) Verify pre write barriers between instructions\n"
    1036             :     "    7:  (GenerationalGC) Collect the nursery every N nursery allocations\n"
    1037             :     "    8:  (YieldBeforeMarking) Incremental GC in two slices that yields between\n"
    1038             :     "        the root marking and marking phases\n"
    1039             :     "    9:  (YieldBeforeSweeping) Incremental GC in two slices that yields between\n"
    1040             :     "        the marking and sweeping phases\n"
    1041             :     "    10: (IncrementalMultipleSlices) Incremental GC in many slices\n"
    1042             :     "    11: (IncrementalMarkingValidator) Verify incremental marking\n"
    1043             :     "    12: (ElementsBarrier) Use the individual element post-write barrier\n"
    1044             :     "        regardless of elements size\n"
    1045             :     "    13: (CheckHashTablesOnMinorGC) Check internal hashtables on minor GC\n"
    1046             :     "    14: (Compact) Perform a shrinking collection every N allocations\n"
    1047             :     "    15: (CheckHeapAfterGC) Walk the heap to check its integrity after every GC\n"
    1048             :     "    16: (CheckNursery) Check nursery integrity on minor GC\n"
    1049             :     "    17: (YieldBeforeSweepingAtoms) Incremental GC in two slices that yields\n"
    1050             :     "        before sweeping the atoms table\n"
    1051             :     "    18: (CheckGrayMarking) Check gray marking invariants after every GC\n"
    1052             :     "    19: (YieldBeforeSweepingCaches) Incremental GC in two slices that yields\n"
    1053             :     "        before sweeping weak caches\n"
    1054             :     "    20: (YieldBeforeSweepingTypes) Incremental GC in two slices that yields\n"
    1055             :     "        before sweeping type information\n"
    1056             :     "    21: (YieldBeforeSweepingObjects) Incremental GC in two slices that yields\n"
    1057             :     "        before sweeping foreground finalized objects\n"
    1058             :     "    22: (YieldBeforeSweepingNonObjects) Incremental GC in two slices that yields\n"
    1059             :     "        before sweeping non-object GC things\n"
    1060             :     "    23: (YieldBeforeSweepingShapeTrees) Incremental GC in two slices that yields\n"
    1061             :     "        before sweeping shape trees\n";
    1062             : 
    1063             : // The set of zeal modes that control incremental slices. These modes are
    1064             : // mutually exclusive.
    1065           0 : static const mozilla::EnumSet<ZealMode> IncrementalSliceZealModes = {
    1066             :     ZealMode::YieldBeforeMarking,
    1067             :     ZealMode::YieldBeforeSweeping,
    1068             :     ZealMode::IncrementalMultipleSlices,
    1069             :     ZealMode::YieldBeforeSweepingAtoms,
    1070             :     ZealMode::YieldBeforeSweepingCaches,
    1071             :     ZealMode::YieldBeforeSweepingTypes,
    1072             :     ZealMode::YieldBeforeSweepingObjects,
    1073             :     ZealMode::YieldBeforeSweepingNonObjects,
    1074             :     ZealMode::YieldBeforeSweepingShapeTrees
    1075             : };
    1076             : 
    1077             : void
    1078           0 : GCRuntime::setZeal(uint8_t zeal, uint32_t frequency)
    1079             : {
    1080           0 :     MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
    1081             : 
    1082           0 :     if (temporaryAbortIfWasmGc(rt->mainContextFromOwnThread()))
    1083             :         return;
    1084             : 
    1085           0 :     if (verifyPreData)
    1086           0 :         VerifyBarriers(rt, PreBarrierVerifier);
    1087             : 
    1088           0 :     if (zeal == 0) {
    1089           0 :         if (hasZealMode(ZealMode::GenerationalGC)) {
    1090           0 :             evictNursery(JS::gcreason::DEBUG_GC);
    1091           0 :             nursery().leaveZealMode();
    1092             :         }
    1093             : 
    1094           0 :         if (isIncrementalGCInProgress())
    1095           0 :             finishGC(JS::gcreason::DEBUG_GC);
    1096             :     }
    1097             : 
    1098           0 :     ZealMode zealMode = ZealMode(zeal);
    1099           0 :     if (zealMode == ZealMode::GenerationalGC)
    1100           0 :         nursery().enterZealMode();
    1101             : 
    1102             :     // Some modes are mutually exclusive. If we're setting one of those, we
    1103             :     // first reset all of them.
    1104           0 :     if (IncrementalSliceZealModes.contains(zealMode)) {
    1105           0 :         for (auto mode : IncrementalSliceZealModes)
    1106           0 :             clearZealMode(mode);
    1107             :     }
    1108             : 
    1109           0 :     bool schedule = zealMode >= ZealMode::Alloc;
    1110           0 :     if (zeal != 0)
    1111           0 :         zealModeBits |= 1 << unsigned(zeal);
    1112             :     else
    1113           0 :         zealModeBits = 0;
    1114           0 :     zealFrequency = frequency;
    1115           0 :     nextScheduled = schedule ? frequency : 0;
    1116             : }
    1117             : 
    1118             : void
    1119           0 : GCRuntime::setNextScheduled(uint32_t count)
    1120             : {
    1121           0 :     nextScheduled = count;
    1122           0 : }
    1123             : 
    1124             : using CharRange = mozilla::Range<const char>;
    1125             : using CharRangeVector = Vector<CharRange, 0, SystemAllocPolicy>;
    1126             : 
    1127             : static bool
    1128           0 : ParseZealModeName(CharRange text, uint32_t* modeOut)
    1129             : {
    1130             :     struct ModeInfo
    1131             :     {
    1132             :         const char* name;
    1133             :         size_t length;
    1134             :         uint32_t value;
    1135             :     };
    1136             : 
    1137             :     static const ModeInfo zealModes[] = {
     1138             :         {"None", strlen("None"), 0},
    1139             : #define ZEAL_MODE(name, value) {#name, strlen(#name), value},
    1140             :         JS_FOR_EACH_ZEAL_MODE(ZEAL_MODE)
    1141             : #undef ZEAL_MODE
    1142             :     };
    1143             : 
    1144           0 :     for (auto mode : zealModes) {
    1145           0 :         if (text.length() == mode.length &&
    1146           0 :             memcmp(text.begin().get(), mode.name, mode.length) == 0)
    1147             :         {
    1148           0 :             *modeOut = mode.value;
    1149           0 :             return true;
    1150             :         }
    1151             :     }
    1152             : 
    1153             :     return false;
    1154             : }
    1155             : 
    1156             : static bool
    1157           0 : ParseZealModeNumericParam(CharRange text, uint32_t* paramOut)
    1158             : {
    1159           0 :     if (text.length() == 0)
    1160             :         return false;
    1161             : 
    1162           0 :     for (auto c : text) {
    1163           0 :         if (!isdigit(c))
    1164           0 :             return false;
    1165             :     }
    1166             : 
    1167           0 :     *paramOut = atoi(text.begin().get());
    1168           0 :     return true;
    1169             : }
    1170             : 
    1171             : static bool
    1172           0 : SplitStringBy(CharRange text, char delimiter, CharRangeVector* result)
    1173             : {
    1174           0 :     auto start = text.begin();
    1175           0 :     for (auto ptr = start; ptr != text.end(); ptr++) {
    1176           0 :         if (*ptr == delimiter) {
    1177           0 :             if (!result->emplaceBack(start, ptr))
    1178           0 :                 return false;
    1179           0 :             start = ptr + 1;
    1180             :         }
    1181             :     }
    1182             : 
    1183           0 :     return result->emplaceBack(start, text.end());
    1184             : }
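The splitting semantics above are worth spelling out: every delimiter ends a piece, and the final `emplaceBack(start, text.end())` always emits the tail, so an empty input or a trailing delimiter produces an empty piece rather than none. A minimal standalone sketch of the same behavior on `std::string` (independent of the `CharRange`/`Vector` types used here):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Same splitting semantics as SplitStringBy: each delimiter closes a piece,
// and the tail after the last delimiter is always emitted, even if empty.
static std::vector<std::string> Split(const std::string& text, char delim) {
    std::vector<std::string> out;
    size_t start = 0;
    for (size_t i = 0; i < text.size(); i++) {
        if (text[i] == delim) {
            out.push_back(text.substr(start, i - start));
            start = i + 1;
        }
    }
    out.push_back(text.substr(start));  // mirrors the final emplaceBack
    return out;
}
```

This is why `parseAndSetZeal` below can rely on `parts.length() >= 1` after a successful split.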
    1185             : 
    1186             : static bool
    1187           0 : PrintZealHelpAndFail()
    1188             : {
    1189           0 :     fprintf(stderr, "Format: JS_GC_ZEAL=level(;level)*[,N]\n");
    1190           0 :     fputs(ZealModeHelpText, stderr);
    1191           0 :     return false;
    1192             : }
    1193             : 
    1194             : bool
    1195           0 : GCRuntime::parseAndSetZeal(const char* str)
    1196             : {
    1197             :     // Set the zeal mode from a string consisting of one or more mode specifiers
    1198             :     // separated by ';', optionally followed by a ',' and the trigger frequency.
    1199             :     // The mode specifiers can be a mode name or its number.
    1200             : 
    1201           0 :     auto text = CharRange(str, strlen(str));
    1202             : 
    1203           0 :     CharRangeVector parts;
    1204           0 :     if (!SplitStringBy(text, ',', &parts))
    1205             :         return false;
    1206             : 
    1207           0 :     if (parts.length() == 0 || parts.length() > 2)
    1208           0 :         return PrintZealHelpAndFail();
    1209             : 
    1210           0 :     uint32_t frequency = JS_DEFAULT_ZEAL_FREQ;
    1211           0 :     if (parts.length() == 2 && !ParseZealModeNumericParam(parts[1], &frequency))
    1212           0 :         return PrintZealHelpAndFail();
    1213             : 
    1214           0 :     CharRangeVector modes;
    1215           0 :     if (!SplitStringBy(parts[0], ';', &modes))
    1216             :         return false;
    1217             : 
    1218           0 :     for (const auto& descr : modes) {
    1219             :         uint32_t mode;
    1220           0 :         if (!ParseZealModeName(descr, &mode) && !ParseZealModeNumericParam(descr, &mode))
    1221           0 :             return PrintZealHelpAndFail();
    1222             : 
    1223           0 :         setZeal(mode, frequency);
    1224             :     }
    1225             : 
    1226             :     return true;
    1227             : }
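The `JS_GC_ZEAL` grammar handled above is `level(;level)*[,N]`: one or more mode specifiers separated by `;`, with an optional `,N` trigger frequency. A self-contained sketch of just that grammar, using hypothetical stand-in types (`ZealSpec`, `SplitBy`, `ParseNumber`) in place of the SpiderMonkey ones, and handling only numeric modes (the name lookup done by `ParseZealModeName` is omitted):

```cpp
#include <cassert>
#include <cctype>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical stand-in for the parsed result; 100 stands in for
// JS_DEFAULT_ZEAL_FREQ.
struct ZealSpec {
    std::vector<uint32_t> modes;
    uint32_t frequency = 100;
};

static std::vector<std::string> SplitBy(const std::string& text, char delim) {
    std::vector<std::string> parts;
    size_t start = 0;
    for (size_t i = 0; i <= text.size(); i++) {
        if (i == text.size() || text[i] == delim) {
            parts.push_back(text.substr(start, i - start));
            start = i + 1;
        }
    }
    return parts;
}

static bool ParseNumber(const std::string& text, uint32_t* out) {
    if (text.empty())
        return false;
    for (char c : text) {
        if (!isdigit(static_cast<unsigned char>(c)))
            return false;
    }
    *out = static_cast<uint32_t>(std::stoul(text));
    return true;
}

// Parse "level(;level)*[,N]", mirroring the shape of parseAndSetZeal.
static bool ParseZealSpec(const std::string& str, ZealSpec* spec) {
    auto parts = SplitBy(str, ',');
    if (parts.empty() || parts.size() > 2)
        return false;
    if (parts.size() == 2 && !ParseNumber(parts[1], &spec->frequency))
        return false;
    for (const auto& mode : SplitBy(parts[0], ';')) {
        uint32_t value;
        if (!ParseNumber(mode, &value))  // name lookup omitted in this sketch
            return false;
        spec->modes.push_back(value);
    }
    return true;
}
```

For example, `"4;7,100"` yields modes `{4, 7}` with frequency `100`, while a non-numeric frequency such as `"4,x"` is rejected.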
    1228             : 
    1229             : static const char*
    1230           0 : AllocKindName(AllocKind kind)
    1231             : {
    1232             :     static const char* names[] = {
    1233             : #define EXPAND_THING_NAME(allocKind, _1, _2, _3, _4, _5) \
    1234             :         #allocKind,
    1235             : FOR_EACH_ALLOCKIND(EXPAND_THING_NAME)
    1236             : #undef EXPAND_THING_NAME
    1237             :     };
    1238             :     static_assert(ArrayLength(names) == size_t(AllocKind::LIMIT),
    1239             :                   "names array should have an entry for every AllocKind");
    1240             : 
    1241           0 :     size_t i = size_t(kind);
    1242           0 :     MOZ_ASSERT(i < ArrayLength(names));
    1243           0 :     return names[i];
    1244             : }
    1245             : 
    1246             : void
    1247           0 : js::gc::DumpArenaInfo()
    1248             : {
    1249           0 :     fprintf(stderr, "Arena header size: %zu\n\n", ArenaHeaderSize);
    1250             : 
    1251           0 :     fprintf(stderr, "GC thing kinds:\n");
    1252           0 :     fprintf(stderr, "%25s %8s %8s %8s\n", "AllocKind:", "Size:", "Count:", "Padding:");
    1253           0 :     for (auto kind : AllAllocKinds()) {
    1254           0 :         fprintf(stderr,
    1255             :                 "%25s %8zu %8zu %8zu\n",
    1256             :                 AllocKindName(kind),
    1257             :                 Arena::thingSize(kind),
    1258             :                 Arena::thingsPerArena(kind),
    1259           0 :                 Arena::firstThingOffset(kind) - ArenaHeaderSize);
    1260             :     }
    1261           0 : }
    1262             : 
    1263             : #endif // JS_GC_ZEAL
    1264             : 
    1265             : /*
    1266             :  * Lifetime in number of major GCs for type sets attached to scripts containing
    1267             :  * observed types.
    1268             :  */
    1269             : static const uint64_t JIT_SCRIPT_RELEASE_TYPES_PERIOD = 20;
    1270             : 
    1271             : bool
    1272           0 : GCRuntime::init(uint32_t maxbytes, uint32_t maxNurseryBytes)
    1273             : {
    1274           0 :     MOZ_ASSERT(SystemPageSize());
    1275             : 
    1276           0 :     if (!rootsHash.ref().init(256))
    1277             :         return false;
    1278             : 
    1279             :     {
    1280           0 :         AutoLockGCBgAlloc lock(rt);
    1281             : 
    1282           0 :         MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_BYTES, maxbytes, lock));
    1283           0 :         MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_NURSERY_BYTES, maxNurseryBytes, lock));
    1284           0 :         setMaxMallocBytes(TuningDefaults::MaxMallocBytes, lock);
    1285             : 
    1286           0 :         const char* size = getenv("JSGC_MARK_STACK_LIMIT");
    1287           0 :         if (size)
    1288           0 :             setMarkStackLimit(atoi(size), lock);
    1289             : 
    1290           0 :         jitReleaseNumber = majorGCNumber + JIT_SCRIPT_RELEASE_TYPES_PERIOD;
    1291             : 
    1292           0 :         if (!nursery().init(maxNurseryBytes, lock))
    1293           0 :             return false;
    1294             :     }
    1295             : 
    1296             : #ifdef JS_GC_ZEAL
    1297           0 :     const char* zealSpec = getenv("JS_GC_ZEAL");
    1298           0 :     if (zealSpec && zealSpec[0] && !parseAndSetZeal(zealSpec))
    1299             :         return false;
    1300             : #endif
    1301             : 
    1302           0 :     if (!gcTracer.initTrace(*this))
    1303             :         return false;
    1304             : 
    1305           0 :     if (!marker.init(mode))
    1306             :         return false;
    1307             : 
    1308           0 :     if (!initSweepActions())
    1309             :         return false;
    1310             : 
    1311           0 :     return true;
    1312             : }
    1313             : 
    1314             : void
    1315           0 : GCRuntime::finish()
    1316             : {
    1317             :     /* Wait for nursery background free to end and disable it to release memory. */
    1318           0 :     if (nursery().isEnabled()) {
    1319           0 :         nursery().waitBackgroundFreeEnd();
    1320           0 :         nursery().disable();
    1321             :     }
    1322             : 
    1323             :     /*
    1324             :      * Wait until the background finalization and allocation stops and the
    1325             :      * helper thread shuts down before we forcefully release any remaining GC
    1326             :      * memory.
    1327             :      */
    1328           0 :     helperState.finish();
    1329           0 :     allocTask.cancelAndWait();
    1330           0 :     decommitTask.cancelAndWait();
    1331             : 
    1332             : #ifdef JS_GC_ZEAL
    1333             :     /* Free memory associated with GC verification. */
    1334           0 :     finishVerifier();
    1335             : #endif
    1336             : 
    1337             :     /* Delete all remaining zones. */
    1338           0 :     if (rt->gcInitialized) {
    1339           0 :         AutoSetThreadIsSweeping threadIsSweeping;
    1340           0 :         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    1341           0 :             for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
    1342           0 :                 for (RealmsInCompartmentIter realm(comp); !realm.done(); realm.next())
    1343           0 :                     js_delete(realm.get());
    1344           0 :                 comp->realms().clear();
    1345           0 :                 js_delete(comp.get());
    1346             :             }
    1347           0 :             zone->compartments().clear();
    1348           0 :             js_delete(zone.get());
    1349             :         }
    1350             :     }
    1351             : 
    1352           0 :     zones().clear();
    1353             : 
    1354           0 :     FreeChunkPool(fullChunks_.ref());
    1355           0 :     FreeChunkPool(availableChunks_.ref());
    1356           0 :     FreeChunkPool(emptyChunks_.ref());
    1357             : 
    1358           0 :     gcTracer.finishTrace();
    1359             : 
    1360           0 :     nursery().printTotalProfileTimes();
    1361           0 :     stats().printTotalProfileTimes();
    1362           0 : }
    1363             : 
    1364             : bool
    1365           0 : GCRuntime::setParameter(JSGCParamKey key, uint32_t value, AutoLockGC& lock)
    1366             : {
    1367          50 :     switch (key) {
    1368             :       case JSGC_MAX_MALLOC_BYTES:
    1369           0 :         setMaxMallocBytes(value, lock);
    1370           4 :         break;
    1371             :       case JSGC_SLICE_TIME_BUDGET:
    1372           8 :         defaultTimeBudget_ = value ? value : SliceBudget::UnlimitedTimeBudget;
    1373           0 :         break;
    1374             :       case JSGC_MARK_STACK_LIMIT:
    1375           0 :         if (value == 0)
    1376             :             return false;
    1377           0 :         setMarkStackLimit(value, lock);
    1378           0 :         break;
    1379             :       case JSGC_MODE:
    1380           7 :         if (mode != JSGC_MODE_GLOBAL &&
    1381           4 :             mode != JSGC_MODE_ZONE &&
    1382           0 :             mode != JSGC_MODE_INCREMENTAL)
    1383             :         {
    1384             :             return false;
    1385             :         }
    1386           0 :         mode = JSGCMode(value);
    1387           2 :         break;
    1388             :       case JSGC_COMPACTING_ENABLED:
    1389           2 :         compactingEnabled = value != 0;
    1390           0 :         break;
    1391             :       default:
    1392           0 :         if (!tunables.setParameter(key, value, lock))
    1393             :             return false;
    1394         251 :         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    1395         268 :             zone->threshold.updateAfterGC(zone->usage.gcBytes(), GC_NORMAL, tunables,
    1396          67 :                                           schedulingState, lock);
    1397             :         }
    1398             :     }
    1399             : 
    1400             :     return true;
    1401             : }
    1402             : 
    1403             : bool
    1404          47 : GCSchedulingTunables::setParameter(JSGCParamKey key, uint32_t value, const AutoLockGC& lock)
    1405             : {
    1406             :     // Limit the heap growth factor to one hundred times the size of the current heap.
    1407           0 :     const double MaxHeapGrowthFactor = 100;
    1408             : 
    1409          47 :     switch(key) {
    1410             :       case JSGC_MAX_BYTES:
    1411           8 :         gcMaxBytes_ = value;
    1412             :         break;
    1413             :       case JSGC_MAX_NURSERY_BYTES:
    1414           0 :         gcMaxNurseryBytes_ = value;
    1415             :         break;
    1416             :       case JSGC_HIGH_FREQUENCY_TIME_LIMIT:
    1417           8 :         highFrequencyThresholdUsec_ = value * PRMJ_USEC_PER_MSEC;
    1418           4 :         break;
    1419             :       case JSGC_HIGH_FREQUENCY_LOW_LIMIT: {
    1420           0 :         uint64_t newLimit = (uint64_t)value * 1024 * 1024;
    1421             :         if (newLimit == UINT64_MAX)
    1422             :             return false;
    1423           0 :         setHighFrequencyLowLimit(newLimit);
    1424           0 :         break;
    1425             :       }
    1426             :       case JSGC_HIGH_FREQUENCY_HIGH_LIMIT: {
    1427           0 :         uint64_t newLimit = (uint64_t)value * 1024 * 1024;
    1428           4 :         if (newLimit == 0)
    1429             :             return false;
    1430           0 :         setHighFrequencyHighLimit(newLimit);
    1431           0 :         break;
    1432             :       }
    1433             :       case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX: {
    1434           0 :         double newGrowth = value / 100.0;
    1435           4 :         if (newGrowth < MinHighFrequencyHeapGrowthFactor || newGrowth > MaxHeapGrowthFactor)
    1436             :             return false;
    1437           0 :         setHighFrequencyHeapGrowthMax(newGrowth);
    1438           0 :         break;
    1439             :       }
    1440             :       case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN: {
    1441           0 :         double newGrowth = value / 100.0;
    1442           4 :         if (newGrowth < MinHighFrequencyHeapGrowthFactor || newGrowth > MaxHeapGrowthFactor)
    1443             :             return false;
    1444           0 :         setHighFrequencyHeapGrowthMin(newGrowth);
    1445           0 :         break;
    1446             :       }
    1447             :       case JSGC_LOW_FREQUENCY_HEAP_GROWTH: {
    1448           0 :         double newGrowth = value / 100.0;
    1449           4 :         if (newGrowth < MinLowFrequencyHeapGrowthFactor || newGrowth > MaxHeapGrowthFactor)
    1450             :             return false;
    1451           0 :         setLowFrequencyHeapGrowth(newGrowth);
    1452           0 :         break;
    1453             :       }
    1454             :       case JSGC_DYNAMIC_HEAP_GROWTH:
    1455           0 :         dynamicHeapGrowthEnabled_ = value != 0;
    1456           1 :         break;
    1457             :       case JSGC_DYNAMIC_MARK_SLICE:
    1458           0 :         dynamicMarkSliceEnabled_ = value != 0;
    1459           1 :         break;
    1460             :       case JSGC_ALLOCATION_THRESHOLD:
    1461           0 :         gcZoneAllocThresholdBase_ = value * 1024 * 1024;
    1462           4 :         break;
    1463             :       case JSGC_ALLOCATION_THRESHOLD_FACTOR: {
    1464           0 :         double newFactor = value / 100.0;
    1465           1 :         if (newFactor < MinAllocationThresholdFactor || newFactor > 1.0)
    1466             :             return false;
    1467           0 :         allocThresholdFactor_ = newFactor;
    1468           0 :         break;
    1469             :       }
    1470             :       case JSGC_ALLOCATION_THRESHOLD_FACTOR_AVOID_INTERRUPT: {
    1471           0 :         double newFactor = value / 100.0;
    1472           1 :         if (newFactor < MinAllocationThresholdFactor || newFactor > 1.0)
    1473             :             return false;
    1474           0 :         allocThresholdFactorAvoidInterrupt_ = newFactor;
    1475           0 :         break;
    1476             :       }
    1477             :       case JSGC_MIN_EMPTY_CHUNK_COUNT:
    1478           0 :         setMinEmptyChunkCount(value);
    1479           1 :         break;
    1480             :       case JSGC_MAX_EMPTY_CHUNK_COUNT:
    1481           1 :         setMaxEmptyChunkCount(value);
    1482           1 :         break;
    1483             :       case JSGC_NURSERY_FREE_THRESHOLD_FOR_IDLE_COLLECTION:
    1484           0 :         if (value > gcMaxNurseryBytes())
    1485           0 :             value = gcMaxNurseryBytes();
    1486           0 :         nurseryFreeThresholdForIdleCollection_ = value;
    1487             :         break;
    1488             :       default:
    1489           0 :         MOZ_CRASH("Unknown GC parameter.");
    1490             :     }
    1491             : 
    1492             :     return true;
    1493             : }
    1494             : 
    1495             : void
    1496           0 : GCSchedulingTunables::setMaxMallocBytes(size_t value)
    1497             : {
    1498           0 :     maxMallocBytes_ = std::min(value, TuningDefaults::MallocThresholdLimit);
    1499           0 : }
    1500             : 
    1501             : void
    1502           0 : GCSchedulingTunables::setHighFrequencyLowLimit(uint64_t newLimit)
    1503             : {
    1504           0 :     highFrequencyLowLimitBytes_ = newLimit;
    1505           4 :     if (highFrequencyLowLimitBytes_ >= highFrequencyHighLimitBytes_)
    1506           0 :         highFrequencyHighLimitBytes_ = highFrequencyLowLimitBytes_ + 1;
    1507           0 :     MOZ_ASSERT(highFrequencyHighLimitBytes_ > highFrequencyLowLimitBytes_);
    1508           4 : }
    1509             : 
    1510             : void
    1511           0 : GCSchedulingTunables::setHighFrequencyHighLimit(uint64_t newLimit)
    1512             : {
    1513           0 :     highFrequencyHighLimitBytes_ = newLimit;
    1514           4 :     if (highFrequencyHighLimitBytes_ <= highFrequencyLowLimitBytes_)
    1515           0 :         highFrequencyLowLimitBytes_ = highFrequencyHighLimitBytes_ - 1;
    1516           0 :     MOZ_ASSERT(highFrequencyHighLimitBytes_ > highFrequencyLowLimitBytes_);
    1517           4 : }
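The two setters above maintain the invariant `low < high` by clamping the *other* bound: whichever limit is set last wins, and its partner is pushed just past it. A minimal sketch of this paired-clamp pattern with hypothetical names and default values (the real defaults live in `TuningDefaults`):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the paired-limit clamping used by setHighFrequencyLowLimit /
// setHighFrequencyHighLimit: the bound being set wins, and the other bound
// is adjusted so that low < high always holds afterwards.
struct Limits {
    uint64_t low = 100;    // hypothetical defaults for illustration
    uint64_t high = 500;

    void setLow(uint64_t v) {
        low = v;
        if (low >= high)
            high = low + 1;   // restore the invariant low < high
    }
    void setHigh(uint64_t v) {
        high = v;
        if (high <= low)
            low = high - 1;
    }
};
```

The same pattern reappears below in `setMinEmptyChunkCount` / `setMaxEmptyChunkCount`, with `<=` rather than `<` as the invariant.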
    1518             : 
    1519             : void
    1520           0 : GCSchedulingTunables::setHighFrequencyHeapGrowthMin(double value)
    1521             : {
    1522           0 :     highFrequencyHeapGrowthMin_ = value;
    1523           0 :     if (highFrequencyHeapGrowthMin_ > highFrequencyHeapGrowthMax_)
    1524           0 :         highFrequencyHeapGrowthMax_ = highFrequencyHeapGrowthMin_;
    1525           8 :     MOZ_ASSERT(highFrequencyHeapGrowthMin_ >= MinHighFrequencyHeapGrowthFactor);
    1526           0 :     MOZ_ASSERT(highFrequencyHeapGrowthMin_ <= highFrequencyHeapGrowthMax_);
    1527           4 : }
    1528             : 
    1529             : void
    1530           0 : GCSchedulingTunables::setHighFrequencyHeapGrowthMax(double value)
    1531             : {
    1532           0 :     highFrequencyHeapGrowthMax_ = value;
    1533           0 :     if (highFrequencyHeapGrowthMax_ < highFrequencyHeapGrowthMin_)
    1534           0 :         highFrequencyHeapGrowthMin_ = highFrequencyHeapGrowthMax_;
    1535           8 :     MOZ_ASSERT(highFrequencyHeapGrowthMin_ >= MinHighFrequencyHeapGrowthFactor);
    1536           0 :     MOZ_ASSERT(highFrequencyHeapGrowthMin_ <= highFrequencyHeapGrowthMax_);
    1537           4 : }
    1538             : 
    1539             : void
    1540           0 : GCSchedulingTunables::setLowFrequencyHeapGrowth(double value)
    1541             : {
    1542           8 :     lowFrequencyHeapGrowth_ = value;
    1543           0 :     MOZ_ASSERT(lowFrequencyHeapGrowth_ >= MinLowFrequencyHeapGrowthFactor);
    1544           4 : }
    1545             : 
    1546             : void
    1547           0 : GCSchedulingTunables::setMinEmptyChunkCount(uint32_t value)
    1548             : {
    1549           0 :     minEmptyChunkCount_ = value;
    1550           2 :     if (minEmptyChunkCount_ > maxEmptyChunkCount_)
    1551           0 :         maxEmptyChunkCount_ = minEmptyChunkCount_;
    1552           0 :     MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
    1553           1 : }
    1554             : 
    1555             : void
    1556           0 : GCSchedulingTunables::setMaxEmptyChunkCount(uint32_t value)
    1557             : {
    1558           0 :     maxEmptyChunkCount_ = value;
    1559           2 :     if (minEmptyChunkCount_ > maxEmptyChunkCount_)
    1560           0 :         minEmptyChunkCount_ = maxEmptyChunkCount_;
    1561           2 :     MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
    1562           1 : }
    1563             : 
    1564           4 : GCSchedulingTunables::GCSchedulingTunables()
    1565             :   : gcMaxBytes_(0),
    1566             :     maxMallocBytes_(TuningDefaults::MaxMallocBytes),
    1567             :     gcMaxNurseryBytes_(0),
    1568             :     gcZoneAllocThresholdBase_(TuningDefaults::GCZoneAllocThresholdBase),
    1569             :     allocThresholdFactor_(TuningDefaults::AllocThresholdFactor),
    1570             :     allocThresholdFactorAvoidInterrupt_(TuningDefaults::AllocThresholdFactorAvoidInterrupt),
    1571             :     zoneAllocDelayBytes_(TuningDefaults::ZoneAllocDelayBytes),
    1572             :     dynamicHeapGrowthEnabled_(TuningDefaults::DynamicHeapGrowthEnabled),
    1573             :     highFrequencyThresholdUsec_(TuningDefaults::HighFrequencyThresholdUsec),
    1574             :     highFrequencyLowLimitBytes_(TuningDefaults::HighFrequencyLowLimitBytes),
    1575             :     highFrequencyHighLimitBytes_(TuningDefaults::HighFrequencyHighLimitBytes),
    1576             :     highFrequencyHeapGrowthMax_(TuningDefaults::HighFrequencyHeapGrowthMax),
    1577             :     highFrequencyHeapGrowthMin_(TuningDefaults::HighFrequencyHeapGrowthMin),
    1578             :     lowFrequencyHeapGrowth_(TuningDefaults::LowFrequencyHeapGrowth),
    1579             :     dynamicMarkSliceEnabled_(TuningDefaults::DynamicMarkSliceEnabled),
    1580             :     minEmptyChunkCount_(TuningDefaults::MinEmptyChunkCount),
    1581             :     maxEmptyChunkCount_(TuningDefaults::MaxEmptyChunkCount),
    1582           0 :     nurseryFreeThresholdForIdleCollection_(TuningDefaults::NurseryFreeThresholdForIdleCollection)
    1583           0 : {}
    1584             : 
    1585             : void
    1586           1 : GCRuntime::resetParameter(JSGCParamKey key, AutoLockGC& lock)
    1587             : {
    1588           1 :     switch (key) {
    1589             :       case JSGC_MAX_MALLOC_BYTES:
    1590           0 :         setMaxMallocBytes(TuningDefaults::MaxMallocBytes, lock);
    1591           0 :         break;
    1592             :       case JSGC_SLICE_TIME_BUDGET:
    1593           0 :         defaultTimeBudget_ = TuningDefaults::DefaultTimeBudget;
    1594             :         break;
    1595             :       case JSGC_MARK_STACK_LIMIT:
    1596           0 :         setMarkStackLimit(MarkStack::DefaultCapacity, lock);
    1597           0 :         break;
    1598             :       case JSGC_MODE:
    1599           0 :         mode = TuningDefaults::Mode;
    1600             :         break;
    1601             :       case JSGC_COMPACTING_ENABLED:
    1602           0 :         compactingEnabled = TuningDefaults::CompactingEnabled;
    1603             :         break;
    1604             :       default:
    1605           1 :         tunables.resetParameter(key, lock);
    1606           9 :         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    1607           0 :             zone->threshold.updateAfterGC(zone->usage.gcBytes(), GC_NORMAL,
    1608           3 :                 tunables, schedulingState, lock);
    1609             :         }
    1610             :     }
    1611           1 : }
    1612             : 
    1613             : void
    1614           0 : GCSchedulingTunables::resetParameter(JSGCParamKey key, const AutoLockGC& lock)
    1615             : {
    1616           1 :     switch (key) {
    1617             :       case JSGC_MAX_BYTES:
    1618           2 :         gcMaxBytes_ = 0xffffffff;
    1619           1 :         break;
    1620             :       case JSGC_MAX_NURSERY_BYTES:
    1621           0 :         gcMaxNurseryBytes_ = JS::DefaultNurseryBytes;
    1622             :         break;
    1623             :       case JSGC_HIGH_FREQUENCY_TIME_LIMIT:
    1624             :         highFrequencyThresholdUsec_ =
    1625           0 :             TuningDefaults::HighFrequencyThresholdUsec;
    1626             :         break;
    1627             :       case JSGC_HIGH_FREQUENCY_LOW_LIMIT:
    1628           0 :         setHighFrequencyLowLimit(TuningDefaults::HighFrequencyLowLimitBytes);
    1629           0 :         break;
    1630             :       case JSGC_HIGH_FREQUENCY_HIGH_LIMIT:
    1631           0 :         setHighFrequencyHighLimit(TuningDefaults::HighFrequencyHighLimitBytes);
    1632           0 :         break;
    1633             :       case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX:
    1634           0 :         setHighFrequencyHeapGrowthMax(TuningDefaults::HighFrequencyHeapGrowthMax);
    1635           0 :         break;
    1636             :       case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN:
    1637           0 :         setHighFrequencyHeapGrowthMin(TuningDefaults::HighFrequencyHeapGrowthMin);
    1638           0 :         break;
    1639             :       case JSGC_LOW_FREQUENCY_HEAP_GROWTH:
    1640           0 :         setLowFrequencyHeapGrowth(TuningDefaults::LowFrequencyHeapGrowth);
    1641           0 :         break;
    1642             :       case JSGC_DYNAMIC_HEAP_GROWTH:
    1643           0 :         dynamicHeapGrowthEnabled_ = TuningDefaults::DynamicHeapGrowthEnabled;
    1644             :         break;
    1645             :       case JSGC_DYNAMIC_MARK_SLICE:
    1646           0 :         dynamicMarkSliceEnabled_ = TuningDefaults::DynamicMarkSliceEnabled;
    1647             :         break;
    1648             :       case JSGC_ALLOCATION_THRESHOLD:
    1649           0 :         gcZoneAllocThresholdBase_ = TuningDefaults::GCZoneAllocThresholdBase;
    1650             :         break;
    1651             :       case JSGC_ALLOCATION_THRESHOLD_FACTOR:
    1652           0 :         allocThresholdFactor_ = TuningDefaults::AllocThresholdFactor;
    1653             :         break;
    1654             :       case JSGC_ALLOCATION_THRESHOLD_FACTOR_AVOID_INTERRUPT:
    1655           0 :         allocThresholdFactorAvoidInterrupt_ = TuningDefaults::AllocThresholdFactorAvoidInterrupt;
    1656             :         break;
    1657             :       case JSGC_MIN_EMPTY_CHUNK_COUNT:
    1658           0 :         setMinEmptyChunkCount(TuningDefaults::MinEmptyChunkCount);
    1659           0 :         break;
    1660             :       case JSGC_MAX_EMPTY_CHUNK_COUNT:
    1661           0 :         setMaxEmptyChunkCount(TuningDefaults::MaxEmptyChunkCount);
    1662           0 :         break;
    1663             :       case JSGC_NURSERY_FREE_THRESHOLD_FOR_IDLE_COLLECTION:
    1664             :         nurseryFreeThresholdForIdleCollection_ =
    1665           0 :             TuningDefaults::NurseryFreeThresholdForIdleCollection;
    1666             :         break;
    1667             :       default:
    1668           0 :         MOZ_CRASH("Unknown GC parameter.");
    1669             :     }
    1670           1 : }
    1671             : 
    1672             : uint32_t
    1673           0 : GCRuntime::getParameter(JSGCParamKey key, const AutoLockGC& lock)
    1674             : {
    1675           0 :     switch (key) {
    1676             :       case JSGC_MAX_BYTES:
    1677           0 :         return uint32_t(tunables.gcMaxBytes());
    1678             :       case JSGC_MAX_MALLOC_BYTES:
    1679           0 :         return mallocCounter.maxBytes();
    1680             :       case JSGC_BYTES:
    1681           0 :         return uint32_t(usage.gcBytes());
    1682             :       case JSGC_MODE:
    1683           0 :         return uint32_t(mode);
    1684             :       case JSGC_UNUSED_CHUNKS:
    1685           0 :         return uint32_t(emptyChunks(lock).count());
    1686             :       case JSGC_TOTAL_CHUNKS:
    1687           0 :         return uint32_t(fullChunks(lock).count() +
    1688           0 :                         availableChunks(lock).count() +
    1689           0 :                         emptyChunks(lock).count());
    1690             :       case JSGC_SLICE_TIME_BUDGET:
    1691           0 :         if (defaultTimeBudget_.ref() == SliceBudget::UnlimitedTimeBudget) {
    1692             :             return 0;
    1693             :         } else {
    1694           0 :             MOZ_RELEASE_ASSERT(defaultTimeBudget_ >= 0);
    1695           0 :             MOZ_RELEASE_ASSERT(defaultTimeBudget_ <= UINT32_MAX);
    1696           0 :             return uint32_t(defaultTimeBudget_);
    1697             :         }
    1698             :       case JSGC_MARK_STACK_LIMIT:
    1699           0 :         return marker.maxCapacity();
    1700             :       case JSGC_HIGH_FREQUENCY_TIME_LIMIT:
    1701           0 :         return tunables.highFrequencyThresholdUsec() / PRMJ_USEC_PER_MSEC;
    1702             :       case JSGC_HIGH_FREQUENCY_LOW_LIMIT:
    1703           0 :         return tunables.highFrequencyLowLimitBytes() / 1024 / 1024;
    1704             :       case JSGC_HIGH_FREQUENCY_HIGH_LIMIT:
    1705           0 :         return tunables.highFrequencyHighLimitBytes() / 1024 / 1024;
    1706             :       case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX:
    1707           0 :         return uint32_t(tunables.highFrequencyHeapGrowthMax() * 100);
    1708             :       case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN:
    1709           0 :         return uint32_t(tunables.highFrequencyHeapGrowthMin() * 100);
    1710             :       case JSGC_LOW_FREQUENCY_HEAP_GROWTH:
    1711           0 :         return uint32_t(tunables.lowFrequencyHeapGrowth() * 100);
    1712             :       case JSGC_DYNAMIC_HEAP_GROWTH:
    1713           0 :         return tunables.isDynamicHeapGrowthEnabled();
    1714             :       case JSGC_DYNAMIC_MARK_SLICE:
    1715           0 :         return tunables.isDynamicMarkSliceEnabled();
    1716             :       case JSGC_ALLOCATION_THRESHOLD:
    1717           0 :         return tunables.gcZoneAllocThresholdBase() / 1024 / 1024;
    1718             :       case JSGC_ALLOCATION_THRESHOLD_FACTOR:
    1719           0 :         return uint32_t(tunables.allocThresholdFactor() * 100);
    1720             :       case JSGC_ALLOCATION_THRESHOLD_FACTOR_AVOID_INTERRUPT:
    1721           0 :         return uint32_t(tunables.allocThresholdFactorAvoidInterrupt() * 100);
    1722             :       case JSGC_MIN_EMPTY_CHUNK_COUNT:
    1723           0 :         return tunables.minEmptyChunkCount(lock);
    1724             :       case JSGC_MAX_EMPTY_CHUNK_COUNT:
    1725           0 :         return tunables.maxEmptyChunkCount();
    1726             :       case JSGC_COMPACTING_ENABLED:
    1727           0 :         return compactingEnabled;
    1728             :       default:
    1729           0 :         MOZ_ASSERT(key == JSGC_NUMBER);
    1730           0 :         return uint32_t(number);
    1731             :     }
    1732             : }
    1733             : 
    1734             : void
    1735           0 : GCRuntime::setMarkStackLimit(size_t limit, AutoLockGC& lock)
    1736             : {
    1737           0 :     MOZ_ASSERT(!JS::RuntimeHeapIsBusy());
    1738           0 :     AutoUnlockGC unlock(lock);
    1739           0 :     AutoStopVerifyingBarriers pauseVerification(rt, false);
    1740           0 :     marker.setMaxCapacity(limit);
    1741           0 : }
    1742             : 
    1743             : bool
    1744           7 : GCRuntime::addBlackRootsTracer(JSTraceDataOp traceOp, void* data)
    1745             : {
    1746           7 :     AssertHeapIsIdle();
    1747           0 :     return !!blackRootTracers.ref().append(Callback<JSTraceDataOp>(traceOp, data));
    1748             : }
    1749             : 
    1750             : void
    1751           0 : GCRuntime::removeBlackRootsTracer(JSTraceDataOp traceOp, void* data)
    1752             : {
    1753             :     // Can be called from finalizers
    1754           0 :     for (size_t i = 0; i < blackRootTracers.ref().length(); i++) {
    1755           0 :         Callback<JSTraceDataOp>* e = &blackRootTracers.ref()[i];
    1756           0 :         if (e->op == traceOp && e->data == data) {
    1757           0 :             blackRootTracers.ref().erase(e);
    1758             :         }
    1759             :     }
    1760           0 : }
    1761             : 
    1762             : void
    1763           0 : GCRuntime::setGrayRootsTracer(JSTraceDataOp traceOp, void* data)
    1764             : {
    1765           4 :     AssertHeapIsIdle();
    1766           8 :     grayRootTracer.op = traceOp;
    1767           0 :     grayRootTracer.data = data;
    1768           4 : }
    1769             : 
    1770             : void
    1771           0 : GCRuntime::setGCCallback(JSGCCallback callback, void* data)
    1772             : {
    1773           8 :     gcCallback.op = callback;
    1774           0 :     gcCallback.data = data;
    1775           4 : }
    1776             : 
    1777             : void
    1778           0 : GCRuntime::callGCCallback(JSGCStatus status) const
    1779             : {
    1780           0 :     MOZ_ASSERT(gcCallback.op);
    1781           0 :     gcCallback.op(rt->mainContextFromOwnThread(), status, gcCallback.data);
    1782           0 : }
    1783             : 
    1784             : void
    1785           0 : GCRuntime::setObjectsTenuredCallback(JSObjectsTenuredCallback callback,
    1786             :                                      void* data)
    1787             : {
    1788           8 :     tenuredCallback.op = callback;
    1789           0 :     tenuredCallback.data = data;
    1790           4 : }
    1791             : 
    1792             : void
    1793           0 : GCRuntime::callObjectsTenuredCallback()
    1794             : {
    1795           8 :     if (tenuredCallback.op)
    1796           0 :         tenuredCallback.op(rt->mainContextFromOwnThread(), tenuredCallback.data);
    1797           4 : }
    1798             : 
    1799             : bool
    1800           1 : GCRuntime::addFinalizeCallback(JSFinalizeCallback callback, void* data)
    1801             : {
    1802           0 :     return finalizeCallbacks.ref().append(Callback<JSFinalizeCallback>(callback, data));
    1803             : }
    1804             : 
    1805             : void
    1806           0 : GCRuntime::removeFinalizeCallback(JSFinalizeCallback callback)
    1807             : {
    1808           0 :     for (Callback<JSFinalizeCallback>* p = finalizeCallbacks.ref().begin();
    1809           0 :          p < finalizeCallbacks.ref().end(); p++)
    1810             :     {
    1811           0 :         if (p->op == callback) {
    1812           0 :             finalizeCallbacks.ref().erase(p);
    1813           0 :             break;
    1814             :         }
    1815             :     }
    1816           0 : }
    1817             : 
    1818             : void
    1819           0 : GCRuntime::callFinalizeCallbacks(FreeOp* fop, JSFinalizeStatus status) const
    1820             : {
    1821           0 :     for (auto& p : finalizeCallbacks.ref())
    1822           0 :         p.op(fop, status, p.data);
    1823           0 : }
    1824             : 
    1825             : bool
    1826           1 : GCRuntime::addWeakPointerZonesCallback(JSWeakPointerZonesCallback callback, void* data)
    1827             : {
    1828           1 :     return updateWeakPointerZonesCallbacks.ref().append(
    1829           0 :             Callback<JSWeakPointerZonesCallback>(callback, data));
    1830             : }
    1831             : 
    1832             : void
    1833           0 : GCRuntime::removeWeakPointerZonesCallback(JSWeakPointerZonesCallback callback)
    1834             : {
    1835           0 :     for (auto& p : updateWeakPointerZonesCallbacks.ref()) {
    1836           0 :         if (p.op == callback) {
    1837           0 :             updateWeakPointerZonesCallbacks.ref().erase(&p);
    1838           0 :             break;
    1839             :         }
    1840             :     }
    1841           0 : }
    1842             : 
    1843             : void
    1844           0 : GCRuntime::callWeakPointerZonesCallbacks() const
    1845             : {
    1846           0 :     JSContext* cx = rt->mainContextFromOwnThread();
    1847           0 :     for (auto const& p : updateWeakPointerZonesCallbacks.ref())
    1848           0 :         p.op(cx, p.data);
    1849           0 : }
    1850             : 
    1851             : bool
    1852           1 : GCRuntime::addWeakPointerCompartmentCallback(JSWeakPointerCompartmentCallback callback, void* data)
    1853             : {
    1854           1 :     return updateWeakPointerCompartmentCallbacks.ref().append(
    1855           0 :             Callback<JSWeakPointerCompartmentCallback>(callback, data));
    1856             : }
    1857             : 
    1858             : void
    1859           0 : GCRuntime::removeWeakPointerCompartmentCallback(JSWeakPointerCompartmentCallback callback)
    1860             : {
    1861           0 :     for (auto& p : updateWeakPointerCompartmentCallbacks.ref()) {
    1862           0 :         if (p.op == callback) {
    1863           0 :             updateWeakPointerCompartmentCallbacks.ref().erase(&p);
    1864           0 :             break;
    1865             :         }
    1866             :     }
    1867           0 : }
    1868             : 
    1869             : void
    1870           0 : GCRuntime::callWeakPointerCompartmentCallbacks(JS::Compartment* comp) const
    1871             : {
    1872           0 :     JSContext* cx = rt->mainContextFromOwnThread();
    1873           0 :     for (auto const& p : updateWeakPointerCompartmentCallbacks.ref())
    1874           0 :         p.op(cx, comp, p.data);
    1875           0 : }
    1876             : 
    1877             : JS::GCSliceCallback
    1878           0 : GCRuntime::setSliceCallback(JS::GCSliceCallback callback) {
    1879           1 :     return stats().setSliceCallback(callback);
    1880             : }
    1881             : 
    1882             : JS::GCNurseryCollectionCallback
    1883           0 : GCRuntime::setNurseryCollectionCallback(JS::GCNurseryCollectionCallback callback) {
    1884           0 :     return stats().setNurseryCollectionCallback(callback);
    1885             : }
    1886             : 
    1887             : JS::DoCycleCollectionCallback
    1888           0 : GCRuntime::setDoCycleCollectionCallback(JS::DoCycleCollectionCallback callback)
    1889             : {
    1890           1 :     auto prior = gcDoCycleCollectionCallback;
    1891           1 :     gcDoCycleCollectionCallback = Callback<JS::DoCycleCollectionCallback>(callback, nullptr);
    1892           1 :     return prior.op;
    1893             : }
    1894             : 
    1895             : void
    1896           0 : GCRuntime::callDoCycleCollectionCallback(JSContext* cx)
    1897             : {
    1898           0 :     if (gcDoCycleCollectionCallback.op)
    1899           0 :         gcDoCycleCollectionCallback.op(cx);
    1900           0 : }
    1901             : 
    1902             : bool
    1903           0 : GCRuntime::addRoot(Value* vp, const char* name)
    1904             : {
    1905             :     /*
    1906             :      * Sometimes Firefox will hold weak references to objects and then convert
    1907             :      * them to strong references by calling AddRoot (e.g., via PreserveWrapper,
    1908             :      * or ModifyBusyCount in workers). We need a read barrier to cover these
    1909             :      * cases.
    1910             :      */
    1911           0 :     if (isIncrementalGCInProgress())
    1912           0 :         GCPtrValue::writeBarrierPre(*vp);
    1913             : 
    1914           0 :     return rootsHash.ref().put(vp, name);
    1915             : }
    1916             : 
    1917             : void
    1918           0 : GCRuntime::removeRoot(Value* vp)
    1919             : {
    1920           0 :     rootsHash.ref().remove(vp);
    1921           0 :     notifyRootsRemoved();
    1922           0 : }
    1923             : 
    1924             : extern JS_FRIEND_API(bool)
    1925           0 : js::AddRawValueRoot(JSContext* cx, Value* vp, const char* name)
    1926             : {
    1927           0 :     MOZ_ASSERT(vp);
    1928           0 :     MOZ_ASSERT(name);
    1929           0 :     bool ok = cx->runtime()->gc.addRoot(vp, name);
    1930           0 :     if (!ok)
    1931           0 :         JS_ReportOutOfMemory(cx);
    1932           0 :     return ok;
    1933             : }
    1934             : 
    1935             : extern JS_FRIEND_API(void)
    1936           0 : js::RemoveRawValueRoot(JSContext* cx, Value* vp)
    1937             : {
    1938           0 :     cx->runtime()->gc.removeRoot(vp);
    1939           0 : }
    1940             : 
    1941             : void
    1942           0 : GCRuntime::setMaxMallocBytes(size_t value, const AutoLockGC& lock)
    1943             : {
    1944           0 :     tunables.setMaxMallocBytes(value);
    1945          16 :     mallocCounter.setMax(value, lock);
    1946          36 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
    1947           1 :         zone->setGCMaxMallocBytes(value, lock);
    1948           8 : }
    1949             : 
    1950             : double
    1951           0 : ZoneHeapThreshold::eagerAllocTrigger(bool highFrequencyGC) const
    1952             : {
    1953         260 :     double eagerTriggerFactor = highFrequencyGC ? HighFrequencyEagerAllocTriggerFactor
    1954         260 :                                                 : LowFrequencyEagerAllocTriggerFactor;
    1955           0 :     return eagerTriggerFactor * gcTriggerBytes();
    1956             : }
    1957             : 
    1958             : /* static */ double
    1959           0 : ZoneHeapThreshold::computeZoneHeapGrowthFactorForHeapSize(size_t lastBytes,
    1960             :                                                           const GCSchedulingTunables& tunables,
    1961             :                                                           const GCSchedulingState& state)
    1962             : {
    1963          91 :     if (!tunables.isDynamicHeapGrowthEnabled())
    1964             :         return 3.0;
    1965             : 
    1966             :     // For small zones, our collection heuristics do not matter much: favor
    1967             :     // something simple in this case.
    1968          45 :     if (lastBytes < 1 * 1024 * 1024)
    1969           0 :         return tunables.lowFrequencyHeapGrowth();
    1970             : 
    1971             :     // If GCs are not triggering in rapid succession, use a lower threshold so
    1972             :     // that we will collect garbage sooner.
    1973           0 :     if (!state.inHighFrequencyGCMode())
    1974           0 :         return tunables.lowFrequencyHeapGrowth();
    1975             : 
    1976             :     // The heap growth factor depends on the heap size after a GC and the GC
    1977             :     // frequency. For low frequency GCs (more than 1sec between GCs) we let
    1978             :     // the heap grow to 150%. For high frequency GCs we let the heap grow
    1979             :     // depending on the heap size:
    1980             :     //   lastBytes < highFrequencyLowLimit: 300%
    1981             :     //   lastBytes > highFrequencyHighLimit: 150%
    1982             :     //   otherwise: linear interpolation between 300% and 150% based on lastBytes
    1983             : 
    1984           0 :     double minRatio = tunables.highFrequencyHeapGrowthMin();
    1985           0 :     double maxRatio = tunables.highFrequencyHeapGrowthMax();
    1986           0 :     double lowLimit = tunables.highFrequencyLowLimitBytes();
    1987           0 :     double highLimit = tunables.highFrequencyHighLimitBytes();
    1988             : 
    1989           0 :     MOZ_ASSERT(minRatio <= maxRatio);
    1990           0 :     MOZ_ASSERT(lowLimit < highLimit);
    1991             : 
    1992           0 :     if (lastBytes <= lowLimit)
    1993             :         return maxRatio;
    1994             : 
    1995           0 :     if (lastBytes >= highLimit)
    1996             :         return minRatio;
    1997             : 
    1998           0 :     double factor = maxRatio - ((maxRatio - minRatio) * ((lastBytes - lowLimit) /
    1999           0 :                                                          (highLimit - lowLimit)));
    2000             : 
    2001           0 :     MOZ_ASSERT(factor >= minRatio);
    2002           0 :     MOZ_ASSERT(factor <= maxRatio);
    2003             :     return factor;
    2004             : }
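The linear interpolation described in the comment above can be sketched in isolation. The limit and ratio constants below are illustrative stand-ins, not the runtime's actual tunables:

```cpp
#include <cassert>

// Hypothetical values standing in for the GCSchedulingTunables fields.
constexpr double kMinRatio  = 1.5;                   // highFrequencyHeapGrowthMin: 150%
constexpr double kMaxRatio  = 3.0;                   // highFrequencyHeapGrowthMax: 300%
constexpr double kLowLimit  = 100.0 * 1024 * 1024;   // highFrequencyLowLimitBytes
constexpr double kHighLimit = 500.0 * 1024 * 1024;   // highFrequencyHighLimitBytes

// Mirrors the high-frequency branch of computeZoneHeapGrowthFactorForHeapSize:
// clamp to the max/min ratio outside the limits, interpolate linearly between.
double growthFactor(double lastBytes)
{
    if (lastBytes <= kLowLimit)
        return kMaxRatio;
    if (lastBytes >= kHighLimit)
        return kMinRatio;
    return kMaxRatio - (kMaxRatio - kMinRatio) *
                       ((lastBytes - kLowLimit) / (kHighLimit - kLowLimit));
}
```

A heap of 300 MiB sits halfway between the two limits, so it gets a factor halfway between 300% and 150%, i.e. 225%.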
    2005             : 
    2006             : /* static */ size_t
    2007          91 : ZoneHeapThreshold::computeZoneTriggerBytes(double growthFactor, size_t lastBytes,
    2008             :                                            JSGCInvocationKind gckind,
    2009             :                                            const GCSchedulingTunables& tunables,
    2010             :                                            const AutoLockGC& lock)
    2011             : {
    2012             :     size_t base = gckind == GC_SHRINK
    2013          91 :                 ? Max(lastBytes, tunables.minEmptyChunkCount(lock) * ChunkSize)
    2014         182 :                 : Max(lastBytes, tunables.gcZoneAllocThresholdBase());
    2015          91 :     double trigger = double(base) * growthFactor;
    2016           0 :     return size_t(Min(double(tunables.gcMaxBytes()), trigger));
    2017             : }
    2018             : 
    2019             : void
    2020           0 : ZoneHeapThreshold::updateAfterGC(size_t lastBytes, JSGCInvocationKind gckind,
    2021             :                                  const GCSchedulingTunables& tunables,
    2022             :                                  const GCSchedulingState& state, const AutoLockGC& lock)
    2023             : {
    2024         182 :     gcHeapGrowthFactor_ = computeZoneHeapGrowthFactorForHeapSize(lastBytes, tunables, state);
    2025          91 :     gcTriggerBytes_ = computeZoneTriggerBytes(gcHeapGrowthFactor_, lastBytes, gckind, tunables,
    2026           0 :                                               lock);
    2027          91 : }
    2028             : 
    2029             : void
    2030           0 : ZoneHeapThreshold::updateForRemovedArena(const GCSchedulingTunables& tunables)
    2031             : {
    2032           0 :     size_t amount = ArenaSize * gcHeapGrowthFactor_;
    2033           0 :     MOZ_ASSERT(amount > 0);
    2034             : 
    2035           0 :     if ((gcTriggerBytes_ < amount) ||
    2036           0 :         (gcTriggerBytes_ - amount < tunables.gcZoneAllocThresholdBase() * gcHeapGrowthFactor_))
    2037             :     {
    2038             :         return;
    2039             :     }
    2040             : 
    2041           0 :     gcTriggerBytes_ -= amount;
    2042             : }
    2043             : 
    2044           0 : MemoryCounter::MemoryCounter()
    2045             :   : bytes_(0),
    2046             :     maxBytes_(0),
    2047           1 :     triggered_(NoTrigger)
    2048          42 : {}
    2049             : 
    2050             : void
    2051           0 : MemoryCounter::updateOnGCStart()
    2052             : {
    2053             :     // Record the current byte count at the start of GC.
    2054           0 :     bytesAtStartOfGC_ = bytes_;
    2055           0 : }
    2056             : 
    2057             : void
    2058           0 : MemoryCounter::updateOnGCEnd(const GCSchedulingTunables& tunables, const AutoLockGC& lock)
    2059             : {
    2060             :     // Update the trigger threshold at the end of GC and adjust the current
    2061             :     // byte count to reflect bytes allocated since the start of GC.
    2062           0 :     MOZ_ASSERT(bytes_ >= bytesAtStartOfGC_);
    2063           0 :     if (shouldTriggerGC(tunables)) {
    2064           0 :         maxBytes_ = std::min(TuningDefaults::MallocThresholdLimit,
    2065           0 :                              size_t(maxBytes_ * TuningDefaults::MallocThresholdGrowFactor));
    2066             :     } else {
    2067           0 :         maxBytes_ = std::max(tunables.maxMallocBytes(),
    2068           0 :                              size_t(maxBytes_ * TuningDefaults::MallocThresholdShrinkFactor));
    2069             :     }
    2070           0 :     bytes_ -= bytesAtStartOfGC_;
    2071           0 :     triggered_ = NoTrigger;
    2072           0 : }
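The threshold adjustment above can be summarized as: grow the malloc threshold (up to a hard limit) when this counter triggered a GC, otherwise shrink it back toward the configured floor. A sketch, with illustrative constants in place of the TuningDefaults values:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

// Illustrative stand-ins for TuningDefaults; not the engine's actual values.
constexpr size_t kThresholdLimit = 512 * 1024 * 1024;  // MallocThresholdLimit
constexpr double kGrowFactor     = 1.5;                // MallocThresholdGrowFactor
constexpr double kShrinkFactor   = 0.9;                // MallocThresholdShrinkFactor

// Mirrors updateOnGCEnd: the next threshold after a GC, given whether this
// counter was what triggered the GC and the tunable floor (maxMallocBytes).
size_t nextThreshold(size_t maxBytes, size_t floorBytes, bool triggered)
{
    if (triggered)
        return std::min(kThresholdLimit, size_t(maxBytes * kGrowFactor));
    return std::max(floorBytes, size_t(maxBytes * kShrinkFactor));
}
```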
    2073             : 
    2074             : void
    2075          42 : MemoryCounter::setMax(size_t newMax, const AutoLockGC& lock)
    2076             : {
    2077           0 :     maxBytes_ = newMax;
    2078          42 : }
    2079             : 
    2080             : void
    2081           0 : MemoryCounter::adopt(MemoryCounter& other)
    2082             : {
    2083          10 :     update(other.bytes());
    2084          10 :     other.bytes_ = 0;
    2085           0 :     other.triggered_ = NoTrigger;
    2086           5 : }
    2087             : 
    2088             : void
    2089           0 : MemoryCounter::recordTrigger(TriggerKind trigger)
    2090             : {
    2091           0 :     MOZ_ASSERT(trigger > triggered_);
    2092           0 :     triggered_ = trigger;
    2093           0 : }
    2094             : 
    2095             : void
    2096           0 : GCMarker::delayMarkingArena(Arena* arena)
    2097             : {
    2098           0 :     if (arena->hasDelayedMarking) {
    2099             :         /* Arena already scheduled to be marked later */
    2100             :         return;
    2101             :     }
    2102           0 :     arena->setNextDelayedMarking(unmarkedArenaStackTop);
    2103           0 :     unmarkedArenaStackTop = arena;
    2104             : #ifdef DEBUG
    2105           0 :     markLaterArenas++;
    2106             : #endif
    2107             : }
    2108             : 
    2109             : void
    2110           0 : GCMarker::delayMarkingChildren(const void* thing)
    2111             : {
    2112           0 :     const TenuredCell* cell = TenuredCell::fromPointer(thing);
    2113           0 :     cell->arena()->markOverflow = 1;
    2114           0 :     delayMarkingArena(cell->arena());
    2115           0 : }
    2116             : 
    2117             : inline void
    2118           0 : ArenaLists::unmarkPreMarkedFreeCells()
    2119             : {
    2120           0 :     for (auto i : AllAllocKinds()) {
    2121           0 :         FreeSpan* freeSpan = freeList(i);
    2122           0 :         if (!freeSpan->isEmpty())
    2123           0 :             freeSpan->getArena()->unmarkPreMarkedFreeCells();
    2124             :     }
    2125           0 : }
    2126             : 
    2127             : /* Compacting GC */
    2128             : 
    2129             : bool
    2130           0 : GCRuntime::shouldCompact()
    2131             : {
    2132             :     // Compact on shrinking GC if enabled.  Skip compacting in incremental GCs
    2133             :     // if we are currently animating, unless the user is inactive or we're
    2134             :     // responding to memory pressure.
    2135             : 
    2136           0 :     if (invocationKind != GC_SHRINK || !isCompactingGCEnabled())
    2137             :         return false;
    2138             : 
    2139           0 :     if (initialReason == JS::gcreason::USER_INACTIVE ||
    2140           0 :         initialReason == JS::gcreason::MEM_PRESSURE)
    2141             :     {
    2142             :         return true;
    2143             :     }
    2144             : 
    2145           0 :     return !isIncremental || rt->lastAnimationTime + PRMJ_USEC_PER_SEC < PRMJ_Now();
    2146             : }
    2147             : 
    2148             : bool
    2149           0 : GCRuntime::isCompactingGCEnabled() const
    2150             : {
    2151           0 :     return compactingEnabled && rt->mainContextFromOwnThread()->compactingDisabledCount == 0;
    2152             : }
    2153             : 
    2154           0 : AutoDisableCompactingGC::AutoDisableCompactingGC(JSContext* cx)
    2155           0 :   : cx(cx)
    2156             : {
    2157          12 :     ++cx->compactingDisabledCount;
    2158           0 :     if (cx->runtime()->gc.isIncrementalGCInProgress() && cx->runtime()->gc.isCompactingGc())
    2159           0 :         FinishGC(cx);
    2160           0 : }
    2161             : 
    2162           0 : AutoDisableCompactingGC::~AutoDisableCompactingGC()
    2163             : {
    2164          12 :     MOZ_ASSERT(cx->compactingDisabledCount > 0);
    2165          12 :     --cx->compactingDisabledCount;
    2166           6 : }
    2167             : 
    2168             : static bool
    2169             : CanRelocateZone(Zone* zone)
    2170             : {
    2171           0 :     return !zone->isAtomsZone() && !zone->isSelfHostingZone();
    2172             : }
    2173             : 
    2174             : static const AllocKind AllocKindsToRelocate[] = {
    2175             :     AllocKind::FUNCTION,
    2176             :     AllocKind::FUNCTION_EXTENDED,
    2177             :     AllocKind::OBJECT0,
    2178             :     AllocKind::OBJECT0_BACKGROUND,
    2179             :     AllocKind::OBJECT2,
    2180             :     AllocKind::OBJECT2_BACKGROUND,
    2181             :     AllocKind::OBJECT4,
    2182             :     AllocKind::OBJECT4_BACKGROUND,
    2183             :     AllocKind::OBJECT8,
    2184             :     AllocKind::OBJECT8_BACKGROUND,
    2185             :     AllocKind::OBJECT12,
    2186             :     AllocKind::OBJECT12_BACKGROUND,
    2187             :     AllocKind::OBJECT16,
    2188             :     AllocKind::OBJECT16_BACKGROUND,
    2189             :     AllocKind::SCRIPT,
    2190             :     AllocKind::LAZY_SCRIPT,
    2191             :     AllocKind::SHAPE,
    2192             :     AllocKind::ACCESSOR_SHAPE,
    2193             :     AllocKind::BASE_SHAPE,
    2194             :     AllocKind::FAT_INLINE_STRING,
    2195             :     AllocKind::STRING,
    2196             :     AllocKind::EXTERNAL_STRING,
    2197             :     AllocKind::FAT_INLINE_ATOM,
    2198             :     AllocKind::ATOM,
    2199             :     AllocKind::SCOPE,
    2200             :     AllocKind::REGEXP_SHARED
    2201             : };
    2202             : 
    2203             : Arena*
    2204           0 : ArenaList::removeRemainingArenas(Arena** arenap)
    2205             : {
    2206             :     // This is only ever called to remove arenas that are after the cursor, so
    2207             :     // we don't need to update it.
    2208             : #ifdef DEBUG
    2209           0 :     for (Arena* arena = *arenap; arena; arena = arena->next)
    2210           0 :         MOZ_ASSERT(cursorp_ != &arena->next);
    2211             : #endif
    2212           0 :     Arena* remainingArenas = *arenap;
    2213           0 :     *arenap = nullptr;
    2214           0 :     check();
    2215           0 :     return remainingArenas;
    2216             : }
    2217             : 
    2218             : static bool
    2219             : ShouldRelocateAllArenas(JS::gcreason::Reason reason)
    2220             : {
    2221             :     return reason == JS::gcreason::DEBUG_GC;
    2222             : }
    2223             : 
    2224             : /*
    2225             :  * Choose which arenas to relocate all cells from. Return an arena cursor that
    2226             :  * can be passed to removeRemainingArenas().
    2227             :  */
    2228             : Arena**
    2229           0 : ArenaList::pickArenasToRelocate(size_t& arenaTotalOut, size_t& relocTotalOut)
    2230             : {
    2231             :     // Relocate the greatest number of arenas such that the number of used cells
    2232             :     // in relocated arenas is less than or equal to the number of free cells in
    2233             :     // unrelocated arenas. In other words we only relocate cells we can move
    2234             :     // into existing arenas, and we choose the least full arenas to relocate.
    2235             :     //
    2236             :     // This is made easier by the fact that the arena list has been sorted in
    2237             :     // descending order of number of used cells, so we will always relocate a
    2238             :     // tail of the arena list. All we need to do is find the point at which to
    2239             :     // start relocating.
    2240             : 
    2241           0 :     check();
    2242             : 
    2243           0 :     if (isCursorAtEnd())
    2244             :         return nullptr;
    2245             : 
    2246           0 :     Arena** arenap = cursorp_;     // Next arena to consider for relocation.
    2247           0 :     size_t previousFreeCells = 0;  // Count of free cells before arenap.
    2248           0 :     size_t followingUsedCells = 0; // Count of used cells after arenap.
    2249           0 :     size_t fullArenaCount = 0;     // Number of full arenas (not relocated).
    2250           0 :     size_t nonFullArenaCount = 0;  // Number of non-full arenas (considered for relocation).
    2251           0 :     size_t arenaIndex = 0;         // Index of the next arena to consider.
    2252             : 
    2253           0 :     for (Arena* arena = head_; arena != *cursorp_; arena = arena->next)
    2254           0 :         fullArenaCount++;
    2255             : 
    2256           0 :     for (Arena* arena = *cursorp_; arena; arena = arena->next) {
    2257           0 :         followingUsedCells += arena->countUsedCells();
    2258           0 :         nonFullArenaCount++;
    2259             :     }
    2260             : 
    2261           0 :     mozilla::DebugOnly<size_t> lastFreeCells(0);
    2262           0 :     size_t cellsPerArena = Arena::thingsPerArena((*arenap)->getAllocKind());
    2263             : 
    2264           0 :     while (*arenap) {
    2265           0 :         Arena* arena = *arenap;
    2266           0 :         if (followingUsedCells <= previousFreeCells)
    2267             :             break;
    2268             : 
    2269           0 :         size_t freeCells = arena->countFreeCells();
    2270           0 :         size_t usedCells = cellsPerArena - freeCells;
    2271           0 :         followingUsedCells -= usedCells;
    2272             : #ifdef DEBUG
    2273           0 :         MOZ_ASSERT(freeCells >= lastFreeCells);
    2274           0 :         lastFreeCells = freeCells;
    2275             : #endif
    2276           0 :         previousFreeCells += freeCells;
    2277           0 :         arenap = &arena->next;
    2278           0 :         arenaIndex++;
    2279             :     }
    2280             : 
    2281           0 :     size_t relocCount = nonFullArenaCount - arenaIndex;
    2282           0 :     MOZ_ASSERT(relocCount < nonFullArenaCount);
    2283           0 :     MOZ_ASSERT((relocCount == 0) == (!*arenap));
    2284           0 :     arenaTotalOut += fullArenaCount + nonFullArenaCount;
    2285           0 :     relocTotalOut += relocCount;
    2286             : 
    2287           0 :     return arenap;
    2288             : }
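The selection walk above can be sketched over a plain vector instead of the linked arena list. Each entry is the used-cell count of one non-full arena, sorted in descending order as pickArenasToRelocate expects; the function returns the index at which relocation starts, i.e. the first arena whose tail of used cells fits into the free cells already passed:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified sketch of the pickArenasToRelocate loop. usedCells must be
// sorted in descending order; cellsPerArena is the per-arena cell capacity.
size_t pickRelocationStart(const std::vector<size_t>& usedCells, size_t cellsPerArena)
{
    // Count of used cells in arenas at or after the current position.
    size_t followingUsed = 0;
    for (size_t used : usedCells)
        followingUsed += used;

    size_t previousFree = 0;  // Free cells in arenas already passed.
    size_t i = 0;
    while (i < usedCells.size()) {
        // Stop once the remaining tail's used cells fit into the free
        // space of the arenas we have decided to keep.
        if (followingUsed <= previousFree)
            break;
        followingUsed -= usedCells[i];
        previousFree += cellsPerArena - usedCells[i];
        i++;
    }
    return i;
}
```

With a capacity of 10 cells per arena and used counts {9, 5, 1}, only the last arena is relocated: its single used cell fits into the six free cells of the first two arenas.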
    2289             : 
    2290             : #ifdef DEBUG
    2291             : inline bool
    2292             : PtrIsInRange(const void* ptr, const void* start, size_t length)
    2293             : {
    2294           0 :     return uintptr_t(ptr) - uintptr_t(start) < length;
    2295             : }
    2296             : #endif
    2297             : 
    2298             : static TenuredCell*
    2299           0 : AllocRelocatedCell(Zone* zone, AllocKind thingKind, size_t thingSize)
    2300             : {
    2301           0 :     AutoEnterOOMUnsafeRegion oomUnsafe;
    2302           0 :     void* dstAlloc = zone->arenas.allocateFromFreeList(thingKind, thingSize);
    2303           0 :     if (!dstAlloc)
    2304           0 :         dstAlloc = GCRuntime::refillFreeListInGC(zone, thingKind);
    2305           0 :     if (!dstAlloc) {
    2306             :         // This can only happen in zeal mode or debug builds as we don't
    2307             :         // otherwise relocate more cells than we have existing free space
    2308             :         // for.
    2309           0 :         oomUnsafe.crash("Could not allocate new arena while compacting");
    2310             :     }
    2311           0 :     return TenuredCell::fromPointer(dstAlloc);
    2312             : }
    2313             : 
    2314             : static void
    2315           0 : RelocateCell(Zone* zone, TenuredCell* src, AllocKind thingKind, size_t thingSize)
    2316             : {
    2317           0 :     JS::AutoSuppressGCAnalysis nogc(TlsContext.get());
    2318             : 
    2319             :     // Allocate a new cell.
    2320           0 :     MOZ_ASSERT(zone == src->zone());
    2321           0 :     TenuredCell* dst = AllocRelocatedCell(zone, thingKind, thingSize);
    2322             : 
    2323             :     // Copy source cell contents to destination.
    2324           0 :     memcpy(dst, src, thingSize);
    2325             : 
    2326             :     // Move any uid attached to the object.
    2327           0 :     src->zone()->transferUniqueId(dst, src);
    2328             : 
    2329           0 :     if (IsObjectAllocKind(thingKind)) {
    2330           0 :         JSObject* srcObj = static_cast<JSObject*>(static_cast<Cell*>(src));
    2331           0 :         JSObject* dstObj = static_cast<JSObject*>(static_cast<Cell*>(dst));
    2332             : 
    2333           0 :         if (srcObj->isNative()) {
    2334           0 :             NativeObject* srcNative = &srcObj->as<NativeObject>();
    2335           0 :             NativeObject* dstNative = &dstObj->as<NativeObject>();
    2336             : 
    2337             :             // Fixup the pointer to inline object elements if necessary.
    2338           0 :             if (srcNative->hasFixedElements()) {
    2339           0 :                 uint32_t numShifted = srcNative->getElementsHeader()->numShiftedElements();
    2340           0 :                 dstNative->setFixedElements(numShifted);
    2341             :             }
    2342             : 
    2343             :             // For copy-on-write objects that own their elements, fix up the
    2344             :             // owner pointer to point to the relocated object.
    2345           0 :             if (srcNative->denseElementsAreCopyOnWrite()) {
    2346           0 :                 GCPtrNativeObject& owner = dstNative->getElementsHeader()->ownerObject();
    2347           0 :                 if (owner == srcNative)
    2348             :                     owner = dstNative;
    2349             :             }
    2350           0 :         } else if (srcObj->is<ProxyObject>()) {
    2351           0 :             if (srcObj->as<ProxyObject>().usingInlineValueArray())
    2352           0 :                 dstObj->as<ProxyObject>().setInlineValueArray();
    2353             :         }
    2354             : 
    2355             :         // Call object moved hook if present.
    2356           0 :         if (JSObjectMovedOp op = srcObj->getClass()->extObjectMovedOp())
    2357           0 :             op(dstObj, srcObj);
    2358             : 
    2359           0 :         MOZ_ASSERT_IF(dstObj->isNative(),
    2360             :                       !PtrIsInRange((const Value*)dstObj->as<NativeObject>().getDenseElements(),
    2361             :                                     src, thingSize));
    2362             :     }
    2363             : 
    2364             :     // Copy the mark bits.
    2365           0 :     dst->copyMarkBitsFrom(src);
    2366             : 
    2367             :     // Mark source cell as forwarded and leave a pointer to the destination.
    2368           0 :     RelocationOverlay* overlay = RelocationOverlay::fromCell(src);
    2369           0 :     overlay->forwardTo(dst);
    2370           0 : }
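The scheme above — copy the cell's bytes to the new location, then overwrite the source with a relocation overlay holding a pointer to the destination — can be sketched in a self-contained form. The `Cell`, `relocate`, and `updateEdge` names below are hypothetical stand-ins, not the SpiderMonkey types; the real code uses `RelocationOverlay` rather than a tag bit in a header word, but the copy-then-forward shape is the same.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical stand-in for a GC cell: a header word plus a payload. Once
// the cell is relocated, the header is reused as a tagged forwarding pointer.
struct Cell {
    uintptr_t header;  // low bit set => forwarded; remaining bits => destination
    int payload;
    bool isForwarded() const { return header & 1; }
    Cell* forwarded() const { return reinterpret_cast<Cell*>(header & ~uintptr_t(1)); }
};

// Copy |src| into |dst|, then turn |src| into a forwarding stub.
static void relocate(Cell* src, Cell* dst) {
    std::memcpy(dst, src, sizeof(Cell));
    src->header = reinterpret_cast<uintptr_t>(dst) | 1;  // tag as forwarded
}

// Pointer-update pass: chase the forwarding pointer if one is present.
static void updateEdge(Cell** edge) {
    if ((*edge)->isForwarded())
        *edge = (*edge)->forwarded();
}
```

In the real collector the update pass is `MovingTracer::updateEdge`, which likewise leaves non-forwarded pointers untouched.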
    2371             : 
    2372             : static void
    2373           0 : RelocateArena(Arena* arena, SliceBudget& sliceBudget)
    2374             : {
    2375           0 :     MOZ_ASSERT(arena->allocated());
    2376           0 :     MOZ_ASSERT(!arena->hasDelayedMarking);
    2377           0 :     MOZ_ASSERT(!arena->markOverflow);
    2378           0 :     MOZ_ASSERT(arena->bufferedCells()->isEmpty());
    2379             : 
    2380           0 :     Zone* zone = arena->zone;
    2381             : 
    2382           0 :     AllocKind thingKind = arena->getAllocKind();
    2383           0 :     size_t thingSize = arena->getThingSize();
    2384             : 
    2385           0 :     for (ArenaCellIterUnderGC i(arena); !i.done(); i.next()) {
    2386           0 :         RelocateCell(zone, i.getCell(), thingKind, thingSize);
    2387           0 :         sliceBudget.step();
    2388             :     }
    2389             : 
    2390             : #ifdef DEBUG
    2391           0 :     for (ArenaCellIterUnderGC i(arena); !i.done(); i.next()) {
    2392           0 :         TenuredCell* src = i.getCell();
    2393           0 :         MOZ_ASSERT(RelocationOverlay::isCellForwarded(src));
    2394           0 :         TenuredCell* dest = Forwarded(src);
    2395           0 :         MOZ_ASSERT(src->isMarkedBlack() == dest->isMarkedBlack());
    2396           0 :         MOZ_ASSERT(src->isMarkedGray() == dest->isMarkedGray());
    2397             :     }
    2398             : #endif
    2399           0 : }
    2400             : 
    2401             : static inline bool
    2402             : ShouldProtectRelocatedArenas(JS::gcreason::Reason reason)
    2403             : {
    2404             :     // For zeal mode collections we don't release the relocated arenas
    2405             :     // immediately. Instead we protect them and keep them around until the next
    2406             :     // collection so we can catch any stray accesses to them.
    2407             : #ifdef DEBUG
    2408             :     return reason == JS::gcreason::DEBUG_GC;
    2409             : #else
    2410             :     return false;
    2411             : #endif
    2412             : }
    2413             : 
    2414             : /*
    2415             :  * Relocate all arenas identified by pickArenasToRelocate: for each arena,
    2416             :  * relocate each cell within it, then add it to a list of relocated arenas.
    2417             :  */
    2418             : Arena*
    2419           0 : ArenaList::relocateArenas(Arena* toRelocate, Arena* relocated, SliceBudget& sliceBudget,
    2420             :                           gcstats::Statistics& stats)
    2421             : {
    2422           0 :     check();
    2423             : 
    2424           0 :     while (Arena* arena = toRelocate) {
    2425           0 :         toRelocate = arena->next;
    2426           0 :         RelocateArena(arena, sliceBudget);
    2427             :         // Prepend to the list of relocated arenas.

    2428           0 :         arena->next = relocated;
    2429           0 :         relocated = arena;
    2430           0 :         stats.count(gcstats::STAT_ARENA_RELOCATED);
    2431           0 :     }
    2432             : 
    2433           0 :     check();
    2434             : 
    2435           0 :     return relocated;
    2436             : }
    2437             : 
    2438             : // Skip compacting zones unless we can free a certain proportion of their GC
    2439             : // heap memory.
    2440             : static const double MIN_ZONE_RECLAIM_PERCENT = 2.0;
    2441             : 
    2442             : static bool
    2443             : ShouldRelocateZone(size_t arenaCount, size_t relocCount, JS::gcreason::Reason reason)
    2444             : {
    2445           0 :     if (relocCount == 0)
    2446             :         return false;
    2447             : 
    2448           0 :     if (IsOOMReason(reason))
    2449             :         return true;
    2450             : 
    2451           0 :     return (relocCount * 100.0) / arenaCount >= MIN_ZONE_RECLAIM_PERCENT;
    2452             : }
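The arithmetic of this heuristic is easy to check with concrete numbers. Below is a minimal standalone mirror of the check (the `shouldRelocate` name and the bare `oom` flag are simplifications; the real function takes a `JS::gcreason::Reason` and calls `IsOOMReason`):

```cpp
#include <cstddef>

// Relocating |relocCount| of |arenaCount| arenas is worthwhile only if it
// frees at least MIN_ZONE_RECLAIM_PERCENT of them -- unless we are
// collecting because of memory pressure, in which case any gain counts.
static const double MIN_ZONE_RECLAIM_PERCENT = 2.0;

static bool shouldRelocate(size_t arenaCount, size_t relocCount, bool oom) {
    if (relocCount == 0)
        return false;
    if (oom)
        return true;  // under memory pressure, reclaim whatever we can
    return (relocCount * 100.0) / arenaCount >= MIN_ZONE_RECLAIM_PERCENT;
}
```

With 1000 arenas, 19 relocatable arenas (1.9%) is below the threshold and 20 (2.0%) meets it.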
    2453             : 
    2454             : bool
    2455           0 : ArenaLists::relocateArenas(Zone* zone, Arena*& relocatedListOut, JS::gcreason::Reason reason,
    2456             :                            SliceBudget& sliceBudget, gcstats::Statistics& stats)
    2457             : {
    2458             :     // This is only called from the main thread while we are doing a GC, so
    2459             :     // there is no need to lock.
    2460           0 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime_));
    2461           0 :     MOZ_ASSERT(runtime_->gc.isHeapCompacting());
    2462           0 :     MOZ_ASSERT(!runtime_->gc.isBackgroundSweeping());
    2463             : 
    2464             :     // Clear all the free lists.
    2465           0 :     clearFreeLists();
    2466             : 
    2467           0 :     if (ShouldRelocateAllArenas(reason)) {
    2468           0 :         zone->prepareForCompacting();
    2469           0 :         for (auto kind : AllocKindsToRelocate) {
    2470           0 :             ArenaList& al = arenaLists(kind);
    2471           0 :             Arena* allArenas = al.head();
    2472           0 :             al.clear();
    2473           0 :             relocatedListOut = al.relocateArenas(allArenas, relocatedListOut, sliceBudget, stats);
    2474             :         }
    2475             :     } else {
    2476           0 :         size_t arenaCount = 0;
    2477           0 :         size_t relocCount = 0;
    2478           0 :         AllAllocKindArray<Arena**> toRelocate;
    2479             : 
    2480           0 :         for (auto kind : AllocKindsToRelocate)
    2481           0 :             toRelocate[kind] = arenaLists(kind).pickArenasToRelocate(arenaCount, relocCount);
    2482             : 
    2483           0 :         if (!ShouldRelocateZone(arenaCount, relocCount, reason))
    2484           0 :             return false;
    2485             : 
    2486           0 :         zone->prepareForCompacting();
    2487           0 :         for (auto kind : AllocKindsToRelocate) {
    2488           0 :             if (toRelocate[kind]) {
    2489           0 :                 ArenaList& al = arenaLists(kind);
    2490           0 :                 Arena* arenas = al.removeRemainingArenas(toRelocate[kind]);
    2491           0 :                 relocatedListOut = al.relocateArenas(arenas, relocatedListOut, sliceBudget, stats);
    2492             :             }
    2493             :         }
    2494             :     }
    2495             : 
    2496             :     return true;
    2497             : }
    2498             : 
    2499             : bool
    2500           0 : GCRuntime::relocateArenas(Zone* zone, JS::gcreason::Reason reason, Arena*& relocatedListOut,
    2501             :                           SliceBudget& sliceBudget)
    2502             : {
    2503           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::COMPACT_MOVE);
    2504             : 
    2505           0 :     MOZ_ASSERT(!zone->isPreservingCode());
    2506           0 :     MOZ_ASSERT(CanRelocateZone(zone));
    2507             : 
    2508           0 :     js::CancelOffThreadIonCompile(rt, JS::Zone::Compact);
    2509             : 
    2510           0 :     if (!zone->arenas.relocateArenas(zone, relocatedListOut, reason, sliceBudget, stats()))
    2511             :         return false;
    2512             : 
    2513             : #ifdef DEBUG
    2514             :     // Check that we did as much compaction as we should have. There
    2515             :     // should always be less than one arena's worth of free cells.
    2516           0 :     for (auto i : AllocKindsToRelocate) {
    2517           0 :         ArenaList& al = zone->arenas.arenaLists(i);
    2518           0 :         size_t freeCells = 0;
    2519           0 :         for (Arena* arena = al.arenaAfterCursor(); arena; arena = arena->next)
    2520           0 :             freeCells += arena->countFreeCells();
    2521           0 :         MOZ_ASSERT(freeCells < Arena::thingsPerArena(i));
    2522             :     }
    2523             : #endif
    2524             : 
    2525             :     return true;
    2526             : }
    2527             : 
    2528             : template <typename T>
    2529             : inline void
    2530           0 : MovingTracer::updateEdge(T** thingp)
    2531             : {
    2532           0 :     auto thing = *thingp;
    2533           0 :     if (thing->runtimeFromAnyThread() == runtime() && IsForwarded(thing))
    2534           0 :         *thingp = Forwarded(thing);
    2535           0 : }
    2536             : 
    2537           0 : void MovingTracer::onObjectEdge(JSObject** objp) { updateEdge(objp); }
    2538           0 : void MovingTracer::onShapeEdge(Shape** shapep) { updateEdge(shapep); }
    2539           0 : void MovingTracer::onStringEdge(JSString** stringp) { updateEdge(stringp); }
    2540           0 : void MovingTracer::onScriptEdge(JSScript** scriptp) { updateEdge(scriptp); }
    2541           0 : void MovingTracer::onLazyScriptEdge(LazyScript** lazyp) { updateEdge(lazyp); }
    2542           0 : void MovingTracer::onBaseShapeEdge(BaseShape** basep) { updateEdge(basep); }
    2543           0 : void MovingTracer::onScopeEdge(Scope** scopep) { updateEdge(scopep); }
    2544           0 : void MovingTracer::onRegExpSharedEdge(RegExpShared** sharedp) { updateEdge(sharedp); }
    2545             : 
    2546             : void
    2547           0 : Zone::prepareForCompacting()
    2548             : {
    2549           0 :     FreeOp* fop = runtimeFromMainThread()->defaultFreeOp();
    2550           0 :     discardJitCode(fop);
    2551           0 : }
    2552             : 
    2553             : void
    2554           0 : GCRuntime::sweepTypesAfterCompacting(Zone* zone)
    2555             : {
    2556           0 :     zone->beginSweepTypes(rt->gc.releaseObservedTypes && !zone->isPreservingCode());
    2557             : 
    2558           0 :     AutoClearTypeInferenceStateOnOOM oom(zone);
    2559             : 
    2560           0 :     for (auto script = zone->cellIter<JSScript>(); !script.done(); script.next())
    2561           0 :         AutoSweepTypeScript sweep(script, &oom);
    2562           0 :     for (auto group = zone->cellIter<ObjectGroup>(); !group.done(); group.next())
    2563           0 :         AutoSweepObjectGroup sweep(group, &oom);
    2564             : 
    2565           0 :     zone->types.endSweep(rt);
    2566           0 : }
    2567             : 
    2568             : void
    2569           0 : GCRuntime::sweepZoneAfterCompacting(Zone* zone)
    2570             : {
    2571           0 :     MOZ_ASSERT(zone->isCollecting());
    2572           0 :     FreeOp* fop = rt->defaultFreeOp();
    2573           0 :     sweepTypesAfterCompacting(zone);
    2574           0 :     zone->sweepBreakpoints(fop);
    2575           0 :     zone->sweepWeakMaps();
    2576           0 :     for (auto* cache : zone->weakCaches())
    2577           0 :         cache->sweep();
    2578             : 
    2579           0 :     if (jit::JitZone* jitZone = zone->jitZone())
    2580           0 :         jitZone->sweep();
    2581             : 
    2582           0 :     for (RealmsInZoneIter r(zone); !r.done(); r.next()) {
    2583           0 :         r->sweepObjectGroups();
    2584           0 :         r->sweepRegExps();
    2585           0 :         r->sweepSavedStacks();
    2586           0 :         r->sweepVarNames();
    2587           0 :         r->sweepGlobalObject();
    2588           0 :         r->sweepSelfHostingScriptSource();
    2589           0 :         r->sweepDebugEnvironments();
    2590           0 :         r->sweepJitRealm();
    2591           0 :         r->sweepObjectRealm();
    2592           0 :         r->sweepTemplateObjects();
    2593             :     }
    2594           0 : }
    2595             : 
    2596             : template <typename T>
    2597             : static inline void
    2598             : UpdateCellPointers(MovingTracer* trc, T* cell)
    2599             : {
    2600           0 :     cell->fixupAfterMovingGC();
    2601           0 :     cell->traceChildren(trc);
    2602             : }
    2603             : 
    2604             : template <typename T>
    2605             : static void
    2606           0 : UpdateArenaPointersTyped(MovingTracer* trc, Arena* arena)
    2607             : {
    2608           0 :     for (ArenaCellIterUnderGC i(arena); !i.done(); i.next())
    2609           0 :         UpdateCellPointers(trc, reinterpret_cast<T*>(i.getCell()));
    2610           0 : }
    2611             : 
    2612             : /*
    2613             :  * Update the internal pointers for all cells in an arena.
    2614             :  */
    2615             : static void
    2616           0 : UpdateArenaPointers(MovingTracer* trc, Arena* arena)
    2617             : {
    2618           0 :     AllocKind kind = arena->getAllocKind();
    2619             : 
    2620           0 :     switch (kind) {
    2621             : #define EXPAND_CASE(allocKind, traceKind, type, sizedType, bgFinal, nursery) \
    2622             :       case AllocKind::allocKind: \
    2623             :         UpdateArenaPointersTyped<type>(trc, arena); \
    2624             :         return;
    2625           0 : FOR_EACH_ALLOCKIND(EXPAND_CASE)
    2626             : #undef EXPAND_CASE
    2627             : 
    2628             :       default:
    2629           0 :         MOZ_CRASH("Invalid alloc kind for UpdateArenaPointers");
    2630             :     }
    2631             : }
    2632             : 
    2633             : namespace js {
    2634             : namespace gc {
    2635             : 
    2636             : struct ArenaListSegment
    2637             : {
    2638             :     Arena* begin;
    2639             :     Arena* end;
    2640             : };
    2641             : 
    2642             : struct ArenasToUpdate
    2643             : {
    2644             :     ArenasToUpdate(Zone* zone, AllocKinds kinds);
    2645             :     bool done() { return kind == AllocKind::LIMIT; }
    2646             :     ArenaListSegment getArenasToUpdate(AutoLockHelperThreadState& lock, unsigned maxLength);
    2647             : 
    2648             :   private:
    2649             :     AllocKinds kinds;  // Selects which thing kinds to update
    2650             :     Zone* zone;        // Zone to process
    2651             :     AllocKind kind;    // Current alloc kind to process
    2652             :     Arena* arena;      // Next arena to process
    2653             : 
    2654           0 :     AllocKind nextAllocKind(AllocKind i) { return AllocKind(uint8_t(i) + 1); }
    2655             :     bool shouldProcessKind(AllocKind kind);
    2656             :     Arena* next(AutoLockHelperThreadState& lock);
    2657             : };
    2658             : 
    2659           0 : ArenasToUpdate::ArenasToUpdate(Zone* zone, AllocKinds kinds)
    2660           0 :   : kinds(kinds), zone(zone), kind(AllocKind::FIRST), arena(nullptr)
    2661             : {
    2662           0 :     MOZ_ASSERT(zone->isGCCompacting());
    2663           0 : }
    2664             : 
    2665             : Arena*
    2666           0 : ArenasToUpdate::next(AutoLockHelperThreadState& lock)
    2667             : {
    2668             :     // Find the next arena to update.
    2669             :     //
    2670             :     // This iterates through the GC thing kinds filtered by shouldProcessKind(),
    2671             :     // and then through the arenas of that kind.  All state is held in the
    2672             :     // object and we just return when we find an arena.
    2673             : 
    2674           0 :     for (; kind < AllocKind::LIMIT; kind = nextAllocKind(kind)) {
    2675           0 :         if (kinds.contains(kind)) {
    2676           0 :             if (!arena)
    2677           0 :                 arena = zone->arenas.getFirstArena(kind);
    2678             :             else
    2679           0 :                 arena = arena->next;
    2680           0 :             if (arena)
    2681             :                 return arena;
    2682             :         }
    2683             :     }
    2684             : 
    2685           0 :     MOZ_ASSERT(!arena);
    2686           0 :     MOZ_ASSERT(done());
    2687             :     return nullptr;
    2688             : }
    2689             : 
    2690             : ArenaListSegment
    2691           0 : ArenasToUpdate::getArenasToUpdate(AutoLockHelperThreadState& lock, unsigned maxLength)
    2692             : {
    2693           0 :     Arena* begin = next(lock);
    2694           0 :     if (!begin)
    2695           0 :         return { nullptr, nullptr };
    2696             : 
    2697             :     Arena* last = begin;
    2698             :     unsigned count = 1;
    2699           0 :     while (last->next && count < maxLength) {
    2700           0 :         last = last->next;
    2701           0 :         count++;
    2702             :     }
    2703             : 
    2704           0 :     arena = last;
    2705           0 :     return { begin, last->next };
    2706             : }
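The chunking done by `getArenasToUpdate` — carve a singly linked list into segments of at most `maxLength` nodes, returning a half-open `[begin, end)` range and remembering where to resume — can be sketched with plain list nodes. The `Node`, `Segment`, and `takeSegment` names are hypothetical; the real code resumes from the stored `arena` member under the helper-thread lock.

```cpp
#include <cstddef>

struct Node { Node* next; };

// Half-open segment: |end| is the node after the last one in the segment,
// mirroring how getArenasToUpdate() returns { begin, last->next }.
struct Segment { Node* begin; Node* end; };

static Segment takeSegment(Node*& cursor, unsigned maxLength) {
    Node* begin = cursor;
    if (!begin)
        return { nullptr, nullptr };
    Node* last = begin;
    unsigned count = 1;
    while (last->next && count < maxLength) {
        last = last->next;
        count++;
    }
    cursor = last->next;  // resume here on the next call
    return { begin, last->next };
}
```

Successive calls walk the whole list in `maxLength`-sized pieces and finally yield an empty segment, which is the tasks' termination signal.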
    2707             : 
    2708           0 : struct UpdatePointersTask : public GCParallelTaskHelper<UpdatePointersTask>
    2709             : {
    2710             :     // Maximum number of arenas to update in one block.
    2711             : #ifdef DEBUG
    2712             :     static const unsigned MaxArenasToProcess = 16;
    2713             : #else
    2714             :     static const unsigned MaxArenasToProcess = 256;
    2715             : #endif
    2716             : 
    2717             :     UpdatePointersTask(JSRuntime* rt, ArenasToUpdate* source, AutoLockHelperThreadState& lock)
    2718           0 :       : GCParallelTaskHelper(rt), source_(source)
    2719             :     {
    2720           0 :         arenas_.begin = nullptr;
    2721           0 :         arenas_.end = nullptr;
    2722             :     }
    2723             : 
    2724             :     void run();
    2725             : 
    2726             :   private:
    2727             :     ArenasToUpdate* source_;
    2728             :     ArenaListSegment arenas_;
    2729             : 
    2730             :     bool getArenasToUpdate();
    2731             :     void updateArenas();
    2732             : };
    2733             : 
    2734             : bool
    2735           0 : UpdatePointersTask::getArenasToUpdate()
    2736             : {
    2737           0 :     AutoLockHelperThreadState lock;
    2738           0 :     arenas_ = source_->getArenasToUpdate(lock, MaxArenasToProcess);
    2739           0 :     return arenas_.begin != nullptr;
    2740             : }
    2741             : 
    2742             : void
    2743           0 : UpdatePointersTask::updateArenas()
    2744             : {
    2745           0 :     MovingTracer trc(runtime());
    2746           0 :     for (Arena* arena = arenas_.begin; arena != arenas_.end; arena = arena->next)
    2747           0 :         UpdateArenaPointers(&trc, arena);
    2748           0 : }
    2749             : 
    2750             : /* virtual */ void
    2751           0 : UpdatePointersTask::run()
    2752             : {
    2753             :     // These checks assert when run in parallel.
    2754             :     AutoDisableProxyCheck noProxyCheck;
    2755             : 
    2756           0 :     while (getArenasToUpdate())
    2757           0 :         updateArenas();
    2758           0 : }
    2759             : 
    2760             : } // namespace gc
    2761             : } // namespace js
    2762             : 
    2763             : static const size_t MinCellUpdateBackgroundTasks = 2;
    2764             : static const size_t MaxCellUpdateBackgroundTasks = 8;
    2765             : 
    2766             : static size_t
    2767           0 : CellUpdateBackgroundTaskCount()
    2768             : {
    2769           0 :     if (!CanUseExtraThreads())
    2770             :         return 0;
    2771             : 
    2772           0 :     size_t targetTaskCount = HelperThreadState().cpuCount / 2;
    2773           0 :     return Min(Max(targetTaskCount, MinCellUpdateBackgroundTasks), MaxCellUpdateBackgroundTasks);
    2774             : }
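The task-count heuristic is a clamp: target half the available cores, but never fewer than 2 or more than 8 background tasks. A standalone sketch (hypothetical `taskCount` name; the real code reads `HelperThreadState().cpuCount` and mozilla's `Min`/`Max`):

```cpp
#include <algorithm>
#include <cstddef>

// Half the cores, clamped to [MinCellUpdateBackgroundTasks,
// MaxCellUpdateBackgroundTasks]; zero if extra threads are unavailable.
static size_t taskCount(size_t cpuCount, bool extraThreads) {
    if (!extraThreads)
        return 0;
    const size_t minTasks = 2, maxTasks = 8;
    return std::min(std::max(cpuCount / 2, minTasks), maxTasks);
}
```

So a 4-core machine gets 2 tasks, a dual-core machine is still bumped up to 2, and a 32-core machine is capped at 8.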
    2775             : 
    2776             : static bool
    2777             : CanUpdateKindInBackground(AllocKind kind) {
    2778             :     // We try to update as many GC things in parallel as we can, but there are
    2779             :     // kinds for which this might not be safe:
    2780             :     //  - we assume JSObjects that are foreground finalized are not safe to
    2781             :     //    update in parallel
    2782             :     //  - updating a shape touches child shapes in fixupShapeTreeAfterMovingGC()
    2783           0 :     if (!js::gc::IsBackgroundFinalized(kind) || IsShapeAllocKind(kind))
    2784             :         return false;
    2785             : 
    2786             :     return true;
    2787             : }
    2788             : 
    2789             : static AllocKinds
    2790           0 : ForegroundUpdateKinds(AllocKinds kinds)
    2791             : {
    2792           0 :     AllocKinds result;
    2793           0 :     for (AllocKind kind : kinds) {
    2794           0 :         if (!CanUpdateKindInBackground(kind))
    2795             :             result += kind;
    2796             :     }
    2797           0 :     return result;
    2798             : }
    2799             : 
    2800             : void
    2801           0 : GCRuntime::updateTypeDescrObjects(MovingTracer* trc, Zone* zone)
    2802             : {
    2803           0 :     zone->typeDescrObjects().sweep();
    2804           0 :     for (auto r = zone->typeDescrObjects().all(); !r.empty(); r.popFront())
    2805           0 :         UpdateCellPointers(trc, r.front());
    2806           0 : }
    2807             : 
    2808             : void
    2809           0 : GCRuntime::updateCellPointers(Zone* zone, AllocKinds kinds, size_t bgTaskCount)
    2810             : {
    2811           0 :     AllocKinds fgKinds = bgTaskCount == 0 ? kinds : ForegroundUpdateKinds(kinds);
    2812           0 :     AllocKinds bgKinds = kinds - fgKinds;
    2813             : 
    2814           0 :     ArenasToUpdate fgArenas(zone, fgKinds);
    2815           0 :     ArenasToUpdate bgArenas(zone, bgKinds);
    2816           0 :     Maybe<UpdatePointersTask> fgTask;
    2817           0 :     Maybe<UpdatePointersTask> bgTasks[MaxCellUpdateBackgroundTasks];
    2818             : 
    2819           0 :     size_t tasksStarted = 0;
    2820             : 
    2821             :     {
    2822           0 :         AutoLockHelperThreadState lock;
    2823             : 
    2824           0 :         fgTask.emplace(rt, &fgArenas, lock);
    2825             : 
    2826           0 :         for (size_t i = 0; i < bgTaskCount && !bgArenas.done(); i++) {
    2827           0 :             bgTasks[i].emplace(rt, &bgArenas, lock);
    2828           0 :             startTask(*bgTasks[i], gcstats::PhaseKind::COMPACT_UPDATE_CELLS, lock);
    2829           0 :             tasksStarted++;
    2830             :         }
    2831             :     }
    2832             : 
    2833           0 :     fgTask->runFromMainThread(rt);
    2834             : 
    2835             :     {
    2836           0 :         AutoLockHelperThreadState lock;
    2837             : 
    2838           0 :         for (size_t i = 0; i < tasksStarted; i++)
    2839           0 :             joinTask(*bgTasks[i], gcstats::PhaseKind::COMPACT_UPDATE_CELLS, lock);
    2840           0 :         for (size_t i = tasksStarted; i < MaxCellUpdateBackgroundTasks; i++)
    2841           0 :             MOZ_ASSERT(bgTasks[i].isNothing());
    2842             :     }
    2843           0 : }
    2844             : 
    2845             : // After cells have been relocated any pointers to a cell's old locations must
    2846             : // be updated to point to the new location.  This happens by iterating through
    2847             : // all cells in the heap and tracing their children (non-recursively) to update
    2848             : // them.
    2849             : //
    2850             : // This is complicated by the fact that updating a GC thing sometimes depends on
    2851             : // making use of other GC things.  After a moving GC these things may not be in
    2852             : // a valid state since they may contain pointers which have not been updated
    2853             : // yet.
    2854             : //
    2855             : // The main dependencies are:
    2856             : //
    2857             : //   - Updating a JSObject makes use of its shape
    2858             : //   - Updating a typed object makes use of its type descriptor object
    2859             : //
    2860             : // This means we require at least three phases for update:
    2861             : //
    2862             : //  1) shapes
    2863             : //  2) typed object type descriptor objects
    2864             : //  3) all other objects
    2865             : //
    2866             : // Also, JSScripts and LazyScripts can have pointers to each other. Each can be
    2867             : // updated safely without requiring the referent to be up-to-date, but TSAN can
    2868             : // warn about data races when calling IsForwarded() on the new location of a
    2869             : // cell that is being updated in parallel. To avoid this, we update these in
    2870             : // separate phases.
    2871             : //
    2872             : // Since we want to minimize the number of phases, arrange kinds into three
    2873             : // arbitrary phases.
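The ordering constraint described above — a kind must be updated in an earlier phase than any kind whose update reads through it — can be illustrated with a toy dependency. The `Shape`/`Object` structs and `updateShape`/`updateObject` functions below are illustrative stand-ins, not the engine types:

```cpp
#include <cassert>

struct Shape { bool updated = false; };

struct Object {
    Shape* shape;
    bool updated = false;
};

static void updateShape(Shape& s) { s.updated = true; }

static void updateObject(Object& o) {
    // An object's update reads through its shape, so the shape must already
    // be in a valid (updated) state -- hence shapes go in an earlier phase.
    assert(o.shape->updated);
    o.updated = true;
}
```

Running `updateShape` before `updateObject` models phase 1 preceding phase 3; reversing the order would trip the assertion, just as updating objects before shapes would dereference stale shape pointers in the real collector.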
    2874             : 
    2875           1 : static const AllocKinds UpdatePhaseOne {
    2876             :     AllocKind::SCRIPT,
    2877             :     AllocKind::BASE_SHAPE,
    2878             :     AllocKind::SHAPE,
    2879             :     AllocKind::ACCESSOR_SHAPE,
    2880             :     AllocKind::OBJECT_GROUP,
    2881             :     AllocKind::STRING,
    2882             :     AllocKind::JITCODE,
    2883             :     AllocKind::SCOPE
    2884             : };
    2885             : 
    2886             : // UpdatePhaseTwo is typed object type descriptor objects.
    2887             : 
    2888           1 : static const AllocKinds UpdatePhaseThree {
    2889             :     AllocKind::LAZY_SCRIPT,
    2890             :     AllocKind::FUNCTION,
    2891             :     AllocKind::FUNCTION_EXTENDED,
    2892             :     AllocKind::OBJECT0,
    2893             :     AllocKind::OBJECT0_BACKGROUND,
    2894             :     AllocKind::OBJECT2,
    2895             :     AllocKind::OBJECT2_BACKGROUND,
    2896             :     AllocKind::OBJECT4,
    2897             :     AllocKind::OBJECT4_BACKGROUND,
    2898             :     AllocKind::OBJECT8,
    2899             :     AllocKind::OBJECT8_BACKGROUND,
    2900             :     AllocKind::OBJECT12,
    2901             :     AllocKind::OBJECT12_BACKGROUND,
    2902             :     AllocKind::OBJECT16,
    2903             :     AllocKind::OBJECT16_BACKGROUND
    2904             : };
    2905             : 
    2906             : void
    2907           0 : GCRuntime::updateAllCellPointers(MovingTracer* trc, Zone* zone)
    2908             : {
    2909           0 :     size_t bgTaskCount = CellUpdateBackgroundTaskCount();
    2910             : 
    2911           0 :     updateCellPointers(zone, UpdatePhaseOne, bgTaskCount);
    2912             : 
    2913             :     // UpdatePhaseTwo: Update TypeDescrs before all other objects as typed
    2914             :     // objects access these objects when we trace them.
    2915           0 :     updateTypeDescrObjects(trc, zone);
    2916             : 
    2917           0 :     updateCellPointers(zone, UpdatePhaseThree, bgTaskCount);
    2918           0 : }
    2919             : 
    2920             : /*
    2921             :  * Update pointers to relocated cells in a single zone by doing a traversal of
    2922             :  * that zone's arenas and calling per-zone sweep hooks.
    2923             :  *
    2924             :  * The latter is necessary to update weak references which are not marked as
    2925             :  * part of the traversal.
    2926             :  */
    2927             : void
    2928           0 : GCRuntime::updateZonePointersToRelocatedCells(Zone* zone)
    2929             : {
    2930           0 :     MOZ_ASSERT(!rt->isBeingDestroyed());
    2931           0 :     MOZ_ASSERT(zone->isGCCompacting());
    2932             : 
    2933           0 :     AutoTouchingGrayThings tgt;
    2934             : 
    2935           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::COMPACT_UPDATE);
    2936           0 :     MovingTracer trc(rt);
    2937             : 
    2938           0 :     zone->fixupAfterMovingGC();
    2939             : 
    2940             :     // Fixup compartment global pointers as these get accessed during marking.
    2941           0 :     for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next())
    2942           0 :         comp->fixupAfterMovingGC();
    2943             : 
    2944           0 :     zone->externalStringCache().purge();
    2945           0 :     zone->functionToStringCache().purge();
    2946             : 
    2947             :     // Iterate through all cells that can contain relocatable pointers to update
    2948             :     // them. Since updating each cell is independent, we try to parallelize this
    2949             :     // as much as possible.
    2950           0 :     updateAllCellPointers(&trc, zone);
    2951             : 
    2952             :     // Mark roots to update them.
    2953             :     {
    2954           0 :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK_ROOTS);
    2955             : 
    2956           0 :         WeakMapBase::traceZone(zone, &trc);
    2957             :     }
    2958             : 
    2959             :     // Sweep everything to fix up weak pointers.
    2960           0 :     rt->gc.sweepZoneAfterCompacting(zone);
    2961             : 
    2962             :     // Call callbacks to get the rest of the system to fixup other untraced pointers.
    2963           0 :     for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next())
    2964           0 :         callWeakPointerCompartmentCallbacks(comp);
    2965           0 : }
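The traversal above rewrites every edge that points into a relocated arena by following the forwarding pointer the old cell left behind. A minimal standalone sketch of that pattern follows; the tag bit and names are illustrative, not SpiderMonkey's actual cell layout:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical cell: a relocated cell stores its new address in its header,
// tagged with a low bit (cells are word-aligned, so the bit is always free).
struct Cell {
    static constexpr uintptr_t FORWARDED = 1;  // illustrative tag bit
    uintptr_t header = 0;

    bool isForwarded() const { return header & FORWARDED; }
    void forwardTo(Cell* dst) {
        header = reinterpret_cast<uintptr_t>(dst) | FORWARDED;
    }
    Cell* destination() const {
        return reinterpret_cast<Cell*>(header & ~FORWARDED);
    }
};

// Rewrite an edge in place if it points at a relocated cell.
void updateEdge(Cell** edge) {
    if (*edge && (*edge)->isForwarded())
        *edge = (*edge)->destination();
}
```

Every edge visited by the zone traversal (and by the weak-pointer sweep that follows) gets this treatment, which is why missed edges are the failure mode the debug-mode page protection below is designed to catch.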
    2966             : 
    2967             : /*
    2968             :  * Update runtime-wide pointers to relocated cells.
    2969             :  */
    2970             : void
    2971           0 : GCRuntime::updateRuntimePointersToRelocatedCells(AutoTraceSession& session)
    2972             : {
    2973           0 :     MOZ_ASSERT(!rt->isBeingDestroyed());
    2974             : 
    2975           0 :     gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::COMPACT_UPDATE);
    2976           0 :     MovingTracer trc(rt);
    2977             : 
    2978           0 :     Compartment::fixupCrossCompartmentWrappersAfterMovingGC(&trc);
    2979             : 
    2980           0 :     rt->geckoProfiler().fixupStringsMapAfterMovingGC();
    2981             : 
    2982           0 :     traceRuntimeForMajorGC(&trc, session);
    2983             : 
    2984             :     // Mark roots to update them.
    2985             :     {
    2986           0 :         gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::MARK_ROOTS);
    2987           0 :         Debugger::traceAllForMovingGC(&trc);
    2988           0 :         Debugger::traceIncomingCrossCompartmentEdges(&trc);
    2989             : 
    2990             :         // Mark all gray roots, making sure we call the trace callback to get the
    2991             :         // current set.
    2992           0 :         if (JSTraceDataOp op = grayRootTracer.op)
    2993           0 :             (*op)(&trc, grayRootTracer.data);
    2994             :     }
    2995             : 
    2996             :     // Sweep everything to fix up weak pointers.
    2997           0 :     Debugger::sweepAll(rt->defaultFreeOp());
    2998           0 :     jit::JitRuntime::SweepJitcodeGlobalTable(rt);
    2999           0 :     for (JS::detail::WeakCacheBase* cache : rt->weakCaches())
    3000           0 :         cache->sweep();
    3001             : 
    3002             :     // Type inference may put more blocks here to free.
    3003           0 :     blocksToFreeAfterSweeping.ref().freeAll();
    3004             : 
    3005             :     // Call callbacks to get the rest of the system to fixup other untraced pointers.
    3006           0 :     callWeakPointerZonesCallbacks();
    3007           0 : }
    3008             : 
    3009             : void
    3010           0 : GCRuntime::protectAndHoldArenas(Arena* arenaList)
    3011             : {
    3012           0 :     for (Arena* arena = arenaList; arena; ) {
    3013           0 :         MOZ_ASSERT(arena->allocated());
    3014           0 :         Arena* next = arena->next;
    3015           0 :         if (!next) {
    3016             :             // Prepend to hold list before we protect the memory.
    3017           0 :             arena->next = relocatedArenasToRelease;
    3018           0 :             relocatedArenasToRelease = arenaList;
    3019             :         }
    3020           0 :         ProtectPages(arena, ArenaSize);
    3021           0 :         arena = next;
    3022             :     }
    3023           0 : }
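ProtectPages/UnprotectPages make the held arenas inaccessible so that any pointer that escaped the update pass crashes immediately instead of silently reading stale memory. On POSIX this boils down to mprotect; a hedged sketch (SpiderMonkey's real helpers also cover Windows and crash on failure):

```cpp
#include <cassert>
#include <cstddef>
#include <sys/mman.h>
#include <unistd.h>

// Map an anonymous, page-aligned region that can later be protected.
void* allocProtectablePage(size_t size) {
    void* p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? nullptr : p;
}

// PROT_NONE: any access to the region now faults, catching stale pointers.
bool protectPage(void* p, size_t size) {
    return mprotect(p, size, PROT_NONE) == 0;
}

// Restore access before the arenas are actually released.
bool unprotectPage(void* p, size_t size) {
    return mprotect(p, size, PROT_READ | PROT_WRITE) == 0;
}
```

Note the mirror structure in the code above: arenas are protected after being prepended to the hold list, and unprotectHeldRelocatedArenas reverses it before release.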
    3024             : 
    3025             : void
    3026           0 : GCRuntime::unprotectHeldRelocatedArenas()
    3027             : {
    3028          10 :     for (Arena* arena = relocatedArenasToRelease; arena; arena = arena->next) {
    3029           0 :         UnprotectPages(arena, ArenaSize);
    3030           0 :         MOZ_ASSERT(arena->allocated());
    3031             :     }
    3032           5 : }
    3033             : 
    3034             : void
    3035           0 : GCRuntime::releaseRelocatedArenas(Arena* arenaList)
    3036             : {
    3037          15 :     AutoLockGC lock(rt);
    3038           5 :     releaseRelocatedArenasWithoutUnlocking(arenaList, lock);
    3039           0 : }
    3040             : 
    3041             : void
    3042           0 : GCRuntime::releaseRelocatedArenasWithoutUnlocking(Arena* arenaList, const AutoLockGC& lock)
    3043             : {
    3044             :     // Release the relocated arenas, now containing only forwarding pointers
    3045           0 :     unsigned count = 0;
    3046           5 :     while (arenaList) {
    3047           0 :         Arena* arena = arenaList;
    3048           0 :         arenaList = arenaList->next;
    3049             : 
    3050             :         // Clear the mark bits
    3051           0 :         arena->unmarkAll();
    3052             : 
    3053             :         // Mark arena as empty
    3054           0 :         arena->setAsFullyUnused();
    3055             : 
    3056             : #if defined(JS_CRASH_DIAGNOSTICS) || defined(JS_GC_ZEAL)
    3057           0 :         JS_POISON(reinterpret_cast<void*>(arena->thingsStart()),
    3058             :                   JS_MOVED_TENURED_PATTERN, arena->getThingsSpan(),
    3059           0 :                   MemCheckKind::MakeNoAccess);
    3060             : #endif
    3061             : 
    3062           0 :         releaseArena(arena, lock);
    3063           0 :         ++count;
    3064             :     }
    3065           0 : }
    3066             : 
    3067             : // In debug mode we don't always release relocated arenas straight away.
    3068             : // Sometimes we protect them instead and hold onto them until the next GC
    3069             : // sweep phase to catch any pointers to them that didn't get forwarded.
    3070             : 
    3071             : void
    3072           5 : GCRuntime::releaseHeldRelocatedArenas()
    3073             : {
    3074             : #ifdef DEBUG
    3075           0 :     unprotectHeldRelocatedArenas();
    3076           0 :     Arena* arenas = relocatedArenasToRelease;
    3077          10 :     relocatedArenasToRelease = nullptr;
    3078           0 :     releaseRelocatedArenas(arenas);
    3079             : #endif
    3080           0 : }
    3081             : 
    3082             : void
    3083           0 : GCRuntime::releaseHeldRelocatedArenasWithoutUnlocking(const AutoLockGC& lock)
    3084             : {
    3085             : #ifdef DEBUG
    3086           0 :     unprotectHeldRelocatedArenas();
    3087           0 :     releaseRelocatedArenasWithoutUnlocking(relocatedArenasToRelease, lock);
    3088           0 :     relocatedArenasToRelease = nullptr;
    3089             : #endif
    3090           0 : }
    3091             : 
    3092           0 : ArenaLists::ArenaLists(JSRuntime* rt, Zone* zone)
    3093             :   : runtime_(rt),
    3094             :     freeLists_(zone),
    3095             :     arenaLists_(zone),
    3096             :     backgroundFinalizeState_(),
    3097             :     arenaListsToSweep_(),
    3098             :     incrementalSweptArenaKind(zone, AllocKind::LIMIT),
    3099             :     incrementalSweptArenas(zone),
    3100             :     gcShapeArenasToUpdate(zone, nullptr),
    3101             :     gcAccessorShapeArenasToUpdate(zone, nullptr),
    3102             :     gcScriptArenasToUpdate(zone, nullptr),
    3103             :     gcObjectGroupArenasToUpdate(zone, nullptr),
    3104         252 :     savedEmptyArenas(zone, nullptr)
    3105             : {
    3106           0 :     for (auto i : AllAllocKinds()) {
    3107           0 :         freeLists()[i] = &emptySentinel;
    3108        1218 :         backgroundFinalizeState(i) = BFS_DONE;
    3109           1 :         arenaListsToSweep(i) = nullptr;
    3110             :     }
    3111           0 : }
    3112             : 
    3113             : void
    3114           0 : ReleaseArenaList(JSRuntime* rt, Arena* arena, const AutoLockGC& lock)
    3115             : {
    3116             :     Arena* next;
    3117         155 :     for (; arena; arena = next) {
    3118           0 :         next = arena->next;
    3119           0 :         rt->gc.releaseArena(arena, lock);
    3120             :     }
    3121           0 : }
    3122             : 
    3123           0 : ArenaLists::~ArenaLists()
    3124             : {
    3125           0 :     AutoLockGC lock(runtime_);
    3126             : 
    3127          15 :     for (auto i : AllAllocKinds()) {
    3128             :         /*
    3129             :          * We can only call this during the shutdown after the last GC when
    3130             :          * the background finalization is disabled.
    3131             :          */
    3132           0 :         MOZ_ASSERT(backgroundFinalizeState(i) == BFS_DONE);
    3133           0 :         ReleaseArenaList(runtime_, arenaLists(i).head(), lock);
    3134             :     }
    3135          15 :     ReleaseArenaList(runtime_, incrementalSweptArenas.ref().head(), lock);
    3136             : 
    3137           0 :     ReleaseArenaList(runtime_, savedEmptyArenas, lock);
    3138           5 : }
    3139             : 
    3140             : void
    3141           0 : ArenaLists::queueForForegroundSweep(FreeOp* fop, const FinalizePhase& phase)
    3142             : {
    3143           0 :     gcstats::AutoPhase ap(fop->runtime()->gc.stats(), phase.statsPhase);
    3144           0 :     for (auto kind : phase.kinds)
    3145           0 :         queueForForegroundSweep(kind);
    3146           0 : }
    3147             : 
    3148             : void
    3149           0 : ArenaLists::queueForForegroundSweep(AllocKind thingKind)
    3150             : {
    3151           0 :     MOZ_ASSERT(!IsBackgroundFinalized(thingKind));
    3152           0 :     MOZ_ASSERT(backgroundFinalizeState(thingKind) == BFS_DONE);
    3153           0 :     MOZ_ASSERT(!arenaListsToSweep(thingKind));
    3154             : 
    3155           0 :     arenaListsToSweep(thingKind) = arenaLists(thingKind).head();
    3156           0 :     arenaLists(thingKind).clear();
    3157           0 : }
    3158             : 
    3159             : void
    3160           0 : ArenaLists::queueForBackgroundSweep(FreeOp* fop, const FinalizePhase& phase)
    3161             : {
    3162           0 :     gcstats::AutoPhase ap(fop->runtime()->gc.stats(), phase.statsPhase);
    3163           0 :     for (auto kind : phase.kinds)
    3164           0 :         queueForBackgroundSweep(kind);
    3165           0 : }
    3166             : 
    3167             : inline void
    3168           0 : ArenaLists::queueForBackgroundSweep(AllocKind thingKind)
    3169             : {
    3170           0 :     MOZ_ASSERT(IsBackgroundFinalized(thingKind));
    3171             : 
    3172           0 :     ArenaList* al = &arenaLists(thingKind);
    3173           0 :     if (al->isEmpty()) {
    3174           0 :         MOZ_ASSERT(backgroundFinalizeState(thingKind) == BFS_DONE);
    3175             :         return;
    3176             :     }
    3177             : 
    3178           0 :     MOZ_ASSERT(backgroundFinalizeState(thingKind) == BFS_DONE);
    3179             : 
    3180           0 :     arenaListsToSweep(thingKind) = al->head();
    3181           0 :     al->clear();
    3182           0 :     backgroundFinalizeState(thingKind) = BFS_RUN;
    3183             : }
    3184             : 
    3185             : /*static*/ void
    3186           0 : ArenaLists::backgroundFinalize(FreeOp* fop, Arena* listHead, Arena** empty)
    3187             : {
    3188           0 :     MOZ_ASSERT(listHead);
    3189           0 :     MOZ_ASSERT(empty);
    3190             : 
    3191           0 :     AllocKind thingKind = listHead->getAllocKind();
    3192           0 :     Zone* zone = listHead->zone;
    3193             : 
    3194           0 :     size_t thingsPerArena = Arena::thingsPerArena(thingKind);
    3195           0 :     SortedArenaList finalizedSorted(thingsPerArena);
    3196             : 
    3197             :     auto unlimited = SliceBudget::unlimited();
    3198           0 :     FinalizeArenas(fop, &listHead, finalizedSorted, thingKind, unlimited, KEEP_ARENAS);
    3199           0 :     MOZ_ASSERT(!listHead);
    3200             : 
    3201           0 :     finalizedSorted.extractEmpty(empty);
    3202             : 
    3203             :     // When arenas are queued for background finalization, all arenas are moved
    3204             :     // to arenaListsToSweep[], leaving the arenaLists[] empty. However, new
    3205             :     // arenas may be allocated before background finalization finishes; now that
    3206             :     // finalization is complete, we want to merge these lists back together.
    3207           0 :     ArenaLists* lists = &zone->arenas;
    3208           0 :     ArenaList* al = &lists->arenaLists(thingKind);
    3209             : 
    3210             :     // Flatten |finalizedSorted| into a regular ArenaList.
    3211           0 :     ArenaList finalized = finalizedSorted.toArenaList();
    3212             : 
    3213             :     // We must take the GC lock to be able to safely modify the ArenaList;
    3214             :     // however, this does not by itself make the changes visible to all threads,
    3215             :     // as not all threads take the GC lock to read the ArenaLists.
    3216             :     // That safety is provided by the ReleaseAcquire memory ordering of the
    3217             :     // background finalize state, which we explicitly set as the final step.
    3218             :     {
    3219           0 :         AutoLockGC lock(lists->runtime_);
    3220           0 :         MOZ_ASSERT(lists->backgroundFinalizeState(thingKind) == BFS_RUN);
    3221             : 
    3222             :         // Join |al| and |finalized| into a single list.
    3223           0 :         *al = finalized.insertListWithCursorAtEnd(*al);
    3224             : 
    3225           0 :         lists->arenaListsToSweep(thingKind) = nullptr;
    3226             :     }
    3227             : 
    3228           0 :     lists->backgroundFinalizeState(thingKind) = BFS_DONE;
    3229           0 : }
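The comment block above describes a classic release/acquire publication: the list merge performed under the lock becomes visible to lock-free readers once the finalize state is stored with release ordering as the final step. An illustrative sketch with invented names (SpiderMonkey's actual state type wraps this in its own atomic helpers):

```cpp
#include <atomic>
#include <cassert>

// 0 plays the role of BFS_RUN, 1 the role of BFS_DONE.
struct KindState {
    int* sweptList = nullptr;        // data written under the lock
    std::atomic<int> state{0};       // published last, with release ordering
};

// Background thread: plain writes first, release store last.
void publish(KindState& ks, int* merged) {
    ks.sweptList = merged;                         // merge result
    ks.state.store(1, std::memory_order_release);  // publication point
}

// Reader (no lock): the acquire load pairs with the release store, so once
// state reads 1, the preceding write to sweptList is guaranteed visible.
int* consume(KindState& ks) {
    if (ks.state.load(std::memory_order_acquire) == 1)
        return ks.sweptList;
    return nullptr;
}
```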
    3230             : 
    3231             : void
    3232           0 : ArenaLists::releaseForegroundSweptEmptyArenas()
    3233             : {
    3234           0 :     AutoLockGC lock(runtime_);
    3235           0 :     ReleaseArenaList(runtime_, savedEmptyArenas, lock);
    3236           0 :     savedEmptyArenas = nullptr;
    3237           0 : }
    3238             : 
    3239             : void
    3240           0 : ArenaLists::queueForegroundThingsForSweep()
    3241             : {
    3242           0 :     gcShapeArenasToUpdate = arenaListsToSweep(AllocKind::SHAPE);
    3243           0 :     gcAccessorShapeArenasToUpdate = arenaListsToSweep(AllocKind::ACCESSOR_SHAPE);
    3244           0 :     gcObjectGroupArenasToUpdate = arenaListsToSweep(AllocKind::OBJECT_GROUP);
    3245           0 :     gcScriptArenasToUpdate = arenaListsToSweep(AllocKind::SCRIPT);
    3246           0 : }
    3247             : 
    3248           0 : SliceBudget::SliceBudget()
    3249           0 :   : timeBudget(UnlimitedTimeBudget), workBudget(UnlimitedWorkBudget)
    3250             : {
    3251           0 :     makeUnlimited();
    3252           0 : }
    3253             : 
    3254           0 : SliceBudget::SliceBudget(TimeBudget time)
    3255           0 :   : timeBudget(time), workBudget(UnlimitedWorkBudget)
    3256             : {
    3257           0 :     if (time.budget < 0) {
    3258           0 :         makeUnlimited();
    3259             :     } else {
    3260             :         // Note: TimeBudget(0) is equivalent to WorkBudget(CounterReset).
    3261           0 :         deadline = PRMJ_Now() + time.budget * PRMJ_USEC_PER_MSEC;
    3262           0 :         counter = CounterReset;
    3263             :     }
    3264           0 : }
    3265             : 
    3266           0 : SliceBudget::SliceBudget(WorkBudget work)
    3267           0 :   : timeBudget(UnlimitedTimeBudget), workBudget(work)
    3268             : {
    3269           0 :     if (work.budget < 0) {
    3270           0 :         makeUnlimited();
    3271             :     } else {
    3272           0 :         deadline = 0;
    3273           0 :         counter = work.budget;
    3274             :     }
    3275           0 : }
    3276             : 
    3277             : int
    3278           0 : SliceBudget::describe(char* buffer, size_t maxlen) const
    3279             : {
    3280           0 :     if (isUnlimited())
    3281           0 :         return snprintf(buffer, maxlen, "unlimited");
    3282           0 :     else if (isWorkBudget())
    3283           0 :         return snprintf(buffer, maxlen, "work(%" PRId64 ")", workBudget.budget);
    3284             :     else
    3285           0 :         return snprintf(buffer, maxlen, "%" PRId64 "ms", timeBudget.budget);
    3286             : }
    3287             : 
    3288             : bool
    3289           0 : SliceBudget::checkOverBudget()
    3290             : {
    3291           0 :     bool over = PRMJ_Now() >= deadline;
    3292           0 :     if (!over)
    3293           0 :         counter = CounterReset;
    3294           0 :     return over;
    3295             : }
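SliceBudget avoids reading the clock on every unit of work: a counter is drained first, and only when it hits zero does a time budget consult the clock, resetting the counter if the deadline hasn't passed (as checkOverBudget does above). A simplified sketch of that counter/deadline interplay, with illustrative constants and microsecond timestamps passed in rather than read from PRMJ_Now():

```cpp
#include <cassert>

struct Budget {
    static constexpr long long CounterReset = 1000;
    long long deadlineUs;   // 0 here means a pure work budget
    long long counter;      // units of work before the next check

    // Called once per unit of work.
    bool isOverBudget(long long nowUs) {
        if (--counter > 0)
            return false;               // cheap path: no clock read
        if (deadlineUs == 0)
            return true;                // work budget exhausted
        bool over = nowUs >= deadlineUs;
        if (!over)
            counter = CounterReset;     // buy another batch of work
        return over;
    }
};
```

A consequence of this design, visible in the sketch, is that a time budget can overrun its deadline by up to CounterReset units of work before the next clock read notices.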
    3296             : 
    3297             : void
    3298           0 : GCRuntime::requestMajorGC(JS::gcreason::Reason reason)
    3299             : {
    3300           0 :     MOZ_ASSERT(!CurrentThreadIsPerformingGC());
    3301             : 
    3302           0 :     if (majorGCRequested())
    3303             :         return;
    3304             : 
    3305           0 :     majorGCTriggerReason = reason;
    3306           0 :     rt->mainContextFromOwnThread()->requestInterrupt(InterruptReason::GC);
    3307             : }
    3308             : 
    3309             : void
    3310           1 : Nursery::requestMinorGC(JS::gcreason::Reason reason) const
    3311             : {
    3312           1 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime()));
    3313           1 :     MOZ_ASSERT(!CurrentThreadIsPerformingGC());
    3314             : 
    3315           1 :     if (minorGCRequested())
    3316             :         return;
    3317             : 
    3318           1 :     minorGCTriggerReason_ = reason;
    3319           1 :     runtime()->mainContextFromOwnThread()->requestInterrupt(InterruptReason::GC);
    3320             : }
    3321             : 
    3322             : bool
    3323           0 : GCRuntime::triggerGC(JS::gcreason::Reason reason)
    3324             : {
    3325             :     /*
    3326             :      * Don't trigger GCs if this is being called off the main thread from
    3327             :      * onTooMuchMalloc().
    3328             :      */
    3329           0 :     if (!CurrentThreadCanAccessRuntime(rt))
    3330             :         return false;
    3331             : 
    3332             :     /* GC is already running. */
    3333           0 :     if (JS::RuntimeHeapIsCollecting())
    3334             :         return false;
    3335             : 
    3336           0 :     JS::PrepareForFullGC(rt->mainContextFromOwnThread());
    3337           0 :     requestMajorGC(reason);
    3338           0 :     return true;
    3339             : }
    3340             : 
    3341             : void
    3342        4306 : GCRuntime::maybeAllocTriggerZoneGC(Zone* zone, const AutoLockGC& lock)
    3343             : {
    3344        4307 :     if (!CurrentThreadCanAccessRuntime(rt)) {
    3345             :         // Zones in use by a helper thread can't be collected.
    3346           0 :         MOZ_ASSERT(zone->usedByHelperThread() || zone->isAtomsZone());
    3347             :         return;
    3348          67 :     }
    3349             : 
    3350             :     MOZ_ASSERT(!JS::RuntimeHeapIsCollecting());
    3351             : 
    3352           0 :     size_t usedBytes = zone->usage.gcBytes();
    3353           0 :     size_t thresholdBytes = zone->threshold.gcTriggerBytes();
    3354             : 
    3355           0 :     if (usedBytes >= thresholdBytes) {
    3356             :         // The threshold has been surpassed; immediately trigger a GC, which
    3357             :         // will be done non-incrementally.
    3358           0 :         triggerZoneGC(zone, JS::gcreason::ALLOC_TRIGGER, usedBytes, thresholdBytes);
    3359           0 :         return;
    3360             :     }
    3361             : 
    3362           0 :     bool wouldInterruptCollection = isIncrementalGCInProgress() && !zone->isCollecting();
    3363             :     double zoneGCThresholdFactor =
    3364           0 :         wouldInterruptCollection ? tunables.allocThresholdFactorAvoidInterrupt()
    3365        8488 :                                  : tunables.allocThresholdFactor();
    3366             : 
    3367        4244 :     size_t igcThresholdBytes = thresholdBytes * zoneGCThresholdFactor;
    3368             : 
    3369           0 :     if (usedBytes >= igcThresholdBytes) {
    3370             :         // Reduce the delay to the start of the next incremental slice.
    3371           0 :         if (zone->gcDelayBytes < ArenaSize)
    3372           0 :             zone->gcDelayBytes = 0;
    3373             :         else
    3374           0 :             zone->gcDelayBytes -= ArenaSize;
    3375             : 
    3376           0 :         if (!zone->gcDelayBytes) {
    3377             :             // Start or continue an in progress incremental GC. We do this
    3378             :             // to try to avoid performing non-incremental GCs on zones
    3379             :             // which allocate a lot of data, even when incremental slices
    3380             :             // can't be triggered via scheduling in the event loop.
    3381           0 :             triggerZoneGC(zone, JS::gcreason::ALLOC_TRIGGER, usedBytes, igcThresholdBytes);
    3382             : 
    3383             :             // Delay the next slice until a certain amount of allocation
    3384             :             // has been performed.
    3385           0 :             zone->gcDelayBytes = tunables.zoneAllocDelayBytes();
    3386           0 :             return;
    3387             :         }
    3388             :     }
    3389             : }
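The logic above defines two trigger levels: crossing the full threshold forces an immediate, possibly non-incremental GC, while crossing threshold times a factor below 1 only starts or continues an incremental GC (further gated by gcDelayBytes). A worked sketch of just the two-level comparison, with made-up numbers standing in for the real tunables:

```cpp
#include <cassert>
#include <cstddef>

enum class Trigger { None, Incremental, NonIncremental };

// factor is assumed to be < 1, so the incremental threshold sits below the
// full one; e.g. factor 0.9 means incremental work starts at 90% usage.
Trigger allocTrigger(size_t usedBytes, size_t thresholdBytes, double factor) {
    if (usedBytes >= thresholdBytes)
        return Trigger::NonIncremental;     // past the hard limit
    size_t igcThresholdBytes = size_t(thresholdBytes * factor);
    if (usedBytes >= igcThresholdBytes)
        return Trigger::Incremental;        // start slicing early
    return Trigger::None;
}
```

The early incremental trigger exists so that heavily allocating zones get collected in slices before they ever hit the hard threshold, which would force a non-incremental GC.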
    3390             : 
    3391             : bool
    3392           0 : GCRuntime::triggerZoneGC(Zone* zone, JS::gcreason::Reason reason, size_t used, size_t threshold)
    3393             : {
    3394           0 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
    3395             : 
    3396             :     /* GC is already running. */
    3397           0 :     if (JS::RuntimeHeapIsBusy())
    3398             :         return false;
    3399             : 
    3400             : #ifdef JS_GC_ZEAL
    3401           0 :     if (hasZealMode(ZealMode::Alloc)) {
    3402           0 :         MOZ_RELEASE_ASSERT(triggerGC(reason));
    3403             :         return true;
    3404             :     }
    3405             : #endif
    3406             : 
    3407           0 :     if (zone->isAtomsZone()) {
    3408             :         /* We can't do a zone GC of just the atoms zone. */
    3409           0 :         if (rt->hasHelperThreadZones()) {
    3410             :             /* We can't collect atoms while off-thread parsing is allocating. */
    3411           0 :             fullGCForAtomsRequested_ = true;
    3412           0 :             return false;
    3413             :         }
    3414           0 :         stats().recordTrigger(used, threshold);
    3415           0 :         MOZ_RELEASE_ASSERT(triggerGC(reason));
    3416             :         return true;
    3417             :     }
    3418             : 
    3419           0 :     stats().recordTrigger(used, threshold);
    3420           0 :     PrepareZoneForGC(zone);
    3421           0 :     requestMajorGC(reason);
    3422           0 :     return true;
    3423             : }
    3424             : 
    3425             : void
    3426         260 : GCRuntime::maybeGC(Zone* zone)
    3427             : {
    3428         260 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
    3429             : 
    3430             : #ifdef JS_GC_ZEAL
    3431           0 :     if (hasZealMode(ZealMode::Alloc) || hasZealMode(ZealMode::RootsChange)) {
    3432           0 :         JS::PrepareForFullGC(rt->mainContextFromOwnThread());
    3433           0 :         gc(GC_NORMAL, JS::gcreason::DEBUG_GC);
    3434           0 :         return;
    3435             :     }
    3436             : #endif
    3437             : 
    3438         260 :     if (gcIfRequested())
    3439             :         return;
    3440             : 
    3441         780 :     double threshold = zone->threshold.eagerAllocTrigger(schedulingState.inHighFrequencyGCMode());
    3442           0 :     double usedBytes = zone->usage.gcBytes();
    3443         260 :     if (usedBytes > 1024 * 1024 && usedBytes >= threshold &&
    3444           0 :         !isIncrementalGCInProgress() && !isBackgroundSweeping())
    3445             :     {
    3446           0 :         stats().recordTrigger(usedBytes, threshold);
    3447           0 :         PrepareZoneForGC(zone);
    3448           0 :         startGC(GC_NORMAL, JS::gcreason::EAGER_ALLOC_TRIGGER);
    3449             :     }
    3450             : }
    3451             : 
    3452             : void
    3453           0 : GCRuntime::triggerFullGCForAtoms(JSContext* cx)
    3454             : {
    3455           0 :     MOZ_ASSERT(fullGCForAtomsRequested_);
    3456           0 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
    3457           0 :     MOZ_ASSERT(!JS::RuntimeHeapIsCollecting());
    3458           0 :     MOZ_ASSERT(cx->canCollectAtoms());
    3459           0 :     fullGCForAtomsRequested_ = false;
    3460           0 :     MOZ_RELEASE_ASSERT(triggerGC(JS::gcreason::DELAYED_ATOMS_GC));
    3461           0 : }
    3462             : 
    3463             : // Do all possible decommit immediately from the current thread without
    3464             : // releasing the GC lock or allocating any memory.
    3465             : void
    3466           0 : GCRuntime::decommitAllWithoutUnlocking(const AutoLockGC& lock)
    3467             : {
    3468           0 :     MOZ_ASSERT(emptyChunks(lock).count() == 0);
    3469           0 :     for (ChunkPool::Iter chunk(availableChunks(lock)); !chunk.done(); chunk.next())
    3470           0 :         chunk->decommitAllArenasWithoutUnlocking(lock);
    3471           0 :     MOZ_ASSERT(availableChunks(lock).verify());
    3472           0 : }
    3473             : 
    3474             : void
    3475           0 : GCRuntime::startDecommit()
    3476             : {
    3477           0 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
    3478           0 :     MOZ_ASSERT(!decommitTask.isRunning());
    3479             : 
    3480             :     // If we are allocating heavily enough to trigger "high frequency" GC, then
    3481             :     // skip decommit so that we do not compete with the mutator.
    3482           0 :     if (schedulingState.inHighFrequencyGCMode())
    3483           0 :         return;
    3484             : 
    3485           0 :     BackgroundDecommitTask::ChunkVector toDecommit;
    3486             :     {
    3487           0 :         AutoLockGC lock(rt);
    3488             : 
    3489             :         // Verify that all entries in the empty chunks pool are already decommitted.
    3490           0 :         for (ChunkPool::Iter chunk(emptyChunks(lock)); !chunk.done(); chunk.next())
    3491           0 :             MOZ_ASSERT(!chunk->info.numArenasFreeCommitted);
    3492             : 
    3493             :         // Since we release the GC lock while doing the decommit syscall below,
    3494             :         // it is dangerous to iterate the available list directly, as the active
    3495             :         // thread could modify it concurrently. Instead, we build and pass an
    3496             :         // explicit Vector containing the Chunks we want to visit.
    3497           0 :         MOZ_ASSERT(availableChunks(lock).verify());
    3498           0 :         for (ChunkPool::Iter iter(availableChunks(lock)); !iter.done(); iter.next()) {
    3499           0 :             if (!toDecommit.append(iter.get())) {
    3500             :                 // The OOM handler does a full, immediate decommit.
    3501           0 :                 return onOutOfMallocMemory(lock);
    3502             :             }
    3503             :         }
    3504             :     }
    3505           0 :     decommitTask.setChunksToScan(toDecommit);
    3506             : 
    3507           0 :     if (sweepOnBackgroundThread && decommitTask.start())
    3508             :         return;
    3509             : 
    3510           0 :     decommitTask.runFromMainThread(rt);
    3511             : }
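Because the GC lock is released while the decommit syscall runs, startDecommit snapshots the chunks it wants to visit into a local vector under the lock rather than iterating the shared list directly. That snapshot pattern, sketched with standard-library types and invented names:

```cpp
#include <cassert>
#include <mutex>
#include <vector>

// Stand-ins for the GC lock and the shared available-chunks list.
std::mutex gcLock;
std::vector<int> availableChunks = {1, 2, 3};

// Hold the lock only long enough to copy; the caller can then iterate the
// returned vector with the lock released, immune to concurrent mutation of
// the shared list.
std::vector<int> snapshotChunks() {
    std::lock_guard<std::mutex> guard(gcLock);
    return availableChunks;
}
```

The real code adds one wrinkle the sketch omits: if appending to the vector fails under memory pressure, it falls back to a full, immediate decommit via onOutOfMallocMemory.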
    3512             : 
    3513             : void
    3514           0 : js::gc::BackgroundDecommitTask::setChunksToScan(ChunkVector &chunks)
    3515             : {
    3516           0 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime()));
    3517           0 :     MOZ_ASSERT(!isRunning());
    3518           0 :     MOZ_ASSERT(toDecommit.ref().empty());
    3519           0 :     Swap(toDecommit.ref(), chunks);
    3520           0 : }
    3521             : 
    3522             : /* virtual */ void
    3523           0 : js::gc::BackgroundDecommitTask::run()
    3524             : {
    3525           0 :     AutoLockGC lock(runtime());
    3526             : 
    3527           0 :     for (Chunk* chunk : toDecommit.ref()) {
    3528             : 
    3529             :         // The arena list is not doubly-linked, so we have to work in the free
    3530             :         // list order and not in the natural order.
    3531           0 :         while (chunk->info.numArenasFreeCommitted) {
    3532           0 :             bool ok = chunk->decommitOneFreeArena(runtime(), lock);
    3533             : 
    3534             :             // If we are low enough on memory that we can't update the page
    3535             :             // tables, or if we need to return for any other reason, break out
    3536             :             // of the loop.
    3537           0 :             if (cancel_ || !ok)
    3538             :                 break;
    3539             :         }
    3540             :     }
    3541           0 :     toDecommit.ref().clearAndFree();
    3542             : 
    3543           0 :     ChunkPool toFree = runtime()->gc.expireEmptyChunkPool(lock);
    3544           0 :     if (toFree.count()) {
    3545           0 :         AutoUnlockGC unlock(lock);
    3546           0 :         FreeChunkPool(toFree);
    3547             :     }
    3548           0 : }
    3549             : 
    3550             : void
    3551           0 : GCRuntime::sweepBackgroundThings(ZoneList& zones, LifoAlloc& freeBlocks)
    3552             : {
    3553           0 :     freeBlocks.freeAll();
    3554             : 
    3555           0 :     if (zones.isEmpty())
    3556           0 :         return;
    3557             : 
    3558           0 :     FreeOp fop(nullptr);
    3559             : 
    3560             :     // Sweep zones in order. The atoms zone must be finalized last as other
    3561             :     // zones may have direct pointers into it.
    3562           0 :     while (!zones.isEmpty()) {
    3563           0 :         Zone* zone = zones.removeFront();
    3564           0 :         Arena* emptyArenas = nullptr;
    3565             : 
    3566             :         // We must finalize thing kinds in the order specified by
    3567             :         // BackgroundFinalizePhases.
    3568           0 :         for (auto phase : BackgroundFinalizePhases) {
    3569           0 :             for (auto kind : phase.kinds) {
    3570           0 :                 Arena* arenas = zone->arenas.arenaListsToSweep(kind);
    3571           0 :                 MOZ_RELEASE_ASSERT(uintptr_t(arenas) != uintptr_t(-1));
    3572           0 :                 if (arenas)
    3573           0 :                     ArenaLists::backgroundFinalize(&fop, arenas, &emptyArenas);
    3574             :             }
    3575             :         }
    3576             : 
    3577           0 :         AutoLockGC lock(rt);
    3578             : 
    3579             :         // Release any arenas that are now empty, dropping and reacquiring the GC
    3580             :         // lock every so often to avoid blocking the main thread from
    3581             :         // allocating chunks.
    3582             :         static const size_t LockReleasePeriod = 32;
    3583           0 :         size_t releaseCount = 0;
    3584             :         Arena* next;
    3585           0 :         for (Arena* arena = emptyArenas; arena; arena = next) {
    3586           0 :             next = arena->next;
    3587           0 :             rt->gc.releaseArena(arena, lock);
    3588           0 :             releaseCount++;
    3589           0 :             if (releaseCount % LockReleasePeriod == 0) {
    3590           0 :                 lock.unlock();
    3591           0 :                 lock.lock();
    3592             :             }
    3593             :         }
    3594             :     }
    3595             : }
    3596             : 
    3597             : void
    3598          51 : GCRuntime::assertBackgroundSweepingFinished()
    3599             : {
    3600             : #ifdef DEBUG
    3601          51 :     MOZ_ASSERT(backgroundSweepZones.ref().isEmpty());
    3602           1 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    3603        2730 :         for (auto i : AllAllocKinds()) {
    3604        2639 :             MOZ_ASSERT(!zone->arenas.arenaListsToSweep(i));
    3605           0 :             MOZ_ASSERT(zone->arenas.doneBackgroundFinalize(i));
    3606             :         }
    3607             :     }
    3608         102 :     MOZ_ASSERT(blocksToFreeAfterSweeping.ref().computedSizeOfExcludingThis() == 0);
    3609             : #endif
    3610          51 : }
    3611             : 
    3612             : void
    3613           0 : GCHelperState::finish()
    3614             : {
    3615             :     // Wait for any lingering background sweeping to finish.
    3616           0 :     waitBackgroundSweepEnd();
    3617           0 : }
    3618             : 
    3619             : GCHelperState::State
    3620           0 : GCHelperState::state(const AutoLockGC&)
    3621             : {
    3622         102 :     return state_;
    3623             : }
    3624             : 
    3625             : void
    3626           0 : GCHelperState::setState(State state, const AutoLockGC&)
    3627             : {
    3628           0 :     state_ = state;
    3629           0 : }
    3630             : 
    3631             : void
    3632           0 : GCHelperState::startBackgroundThread(State newState, const AutoLockGC& lock,
    3633             :                                      const AutoLockHelperThreadState& helperLock)
    3634             : {
    3635           0 :     MOZ_ASSERT(!hasThread && state(lock) == IDLE && newState != IDLE);
    3636           0 :     setState(newState, lock);
    3637             : 
    3638             :     {
    3639           0 :         AutoEnterOOMUnsafeRegion noOOM;
    3640           0 :         if (!HelperThreadState().gcHelperWorklist(helperLock).append(this))
    3641           0 :             noOOM.crash("Could not add to pending GC helpers list");
    3642             :     }
    3643             : 
    3644           0 :     HelperThreadState().notifyAll(GlobalHelperThreadState::PRODUCER, helperLock);
    3645           0 : }
    3646             : 
    3647             : void
    3648           0 : GCHelperState::waitForBackgroundThread(js::AutoLockGC& lock)
    3649             : {
    3650           0 :     while (isBackgroundSweeping())
    3651           0 :         done.wait(lock.guard());
    3652           0 : }
    3653             : 
    3654             : void
    3655           0 : GCHelperState::work()
    3656             : {
    3657           0 :     MOZ_ASSERT(CanUseExtraThreads());
    3658             : 
    3659           0 :     AutoLockGC lock(rt);
    3660             : 
    3661           0 :     MOZ_ASSERT(!hasThread);
    3662           0 :     hasThread = true;
    3663             : 
    3664             : #ifdef DEBUG
    3665           0 :     MOZ_ASSERT(!TlsContext.get()->gcHelperStateThread);
    3666           0 :     TlsContext.get()->gcHelperStateThread = true;
    3667             : #endif
    3668             : 
    3669           0 :     TraceLoggerThread* logger = TraceLoggerForCurrentThread();
    3670             : 
    3671           0 :     switch (state(lock)) {
    3672             : 
    3673             :       case IDLE:
    3674           0 :         MOZ_CRASH("GC helper triggered on idle state");
    3675             :         break;
    3676             : 
    3677             :       case SWEEPING: {
    3678           0 :         AutoTraceLog logSweeping(logger, TraceLogger_GCSweeping);
    3679           0 :         doSweep(lock);
    3680           0 :         MOZ_ASSERT(state(lock) == SWEEPING);
    3681             :         break;
    3682             :       }
    3683             : 
    3684             :     }
    3685             : 
    3686           0 :     setState(IDLE, lock);
    3687           0 :     hasThread = false;
    3688             : 
    3689             : #ifdef DEBUG
    3690           0 :     TlsContext.get()->gcHelperStateThread = false;
    3691             : #endif
    3692             : 
    3693           0 :     done.notify_all();
    3694           0 : }
    3695             : 
    3696             : void
    3697           0 : GCRuntime::queueZonesForBackgroundSweep(ZoneList& zones)
    3698             : {
    3699           0 :     AutoLockHelperThreadState helperLock;
    3700           0 :     AutoLockGC lock(rt);
    3701           0 :     backgroundSweepZones.ref().transferFrom(zones);
    3702           0 :     helperState.maybeStartBackgroundSweep(lock, helperLock);
    3703           0 : }
    3704             : 
    3705             : void
    3706           0 : GCRuntime::freeUnusedLifoBlocksAfterSweeping(LifoAlloc* lifo)
    3707             : {
    3708           0 :     MOZ_ASSERT(JS::RuntimeHeapIsBusy());
    3709           0 :     AutoLockGC lock(rt);
    3710           0 :     blocksToFreeAfterSweeping.ref().transferUnusedFrom(lifo);
    3711           0 : }
    3712             : 
    3713             : void
    3714           0 : GCRuntime::freeAllLifoBlocksAfterSweeping(LifoAlloc* lifo)
    3715             : {
    3716           0 :     MOZ_ASSERT(JS::RuntimeHeapIsBusy());
    3717           0 :     AutoLockGC lock(rt);
    3718           0 :     blocksToFreeAfterSweeping.ref().transferFrom(lifo);
    3719           0 : }
    3720             : 
    3721             : void
    3722           0 : GCRuntime::freeAllLifoBlocksAfterMinorGC(LifoAlloc* lifo)
    3723             : {
    3724           0 :     blocksToFreeAfterMinorGC.ref().transferFrom(lifo);
    3725          24 : }
    3726             : 
    3727             : void
    3728           0 : GCHelperState::maybeStartBackgroundSweep(const AutoLockGC& lock,
    3729             :                                          const AutoLockHelperThreadState& helperLock)
    3730             : {
    3731           0 :     MOZ_ASSERT(CanUseExtraThreads());
    3732             : 
    3733           0 :     if (state(lock) == IDLE)
    3734           0 :         startBackgroundThread(SWEEPING, lock, helperLock);
    3735           0 : }
    3736             : 
    3737             : void
    3738          51 : GCHelperState::waitBackgroundSweepEnd()
    3739             : {
    3740         153 :     AutoLockGC lock(rt);
    3741          51 :     while (state(lock) == SWEEPING)
    3742           0 :         waitForBackgroundThread(lock);
    3743           0 :     if (!rt->gc.isIncrementalGCInProgress())
    3744           0 :         rt->gc.assertBackgroundSweepingFinished();
    3745          51 : }
    3746             : 
    3747             : void
    3748           0 : GCHelperState::doSweep(AutoLockGC& lock)
    3749             : {
    3750             :     // The main thread may call queueZonesForBackgroundSweep() while this is
    3751             :     // running so we must check there is no more work to do before exiting.
    3752             : 
    3753           0 :     do {
    3754           0 :         while (!rt->gc.backgroundSweepZones.ref().isEmpty()) {
    3755           0 :             AutoSetThreadIsSweeping threadIsSweeping;
    3756             : 
    3757           0 :             ZoneList zones;
    3758           0 :             zones.transferFrom(rt->gc.backgroundSweepZones.ref());
    3759           0 :             LifoAlloc freeLifoAlloc(JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE);
    3760           0 :             freeLifoAlloc.transferFrom(&rt->gc.blocksToFreeAfterSweeping.ref());
    3761             : 
    3762           0 :             AutoUnlockGC unlock(lock);
    3763           0 :             rt->gc.sweepBackgroundThings(zones, freeLifoAlloc);
    3764             :         }
    3765           0 :     } while (!rt->gc.backgroundSweepZones.ref().isEmpty());
    3766           0 : }
    3767             : 
    3768             : #ifdef DEBUG
    3769             : 
    3770             : bool
    3771     4087696 : GCHelperState::onBackgroundThread()
    3772             : {
    3773           0 :     return TlsContext.get()->gcHelperStateThread;
    3774             : }
    3775             : 
    3776             : #endif // DEBUG
    3777             : 
    3778             : bool
    3779           0 : GCRuntime::shouldReleaseObservedTypes()
    3780             : {
    3781           0 :     bool releaseTypes = false;
    3782             : 
    3783             : #ifdef JS_GC_ZEAL
    3784           0 :     if (zealModeBits != 0)
    3785           0 :         releaseTypes = true;
    3786             : #endif
    3787             : 
    3788             :     /* We may miss the exact target GC due to resets. */
    3789           0 :     if (majorGCNumber >= jitReleaseNumber)
    3790           0 :         releaseTypes = true;
    3791             : 
    3792           0 :     if (releaseTypes)
    3793           0 :         jitReleaseNumber = majorGCNumber + JIT_SCRIPT_RELEASE_TYPES_PERIOD;
    3794             : 
    3795           0 :     return releaseTypes;
    3796             : }
    3797             : 
    3798             : struct IsAboutToBeFinalizedFunctor {
    3799           0 :     template <typename T> bool operator()(Cell** t) {
    3800           0 :         mozilla::DebugOnly<const Cell*> prior = *t;
    3801           0 :         bool result = IsAboutToBeFinalizedUnbarriered(reinterpret_cast<T**>(t));
    3802             :         // Sweep should not have to deal with moved pointers, since moving GC
    3803             :         // handles updating the UID table manually.
    3804           0 :         MOZ_ASSERT(*t == prior);
    3805           0 :         return result;
    3806             :     }
    3807             : };
    3808             : 
    3809             : /* static */ bool
    3810           0 : UniqueIdGCPolicy::needsSweep(Cell** cell, uint64_t*)
    3811             : {
    3812           0 :     return DispatchTraceKindTyped(IsAboutToBeFinalizedFunctor(), (*cell)->getTraceKind(), cell);
    3813             : }
    3814             : 
    3815             : void
    3816           0 : JS::Zone::sweepUniqueIds()
    3817             : {
    3818           0 :     uniqueIds().sweep();
    3819           0 : }
    3820             : 
    3821             : void
    3822           0 : Realm::destroy(FreeOp* fop)
    3823             : {
    3824           5 :     JSRuntime* rt = fop->runtime();
    3825           0 :     if (auto callback = rt->destroyRealmCallback)
    3826           5 :         callback(fop, this);
    3827           0 :     if (principals())
    3828           0 :         JS_DropPrincipals(rt->mainContextFromOwnThread(), principals());
    3829           0 :     fop->delete_(this);
    3830           0 : }
    3831             : 
    3832             : void
    3833           5 : Compartment::destroy(FreeOp* fop)
    3834             : {
    3835           5 :     JSRuntime* rt = fop->runtime();
    3836          10 :     if (auto callback = rt->destroyCompartmentCallback)
    3837           5 :         callback(fop, this);
    3838           5 :     fop->delete_(this);
    3839          10 :     rt->gc.stats().sweptCompartment();
    3840           5 : }
    3841             : 
    3842             : void
    3843           5 : Zone::destroy(FreeOp* fop)
    3844             : {
    3845           0 :     MOZ_ASSERT(compartments().empty());
    3846          10 :     fop->delete_(this);
    3847           0 :     fop->runtime()->gc.stats().sweptZone();
    3848           0 : }
    3849             : 
    3850             : /*
    3851             :  * It's simpler if we preserve the invariant that every zone (except the atoms
    3852             :  * zone) has at least one compartment, and every compartment has at least one
    3853             :  * realm. If we know we're deleting the entire zone, then sweepCompartments is
    3854             :  * allowed to delete all compartments. In this case, |keepAtleastOne| is false.
    3855             :  * If any cells remain alive in the zone, set |keepAtleastOne| true to prohibit
    3856             :  * sweepCompartments from deleting every compartment. Instead, it preserves an
    3857             :  * arbitrary compartment in the zone.
    3858             :  */
    3859             : void
    3860           0 : Zone::sweepCompartments(FreeOp* fop, bool keepAtleastOne, bool destroyingRuntime)
    3861             : {
    3862           0 :     MOZ_ASSERT(!compartments().empty());
    3863           0 :     MOZ_ASSERT_IF(destroyingRuntime, !keepAtleastOne);
    3864             : 
    3865           0 :     Compartment** read = compartments().begin();
    3866           0 :     Compartment** end = compartments().end();
    3867           0 :     Compartment** write = read;
    3868           0 :     while (read < end) {
    3869           0 :         Compartment* comp = *read++;
    3870             : 
    3871             :         /*
    3872             :          * Don't delete the last compartment and realm if keepAtleastOne is
    3873             :          * still true, meaning all the other compartments were deleted.
    3874             :          */
    3875           0 :         bool keepAtleastOneRealm = read == end && keepAtleastOne;
    3876           0 :         comp->sweepRealms(fop, keepAtleastOneRealm, destroyingRuntime);
    3877             : 
    3878           0 :         if (!comp->realms().empty()) {
    3879           0 :             *write++ = comp;
    3880           0 :             keepAtleastOne = false;
    3881             :         } else {
    3882           0 :             comp->destroy(fop);
    3883             :         }
    3884             :     }
    3885           0 :     compartments().shrinkTo(write - compartments().begin());
    3886           0 :     MOZ_ASSERT_IF(keepAtleastOne, !compartments().empty());
    3887           0 :     MOZ_ASSERT_IF(destroyingRuntime, compartments().empty());
    3888           0 : }
    3889             : 
    3890             : void
    3891           0 : Compartment::sweepRealms(FreeOp* fop, bool keepAtleastOne, bool destroyingRuntime)
    3892             : {
    3893           0 :     MOZ_ASSERT(!realms().empty());
    3894           0 :     MOZ_ASSERT_IF(destroyingRuntime, !keepAtleastOne);
    3895             : 
    3896           0 :     Realm** read = realms().begin();
    3897           0 :     Realm** end = realms().end();
    3898           0 :     Realm** write = read;
    3899           0 :     while (read < end) {
    3900           0 :         Realm* realm = *read++;
    3901             : 
    3902             :         /*
    3903             :          * Don't delete the last realm if keepAtleastOne is still true, meaning
    3904             :          * all the other realms were deleted.
    3905             :          */
    3906           0 :         bool dontDelete = read == end && keepAtleastOne;
    3907           0 :         if ((realm->marked() || dontDelete) && !destroyingRuntime) {
    3908           0 :             *write++ = realm;
    3909           0 :             keepAtleastOne = false;
    3910             :         } else {
    3911           0 :             realm->destroy(fop);
    3912             :         }
    3913             :     }
    3914           0 :     realms().shrinkTo(write - realms().begin());
    3915           0 :     MOZ_ASSERT_IF(keepAtleastOne, !realms().empty());
    3916           0 :     MOZ_ASSERT_IF(destroyingRuntime, realms().empty());
    3917           0 : }
    3918             : 
    3919             : void
    3920           5 : GCRuntime::deleteEmptyZone(Zone* zone)
    3921             : {
    3922           0 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
    3923           0 :     MOZ_ASSERT(zone->compartments().empty());
    3924           0 :     for (auto& i : zones()) {
    3925           0 :         if (i == zone) {
    3926           5 :             zones().erase(&i);
    3927           0 :             zone->destroy(rt->defaultFreeOp());
    3928           5 :             return;
    3929             :         }
    3930             :     }
    3931           0 :     MOZ_CRASH("Zone not found");
    3932             : }
    3933             : 
    3934             : void
    3935           0 : GCRuntime::sweepZones(FreeOp* fop, bool destroyingRuntime)
    3936             : {
    3937           0 :     MOZ_ASSERT_IF(destroyingRuntime, numActiveZoneIters == 0);
    3938           0 :     MOZ_ASSERT_IF(destroyingRuntime, arenasEmptyAtShutdown);
    3939             : 
    3940           0 :     if (rt->gc.numActiveZoneIters)
    3941             :         return;
    3942             : 
    3943           0 :     assertBackgroundSweepingFinished();
    3944             : 
    3945           0 :     Zone** read = zones().begin();
    3946           0 :     Zone** end = zones().end();
    3947           0 :     Zone** write = read;
    3948             : 
    3949           0 :     while (read < end) {
    3950           0 :         Zone* zone = *read++;
    3951             : 
    3952           0 :         if (zone->wasGCStarted()) {
    3953           0 :             MOZ_ASSERT(!zone->isQueuedForBackgroundSweep());
    3954           0 :             const bool zoneIsDead = zone->arenas.arenaListsAreEmpty() &&
    3955           0 :                                     !zone->hasMarkedRealms();
    3956           0 :             if (zoneIsDead || destroyingRuntime)
    3957             :             {
    3958             :                 // We have just finished sweeping, so we should have freed any
    3959             :                 // empty arenas back to their Chunk for future allocation.
    3960             :                 zone->arenas.checkEmptyFreeLists();
    3961             : 
    3962             :                 // We are about to delete the Zone; this will leave the Zone*
    3963             :                 // in the arena header dangling if there are any arenas
    3964             :                 // remaining at this point.
    3965             : #ifdef DEBUG
    3966           0 :                 if (!zone->arenas.checkEmptyArenaLists())
    3967           0 :                     arenasEmptyAtShutdown = false;
    3968             : #endif
    3969             : 
    3970           0 :                 zone->sweepCompartments(fop, false, destroyingRuntime);
    3971           0 :                 MOZ_ASSERT(zone->compartments().empty());
    3972           0 :                 MOZ_ASSERT_IF(arenasEmptyAtShutdown, zone->typeDescrObjects().empty());
    3973           0 :                 zone->destroy(fop);
    3974           0 :                 continue;
    3975             :             }
    3976           0 :             zone->sweepCompartments(fop, true, destroyingRuntime);
    3977             :         }
    3978           0 :         *write++ = zone;
    3979             :     }
    3980           0 :     zones().shrinkTo(write - zones().begin());
    3981             : }
    3982             : 
    3983             : #ifdef DEBUG
    3984             : static const char*
    3985           0 : AllocKindToAscii(AllocKind kind)
    3986             : {
    3987           0 :     switch(kind) {
    3988             : #define MAKE_CASE(allocKind, traceKind, type, sizedType, bgFinal, nursery) \
    3989             :       case AllocKind:: allocKind: return #allocKind;
    3990           0 : FOR_EACH_ALLOCKIND(MAKE_CASE)
    3991             : #undef MAKE_CASE
    3992             : 
    3993             :       default:
    3994           0 :         MOZ_CRASH("Unknown AllocKind in AllocKindToAscii");
    3995             :     }
    3996             : }
    3997             : #endif // DEBUG
    3998             : 
    3999             : bool
    4000         145 : ArenaLists::checkEmptyArenaList(AllocKind kind)
    4001             : {
    4002         145 :     bool isEmpty = true;
    4003             : #ifdef DEBUG
    4004         145 :     size_t numLive = 0;
    4005         290 :     if (!arenaLists(kind).isEmpty()) {
    4006           0 :         isEmpty = false;
    4007           0 :         size_t maxCells = 20;
    4008           0 :         char *env = getenv("JS_GC_MAX_LIVE_CELLS");
    4009           0 :         if (env && *env)
    4010           0 :             maxCells = atol(env);
    4011           0 :         for (Arena* current = arenaLists(kind).head(); current; current = current->next) {
    4012           0 :             for (ArenaCellIterUnderGC i(current); !i.done(); i.next()) {
    4013           0 :                 TenuredCell* t = i.getCell();
    4014           0 :                 MOZ_ASSERT(t->isMarkedAny(), "unmarked cells should have been finalized");
    4015           0 :                 if (++numLive <= maxCells) {
    4016           0 :                     fprintf(stderr, "ERROR: GC found live Cell %p of kind %s at shutdown\n",
    4017           0 :                             t, AllocKindToAscii(kind));
    4018             :                 }
    4019             :             }
    4020             :         }
    4021           0 :         if (numLive > 0) {
    4022           0 :           fprintf(stderr, "ERROR: GC found %zu live Cells at shutdown\n", numLive);
    4023             :         } else {
    4024           0 :           fprintf(stderr, "ERROR: GC found empty Arenas at shutdown\n");
    4025             :         }
    4026             :     }
    4027             : #endif // DEBUG
    4028         145 :     return isEmpty;
    4029             : }
    4030             : 
    4031             : class MOZ_RAII js::gc::AutoRunParallelTask : public GCParallelTask
    4032             : {
    4033             :     gcstats::PhaseKind phase_;
    4034             :     AutoLockHelperThreadState& lock_;
    4035             : 
    4036             :   public:
    4037             :     AutoRunParallelTask(JSRuntime* rt, TaskFunc func, gcstats::PhaseKind phase,
    4038             :                         AutoLockHelperThreadState& lock)
    4039           0 :       : GCParallelTask(rt, func),
    4040             :         phase_(phase),
    4041           0 :         lock_(lock)
    4042             :     {
    4043           0 :         runtime()->gc.startTask(*this, phase_, lock_);
    4044             :     }
    4045             : 
    4046           0 :     ~AutoRunParallelTask() {
    4047           0 :         runtime()->gc.joinTask(*this, phase_, lock_);
    4048           0 :     }
    4049             : };
    4050             : 
    4051             : void
    4052           0 : GCRuntime::purgeRuntimeForMinorGC()
    4053             : {
    4054             :     // If external strings become nursery allocable, remember to call
    4055             :     // zone->externalStringCache().purge() (and delete this assert.)
    4056           0 :     MOZ_ASSERT(!IsNurseryAllocable(AllocKind::EXTERNAL_STRING));
    4057             : 
    4058           0 :     for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next())
    4059          45 :         zone->functionToStringCache().purge();
    4060             : 
    4061           0 :     rt->caches().purgeForMinorGC(rt);
    4062           4 : }
    4063             : 
    4064             : void
    4065           0 : GCRuntime::purgeRuntime()
    4066             : {
    4067           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE);
    4068             : 
    4069           0 :     for (GCRealmsIter realm(rt); !realm.done(); realm.next())
    4070           0 :         realm->purge();
    4071             : 
    4072           0 :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    4073           0 :         zone->purgeAtomCacheOrDefer();
    4074           0 :         zone->externalStringCache().purge();
    4075           0 :         zone->functionToStringCache().purge();
    4076             :     }
    4077             : 
    4078           0 :     JSContext* cx = rt->mainContextFromOwnThread();
    4079           0 :     freeUnusedLifoBlocksAfterSweeping(&cx->tempLifoAlloc());
    4080           0 :     cx->interpreterStack().purge(rt);
    4081           0 :     cx->frontendCollectionPool().purge();
    4082             : 
    4083           0 :     rt->caches().purge();
    4084             : 
    4085           0 :     if (auto cache = rt->maybeThisRuntimeSharedImmutableStrings())
    4086           0 :         cache->purge();
    4087             : 
    4088           0 :     MOZ_ASSERT(unmarkGrayStack.empty());
    4089           0 :     unmarkGrayStack.clearAndFree();
    4090           0 : }
    4091             : 
    4092             : bool
    4093           0 : GCRuntime::shouldPreserveJITCode(Realm* realm, int64_t currentTime,
    4094             :                                  JS::gcreason::Reason reason, bool canAllocateMoreCode)
    4095             : {
    4096           0 :     if (cleanUpEverything)
    4097             :         return false;
    4098           0 :     if (!canAllocateMoreCode)
    4099             :         return false;
    4100             : 
    4101           0 :     if (alwaysPreserveCode)
    4102             :         return true;
    4103           0 :     if (realm->preserveJitCode())
    4104             :         return true;
    4105           0 :     if (realm->lastAnimationTime + PRMJ_USEC_PER_SEC >= currentTime)
    4106             :         return true;
    4107           0 :     if (reason == JS::gcreason::DEBUG_GC)
    4108             :         return true;
    4109             : 
    4110           0 :     return false;
    4111             : }
    4112             : 
    4113             : #ifdef DEBUG
    4114             : class CompartmentCheckTracer : public JS::CallbackTracer
    4115             : {
    4116             :     void onChild(const JS::GCCellPtr& thing) override;
    4117             : 
    4118             :   public:
    4119             :     explicit CompartmentCheckTracer(JSRuntime* rt)
    4120           0 :       : JS::CallbackTracer(rt), src(nullptr), zone(nullptr), compartment(nullptr)
    4121             :     {}
    4122             : 
    4123             :     Cell* src;
    4124             :     JS::TraceKind srcKind;
    4125             :     Zone* zone;
    4126             :     Compartment* compartment;
    4127             : };
    4128             : 
    4129             : namespace {
    4130             : struct IsDestComparatorFunctor {
    4131             :     JS::GCCellPtr dst_;
    4132             :     explicit IsDestComparatorFunctor(JS::GCCellPtr dst) : dst_(dst) {}
    4133             : 
    4134           0 :     template <typename T> bool operator()(T* t) { return (*t) == dst_.asCell(); }
    4135             : };
    4136             : } // namespace (anonymous)
    4137             : 
    4138             : static bool
    4139           0 : InCrossCompartmentMap(JSObject* src, JS::GCCellPtr dst)
    4140             : {
    4141           0 :     Compartment* srccomp = src->compartment();
    4142             : 
    4143           0 :     if (dst.is<JSObject>()) {
    4144           0 :         Value key = ObjectValue(dst.as<JSObject>());
    4145           0 :         if (WrapperMap::Ptr p = srccomp->lookupWrapper(key)) {
    4146           0 :             if (*p->value().unsafeGet() == ObjectValue(*src))
    4147           0 :                 return true;
    4148             :         }
    4149             :     }
    4150             : 
    4151             :     /*
    4152             :      * If the cross-compartment edge is caused by the debugger, then we don't
    4153             :      * know the right hashtable key, so we have to iterate.
    4154             :      */
    4155           0 :     for (Compartment::WrapperEnum e(srccomp); !e.empty(); e.popFront()) {
    4156           0 :         if (e.front().mutableKey().applyToWrapped(IsDestComparatorFunctor(dst)) &&
    4157           0 :             ToMarkable(e.front().value().unbarrieredGet()) == src)
    4158             :         {
    4159           0 :             return true;
    4160             :         }
    4161             :     }
    4162             : 
    4163           0 :     return false;
    4164             : }
    4165             : 
    4166             : struct MaybeCompartmentFunctor {
    4167           0 :     template <typename T> JS::Compartment* operator()(T* t) { return t->maybeCompartment(); }
    4168             : };
    4169             : 
    4170             : void
    4171           0 : CompartmentCheckTracer::onChild(const JS::GCCellPtr& thing)
    4172             : {
    4173           0 :     Compartment* comp = DispatchTyped(MaybeCompartmentFunctor(), thing);
    4174           0 :     if (comp && compartment) {
    4175           0 :         MOZ_ASSERT(comp == compartment ||
    4176             :                    (srcKind == JS::TraceKind::Object &&
    4177             :                     InCrossCompartmentMap(static_cast<JSObject*>(src), thing)));
    4178             :     } else {
    4179           0 :         TenuredCell* tenured = TenuredCell::fromPointer(thing.asCell());
    4180           0 :         Zone* thingZone = tenured->zoneFromAnyThread();
    4181           0 :         MOZ_ASSERT(thingZone == zone || thingZone->isAtomsZone());
    4182             :     }
    4183           0 : }
    4184             : 
    4185             : void
    4186           0 : GCRuntime::checkForCompartmentMismatches()
    4187             : {
    4188           0 :     JSContext* cx = rt->mainContextFromOwnThread();
    4189           0 :     if (cx->disableStrictProxyCheckingCount)
    4190           0 :         return;
    4191             : 
    4192           0 :     CompartmentCheckTracer trc(rt);
    4193           0 :     AutoAssertEmptyNursery empty(cx);
    4194           0 :     for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
    4195           0 :         trc.zone = zone;
    4196           0 :         for (auto thingKind : AllAllocKinds()) {
    4197           0 :             for (auto i = zone->cellIter<TenuredCell>(thingKind, empty); !i.done(); i.next()) {
    4198           0 :                 trc.src = i.getCell();
    4199           0 :                 trc.srcKind = MapAllocToTraceKind(thingKind);
    4200           0 :                 trc.compartment = DispatchTraceKindTyped(MaybeCompartmentFunctor(),
    4201             :                                                          trc.src, trc.srcKind);
    4202           0 :                 js::TraceChildren(&trc, trc.src, trc.srcKind);
    4203             :             }
    4204             :         }
    4205             :     }
    4206             : }
    4207             : #endif
    4208             : 
    4209             : static void
    4210           0 : RelazifyFunctions(Zone* zone, AllocKind kind)
    4211             : {
    4212           0 :     MOZ_ASSERT(kind == AllocKind::FUNCTION ||
    4213             :                kind == AllocKind::FUNCTION_EXTENDED);
    4214             : 
    4215           0 :     JSRuntime* rt = zone->runtimeFromMainThread();
    4216           0 :     AutoAssertEmptyNursery empty(rt->mainContextFromOwnThread());
    4217             : 
    4218           0 :     for (auto i = zone->cellIter<JSObject>(kind, empty); !i.done(); i.next()) {
    4219           0 :         JSFunction* fun = &i->as<JSFunction>();
    4220           0 :         if (fun->hasScript())
    4221           0 :             fun->maybeRelazify(rt);
    4222             :     }
    4223           0 : }
    4224             : 
    4225             : static bool
    4226           0 : ShouldCollectZone(Zone* zone, JS::gcreason::Reason reason)
    4227             : {
    4228             :     // If we are repeating a GC because we noticed dead compartments haven't
    4229             :     // been collected, then only collect zones containing those compartments.
    4230           0 :     if (reason == JS::gcreason::COMPARTMENT_REVIVED) {
    4231           0 :         for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
    4232           0 :             if (comp->gcState.scheduledForDestruction)
    4233           0 :                 return true;
    4234             :         }
    4235             : 
    4236           0 :         return false;
    4237             :     }
    4238             : 
    4239             :     // Otherwise we only collect scheduled zones.
    4240           0 :     if (!zone->isGCScheduled())
    4241             :         return false;
    4242             : 
    4243             :     // If canCollectAtoms() is false then either an instance of AutoKeepAtoms is
    4244             :     // currently on the stack or parsing is currently happening on another
    4245             :     // thread. In either case we don't have information about which atoms are
    4246             :     // roots, so we must skip collecting atoms.
    4247             :     //
    4248             :     // Note that this only affects the first slice of an incremental GC since root
    4249             :     // marking is completed before we return to the mutator.
    4250             :     //
    4251             :     // Off-thread parsing is inhibited after the start of GC which prevents
    4252             :     // races between creating atoms during parsing and sweeping atoms on the
    4253             :     // main thread.
    4254             :     //
    4255             :     // Otherwise, we always schedule a GC in the atoms zone so that atoms used
    4256             :     // by the other collected zones are marked, and we can update the
    4257             :     // set of atoms in use by the other collected zones at the end of the GC.
    4258           0 :     if (zone->isAtomsZone())
    4259           0 :         return TlsContext.get()->canCollectAtoms();
    4260             : 
    4261           0 :     return zone->canCollect();
    4262             : }
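The decision logic above can be modelled as a small standalone predicate. This is a hypothetical sketch: the `Zone` fields and `Reason` values below are illustrative stand-ins, not the real SpiderMonkey types.

```cpp
#include <cassert>

// Simplified stand-ins for the real Zone state and GC reason.
enum class Reason { Api, CompartmentRevived };

struct Zone {
    bool hasCompartmentScheduledForDestruction = false;
    bool gcScheduled = false;
    bool atomsZone = false;
    bool canCollect = true;
};

bool ShouldCollectZoneSketch(const Zone& zone, Reason reason, bool canCollectAtoms)
{
    // A COMPARTMENT_REVIVED GC only re-collects zones that still contain
    // compartments scheduled for destruction.
    if (reason == Reason::CompartmentRevived)
        return zone.hasCompartmentScheduledForDestruction;

    // Otherwise only scheduled zones are collected.
    if (!zone.gcScheduled)
        return false;

    // The atoms zone is skipped whenever atom roots are unknown, e.g. while
    // an AutoKeepAtoms is on the stack or off-thread parsing is running.
    if (zone.atomsZone)
        return canCollectAtoms;

    return zone.canCollect;
}
```

Modelling the predicate this way makes the precedence clear: the revived-compartment case overrides scheduling, and the atoms-zone check overrides the zone's own `canCollect`.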
    4263             : 
    4264             : bool
    4265           0 : GCRuntime::prepareZonesForCollection(JS::gcreason::Reason reason, bool* isFullOut,
    4266             :                                      AutoLockForExclusiveAccess& lock)
    4267             : {
    4268             : #ifdef DEBUG
    4269             :     /* Assert that zone state is as we expect */
    4270           0 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    4271           0 :         MOZ_ASSERT(!zone->isCollecting());
    4272           0 :         MOZ_ASSERT_IF(!zone->isAtomsZone(), !zone->compartments().empty());
    4273           0 :         for (auto i : AllAllocKinds())
    4274           0 :             MOZ_ASSERT(!zone->arenas.arenaListsToSweep(i));
    4275             :     }
    4276             : #endif
    4277             : 
    4278           0 :     *isFullOut = true;
    4279           0 :     bool any = false;
    4280             : 
    4281           0 :     int64_t currentTime = PRMJ_Now();
    4282             : 
    4283           0 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    4284             :         /* Set up which zones will be collected. */
    4285           0 :         if (ShouldCollectZone(zone, reason)) {
    4286           0 :             MOZ_ASSERT(zone->canCollect());
    4287           0 :             any = true;
    4288           0 :             zone->changeGCState(Zone::NoGC, Zone::Mark);
    4289             :         } else {
    4290           0 :             *isFullOut = false;
    4291             :         }
    4292             : 
    4293           0 :         zone->setPreservingCode(false);
    4294             :     }
    4295             : 
    4296             :     // Discard JIT code more aggressively if the process is approaching its
    4297             :     // executable code limit.
    4298           0 :     bool canAllocateMoreCode = jit::CanLikelyAllocateMoreExecutableMemory();
    4299             : 
    4300           0 :     for (CompartmentsIter c(rt); !c.done(); c.next()) {
    4301           0 :         c->gcState.scheduledForDestruction = false;
    4302           0 :         c->gcState.maybeAlive = false;
    4303           0 :         c->gcState.hasEnteredRealm = false;
    4304           0 :         for (RealmsInCompartmentIter r(c); !r.done(); r.next()) {
    4305           0 :             r->unmark();
    4306           0 :             if (r->shouldTraceGlobal() || !r->zone()->isGCScheduled())
    4307           0 :                 c->gcState.maybeAlive = true;
    4308           0 :             if (shouldPreserveJITCode(r, currentTime, reason, canAllocateMoreCode))
    4309           0 :                 r->zone()->setPreservingCode(true);
    4310           0 :             if (r->hasBeenEnteredIgnoringJit())
    4311           0 :                 c->gcState.hasEnteredRealm = true;
    4312             :         }
    4313             :     }
    4314             : 
    4315           0 :     if (!cleanUpEverything && canAllocateMoreCode) {
    4316           0 :         jit::JitActivationIterator activation(rt->mainContextFromOwnThread());
    4317           0 :         if (!activation.done())
    4318           0 :             activation->compartment()->zone()->setPreservingCode(true);
    4319             :     }
    4320             : 
    4321             :     /*
    4322             :      * Check that we do collect the atoms zone if we triggered a GC for that
    4323             :      * purpose.
    4324             :      */
    4325           0 :     MOZ_ASSERT_IF(reason == JS::gcreason::DELAYED_ATOMS_GC, atomsZone->isGCMarking());
    4326             : 
    4327             :     /* Check that at least one zone is scheduled for collection. */
    4328           0 :     return any;
    4329             : }
    4330             : 
    4331             : static void
    4332           0 : DiscardJITCodeForGC(JSRuntime* rt)
    4333             : {
    4334           0 :     js::CancelOffThreadIonCompile(rt, JS::Zone::Mark);
    4335           0 :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    4336           0 :         gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::MARK_DISCARD_CODE);
    4337           0 :         zone->discardJitCode(rt->defaultFreeOp());
    4338             :     }
    4339           0 : }
    4340             : 
    4341             : static void
    4342           0 : RelazifyFunctionsForShrinkingGC(JSRuntime* rt)
    4343             : {
    4344           0 :     gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::RELAZIFY_FUNCTIONS);
    4345           0 :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    4346           0 :         if (zone->isSelfHostingZone())
    4347             :             continue;
    4348           0 :         RelazifyFunctions(zone, AllocKind::FUNCTION);
    4349           0 :         RelazifyFunctions(zone, AllocKind::FUNCTION_EXTENDED);
    4350             :     }
    4351           0 : }
    4352             : 
    4353             : static void
    4354           0 : PurgeShapeTablesForShrinkingGC(JSRuntime* rt)
    4355             : {
    4356           0 :     gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::PURGE_SHAPE_TABLES);
    4357           0 :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    4358           0 :         if (zone->keepShapeTables() || zone->isSelfHostingZone())
    4359             :             continue;
    4360           0 :         for (auto baseShape = zone->cellIter<BaseShape>(); !baseShape.done(); baseShape.next())
    4361           0 :             baseShape->maybePurgeTable();
    4362             :     }
    4363           0 : }
    4364             : 
    4365             : static void
    4366           0 : UnmarkCollectedZones(GCParallelTask* task)
    4367             : {
    4368           0 :     JSRuntime* rt = task->runtime();
    4369           0 :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    4370             :         /* Unmark everything in the zones being collected. */
    4371           0 :         zone->arenas.unmarkAll();
    4372             :     }
    4373             : 
    4374           0 :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    4375             :         /* Unmark all weak maps in the zones being collected. */
    4376           0 :         WeakMapBase::unmarkZone(zone);
    4377             :     }
    4378           0 : }
    4379             : 
    4380             : static void
    4381           0 : BufferGrayRoots(GCParallelTask* task)
    4382             : {
    4383           0 :     task->runtime()->gc.bufferGrayRoots();
    4384           0 : }
    4385             : 
    4386             : bool
    4387           0 : GCRuntime::beginMarkPhase(JS::gcreason::Reason reason, AutoTraceSession& session)
    4388             : {
    4389           0 :     MOZ_ASSERT(session.maybeLock.isSome());
    4390             : 
    4391             : #ifdef DEBUG
    4392           0 :     if (fullCompartmentChecks)
    4393           0 :         checkForCompartmentMismatches();
    4394             : #endif
    4395             : 
    4396           0 :     if (!prepareZonesForCollection(reason, &isFull.ref(), session.lock()))
    4397             :         return false;
    4398             : 
    4399             :     /* If we're not collecting the atoms zone we can release the lock now. */
    4400           0 :     if (!atomsZone->isCollecting())
    4401           0 :         session.maybeLock.reset();
    4402             : 
    4403             :     /*
    4404             :      * In an incremental GC, clear the arena free lists to ensure that subsequent
    4405             :      * allocations refill them and end up marking new cells black. See
    4406             :      * arenaAllocatedDuringGC().
    4407             :      */
    4408           0 :     for (GCZonesIter zone(rt); !zone.done(); zone.next())
    4409           0 :         zone->arenas.clearFreeLists();
    4410           0 : 
    4411             :     marker.start();
    4412             :     GCMarker* gcmarker = &marker;
    4413           0 : 
    4414           0 :     {
    4415             :         gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::PREPARE);
    4416             :         AutoLockHelperThreadState helperLock;
    4417           0 : 
    4418           0 :         /*
    4419             :          * Clear all mark state for the zones we are collecting. This is linear
    4420             :          * in the size of the heap we are collecting and so can be slow. Do this
    4421             :          * in parallel with the rest of this block.
    4422             :          */
    4423             :         AutoRunParallelTask
    4424             :             unmarkCollectedZones(rt, UnmarkCollectedZones, gcstats::PhaseKind::UNMARK, helperLock);
    4425             : 
    4426           0 :         /*
    4427             :          * Buffer gray roots for incremental collections. This is linear in the
    4428             :          * number of roots which can be in the tens of thousands. Do this in
    4429             :          * parallel with the rest of this block.
    4430             :          */
    4431             :         Maybe<AutoRunParallelTask> bufferGrayRoots;
    4432             :         if (isIncremental)
    4433           0 :             bufferGrayRoots.emplace(rt, BufferGrayRoots, gcstats::PhaseKind::BUFFER_GRAY_ROOTS, helperLock);
    4434           0 :         AutoUnlockHelperThreadState unlock(helperLock);
    4435           0 : 
    4436           0 :         // Discard JIT code. For incremental collections, the sweep phase will
    4437             :         // also discard JIT code.
    4438             :         DiscardJITCodeForGC(rt);
    4439             : 
    4440           0 :         /*
    4441             :          * Relazify functions after discarding JIT code (we can't relazify
    4442             :          * functions with JIT code) and before the actual mark phase, so that
    4443             :          * the current GC can collect the JSScripts we're unlinking here.  We do
    4444             :          * this only when we're performing a shrinking GC, as too much
    4445             :          * relazification can cause performance issues when we have to reparse
    4446             :          * the same functions over and over.
    4447             :          */
    4448             :         if (invocationKind == GC_SHRINK) {
    4449             :             RelazifyFunctionsForShrinkingGC(rt);
    4450           0 :             PurgeShapeTablesForShrinkingGC(rt);
    4451           0 :         }
    4452           0 : 
    4453             :         /*
    4454             :          * We must purge the runtime at the beginning of an incremental GC. The
    4455             :          * danger if we purge later is that the snapshot invariant of
    4456             :          * incremental GC will be broken, as follows. If some object is
    4457             :          * reachable only through some cache (say the dtoaCache) then it will
    4458             :          * not be part of the snapshot.  If we purge after root marking, then
    4459             :          * the mutator could obtain a pointer to the object and start using
    4460             :          * it. This object might never be marked, so a GC hazard would exist.
    4461             :          */
    4462             :         purgeRuntime();
    4463             :     }
    4464           0 : 
    4465             :     /*
    4466             :      * Mark phase.
    4467             :      */
    4468             :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK);
    4469             :     traceRuntimeForMajorGC(gcmarker, session);
    4470           0 : 
    4471           0 :     if (isIncremental)
    4472             :         markCompartments();
    4473           0 : 
    4474           0 :     updateMallocCountersOnGCStart();
    4475             : 
    4476           0 :     /*
    4477             :      * Process any queued source compressions during the start of a major
    4478             :      * GC.
    4479             :      */
    4480             :     {
    4481             :         AutoLockHelperThreadState helperLock;
    4482             :         HelperThreadState().startHandlingCompressionTasks(helperLock);
    4483           0 :     }
    4484           0 : 
    4485             :     return true;
    4486             : }
    4487           0 : 
    4488             : void
    4489             : GCRuntime::markCompartments()
    4490             : {
    4491           0 :     gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::MARK_ROOTS);
    4492             :     gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::MARK_COMPARTMENTS);
    4493           0 : 
    4494           0 :     /*
    4495             :      * This code ensures that if a compartment is "dead", then it will be
    4496             :      * collected in this GC. A compartment is considered dead if its maybeAlive
    4497             :      * flag is false. The maybeAlive flag is set if:
    4498             :      *
    4499             :      *   (1) the compartment has been entered (set in beginMarkPhase() above)
    4500             :      *   (2) the compartment is not being collected (set in beginMarkPhase()
    4501             :      *       above)
    4502             :      *   (3) an object in the compartment was marked during root marking, either
    4503             :      *       as a black root or a gray root (set in RootMarking.cpp), or
    4504             :      *   (4) the compartment has incoming cross-compartment edges from another
    4505             :      *       compartment that has maybeAlive set (set by this method).
    4506             :      *
    4507             :      * If maybeAlive is false, then we set the scheduledForDestruction flag.
    4508             :      * At the end of the GC, we look for compartments where
    4509             :      * scheduledForDestruction is true. These are compartments that were somehow
    4510             :      * "revived" during the incremental GC. If any are found, we do a special,
    4511             :      * non-incremental GC of those compartments to try to collect them.
    4512             :      *
    4513             :      * Compartments can be revived for a variety of reasons. One reason is bug
    4514             :      * 811587, where a reflector that was dead can be revived by DOM code that
    4515             :      * still refers to the underlying DOM node.
    4516             :      *
    4517             :      * Read barriers and allocations can also cause revival. This might happen
    4518             :      * during a function like JS_TransplantObject, which iterates over all
    4519             :      * compartments, live or dead, and operates on their objects. See bug 803376
    4520             :      * for details on this problem. To avoid the problem, we try to avoid
    4521             :      * allocation and read barriers during JS_TransplantObject and the like.
    4522             :      */
    4523             : 
    4524             :     /* Propagate the maybeAlive flag via cross-compartment edges. */
    4525             : 
    4526             :     Vector<Compartment*, 0, js::SystemAllocPolicy> workList;
    4527             : 
    4528           0 :     for (CompartmentsIter comp(rt); !comp.done(); comp.next()) {
    4529             :         if (comp->gcState.maybeAlive) {
    4530           0 :             if (!workList.append(comp))
    4531           0 :                 return;
    4532           0 :         }
    4533           0 :     }
    4534             : 
    4535             :     while (!workList.empty()) {
    4536             :         Compartment* comp = workList.popCopy();
    4537           0 :         for (Compartment::NonStringWrapperEnum e(comp); !e.empty(); e.popFront()) {
    4538           0 :             Compartment* dest = e.front().mutableKey().compartment();
    4539           0 :             if (dest && !dest->gcState.maybeAlive) {
    4540           0 :                 dest->gcState.maybeAlive = true;
    4541           0 :                 if (!workList.append(dest))
    4542           0 :                     return;
    4543           0 :             }
    4544           0 :         }
    4545             :     }
    4546             : 
    4547             :     /* Set scheduledForDestruction based on maybeAlive. */
    4548             : 
    4549             :     for (GCCompartmentsIter comp(rt); !comp.done(); comp.next()) {
    4550             :         MOZ_ASSERT(!comp->gcState.scheduledForDestruction);
    4551           0 :         if (!comp->gcState.maybeAlive)
    4552           0 :             comp->gcState.scheduledForDestruction = true;
    4553           0 :     }
    4554           0 : }
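The propagation step above is a plain worklist traversal: every compartment known to be alive seeds the list, and the flag spreads along outgoing cross-compartment edges until no new compartment can be reached. A minimal standalone sketch, using integer node indices and an adjacency list in place of the real compartment graph:

```cpp
#include <cassert>
#include <vector>

// Spread the maybeAlive flag from every initially-alive node along directed
// edges, using an explicit work list (the same shape as the loop above).
void PropagateMaybeAlive(std::vector<bool>& maybeAlive,
                         const std::vector<std::vector<int>>& edges)
{
    std::vector<int> workList;
    for (int i = 0; i < (int)maybeAlive.size(); i++) {
        if (maybeAlive[i])
            workList.push_back(i);
    }
    while (!workList.empty()) {
        int src = workList.back();
        workList.pop_back();
        for (int dst : edges[src]) {
            if (!maybeAlive[dst]) {
                maybeAlive[dst] = true;   // revived via an incoming edge
                workList.push_back(dst);
            }
        }
    }
}
```

Nodes the flag never reaches correspond to compartments that end up with `scheduledForDestruction` set.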
    4555             : 
    4556             : void
    4557             : GCRuntime::updateMallocCountersOnGCStart()
    4558             : {
    4559           0 :     // Update the malloc counters for any zones we are collecting.
    4560             :     for (GCZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
    4561             :         zone->updateAllGCMallocCountersOnGCStart();
    4562           0 : 
    4563           0 :     // Update the runtime malloc counter only if we are doing a full GC.
    4564             :     if (isFull)
    4565             :         mallocCounter.updateOnGCStart();
    4566           0 : }
    4567           0 : 
    4568           0 : template <class ZoneIterT>
    4569             : void
    4570             : GCRuntime::markWeakReferences(gcstats::PhaseKind phase)
    4571             : {
    4572           0 :     MOZ_ASSERT(marker.isDrained());
    4573             : 
    4574           0 :     gcstats::AutoPhase ap1(stats(), phase);
    4575             : 
    4576           0 :     marker.enterWeakMarkingMode();
    4577             : 
    4578           0 :     // TODO bug 1167452: Make weak marking incremental
    4579             :     auto unlimited = SliceBudget::unlimited();
    4580             :     MOZ_RELEASE_ASSERT(marker.drainMarkStack(unlimited));
    4581             : 
    4582           0 :     for (;;) {
    4583             :         bool markedAny = false;
    4584           0 :         if (!marker.isWeakMarkingTracer()) {
    4585           0 :             for (ZoneIterT zone(rt); !zone.done(); zone.next())
    4586           0 :                 markedAny |= WeakMapBase::markZoneIteratively(zone, &marker);
    4587           0 :         }
    4588           0 :         markedAny |= Debugger::markIteratively(&marker);
    4589             :         markedAny |= jit::JitRuntime::MarkJitcodeGlobalTableIteratively(&marker);
    4590           0 : 
    4591           0 :         if (!markedAny)
    4592             :             break;
    4593           0 : 
    4594             :         auto unlimited = SliceBudget::unlimited();
    4595             :         MOZ_RELEASE_ASSERT(marker.drainMarkStack(unlimited));
    4596             :     }
    4597           0 :     MOZ_ASSERT(marker.isDrained());
    4598             : 
    4599           0 :     marker.leaveWeakMarkingMode();
    4600             : }
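The loop above is a fixed-point computation: marking a weak map value can make further weak map keys reachable, so marking passes repeat until one pass marks nothing new. A minimal standalone sketch of that shape, with integer "cells" and a single key-to-value weak map standing in for the real structures:

```cpp
#include <cassert>
#include <map>
#include <set>

// Iterate weak map entries until a fixed point: a value is marked only once
// its key is marked, and each newly marked value may enable more entries.
std::set<int> MarkWeakFixpoint(std::set<int> marked,
                               const std::map<int, int>& weakMap)
{
    bool markedAny = true;
    while (markedAny) {
        markedAny = false;
        for (const auto& entry : weakMap) {
            // entry.second is kept alive only if entry.first is marked.
            if (marked.count(entry.first) && !marked.count(entry.second)) {
                marked.insert(entry.second);
                markedAny = true;
            }
        }
    }
    return marked;
}
```

As the TODO in the real code notes, this process is currently run to completion with an unlimited budget rather than incrementally.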
    4601           0 : 
    4602           0 : void
    4603             : GCRuntime::markWeakReferencesInCurrentGroup(gcstats::PhaseKind phase)
    4604             : {
    4605           0 :     markWeakReferences<SweepGroupZonesIter>(phase);
    4606             : }
    4607           0 : 
    4608           0 : template <class ZoneIterT, class CompartmentIterT>
    4609             : void
    4610             : GCRuntime::markGrayReferences(gcstats::PhaseKind phase)
    4611             : {
    4612           0 :     gcstats::AutoPhase ap(stats(), phase);
    4613             :     if (hasValidGrayRootsBuffer()) {
    4614           0 :         for (ZoneIterT zone(rt); !zone.done(); zone.next())
    4615           0 :             markBufferedGrayRoots(zone);
    4616           0 :     } else {
    4617           0 :         MOZ_ASSERT(!isIncremental);
    4618             :         if (JSTraceDataOp op = grayRootTracer.op)
    4619           0 :             (*op)(&marker, grayRootTracer.data);
    4620           0 :     }
    4621           0 :     auto unlimited = SliceBudget::unlimited();
    4622             :     MOZ_RELEASE_ASSERT(marker.drainMarkStack(unlimited));
    4623             : }
    4624           0 : 
    4625           0 : void
    4626             : GCRuntime::markGrayReferencesInCurrentGroup(gcstats::PhaseKind phase)
    4627             : {
    4628           0 :     markGrayReferences<SweepGroupZonesIter, SweepGroupCompartmentsIter>(phase);
    4629             : }
    4630           0 : 
    4631           0 : void
    4632             : GCRuntime::markAllWeakReferences(gcstats::PhaseKind phase)
    4633             : {
    4634           0 :     markWeakReferences<GCZonesIter>(phase);
    4635             : }
    4636           0 : 
    4637           0 : void
    4638             : GCRuntime::markAllGrayReferences(gcstats::PhaseKind phase)
    4639             : {
    4640           0 :     markGrayReferences<GCZonesIter, GCCompartmentsIter>(phase);
    4641             : }
    4642           0 : 
    4643           0 : #ifdef JS_GC_ZEAL
    4644             : 
    4645             : struct GCChunkHasher {
    4646             :     typedef gc::Chunk* Lookup;
    4647             : 
    4648             :     /*
    4649             :      * Strip zeros for better distribution after multiplying by the golden
    4650             :      * ratio.
    4651             :      */
    4652             :     static HashNumber hash(gc::Chunk* chunk) {
    4653             :         MOZ_ASSERT(!(uintptr_t(chunk) & gc::ChunkMask));
    4654           0 :         return HashNumber(uintptr_t(chunk) >> gc::ChunkShift);
    4655           0 :     }
    4656           0 : 
    4657             :     static bool match(gc::Chunk* k, gc::Chunk* l) {
    4658             :         MOZ_ASSERT(!(uintptr_t(k) & gc::ChunkMask));
    4659           0 :         MOZ_ASSERT(!(uintptr_t(l) & gc::ChunkMask));
    4660           0 :         return k == l;
    4661           0 :     }
    4662           0 : };
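The "strip zeros" comment refers to a common trick for hashing aligned pointers: a chunk pointer is aligned to the chunk size, so its low bits are always zero and contribute nothing to hash distribution; shifting them out first yields a dense key. A standalone sketch under the assumption of a 1 MiB chunk (a 20-bit shift, chosen here for illustration):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative constants; the real values come from gc::ChunkShift/ChunkMask.
constexpr uintptr_t kChunkShift = 20;
constexpr uintptr_t kChunkMask = (uintptr_t(1) << kChunkShift) - 1;

uint32_t HashChunkPointer(uintptr_t chunk)
{
    assert((chunk & kChunkMask) == 0);        // chunk must be aligned
    return uint32_t(chunk >> kChunkShift);    // strip the known-zero low bits
}
```

Without the shift, every key would share identical low bits, and a hash table that mixes the key by multiplication would cluster entries in a fraction of its buckets.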
    4663             : 
    4664             : class js::gc::MarkingValidator
    4665             : {
    4666             :   public:
    4667             :     explicit MarkingValidator(GCRuntime* gc);
    4668             :     ~MarkingValidator();
    4669             :     void nonIncrementalMark(AutoTraceSession& session);
    4670             :     void validate();
    4671             : 
    4672             :   private:
    4673             :     GCRuntime* gc;
    4674             :     bool initialized;
    4675             : 
    4676             :     typedef HashMap<Chunk*, ChunkBitmap*, GCChunkHasher, SystemAllocPolicy> BitmapMap;
    4677             :     BitmapMap map;
    4678             : };
    4679             : 
    4680             : js::gc::MarkingValidator::MarkingValidator(GCRuntime* gc)
    4681             :   : gc(gc),
    4682           0 :     initialized(false)
    4683             : {}
    4684           0 : 
    4685           0 : js::gc::MarkingValidator::~MarkingValidator()
    4686             : {
    4687           0 :     if (!map.initialized())
    4688             :         return;
    4689           0 : 
    4690             :     for (BitmapMap::Range r(map.all()); !r.empty(); r.popFront())
    4691             :         js_delete(r.front().value());
    4692           0 : }
    4693           0 : 
    4694           0 : void
    4695             : js::gc::MarkingValidator::nonIncrementalMark(AutoTraceSession& session)
    4696             : {
    4697           0 :     /*
    4698             :      * Perform a non-incremental mark for all collecting zones and record
    4699             :      * the results for later comparison.
    4700             :      *
    4701             :      * Currently this does not validate gray marking.
    4702             :      */
    4703             : 
    4704             :     if (!map.init())
    4705             :         return;
    4706           0 : 
    4707           0 :     JSRuntime* runtime = gc->rt;
    4708             :     GCMarker* gcmarker = &gc->marker;
    4709           0 : 
    4710           0 :     gc->waitBackgroundSweepEnd();
    4711             : 
    4712           0 :     /* Wait for off-thread parsing which can allocate. */
    4713             :     HelperThreadState().waitForAllThreads();
    4714             : 
    4715           0 :     /* Save existing mark bits. */
    4716             :     {
    4717             :         AutoLockGC lock(runtime);
    4718             :         for (auto chunk = gc->allNonEmptyChunks(lock); !chunk.done(); chunk.next()) {
    4719           0 :             ChunkBitmap* bitmap = &chunk->bitmap;
    4720           0 :             ChunkBitmap* entry = js_new<ChunkBitmap>();
    4721           0 :             if (!entry)
    4722           0 :                 return;
    4723           0 : 
    4724           0 :             memcpy((void*)entry->bitmap, (void*)bitmap->bitmap, sizeof(bitmap->bitmap));
    4725             :             if (!map.putNew(chunk, entry))
    4726           0 :                 return;
    4727           0 :         }
    4728             :     }
    4729             : 
    4730             :     /*
    4731             :      * Temporarily clear the weakmaps' mark flags for the compartments we are
    4732             :      * collecting.
    4733             :      */
    4734             : 
    4735             :     WeakMapSet markedWeakMaps;
    4736             :     if (!markedWeakMaps.init())
    4737           0 :         return;
    4738           0 : 
    4739             :     /*
    4740             :      * For saving, smush all of the keys into one big table and split them back
    4741             :      * up into per-zone tables when restoring.
    4742             :      */
    4743             :     gc::WeakKeyTable savedWeakKeys(SystemAllocPolicy(), runtime->randomHashCodeScrambler());
    4744             :     if (!savedWeakKeys.init())
    4745           0 :         return;
    4746           0 : 
    4747             :     for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
    4748             :         if (!WeakMapBase::saveZoneMarkedWeakMaps(zone, markedWeakMaps))
    4749           0 :             return;
    4750           0 : 
    4751           0 :         AutoEnterOOMUnsafeRegion oomUnsafe;
    4752             :         for (gc::WeakKeyTable::Range r = zone->gcWeakKeys().all(); !r.empty(); r.popFront()) {
    4753           0 :             if (!savedWeakKeys.put(std::move(r.front().key), std::move(r.front().value)))
    4754           0 :                 oomUnsafe.crash("saving weak keys table for validator");
    4755           0 :         }
    4756           0 : 
    4757             :         if (!zone->gcWeakKeys().clear())
    4758             :             oomUnsafe.crash("clearing weak keys table for validator");
    4759           0 :     }
    4760           0 : 
    4761             :     /*
    4762             :      * After this point, the function should run to completion, so we shouldn't
    4763             :      * do anything fallible.
    4764             :      */
    4765             :     initialized = true;
    4766             : 
    4767           0 :     /* Re-do all the marking, but non-incrementally. */
    4768             :     js::gc::State state = gc->incrementalState;
    4769             :     gc->incrementalState = State::MarkRoots;
    4770           0 : 
    4771           0 :     {
    4772             :         gcstats::AutoPhase ap(gc->stats(), gcstats::PhaseKind::PREPARE);
    4773             : 
    4774           0 :         {
    4775             :             gcstats::AutoPhase ap(gc->stats(), gcstats::PhaseKind::UNMARK);
    4776             : 
    4777           0 :             for (GCZonesIter zone(runtime); !zone.done(); zone.next())
    4778             :                 WeakMapBase::unmarkZone(zone);
    4779           0 : 
    4780           0 :             MOZ_ASSERT(gcmarker->isDrained());
    4781             :             gcmarker->reset();
    4782           0 : 
    4783           0 :             AutoLockGC lock(runtime);
    4784             :             for (auto chunk = gc->allNonEmptyChunks(lock); !chunk.done(); chunk.next())
    4785           0 :                 chunk->bitmap.clear();
    4786           0 :         }
    4787           0 :     }
    4788             : 
    4789             :     {
    4790             :         gcstats::AutoPhase ap(gc->stats(), gcstats::PhaseKind::MARK);
    4791             : 
    4792           0 :         gc->traceRuntimeForMajorGC(gcmarker, session);
    4793             : 
    4794           0 :         gc->incrementalState = State::Mark;
    4795             :         auto unlimited = SliceBudget::unlimited();
    4796           0 :         MOZ_RELEASE_ASSERT(gc->marker.drainMarkStack(unlimited));
    4797             :     }
    4798           0 : 
    4799             :     gc->incrementalState = State::Sweep;
    4800             :     {
    4801           0 :         gcstats::AutoPhase ap1(gc->stats(), gcstats::PhaseKind::SWEEP);
    4802             :         gcstats::AutoPhase ap2(gc->stats(), gcstats::PhaseKind::SWEEP_MARK);
    4803           0 : 
    4804           0 :         gc->markAllWeakReferences(gcstats::PhaseKind::SWEEP_MARK_WEAK);
    4805             : 
    4806           0 :         /* Update zone state for gray marking. */
    4807             :         for (GCZonesIter zone(runtime); !zone.done(); zone.next())
    4808             :             zone->changeGCState(Zone::Mark, Zone::MarkGray);
    4809           0 :         gc->marker.setMarkColorGray();
    4810           0 : 
    4811           0 :         gc->markAllGrayReferences(gcstats::PhaseKind::SWEEP_MARK_GRAY);
    4812             :         gc->markAllWeakReferences(gcstats::PhaseKind::SWEEP_MARK_GRAY_WEAK);
    4813           0 : 
    4814           0 :         /* Restore zone state. */
    4815             :         for (GCZonesIter zone(runtime); !zone.done(); zone.next())
    4816             :             zone->changeGCState(Zone::MarkGray, Zone::Mark);
    4817           0 :         MOZ_ASSERT(gc->marker.isDrained());
    4818           0 :         gc->marker.setMarkColorBlack();
    4819           0 :     }
    4820           0 : 
    4821             :     /* Take a copy of the non-incremental mark state and restore the original. */
    4822             :     {
    4823             :         AutoLockGC lock(runtime);
    4824             :         for (auto chunk = gc->allNonEmptyChunks(lock); !chunk.done(); chunk.next()) {
    4825           0 :             ChunkBitmap* bitmap = &chunk->bitmap;
    4826           0 :             ChunkBitmap* entry = map.lookup(chunk)->value();
    4827           0 :             Swap(*entry, *bitmap);
    4828           0 :         }
    4829           0 :     }
    4830             : 
    4831             :     for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
    4832             :         WeakMapBase::unmarkZone(zone);
    4833           0 :         AutoEnterOOMUnsafeRegion oomUnsafe;
    4834           0 :         if (!zone->gcWeakKeys().clear())
    4835           0 :             oomUnsafe.crash("clearing weak keys table for validator");
    4836           0 :     }
    4837           0 : 
    4838             :     WeakMapBase::restoreMarkedWeakMaps(markedWeakMaps);
    4839             : 
    4840           0 :     for (gc::WeakKeyTable::Range r = savedWeakKeys.all(); !r.empty(); r.popFront()) {
    4841             :         AutoEnterOOMUnsafeRegion oomUnsafe;
    4842           0 :         Zone* zone = gc::TenuredCell::fromPointer(r.front().key.asCell())->zone();
    4843           0 :         if (!zone->gcWeakKeys().put(std::move(r.front().key), std::move(r.front().value)))
    4844           0 :             oomUnsafe.crash("restoring weak keys table for validator");
    4845           0 :     }
    4846           0 : 
    4847             :     gc->incrementalState = state;
    4848             : }
    4849           0 : 
    4850             : void
    4851             : js::gc::MarkingValidator::validate()
    4852             : {
    4853           0 :     /*
    4854             :      * Validates the incremental marking for a single compartment by comparing
    4855             :      * the mark bits to those previously recorded for a non-incremental mark.
    4856             :      */
    4857             : 
    4858             :     if (!initialized)
    4859             :         return;
    4860           0 : 
    4861           0 :     gc->waitBackgroundSweepEnd();
    4862             : 
    4863           0 :     AutoLockGC lock(gc->rt);
    4864             :     for (auto chunk = gc->allNonEmptyChunks(lock); !chunk.done(); chunk.next()) {
    4865           0 :         BitmapMap::Ptr ptr = map.lookup(chunk);
    4866           0 :         if (!ptr)
    4867           0 :             continue;  /* Allocated after we did the non-incremental mark. */
    4868           0 : 
    4869           0 :         ChunkBitmap* bitmap = ptr->value();
    4870             :         ChunkBitmap* incBitmap = &chunk->bitmap;
    4871           0 : 
    4872           0 :         for (size_t i = 0; i < ArenasPerChunk; i++) {
    4873             :             if (chunk->decommittedArenas.get(i))
    4874           0 :                 continue;
    4875           0 :             Arena* arena = &chunk->arenas[i];
    4876             :             if (!arena->allocated())
    4877           0 :                 continue;
    4878           0 :             if (!arena->zone->isGCSweeping())
    4879             :                 continue;
    4880           0 : 
    4881             :             AllocKind kind = arena->getAllocKind();
    4882             :             uintptr_t thing = arena->thingsStart();
    4883           0 :             uintptr_t end = arena->thingsEnd();
    4884           0 :             while (thing < end) {
    4885           0 :                 auto cell = reinterpret_cast<TenuredCell*>(thing);
    4886           0 : 
    4887           0 :                 /*
    4888             :                  * If a non-incremental GC wouldn't have collected a cell, then
    4889             :                  * an incremental GC won't collect it.
    4890             :                  */
    4891             :                 if (bitmap->isMarkedAny(cell))
    4892             :                     MOZ_RELEASE_ASSERT(incBitmap->isMarkedAny(cell));
    4893           0 : 
    4894           0 :                 /*
    4895             :                  * If the cycle collector isn't allowed to collect an object
    4896             :                  * after a non-incremental GC has run, then it isn't allowed to
    4897             :                  * collect it after an incremental GC.
    4898             :                  */
    4899             :                 if (!bitmap->isMarkedGray(cell))
    4900             :                     MOZ_RELEASE_ASSERT(!incBitmap->isMarkedGray(cell));
    4901           0 : 
    4902           0 :                 thing += Arena::thingSize(kind);
    4903             :             }
    4904           0 :         }
    4905             :     }
    4906             : }
    4907             : 
    4908             : #endif // JS_GC_ZEAL
    4909             : 
    4910             : void
    4911             : GCRuntime::computeNonIncrementalMarkingForValidation(AutoTraceSession& session)
    4912             : {
    4913           0 : #ifdef JS_GC_ZEAL
    4914             :     MOZ_ASSERT(!markingValidator);
    4915             :     if (isIncremental && hasZealMode(ZealMode::IncrementalMarkingValidator))
    4916           0 :         markingValidator = js_new<MarkingValidator>(this);
    4917           0 :     if (markingValidator)
    4918           0 :         markingValidator->nonIncrementalMark(session);
    4919           0 : #endif
    4920           0 : }
    4921             : 
    4922           0 : void
    4923             : GCRuntime::validateIncrementalMarking()
    4924             : {
    4925           0 : #ifdef JS_GC_ZEAL
    4926             :     if (markingValidator)
    4927             :         markingValidator->validate();
    4928           0 : #endif
    4929           0 : }
    4930             : 
    4931           0 : void
    4932             : GCRuntime::finishMarkingValidation()
    4933             : {
    4934           0 : #ifdef JS_GC_ZEAL
    4935             :     js_delete(markingValidator.ref());
    4936             :     markingValidator = nullptr;
    4937           0 : #endif
    4938           0 : }
    4939             : 
    4940           0 : static void
    4941             : DropStringWrappers(JSRuntime* rt)
    4942             : {
    4943           0 :     /*
    4944             :      * String "wrappers" are dropped on GC because their presence would require
    4945             :      * us to sweep the wrappers in all compartments every time we sweep a
    4946             :      * compartment group.
    4947             :      */
    4948             :     for (CompartmentsIter c(rt); !c.done(); c.next()) {
    4949             :         for (Compartment::StringWrapperEnum e(c); !e.empty(); e.popFront()) {
    4950           0 :             MOZ_ASSERT(e.front().key().is<JSString*>());
    4951           0 :             e.removeFront();
    4952           0 :         }
    4953           0 :     }
    4954             : }
    4955             : 
    4956           0 : /*
    4957             :  * Group zones that must be swept at the same time.
    4958             :  *
    4959             :  * If compartment A has an edge to an unmarked object in compartment B, then we
    4960             :  * must not sweep A in a later slice than we sweep B. That's because a write
    4961             :  * barrier in A could lead to the unmarked object in B becoming marked.
    4962             :  * However, if we had already swept that object, we would be in trouble.
    4963             :  *
    4964             :  * If we consider these dependencies as a graph, then all the compartments in
    4965             :  * any strongly-connected component of this graph must be swept in the same
    4966             :  * slice.
    4967             :  *
    4968             :  * Tarjan's algorithm is used to calculate the components.
    4969             :  */
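The component grouping described in the comment above can be sketched outside the engine. The following is a minimal, self-contained Tarjan SCC implementation (a hypothetical `TarjanSCC` helper, not the engine's `ZoneComponentFinder`); it returns components in reverse topological order, which is the order in which groups can safely be swept.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Hypothetical stand-in for the zone dependency graph: node i has an
// edge to each node in adj[i]. Returns the strongly-connected
// components in reverse topological order, mirroring how the collector
// must sweep an entire component in one slice before its predecessors.
std::vector<std::vector<int>>
TarjanSCC(const std::vector<std::vector<int>>& adj)
{
    int n = int(adj.size());
    std::vector<int> index(n, -1), low(n, 0), stack;
    std::vector<bool> onStack(n, false);
    std::vector<std::vector<int>> components;
    int counter = 0;

    std::function<void(int)> strongConnect = [&](int v) {
        index[v] = low[v] = counter++;
        stack.push_back(v);
        onStack[v] = true;
        for (int w : adj[v]) {
            if (index[w] == -1) {
                strongConnect(w);
                low[v] = std::min(low[v], low[w]);
            } else if (onStack[w]) {
                low[v] = std::min(low[v], index[w]);
            }
        }
        if (low[v] == index[v]) {
            // v is the root of a component: pop the component off the stack.
            std::vector<int> comp;
            int w;
            do {
                w = stack.back();
                stack.pop_back();
                onStack[w] = false;
                comp.push_back(w);
            } while (w != v);
            components.push_back(std::move(comp));
        }
    };

    for (int v = 0; v < n; v++) {
        if (index[v] == -1)
            strongConnect(v);
    }
    return components;
}
```

With zones 0 and 1 wrapping each other and zone 1 pointing at zone 2, the cycle {0, 1} forms one component that must be swept together, while {2} can be swept on its own.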
    4970             : namespace {
    4971             : struct AddOutgoingEdgeFunctor {
    4972             :     bool needsEdge_;
    4973             :     ZoneComponentFinder& finder_;
    4974             : 
    4975             :     AddOutgoingEdgeFunctor(bool needsEdge, ZoneComponentFinder& finder)
    4976             :       : needsEdge_(needsEdge), finder_(finder)
    4977             :     {}
    4978           0 : 
    4979             :     template <typename T>
    4980             :     void operator()(T tp) {
    4981             :         TenuredCell& other = (*tp)->asTenured();
    4982           0 : 
    4983           0 :         /*
    4984             :          * Add edge to wrapped object compartment if wrapped object is not
    4985             :          * marked black, to indicate that the wrapper compartment must not
    4986             :          * be swept after the wrapped compartment.
    4987             :          */
    4988             :         if (needsEdge_) {
    4989             :             JS::Zone* zone = other.zone();
    4990           0 :             if (zone->isGCMarking())
    4991           0 :                 finder_.addEdgeTo(zone);
    4992           0 :         }
    4993           0 :     }
    4994             : };
    4995           0 : } // namespace (anonymous)
    4996             : 
    4997             : void
    4998             : Compartment::findOutgoingEdges(ZoneComponentFinder& finder)
    4999             : {
    5000           0 :     for (js::WrapperMap::Enum e(crossCompartmentWrappers); !e.empty(); e.popFront()) {
    5001             :         CrossCompartmentKey& key = e.front().mutableKey();
    5002           0 :         MOZ_ASSERT(!key.is<JSString*>());
    5003           0 :         bool needsEdge = true;
    5004           0 :         if (key.is<JSObject*>()) {
    5005           0 :             TenuredCell& other = key.as<JSObject*>()->asTenured();
    5006           0 :             needsEdge = !other.isMarkedBlack();
    5007           0 :         }
    5008           0 :         key.applyToWrapped(AddOutgoingEdgeFunctor(needsEdge, finder));
    5009             :     }
    5010           0 : }
    5011             : 
    5012           0 : void
    5013             : Zone::findOutgoingEdges(ZoneComponentFinder& finder)
    5014             : {
    5015           0 :     /*
    5016             :      * Any compartment may have a pointer to an atom in the atoms
    5017             :      * compartment, and these aren't in the cross compartment map.
    5018             :      */
    5019             :     if (Zone* zone = finder.maybeAtomsZone) {
    5020             :         MOZ_ASSERT(zone->isCollecting());
    5021           0 :         finder.addEdgeTo(zone);
    5022           0 :     }
    5023           0 : 
    5024             :     for (CompartmentsInZoneIter comp(this); !comp.done(); comp.next())
    5025             :         comp->findOutgoingEdges(finder);
    5026           0 : 
    5027           0 :     for (ZoneSet::Range r = gcSweepGroupEdges().all(); !r.empty(); r.popFront()) {
    5028             :         if (r.front()->isGCMarking())
    5029           0 :             finder.addEdgeTo(r.front());
    5030           0 :     }
    5031           0 : 
    5032             :     Debugger::findZoneEdges(this, finder);
    5033             : }
    5034           0 : 
    5035           0 : bool
    5036             : GCRuntime::findInterZoneEdges()
    5037             : {
    5038           0 :     /*
    5039             :      * Weakmaps which have keys with delegates in a different zone introduce the
    5040             :      * need for zone edges from the delegate's zone to the weakmap zone.
    5041             :      *
    5042             :      * Since the edges point into and not away from the zone the weakmap is in
    5043             :      * we must find these edges in advance and store them in a set on the Zone.
    5044             :      * If we run out of memory, we fall back to sweeping everything in one
    5045             :      * group.
    5046             :      */
    5047             : 
    5048             :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    5049             :         if (!WeakMapBase::findInterZoneEdges(zone))
    5050           0 :             return false;
    5051           0 :     }
    5052           0 : 
    5053             :     return true;
    5054             : }
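The delegate-edge rule described in the comment above can be modelled with a few hypothetical miniature types (not the engine's `WeakMapBase` API): every weakmap key whose delegate lives in another zone forces a sweep-group edge from the delegate's zone to the weakmap's zone.

```cpp
#include <set>
#include <vector>

// Hypothetical miniatures of a zone and a weakmap. delegateZones lists,
// for each key in the map, the zone holding that key's delegate.
struct MiniZone {
    std::set<MiniZone*> sweepGroupEdges;
};
struct MiniWeakMap {
    MiniZone* zone;
    std::vector<MiniZone*> delegateZones;
};

// A cross-zone delegate adds an edge pointing INTO the weakmap's zone,
// which is why the edges must be computed in advance and stored on the
// delegate's zone rather than discovered while sweeping the weakmap.
void
FindInterZoneEdges(const std::vector<MiniWeakMap>& maps)
{
    for (const MiniWeakMap& map : maps) {
        for (MiniZone* delegateZone : map.delegateZones) {
            if (delegateZone != map.zone)
                delegateZone->sweepGroupEdges.insert(map.zone);
        }
    }
}
```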
    5055           0 : 
    5056             : void
    5057             : GCRuntime::groupZonesForSweeping(JS::gcreason::Reason reason)
    5058             : {
    5059           0 : #ifdef DEBUG
    5060             :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
    5061             :         MOZ_ASSERT(zone->gcSweepGroupEdges().empty());
    5062           0 : #endif
    5063           0 : 
    5064             :     JSContext* cx = rt->mainContextFromOwnThread();
    5065             :     Zone* maybeAtomsZone = atomsZone->wasGCStarted() ? atomsZone.ref() : nullptr;
    5066           0 :     ZoneComponentFinder finder(cx->nativeStackLimit[JS::StackForSystemCode], maybeAtomsZone);
    5067           0 :     if (!isIncremental || !findInterZoneEdges())
    5068           0 :         finder.useOneComponent();
    5069           0 : 
    5070           0 : #ifdef JS_GC_ZEAL
    5071             :     // Use one component for two-slice zeal modes.
    5072             :     if (useZeal && hasIncrementalTwoSliceZealMode())
    5073             :         finder.useOneComponent();
    5074           0 : #endif
    5075           0 : 
    5076             :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    5077             :         MOZ_ASSERT(zone->isGCMarking());
    5078           0 :         finder.addNode(zone);
    5079           0 :     }
    5080           0 :     sweepGroups = finder.getResultsList();
    5081             :     currentSweepGroup = sweepGroups;
    5082           0 :     sweepGroupIndex = 0;
    5083           0 : 
    5084           0 :     for (GCZonesIter zone(rt); !zone.done(); zone.next())
    5085             :         zone->gcSweepGroupEdges().clear();
    5086           0 : 
    5087           0 : #ifdef DEBUG
    5088             :     for (Zone* head = currentSweepGroup; head; head = head->nextGroup()) {
    5089             :         for (Zone* zone = head; zone; zone = zone->nextNodeInGroup())
    5090           0 :             MOZ_ASSERT(zone->isGCMarking());
    5091           0 :     }
    5092           0 : 
    5093             :     MOZ_ASSERT_IF(!isIncremental, !currentSweepGroup->nextGroup());
    5094             :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
    5095           0 :         MOZ_ASSERT(zone->gcSweepGroupEdges().empty());
    5096           0 : #endif
    5097           0 : }
    5098             : 
    5099           0 : static void
    5100             : ResetGrayList(Compartment* comp);
    5101             : 
    5102             : void
    5103             : GCRuntime::getNextSweepGroup()
    5104             : {
    5105           0 :     currentSweepGroup = currentSweepGroup->nextGroup();
    5106             :     ++sweepGroupIndex;
    5107           0 :     if (!currentSweepGroup) {
    5108           0 :         abortSweepAfterCurrentGroup = false;
    5109           0 :         return;
    5110           0 :     }
    5111           0 : 
    5112             :     for (Zone* zone = currentSweepGroup; zone; zone = zone->nextNodeInGroup()) {
    5113             :         MOZ_ASSERT(zone->isGCMarking());
    5114           0 :         MOZ_ASSERT(!zone->isQueuedForBackgroundSweep());
    5115           0 :     }
    5116           0 : 
    5117             :     if (!isIncremental)
    5118             :         ZoneComponentFinder::mergeGroups(currentSweepGroup);
    5119           0 : 
    5120           0 :     if (abortSweepAfterCurrentGroup) {
    5121             :         MOZ_ASSERT(!isIncremental);
    5122           0 :         for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
    5123           0 :             MOZ_ASSERT(!zone->gcNextGraphComponent);
    5124           0 :             zone->setNeedsIncrementalBarrier(false);
    5125           0 :             zone->changeGCState(Zone::Mark, Zone::NoGC);
    5126           0 :             zone->gcGrayRoots().clearAndFree();
    5127           0 :         }
    5128           0 : 
    5129             :         for (SweepGroupCompartmentsIter comp(rt); !comp.done(); comp.next())
    5130             :             ResetGrayList(comp);
    5131           0 : 
    5132           0 :         abortSweepAfterCurrentGroup = false;
    5133             :         currentSweepGroup = nullptr;
    5134           0 :     }
    5135           0 : }
    5136             : 
    5137             : /*
    5138             :  * Gray marking:
    5139             :  *
    5140             :  * At the end of collection, anything reachable from a gray root that has not
    5141             :  * otherwise been marked black must be marked gray.
    5142             :  *
    5143             :  * This means that when marking things gray we must not allow marking to leave
    5144             :  * the current compartment group, as that could result in things being marked
    5145             :  * gray when they might subsequently be marked black.  To achieve this, when we
    5146             :  * find a cross compartment pointer we don't mark the referent but add it to a
    5147             :  * singly-linked list of incoming gray pointers that is stored with each
    5148             :  * compartment.
    5149             :  *
    5150             :  * The list head is stored in Compartment::gcIncomingGrayPointers and contains
    5151             :  * cross compartment wrapper objects. The next pointer is stored in the second
    5152             :  * extra slot of the cross compartment wrapper.
    5153             :  *
    5154             :  * The list is created during gray marking when one of the
    5155             :  * MarkCrossCompartmentXXX functions is called for a pointer that leaves the
    5156             :  * current compartment group.  This calls DelayCrossCompartmentGrayMarking to
    5157             :  * push the referring object onto the list.
    5158             :  *
    5159             :  * The list is traversed and then unlinked in
    5160             :  * MarkIncomingCrossCompartmentPointers.
    5161             :  */
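The list discipline described above can be modelled in a few lines (hypothetical `Wrapper`/`Compartment` stand-ins, with `std::optional` playing the role of the reserved slot: a disengaged optional means "not on any list", while a held `nullptr` means "on the list, at the end").

```cpp
#include <optional>
#include <vector>

// Hypothetical stand-ins for a cross-compartment wrapper and the
// compartment of its referent.
struct Wrapper {
    std::optional<Wrapper*> grayLink;  // models the reserved slot
};
struct Compartment {
    Wrapper* gcIncomingGrayPointers = nullptr;
};

// Sketch of the push in DelayCrossCompartmentGrayMarking: link src at
// the head unless it is already on the list (slot engaged).
void
DelayGrayMarking(Compartment& comp, Wrapper* src)
{
    if (!src->grayLink) {
        src->grayLink = comp.gcIncomingGrayPointers;
        comp.gcIncomingGrayPointers = src;
    }
}

// Sketch of the traverse-and-unlink pass in
// MarkIncomingCrossCompartmentPointers: visit each wrapper once, in
// most-recently-pushed order, clearing each slot as we go.
std::vector<Wrapper*>
DrainGrayList(Compartment& comp)
{
    std::vector<Wrapper*> visited;
    Wrapper* obj = comp.gcIncomingGrayPointers;
    while (obj) {
        visited.push_back(obj);
        Wrapper* next = *obj->grayLink;
        obj->grayLink.reset();
        obj = next;
    }
    comp.gcIncomingGrayPointers = nullptr;
    return visited;
}
```

The undefined-vs-null distinction is what makes the duplicate check cheap: a wrapper already on the list always has an engaged slot, even when its next pointer is null.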
    5162             : 
    5163             : static bool
    5164             : IsGrayListObject(JSObject* obj)
    5165             : {
    5166           0 :     MOZ_ASSERT(obj);
    5167             :     return obj->is<CrossCompartmentWrapperObject>() && !IsDeadProxyObject(obj);
    5168           0 : }
    5169           0 : 
    5170             : /* static */ unsigned
    5171             : ProxyObject::grayLinkReservedSlot(JSObject* obj)
    5172             : {
    5173          49 :     MOZ_ASSERT(IsGrayListObject(obj));
    5174             :     return CrossCompartmentWrapperObject::GrayLinkReservedSlot;
    5175           0 : }
    5176           0 : 
    5177             : #ifdef DEBUG
    5178             : static void
    5179             : AssertNotOnGrayList(JSObject* obj)
    5180             : {
    5181           0 :     MOZ_ASSERT_IF(IsGrayListObject(obj),
    5182             :                   GetProxyReservedSlot(obj, ProxyObject::grayLinkReservedSlot(obj)).isUndefined());
    5183           0 : }
    5184             : #endif
    5185           0 : 
    5186             : static void
    5187             : AssertNoWrappersInGrayList(JSRuntime* rt)
    5188             : {
    5189           0 : #ifdef DEBUG
    5190             :     for (CompartmentsIter c(rt); !c.done(); c.next()) {
    5191             :         MOZ_ASSERT(!c->gcIncomingGrayPointers);
    5192           0 :         for (Compartment::NonStringWrapperEnum e(c); !e.empty(); e.popFront())
    5193           0 :             AssertNotOnGrayList(&e.front().value().unbarrieredGet().toObject());
    5194           0 :     }
    5195           0 : #endif
    5196             : }
    5197             : 
    5198           0 : static JSObject*
    5199             : CrossCompartmentPointerReferent(JSObject* obj)
    5200             : {
    5201           0 :     MOZ_ASSERT(IsGrayListObject(obj));
    5202             :     return &obj->as<ProxyObject>().private_().toObject();
    5203           0 : }
    5204           0 : 
    5205             : static JSObject*
    5206             : NextIncomingCrossCompartmentPointer(JSObject* prev, bool unlink)
    5207             : {
    5208           0 :     unsigned slot = ProxyObject::grayLinkReservedSlot(prev);
    5209             :     JSObject* next = GetProxyReservedSlot(prev, slot).toObjectOrNull();
    5210           0 :     MOZ_ASSERT_IF(next, IsGrayListObject(next));
    5211           0 : 
    5212           0 :     if (unlink)
    5213             :         SetProxyReservedSlot(prev, slot, UndefinedValue());
    5214           0 : 
    5215           0 :     return next;
    5216             : }
    5217           0 : 
    5218             : void
    5219             : js::gc::DelayCrossCompartmentGrayMarking(JSObject* src)
    5220             : {
    5221           0 :     MOZ_ASSERT(IsGrayListObject(src));
    5222             :     MOZ_ASSERT(src->isMarkedGray());
    5223           0 : 
    5224           0 :     AutoTouchingGrayThings tgt;
    5225             : 
    5226           0 :     /* Called from MarkCrossCompartmentXXX functions. */
    5227             :     unsigned slot = ProxyObject::grayLinkReservedSlot(src);
    5228             :     JSObject* dest = CrossCompartmentPointerReferent(src);
    5229           0 :     Compartment* comp = dest->compartment();
    5230           0 : 
    5231           0 :     if (GetProxyReservedSlot(src, slot).isUndefined()) {
    5232             :         SetProxyReservedSlot(src, slot, ObjectOrNullValue(comp->gcIncomingGrayPointers));
    5233           0 :         comp->gcIncomingGrayPointers = src;
    5234           0 :     } else {
    5235           0 :         MOZ_ASSERT(GetProxyReservedSlot(src, slot).isObjectOrNull());
    5236             :     }
    5237           0 : 
    5238             : #ifdef DEBUG
    5239             :     /*
    5240             :      * Assert that the object is in our list, also walking the list to check its
    5241             :      * integrity.
    5242             :      */
    5243             :     JSObject* obj = comp->gcIncomingGrayPointers;
    5244             :     bool found = false;
    5245           0 :     while (obj) {
    5246           0 :         if (obj == src)
    5247           0 :             found = true;
    5248           0 :         obj = NextIncomingCrossCompartmentPointer(obj, false);
    5249           0 :     }
    5250           0 :     MOZ_ASSERT(found);
    5251             : #endif
    5252           0 : }
    5253             : 
    5254           0 : static void
    5255             : MarkIncomingCrossCompartmentPointers(JSRuntime* rt, MarkColor color)
    5256             : {
    5257           0 :     MOZ_ASSERT(color == MarkColor::Black || color == MarkColor::Gray);
    5258             : 
    5259           0 :     static const gcstats::PhaseKind statsPhases[] = {
    5260             :         gcstats::PhaseKind::SWEEP_MARK_INCOMING_BLACK,
    5261             :         gcstats::PhaseKind::SWEEP_MARK_INCOMING_GRAY
    5262             :     };
    5263             :     gcstats::AutoPhase ap1(rt->gc.stats(), statsPhases[unsigned(color)]);
    5264             : 
    5265           0 :     bool unlinkList = color == MarkColor::Gray;
    5266             : 
    5267           0 :     for (SweepGroupCompartmentsIter c(rt); !c.done(); c.next()) {
    5268             :         MOZ_ASSERT_IF(color == MarkColor::Gray, c->zone()->isGCMarkingGray());
    5269           0 :         MOZ_ASSERT_IF(color == MarkColor::Black, c->zone()->isGCMarkingBlack());
    5270           0 :         MOZ_ASSERT_IF(c->gcIncomingGrayPointers, IsGrayListObject(c->gcIncomingGrayPointers));
    5271           0 : 
    5272           0 :         for (JSObject* src = c->gcIncomingGrayPointers;
    5273             :              src;
    5274           0 :              src = NextIncomingCrossCompartmentPointer(src, unlinkList))
    5275           0 :         {
    5276           0 :             JSObject* dst = CrossCompartmentPointerReferent(src);
    5277             :             MOZ_ASSERT(dst->compartment() == c);
    5278           0 : 
    5279           0 :             if (color == MarkColor::Gray) {
    5280             :                 if (IsMarkedUnbarriered(rt, &src) && src->asTenured().isMarkedGray())
    5281           0 :                     TraceManuallyBarrieredEdge(&rt->gc.marker, &dst,
    5282           0 :                                                "cross-compartment gray pointer");
    5283           0 :             } else {
    5284             :                 if (IsMarkedUnbarriered(rt, &src) && !src->asTenured().isMarkedGray())
    5285             :                     TraceManuallyBarrieredEdge(&rt->gc.marker, &dst,
    5286           0 :                                                "cross-compartment black pointer");
    5287           0 :             }
    5288             :         }
    5289             : 
    5290             :         if (unlinkList)
    5291             :             c->gcIncomingGrayPointers = nullptr;
    5292           0 :     }
    5293           0 : 
    5294             :     auto unlimited = SliceBudget::unlimited();
    5295             :     MOZ_RELEASE_ASSERT(rt->gc.marker.drainMarkStack(unlimited));
    5296             : }
    5297           0 : 
    5298           0 : static bool
    5299             : RemoveFromGrayList(JSObject* wrapper)
    5300             : {
    5301          55 :     AutoTouchingGrayThings tgt;
    5302             : 
    5303          55 :     if (!IsGrayListObject(wrapper))
    5304             :         return false;
    5305           0 : 
    5306             :     unsigned slot = ProxyObject::grayLinkReservedSlot(wrapper);
    5307             :     if (GetProxyReservedSlot(wrapper, slot).isUndefined())
    5308          49 :         return false;  /* Not on our list. */
    5309          98 : 
    5310             :     JSObject* tail = GetProxyReservedSlot(wrapper, slot).toObjectOrNull();
    5311             :     SetProxyReservedSlot(wrapper, slot, UndefinedValue());
    5312           0 : 
    5313           0 :     Compartment* comp = CrossCompartmentPointerReferent(wrapper)->compartment();
    5314             :     JSObject* obj = comp->gcIncomingGrayPointers;
    5315           0 :     if (obj == wrapper) {
    5316           0 :         comp->gcIncomingGrayPointers = tail;
    5317           0 :         return true;
    5318           0 :     }
    5319           0 : 
    5320             :     while (obj) {
    5321             :         unsigned slot = ProxyObject::grayLinkReservedSlot(obj);
    5322           0 :         JSObject* next = GetProxyReservedSlot(obj, slot).toObjectOrNull();
    5323           0 :         if (next == wrapper) {
    5324           0 :             js::detail::SetProxyReservedSlotUnchecked(obj, slot, ObjectOrNullValue(tail));
    5325           0 :             return true;
    5326           0 :         }
    5327           0 :         obj = next;
    5328             :     }
    5329             : 
    5330             :     MOZ_CRASH("object not found in gray link list");
    5331             : }
    5332           0 : 
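The unlink logic in RemoveFromGrayList above walks a singly linked list that is threaded through object slots: check the head first, then scan for the predecessor. The same shape can be sketched standalone; GrayNode here is a hypothetical stand-in, not SpiderMonkey's ProxyObject with its reserved slots.

```cpp
#include <cassert>

// GrayNode is a hypothetical stand-in: `next` models the gray-link reserved
// slot and `onList` models that slot holding a defined value.
struct GrayNode {
    GrayNode* next = nullptr;
    bool onList = false;
};

// Unlink `wrapper` from the list headed at `*head`, following the shape of
// RemoveFromGrayList: check the head first, then scan for the predecessor.
bool RemoveFromList(GrayNode** head, GrayNode* wrapper) {
    if (!wrapper->onList)
        return false;               // not on our list
    GrayNode* tail = wrapper->next;
    wrapper->next = nullptr;
    wrapper->onList = false;
    if (*head == wrapper) {
        *head = tail;
        return true;
    }
    for (GrayNode* obj = *head; obj; obj = obj->next) {
        if (obj->next == wrapper) {
            obj->next = tail;       // splice the wrapper out
            return true;
        }
    }
    assert(false && "object not found in gray link list");  // MOZ_CRASH analogue
    return false;
}
```

As in the real function, reaching the end of the list without finding the wrapper is an invariant violation, because a defined slot means the object must be linked somewhere.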
    5333             : static void
    5334             : ResetGrayList(Compartment* comp)
    5335             : {
    5336             :     JSObject* src = comp->gcIncomingGrayPointers;
    5337             :     while (src)
    5338           0 :         src = NextIncomingCrossCompartmentPointer(src, true);
    5339           0 :     comp->gcIncomingGrayPointers = nullptr;
    5340           0 : }
    5341           0 : 
    5342             : void
    5343             : js::NotifyGCNukeWrapper(JSObject* obj)
    5344             : {
    5345          43 :     /*
    5346             :      * References to the target of the wrapper are being removed, so we no
    5347             :      * longer have to remember to mark it.
    5348             :      */
    5349             :     RemoveFromGrayList(obj);
    5350             : }
    5351          43 : 
    5352          43 : enum {
    5353             :     JS_GC_SWAP_OBJECT_A_REMOVED = 1 << 0,
    5354             :     JS_GC_SWAP_OBJECT_B_REMOVED = 1 << 1
    5355             : };
    5356             : 
    5357             : unsigned
    5358             : js::NotifyGCPreSwap(JSObject* a, JSObject* b)
    5359             : {
    5360           0 :     /*
    5361             :      * Two objects in the same compartment are about to have their contents
    5362             :      * swapped.  If either of them is in our gray pointer list, we remove it
    5363             :      * from the list, returning a bitset indicating what happened.
    5364             :      */
    5365             :     return (RemoveFromGrayList(a) ? JS_GC_SWAP_OBJECT_A_REMOVED : 0) |
    5366             :            (RemoveFromGrayList(b) ? JS_GC_SWAP_OBJECT_B_REMOVED : 0);
    5367          12 : }
    5368           0 : 
    5369             : void
    5370             : js::NotifyGCPostSwap(JSObject* a, JSObject* b, unsigned removedFlags)
    5371             : {
    5372           0 :     /*
    5373             :      * Two objects in the same compartment have had their contents swapped.  If
    5374             :      * either of them was in our gray pointer list, we re-add it.
    5375             :      */
    5376             :     if (removedFlags & JS_GC_SWAP_OBJECT_A_REMOVED)
    5377             :         DelayCrossCompartmentGrayMarking(b);
    5378           6 :     if (removedFlags & JS_GC_SWAP_OBJECT_B_REMOVED)
    5379           0 :         DelayCrossCompartmentGrayMarking(a);
    5380           0 : }
    5381           0 : 
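The pre/post-swap protocol above returns a bitset from NotifyGCPreSwap and consumes it in NotifyGCPostSwap. Note that list membership follows the swapped contents: if object A was removed before the swap, it is B that gets re-added. A toy sketch of the protocol follows; the bool-per-object "gray list" is illustrative, not the real slot-threaded list.

```cpp
#include <cassert>

// Toy sketch of the swap notification protocol above. The bool-per-object
// "gray list" is illustrative; the real code tracks membership via proxy
// reserved slots.
enum {
    SWAP_OBJECT_A_REMOVED = 1 << 0,
    SWAP_OBJECT_B_REMOVED = 1 << 1
};

struct Obj { bool onGrayList = false; };

static bool RemoveFromToyList(Obj* o) {
    bool wasOnList = o->onGrayList;
    o->onGrayList = false;
    return wasOnList;
}

static void DelayGrayMarking(Obj* o) { o->onGrayList = true; }

unsigned PreSwap(Obj* a, Obj* b) {
    return (RemoveFromToyList(a) ? SWAP_OBJECT_A_REMOVED : 0) |
           (RemoveFromToyList(b) ? SWAP_OBJECT_B_REMOVED : 0);
}

// After the contents are swapped, list membership follows the contents:
// if `a` was on the list before the swap, it is `b` that gets re-added.
void PostSwap(Obj* a, Obj* b, unsigned removedFlags) {
    if (removedFlags & SWAP_OBJECT_A_REMOVED)
        DelayGrayMarking(b);
    if (removedFlags & SWAP_OBJECT_B_REMOVED)
        DelayGrayMarking(a);
}
```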
    5382           6 : IncrementalProgress
    5383             : GCRuntime::endMarkingSweepGroup(FreeOp* fop, SliceBudget& budget)
    5384             : {
    5385           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_MARK);
    5386             : 
    5387           0 :     /*
    5388             :      * Mark any incoming black pointers from previously swept compartments
    5389             :      * whose referents are not marked. This can occur when gray cells become
    5390             :      * black by the action of UnmarkGray.
    5391             :      */
    5392             :     MarkIncomingCrossCompartmentPointers(rt, MarkColor::Black);
    5393             :     markWeakReferencesInCurrentGroup(gcstats::PhaseKind::SWEEP_MARK_WEAK);
    5394           0 : 
    5395           0 :     /*
    5396             :      * Change the state of the current group to MarkGray to restrict marking
    5397             :      * to this group.  Note that there may be pointers to the atoms zone, and
    5398             :      * these will be marked through, as they are not marked with
    5399             :      * TraceCrossCompartmentEdge.
    5400             :      */
    5401             :     for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next())
    5402             :         zone->changeGCState(Zone::Mark, Zone::MarkGray);
    5403           0 :     marker.setMarkColorGray();
    5404           0 : 
    5405           0 :     /* Mark incoming gray pointers from previously swept compartments. */
    5406             :     MarkIncomingCrossCompartmentPointers(rt, MarkColor::Gray);
    5407             : 
    5408           0 :     /* Mark gray roots and mark transitively inside the current compartment group. */
    5409             :     markGrayReferencesInCurrentGroup(gcstats::PhaseKind::SWEEP_MARK_GRAY);
    5410             :     markWeakReferencesInCurrentGroup(gcstats::PhaseKind::SWEEP_MARK_GRAY_WEAK);
    5411           0 : 
    5412           0 :     /* Restore marking state. */
    5413             :     for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next())
    5414             :         zone->changeGCState(Zone::MarkGray, Zone::Mark);
    5415           0 :     MOZ_ASSERT(marker.isDrained());
    5416           0 :     marker.setMarkColorBlack();
    5417           0 : 
    5418           0 :     /* We must not yield after this point before we start sweeping the group. */
    5419             :     safeToYield = false;
    5420             : 
    5421           0 :     return Finished;
    5422             : }
    5423           0 : 
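endMarkingSweepGroup above round-trips each zone's state from Mark to MarkGray while gray roots are traced, then restores it. A minimal sketch of such a checked state transition, with an illustrative enum and ToyZone rather than SpiderMonkey's Zone:

```cpp
#include <cassert>

// Illustrative states; SpiderMonkey's Zone has more states and stricter checks.
enum class GCState { Mark, MarkGray, Sweep, Finished };

struct ToyZone {
    GCState state = GCState::Mark;

    // Assert the expected source state before moving to the new one, as
    // Zone::changeGCState does in the listing above.
    void changeGCState(GCState from, GCState to) {
        assert(state == from && "unexpected GC state transition");
        state = to;
    }
};
```

Asserting the source state makes any missed restore (for example, a zone left in MarkGray) fail fast at the next transition rather than corrupting marking.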
    5424             : // Causes the given WeakCache to be swept when run.
    5425             : class ImmediateSweepWeakCacheTask : public GCParallelTaskHelper<ImmediateSweepWeakCacheTask>
    5426             : {
    5427           0 :     JS::detail::WeakCacheBase& cache;
    5428             : 
    5429             :     ImmediateSweepWeakCacheTask(const ImmediateSweepWeakCacheTask&) = delete;
    5430             : 
    5431             :   public:
    5432             :     ImmediateSweepWeakCacheTask(JSRuntime* rt, JS::detail::WeakCacheBase& wc)
    5433             :       : GCParallelTaskHelper(rt), cache(wc)
    5434             :     {}
    5435           0 : 
    5436             :     ImmediateSweepWeakCacheTask(ImmediateSweepWeakCacheTask&& other)
    5437             :       : GCParallelTaskHelper(std::move(other)), cache(other.cache)
    5438             :     {}
    5439           0 : 
    5440             :     void run() {
    5441             :         cache.sweep();
    5442             :     }
    5443           0 : };
    5444             : 
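ImmediateSweepWeakCacheTask above is move-only and inherits from a CRTP helper so the base can invoke the derived run() without virtual dispatch. A self-contained sketch of that pattern, with illustrative names in place of the real GCParallelTaskHelper, which also carries runtime state and thread dispatch:

```cpp
#include <cassert>
#include <utility>

// Static (CRTP) dispatch: the base calls the derived run() directly, with no
// virtual call. Names are illustrative, not SpiderMonkey's API.
template <typename Derived>
struct TaskHelper {
    void runTask() { static_cast<Derived*>(this)->run(); }
};

struct ToyCache {
    bool needsSweep = true;
    void sweep() { needsSweep = false; }
};

struct SweepCacheTask : TaskHelper<SweepCacheTask> {
    ToyCache& cache;
    explicit SweepCacheTask(ToyCache& c) : cache(c) {}
    SweepCacheTask(const SweepCacheTask&) = delete;                 // not copyable
    SweepCacheTask(SweepCacheTask&& other) : cache(other.cache) {}  // movable
    void run() { cache.sweep(); }
};
```

Move-only semantics matter here because the tasks are emplaced into a vector (the WeakCacheTaskVector below) and must not be duplicated while they may be running on helper threads.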
    5445             : static void
    5446             : UpdateAtomsBitmap(JSRuntime* runtime)
    5447             : {
    5448           0 :     DenseBitmap marked;
    5449             :     if (runtime->gc.atomMarking.computeBitmapFromChunkMarkBits(runtime, marked)) {
    5450           0 :         for (GCZonesIter zone(runtime); !zone.done(); zone.next())
    5451           0 :             runtime->gc.atomMarking.refineZoneBitmapForCollectedZone(zone, marked);
    5452           0 :     } else {
    5453           0 :         // Ignore OOM in computeBitmapFromChunkMarkBits. The
    5454             :         // refineZoneBitmapForCollectedZone call can only remove atoms from the
    5455             :         // zone bitmap, so it is conservative to just not call it.
    5456             :     }
    5457             : 
    5458             :     runtime->gc.atomMarking.markAtomsUsedByUncollectedZones(runtime);
    5459             : 
    5460           0 :     // For convenience, sweep these tables non-incrementally as part of bitmap
    5461             :     // sweeping; they are likely to be much smaller than the main atoms table.
    5462             :     runtime->unsafeSymbolRegistry().sweep();
    5463             :     for (RealmsIter realm(runtime); !realm.done(); realm.next())
    5464           0 :         realm->sweepVarNames();
    5465           0 : }
    5466           0 : 
    5467           0 : static void
    5468             : SweepCCWrappers(GCParallelTask* task)
    5469             : {
    5470           0 :     JSRuntime* runtime = task->runtime();
    5471             :     for (SweepGroupCompartmentsIter c(runtime); !c.done(); c.next())
    5472           0 :         c->sweepCrossCompartmentWrappers();
    5473           0 : }
    5474           0 : 
    5475           0 : static void
    5476             : SweepObjectGroups(GCParallelTask* task)
    5477             : {
    5478           0 :     JSRuntime* runtime = task->runtime();
    5479             :     for (SweepGroupRealmsIter r(runtime); !r.done(); r.next())
    5480           0 :         r->sweepObjectGroups();
    5481           0 : }
    5482           0 : 
    5483           0 : static void
    5484             : SweepMisc(GCParallelTask* task)
    5485             : {
    5486           0 :     JSRuntime* runtime = task->runtime();
    5487             :     for (SweepGroupRealmsIter r(runtime); !r.done(); r.next()) {
    5488           0 :         r->sweepGlobalObject();
    5489           0 :         r->sweepTemplateObjects();
    5490           0 :         r->sweepSavedStacks();
    5491           0 :         r->sweepSelfHostingScriptSource();
    5492           0 :         r->sweepObjectRealm();
    5493           0 :         r->sweepRegExps();
    5494           0 :     }
    5495           0 : }
    5496             : 
    5497           0 : static void
    5498             : SweepCompressionTasks(GCParallelTask* task)
    5499             : {
    5500           0 :     JSRuntime* runtime = task->runtime();
    5501             : 
    5502           0 :     AutoLockHelperThreadState lock;
    5503             : 
    5504           0 :     // Attach finished compression tasks.
    5505             :     auto& finished = HelperThreadState().compressionFinishedList(lock);
    5506             :     for (size_t i = 0; i < finished.length(); i++) {
    5507           0 :         if (finished[i]->runtimeMatches(runtime)) {
    5508           0 :             UniquePtr<SourceCompressionTask> compressionTask(std::move(finished[i]));
    5509           0 :             HelperThreadState().remove(finished, &i);
    5510           0 :             compressionTask->complete();
    5511           0 :         }
    5512           0 :     }
    5513             : 
    5514             :     // Sweep pending tasks that are holding onto should-be-dead ScriptSources.
    5515             :     auto& pending = HelperThreadState().compressionPendingList(lock);
    5516             :     for (size_t i = 0; i < pending.length(); i++) {
    5517           0 :         if (pending[i]->shouldCancel())
    5518           0 :             HelperThreadState().remove(pending, &i);
    5519           0 :     }
    5520           0 : }
    5521             : 
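SweepCompressionTasks above removes entries from the finished and pending lists while iterating them by index. A sketch of that erase-while-scanning pattern follows; the index-back-step behaviour of the helper is an assumption suggested by the `&i` parameter, and this is not the real HelperThreadState::remove.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Assumption (suggested by the `&i` parameter in the listing): remove()
// erases the element and steps the index back so the caller's ++i
// re-examines the slot that the next element shifted into.
template <typename T>
void RemoveAt(std::vector<T>& v, size_t* i) {
    v.erase(v.begin() + static_cast<std::ptrdiff_t>(*i));
    --*i;  // unsigned wraparound at i == 0 is well defined and cancels ++i
}

// Drop every even value, the way the sweep above drops matching tasks.
void SweepEvens(std::vector<int>& v) {
    for (size_t i = 0; i < v.size(); i++) {
        if (v[i] % 2 == 0)
            RemoveAt(v, &i);
    }
}
```

Without the back-step, erasing element i would shift the next element into slot i and the loop's increment would skip it.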
    5522           0 : static void
    5523             : SweepWeakMaps(GCParallelTask* task)
    5524             : {
    5525           0 :     JSRuntime* runtime = task->runtime();
    5526             :     for (SweepGroupZonesIter zone(runtime); !zone.done(); zone.next()) {
    5527           0 :         /* Clear all weakrefs that point to unmarked things. */
    5528           0 :         for (auto edge : zone->gcWeakRefs()) {
    5529             :             /* Edges may be present multiple times, so may already be nulled. */
    5530           0 :             if (*edge && IsAboutToBeFinalizedDuringSweep(**edge))
    5531             :                 *edge = nullptr;
    5532           0 :         }
    5533           0 :         zone->gcWeakRefs().clear();
    5534             : 
    5535           0 :         /* No need to look up any more weakmap keys from this sweep group. */
    5536             :         AutoEnterOOMUnsafeRegion oomUnsafe;
    5537             :         if (!zone->gcWeakKeys().clear())
    5538           0 :             oomUnsafe.crash("clearing weak keys in beginSweepingSweepGroup()");
    5539           0 : 
    5540           0 :         zone->sweepWeakMaps();
    5541             :     }
    5542           0 : }
    5543             : 
    5544           0 : static void
    5545             : SweepUniqueIds(GCParallelTask* task)
    5546             : {
    5547           0 :     for (SweepGroupZonesIter zone(task->runtime()); !zone.done(); zone.next())
    5548             :         zone->sweepUniqueIds();
    5549           0 : }
    5550           0 : 
    5551           0 : void
    5552             : GCRuntime::startTask(GCParallelTask& task, gcstats::PhaseKind phase,
    5553             :                      AutoLockHelperThreadState& locked)
    5554           0 : {
    5555             :     if (!task.startWithLockHeld(locked)) {
    5556             :         AutoUnlockHelperThreadState unlock(locked);
    5557           0 :         gcstats::AutoPhase ap(stats(), phase);
    5558           0 :         task.runFromMainThread(rt);
    5559           0 :     }
    5560           0 : }
    5561             : 
    5562           0 : void
    5563             : GCRuntime::joinTask(GCParallelTask& task, gcstats::PhaseKind phase,
    5564             :                     AutoLockHelperThreadState& locked)
    5565           0 : {
    5566             :     {
    5567             :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::JOIN_PARALLEL_TASKS);
    5568             :         task.joinWithLockHeld(locked);
    5569           0 :     }
    5570           0 :     stats().recordParallelPhase(phase, task.duration());
    5571             : }
    5572           0 : 
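GCRuntime::startTask above tries to hand the task to a helper thread and, if that fails, runs it synchronously so sweeping still makes progress. A toy sketch of that start-or-fallback shape; ToyTask stands in for GCParallelTask and the bool flag stands in for startWithLockHeld() succeeding.

```cpp
#include <cassert>
#include <functional>
#include <utility>

// ToyTask stands in for GCParallelTask; `helperAvailable` stands in for
// startWithLockHeld() succeeding. Names are illustrative.
struct ToyTask {
    std::function<void()> work;
    bool ranOnHelper = false;

    explicit ToyTask(std::function<void()> w) : work(std::move(w)) {}

    bool start(bool helperAvailable) {
        if (!helperAvailable)
            return false;      // caller must fall back to the main thread
        ranOnHelper = true;    // the real code hands the task to a thread
        work();
        return true;
    }
};

// Mirror of the startTask logic: try the helper thread, else run inline.
void StartTask(ToyTask& task, bool helperAvailable) {
    if (!task.start(helperAvailable))
        task.work();           // fallback keeps the GC making progress
}
```

The fallback path is why startTask takes the stats phase: when run from the main thread the work must be accounted to that phase, whereas helper-thread time is recorded at join (see joinTask above).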
    5573           0 : void
    5574             : GCRuntime::sweepDebuggerOnMainThread(FreeOp* fop)
    5575             : {
    5576           0 :     // Detach unreachable debuggers and global objects from each other.
    5577             :     // This can modify weakmaps and so must happen before weakmap sweeping.
    5578             :     Debugger::sweepAll(fop);
    5579             : 
    5580           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_COMPARTMENTS);
    5581             : 
    5582           0 :     // Sweep debug environment information. This performs lookups in the Zone's
    5583             :     // unique IDs table and so must not happen in parallel with sweeping that
    5584             :     // table.
    5585             :     {
    5586             :         gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::SWEEP_MISC);
    5587             :         for (SweepGroupRealmsIter r(rt); !r.done(); r.next())
    5588           0 :             r->sweepDebugEnvironments();
    5589           0 :     }
    5590           0 : 
    5591             :     // Sweep breakpoints. This is done here to keep it with the other debugger
    5592             :     // sweeping, although note that it can cause JIT code to be patched.
    5593             :     {
    5594             :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_BREAKPOINT);
    5595             :         for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next())
    5596           0 :             zone->sweepBreakpoints(fop);
    5597           0 :     }
    5598           0 : }
    5599             : 
    5600           0 : void
    5601             : GCRuntime::sweepJitDataOnMainThread(FreeOp* fop)
    5602             : {
    5603           0 :     {
    5604             :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_JIT_DATA);
    5605             : 
    5606           0 :         if (initialState != State::NotActive) {
    5607             :             // Cancel any active or pending off thread compilations. We also did
    5608           0 :             // this before marking (in DiscardJITCodeForGC) so this is a no-op
    5609             :             // for non-incremental GCs.
    5610             :             js::CancelOffThreadIonCompile(rt, JS::Zone::Sweep);
    5611             :         }
    5612           0 : 
    5613             :         for (SweepGroupRealmsIter r(rt); !r.done(); r.next())
    5614             :             r->sweepJitRealm();
    5615           0 : 
    5616           0 :         for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
    5617             :             if (jit::JitZone* jitZone = zone->jitZone())
    5618           0 :                 jitZone->sweep();
    5619           0 :         }
    5620           0 : 
    5621             :         // Bug 1071218: the following method has not yet been refactored to
    5622             :         // work on a single zone-group at once.
    5623             : 
    5624             :         // Sweep entries containing about-to-be-finalized JitCode and
    5625             :         // update relocated TypeSet::Types inside the JitcodeGlobalTable.
    5626             :         jit::JitRuntime::SweepJitcodeGlobalTable(rt);
    5627             :     }
    5628           0 : 
    5629             :     if (initialState != State::NotActive) {
    5630             :         gcstats::AutoPhase apdc(stats(), gcstats::PhaseKind::SWEEP_DISCARD_CODE);
    5631           0 :         for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next())
    5632           0 :             zone->discardJitCode(fop);
    5633           0 :     }
    5634           0 : 
    5635             :     {
    5636             :         gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::SWEEP_TYPES);
    5637             :         gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::SWEEP_TYPES_BEGIN);
    5638           0 :         for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next())
    5639           0 :             zone->beginSweepTypes(releaseObservedTypes && !zone->isPreservingCode());
    5640           0 :     }
    5641           0 : }
    5642             : 
    5643           0 : using WeakCacheTaskVector = mozilla::Vector<ImmediateSweepWeakCacheTask, 0, SystemAllocPolicy>;
    5644             : 
    5645             : enum WeakCacheLocation
    5646             : {
    5647             :     RuntimeWeakCache,
    5648             :     ZoneWeakCache
    5649             : };
    5650             : 
    5651             : // Call a functor for all weak caches that need to be swept in the current
    5652             : // sweep group.
    5653             : template <typename Functor>
    5654             : static inline bool
    5655             : IterateWeakCaches(JSRuntime* rt, Functor f)
    5656             : {
    5657           0 :     for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
    5658             :         for (JS::detail::WeakCacheBase* cache : zone->weakCaches()) {
    5659           0 :             if (!f(cache, ZoneWeakCache))
    5660           0 :                 return false;
    5661           0 :         }
    5662           0 :     }
    5663             : 
    5664             :     for (JS::detail::WeakCacheBase* cache : rt->weakCaches()) {
    5665             :         if (!f(cache, RuntimeWeakCache))
    5666           0 :             return false;
    5667           0 :     }
    5668             : 
    5669             :     return true;
    5670             : }
    5671             : 
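IterateWeakCaches above applies a functor to every zone-level and then every runtime-level cache, aborting as soon as the functor returns false. That early-exit contract can be sketched standalone; the int "caches" are stand-ins for WeakCacheBase pointers.

```cpp
#include <cassert>
#include <vector>

// Sketch of the early-exit functor iteration: the callback returns false to
// abort, and the iterator propagates that result to the caller.
template <typename Functor>
bool ForEachCache(const std::vector<int>& zoneCaches,
                  const std::vector<int>& runtimeCaches,
                  Functor f) {
    for (int cache : zoneCaches) {
        if (!f(cache))
            return false;   // stop immediately and report failure
    }
    for (int cache : runtimeCaches) {
        if (!f(cache))
            return false;
    }
    return true;            // every cache was visited
}
```

Returning bool rather than throwing lets the caller (PrepareWeakCacheTasks below) treat a failed emplaceBack as an ordinary OOM signal and fall back to main-thread sweeping.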
    5672             : static bool
    5673             : PrepareWeakCacheTasks(JSRuntime* rt, WeakCacheTaskVector* immediateTasks)
    5674             : {
    5675           0 :     // Start incremental sweeping for caches that support it or add to a vector
    5676             :     // of sweep tasks to run on a helper thread.
    5677             : 
    5678             :     MOZ_ASSERT(immediateTasks->empty());
    5679             : 
    5680           0 :     bool ok = IterateWeakCaches(rt, [&] (JS::detail::WeakCacheBase* cache,
    5681             :                                          WeakCacheLocation location)
    5682           0 :     {
    5683           0 :         if (!cache->needsSweep())
    5684             :             return true;
    5685           0 : 
    5686             :         // Caches that support incremental sweeping will be swept later.
    5687             :         if (location == ZoneWeakCache && cache->setNeedsIncrementalBarrier(true))
    5688             :             return true;
    5689           0 : 
    5690             :         return immediateTasks->emplaceBack(rt, *cache);
    5691             :     });
    5692           0 : 
    5693           0 :     if (!ok)
    5694             :         immediateTasks->clearAndFree();
    5695           0 : 
    5696           0 :     return ok;
    5697             : }
    5698           0 : 
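PrepareWeakCacheTasks above triages each cache three ways: nothing to sweep, sweep incrementally later (barrier enabled), or sweep now as an immediate task. A sketch of that triage, using a ToyCache stand-in where `supportsIncremental` models setNeedsIncrementalBarrier(true) succeeding:

```cpp
#include <cassert>
#include <vector>

// ToyCache is a stand-in for WeakCacheBase; supportsIncremental models
// setNeedsIncrementalBarrier(true) succeeding.
struct ToyCache {
    bool needsSweep;
    bool supportsIncremental;
    bool barrier = false;
};

// Returns the caches that need an immediate sweep task; caches that can be
// swept incrementally just get their barrier enabled, as in the listing.
std::vector<ToyCache*> PrepareTasks(std::vector<ToyCache>& caches) {
    std::vector<ToyCache*> immediate;
    for (ToyCache& cache : caches) {
        if (!cache.needsSweep)
            continue;                  // nothing to do for this cache
        if (cache.supportsIncremental) {
            cache.barrier = true;      // swept incrementally later
            continue;
        }
        immediate.push_back(&cache);   // must be swept as its own task
    }
    return immediate;
}
```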
    5699             : static void
    5700             : SweepWeakCachesOnMainThread(JSRuntime* rt)
    5701             : {
    5702           0 :     // If we ran out of memory, do all the work on the main thread.
    5703             :     gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::SWEEP_WEAK_CACHES);
    5704             :     IterateWeakCaches(rt, [&] (JS::detail::WeakCacheBase* cache, WeakCacheLocation location) {
    5705           0 :         if (cache->needsIncrementalBarrier())
    5706           0 :             cache->setNeedsIncrementalBarrier(false);
    5707           0 :         cache->sweep();
    5708           0 :         return true;
    5709           0 :     });
    5710           0 : }
    5711           0 : 
    5712           0 : IncrementalProgress
    5713             : GCRuntime::beginSweepingSweepGroup(FreeOp* fop, SliceBudget& budget)
    5714             : {
    5715           0 :     /*
    5716             :      * Begin sweeping the group of zones in currentSweepGroup, performing
    5717             :      * actions that must be done before yielding to caller.
    5718             :      */
    5719             : 
    5720             :     using namespace gcstats;
    5721             : 
    5722             :     AutoSCC scc(stats(), sweepGroupIndex);
    5723             : 
    5724           0 :     bool sweepingAtoms = false;
    5725             :     for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
    5726           0 :         /* Set the GC state to sweeping. */
    5727           0 :         zone->changeGCState(Zone::Mark, Zone::Sweep);
    5728             : 
    5729           0 :         /* Purge the ArenaLists before sweeping. */
    5730             :         zone->arenas.unmarkPreMarkedFreeCells();
    5731             :         zone->arenas.clearFreeLists();
    5732           0 : 
    5733           0 :         if (zone->isAtomsZone())
    5734           0 :             sweepingAtoms = true;
    5735             : 
    5736           0 : #ifdef DEBUG
    5737           0 :         zone->gcLastSweepGroupIndex = sweepGroupIndex;
    5738             : #endif
    5739             :     }
    5740           0 : 
    5741             :     validateIncrementalMarking();
    5742             : 
    5743             :     {
    5744           0 :         AutoPhase ap(stats(), PhaseKind::FINALIZE_START);
    5745             :         callFinalizeCallbacks(fop, JSFINALIZE_GROUP_PREPARE);
    5746             :         {
    5747           0 :             AutoPhase ap2(stats(), PhaseKind::WEAK_ZONES_CALLBACK);
    5748           0 :             callWeakPointerZonesCallbacks();
    5749             :         }
    5750           0 :         {
    5751           0 :             AutoPhase ap2(stats(), PhaseKind::WEAK_COMPARTMENT_CALLBACK);
    5752             :             for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
    5753             :                 for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next())
    5754           0 :                     callWeakPointerCompartmentCallbacks(comp);
    5755           0 :             }
    5756           0 :         }
    5757           0 :         callFinalizeCallbacks(fop, JSFINALIZE_GROUP_START);
    5758             :     }
    5759             : 
    5760           0 :     // Update the atom marking bitmaps. This marks atoms referenced by
    5761             :     // uncollected zones, so it cannot be done in parallel with the other sweeping
    5762             :     // work below.
    5763             :     if (sweepingAtoms) {
    5764             :         AutoPhase ap(stats(), PhaseKind::UPDATE_ATOMS_BITMAP);
    5765             :         UpdateAtomsBitmap(rt);
    5766           0 :     }
    5767           0 : 
    5768           0 :     sweepDebuggerOnMainThread(fop);
    5769             : 
    5770             :     {
    5771           0 :         AutoLockHelperThreadState lock;
    5772             : 
    5773             :         AutoPhase ap(stats(), PhaseKind::SWEEP_COMPARTMENTS);
    5774           0 : 
    5775             :         AutoRunParallelTask sweepCCWrappers(rt, SweepCCWrappers, PhaseKind::SWEEP_CC_WRAPPER, lock);
    5776           0 :         AutoRunParallelTask sweepObjectGroups(rt, SweepObjectGroups, PhaseKind::SWEEP_TYPE_OBJECT, lock);
    5777             :         AutoRunParallelTask sweepMisc(rt, SweepMisc, PhaseKind::SWEEP_MISC, lock);
    5778           0 :         AutoRunParallelTask sweepCompTasks(rt, SweepCompressionTasks, PhaseKind::SWEEP_COMPRESSION, lock);
    5779           0 :         AutoRunParallelTask sweepWeakMaps(rt, SweepWeakMaps, PhaseKind::SWEEP_WEAKMAPS, lock);
    5780           0 :         AutoRunParallelTask sweepUniqueIds(rt, SweepUniqueIds, PhaseKind::SWEEP_UNIQUEIDS, lock);
    5781           0 : 
    5782           0 :         WeakCacheTaskVector sweepCacheTasks;
    5783           0 :         if (!PrepareWeakCacheTasks(rt, &sweepCacheTasks))
    5784             :             SweepWeakCachesOnMainThread(rt);
    5785           0 : 
    5786           0 :         for (auto& task : sweepCacheTasks)
    5787           0 :             startTask(task, PhaseKind::SWEEP_WEAK_CACHES, lock);
    5788             : 
    5789           0 :         {
    5790           0 :             AutoUnlockHelperThreadState unlock(lock);
    5791             :             sweepJitDataOnMainThread(fop);
    5792             :         }
    5793           0 : 
    5794           0 :         for (auto& task : sweepCacheTasks)
    5795             :             joinTask(task, PhaseKind::SWEEP_WEAK_CACHES, lock);
    5796             :     }
    5797           0 : 
    5798           0 :     if (sweepingAtoms)
    5799             :         startSweepingAtomsTable();
    5800             : 
    5801           0 :     // Queue all GC things in all zones for sweeping, either on the foreground
    5802           0 :     // or on the background thread.
    5803             : 
    5804             :     for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
    5805             : 
    5806             :         zone->arenas.queueForForegroundSweep(fop, ForegroundObjectFinalizePhase);
    5807           0 :         zone->arenas.queueForForegroundSweep(fop, ForegroundNonObjectFinalizePhase);
    5808             :         for (unsigned i = 0; i < ArrayLength(BackgroundFinalizePhases); ++i)
    5809           0 :             zone->arenas.queueForBackgroundSweep(fop, BackgroundFinalizePhases[i]);
    5810           0 : 
    5811           0 :         zone->arenas.queueForegroundThingsForSweep();
    5812           0 :     }
    5813             : 
    5814           0 :     sweepCache = nullptr;
    5815             :     safeToYield = true;
    5816             : 
    5817           0 :     return Finished;
    5818           0 : }
    5819             : 
    5820           0 : #ifdef JS_GC_ZEAL
    5821             : 
    5822             : bool
    5823             : GCRuntime::shouldYieldForZeal(ZealMode mode)
    5824             : {
    5825             :     return useZeal && isIncremental && hasZealMode(mode);
    5826           0 : }
    5827             : 
    5828           0 : IncrementalProgress
    5829             : GCRuntime::maybeYieldForSweepingZeal(FreeOp* fop, SliceBudget& budget)
    5830             : {
    5831             :     /*
    5832           0 :      * Check whether we need to yield for GC zeal. We always yield when running
    5833             :      * in incremental multi-slice zeal mode so RunDebugGC can reset the slice
    5834             :      * budget.
    5835             :      */
    5836             :     if (initialState != State::Sweep && shouldYieldForZeal(ZealMode::IncrementalMultipleSlices))
    5837             :         return NotFinished;
    5838             : 
    5839           0 :     return Finished;
    5840             : }
    5841             : 
    5842           0 : #endif
    5843             : 
    5844             : IncrementalProgress
    5845             : GCRuntime::endSweepingSweepGroup(FreeOp* fop, SliceBudget& budget)
    5846             : {
    5847             :     {
    5848           0 :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::FINALIZE_END);
    5849             :         FreeOp fop(rt);
    5850             :         callFinalizeCallbacks(&fop, JSFINALIZE_GROUP_END);
    5851           0 :     }
    5852           0 : 
    5853           0 :     /* Update the GC state for zones we have swept. */
    5854             :     for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
    5855             :         AutoLockGC lock(rt);
    5856             :         zone->changeGCState(Zone::Sweep, Zone::Finished);
    5857           0 :         zone->threshold.updateAfterGC(zone->usage.gcBytes(), invocationKind, tunables,
    5858           0 :                                       schedulingState, lock);
    5859           0 :         zone->updateAllGCMallocCountersOnGCEnd(lock);
    5860           0 :         zone->arenas.unmarkPreMarkedFreeCells();
    5861           0 :     }
    5862           0 : 
    5863           0 :     /*
    5864           0 :      * Start background thread to sweep zones if required, sweeping the atoms
    5865             :      * zone last if present.
    5866             :      */
    5867             :     bool sweepAtomsZone = false;
    5868             :     ZoneList zones;
    5869             :     for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
    5870             :         if (zone->isAtomsZone())
    5871           0 :             sweepAtomsZone = true;
    5872           0 :         else
    5873           0 :             zones.append(zone);
    5874           0 :     }
    5875             :     if (sweepAtomsZone)
    5876             :         zones.append(atomsZone);
    5877           0 : 
    5878             :     if (sweepOnBackgroundThread)
    5879           0 :         queueZonesForBackgroundSweep(zones);
    5880           0 :     else
    5881             :         sweepBackgroundThings(zones, blocksToFreeAfterSweeping.ref());
    5882           0 : 
    5883           0 :     return Finished;
    5884             : }
    5885           0 : 
    5886             : void
    5887           0 : GCRuntime::beginSweepPhase(JS::gcreason::Reason reason, AutoTraceSession& session)
    5888             : {
    5889             :     /*
    5890             :      * Sweep phase.
    5891           0 :      *
    5892             :      * Finalize as we sweep, outside of lock but with RuntimeHeapIsBusy()
    5893             :      * true so that any attempt to allocate a GC-thing from a finalizer will
    5894             :      * fail, rather than nest badly and leave the unmarked newborn to be swept.
    5895             :      */
    5896             : 
    5897             :     MOZ_ASSERT(!abortSweepAfterCurrentGroup);
    5898             : 
    5899             :     AutoSetThreadIsSweeping threadIsSweeping;
    5900             : 
    5901           0 :     releaseHeldRelocatedArenas();
    5902             : 
    5903           0 :     computeNonIncrementalMarkingForValidation(session);
    5904             : 
    5905           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP);
    5906             : 
    5907           0 :     sweepOnBackgroundThread =
    5908             :         reason != JS::gcreason::DESTROY_RUNTIME &&
    5909           0 :         !gcTracer.traceEnabled() &&
    5910             :         CanUseExtraThreads();
    5911             : 
    5912             :     releaseObservedTypes = shouldReleaseObservedTypes();
    5913           0 : 
    5914           0 :     AssertNoWrappersInGrayList(rt);
    5915             :     DropStringWrappers(rt);
    5916           0 : 
    5917             :     groupZonesForSweeping(reason);
    5918           0 : 
    5919           0 :     sweepActions->assertFinished();
    5920             : 
    5921           0 :     // We must not yield after this point until we start sweeping the first sweep
    5922             :     // group.
    5923           0 :     safeToYield = false;
    5924             : }
    5925             : 
    5926             : bool
    5927           0 : ArenaLists::foregroundFinalize(FreeOp* fop, AllocKind thingKind, SliceBudget& sliceBudget,
    5928           0 :                                SortedArenaList& sweepList)
    5929             : {
    5930             :     if (!arenaListsToSweep(thingKind) && incrementalSweptArenas.ref().isEmpty())
    5931           0 :         return true;
    5932             : 
    5933             :     // Empty object arenas are not released until all foreground GC things have
    5934           0 :     // been swept.
    5935             :     KeepArenasEnum keepArenas = IsObjectAllocKind(thingKind) ? KEEP_ARENAS : RELEASE_ARENAS;
    5936             : 
    5937             :     if (!FinalizeArenas(fop, &arenaListsToSweep(thingKind), sweepList,
    5938             :                         thingKind, sliceBudget, keepArenas))
    5939           0 :     {
    5940             :         incrementalSweptArenaKind = thingKind;
    5941           0 :         incrementalSweptArenas = sweepList.toArenaList();
    5942             :         return false;
    5943             :     }
    5944           0 : 
    5945           0 :     // Clear any previous incremental sweep state we may have saved.
    5946           0 :     incrementalSweptArenas.ref().clear();
    5947             : 
    5948             :     if (IsObjectAllocKind(thingKind))
    5949           0 :         sweepList.extractEmpty(&savedEmptyArenas.ref());
    5950           0 : 
    5951             :     ArenaList finalized = sweepList.toArenaList();
    5952           0 :     arenaLists(thingKind) = finalized.insertListWithCursorAtEnd(arenaLists(thingKind));
    5953           0 : 
    5954             :     return true;
    5955           0 : }
    5956           0 : 
    5957             : IncrementalProgress
    5958           0 : GCRuntime::drainMarkStack(SliceBudget& sliceBudget, gcstats::PhaseKind phase)
    5959             : {
    5960             :     /* Run a marking slice and return whether the stack is now empty. */
    5961             :     gcstats::AutoPhase ap(stats(), phase);
    5962           0 :     return marker.drainMarkStack(sliceBudget) ? Finished : NotFinished;
    5963             : }
    5964             : 
    5965           0 : static void
    5966           0 : SweepThing(Shape* shape)
    5967             : {
    5968             :     if (!shape->isMarkedAny())
    5969             :         shape->sweep();
    5970           0 : }
    5971             : 
    5972           0 : static void
    5973           0 : SweepThing(JSScript* script, AutoClearTypeInferenceStateOnOOM* oom)
    5974           0 : {
    5975             :     AutoSweepTypeScript sweep(script, oom);
    5976             : }
    5977             : 
    5978             : static void
    5979           0 : SweepThing(ObjectGroup* group, AutoClearTypeInferenceStateOnOOM* oom)
    5980             : {
    5981             :     AutoSweepObjectGroup sweep(group, oom);
    5982             : }
    5983             : 
    5984             : template <typename T, typename... Args>
    5985           0 : static bool
    5986             : SweepArenaList(Arena** arenasToSweep, SliceBudget& sliceBudget, Args... args)
    5987             : {
    5988             :     while (Arena* arena = *arenasToSweep) {
    5989             :         for (ArenaCellIterUnderGC i(arena); !i.done(); i.next())
    5990           0 :             SweepThing(i.get<T>(), args...);
    5991             : 
    5992           0 :         *arenasToSweep = (*arenasToSweep)->next;
    5993           0 :         AllocKind kind = MapTypeToFinalizeKind<T>::kind;
    5994           0 :         sliceBudget.step(Arena::thingsPerArena(kind));
    5995             :         if (sliceBudget.isOverBudget())
    5996           0 :             return false;
    5997           0 :     }
    5998           0 : 
    5999           0 :     return true;
    6000             : }
    6001             : 
    6002             : IncrementalProgress
    6003             : GCRuntime::sweepTypeInformation(FreeOp* fop, SliceBudget& budget, Zone* zone)
    6004             : {
    6005             :     // Sweep dead type information stored in scripts and object groups, but
    6006             :     // don't finalize them yet. We have to sweep dead information from both live
    6007           0 :     // and dead scripts and object groups, so that no dead references remain in
    6008             :     // them. Type inference can end up crawling these zones again, such as for
    6009             :     // TypeCompartment::markSetsUnknown, and if this happens after sweeping for
    6010             :     // the sweep group finishes we won't be able to determine which things in
    6011             :     // the zone are live.
    6012             : 
    6013             :     gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::SWEEP_COMPARTMENTS);
    6014             :     gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::SWEEP_TYPES);
    6015             : 
    6016             :     ArenaLists& al = zone->arenas;
    6017           0 : 
    6018           0 :     AutoClearTypeInferenceStateOnOOM oom(zone);
    6019             : 
    6020           0 :     if (!SweepArenaList<JSScript>(&al.gcScriptArenasToUpdate.ref(), budget, &oom))
    6021             :         return NotFinished;
    6022           0 : 
    6023             :     if (!SweepArenaList<ObjectGroup>(&al.gcObjectGroupArenasToUpdate.ref(), budget, &oom))
    6024           0 :         return NotFinished;
    6025             : 
    6026             :     // Finish sweeping type information in the zone.
    6027           0 :     {
    6028             :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_TYPES_END);
    6029             :         zone->types.endSweep(rt);
    6030             :     }
    6031             : 
    6032           0 :     return Finished;
    6033           0 : }
    6034             : 
    6035             : IncrementalProgress
    6036           0 : GCRuntime::releaseSweptEmptyArenas(FreeOp* fop, SliceBudget& budget, Zone* zone)
    6037             : {
    6038             :     // Foreground finalized objects have already been finalized, and now their
    6039             :     // arenas can be reclaimed by freeing empty ones and making non-empty ones
    6040           0 :     // available for allocation.
    6041             : 
    6042             :     zone->arenas.releaseForegroundSweptEmptyArenas();
    6043             :     return Finished;
    6044             : }
    6045             : 
    6046           0 : void
    6047           0 : GCRuntime::startSweepingAtomsTable()
    6048             : {
    6049             :     auto& maybeAtoms = maybeAtomsToSweep.ref();
    6050             :     MOZ_ASSERT(maybeAtoms.isNothing());
    6051           0 : 
    6052             :     AtomSet* atomsTable = rt->atomsForSweeping();
    6053           0 :     if (!atomsTable)
    6054           0 :         return;
    6055             : 
    6056           0 :     // Create a secondary table to hold new atoms added while we're sweeping
    6057           0 :     // the main table incrementally.
    6058             :     if (!rt->createAtomsAddedWhileSweepingTable()) {
    6059             :         atomsTable->sweep();
    6060             :         return;
    6061             :     }
    6062           0 : 
    6063           0 :     // Initialize remaining atoms to sweep.
    6064           0 :     maybeAtoms.emplace(*atomsTable);
    6065             : }
    6066             : 
    6067             : IncrementalProgress
    6068           0 : GCRuntime::sweepAtomsTable(FreeOp* fop, SliceBudget& budget)
    6069             : {
    6070             :     if (!atomsZone->isGCSweeping())
    6071             :         return Finished;
    6072           0 : 
    6073             :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_ATOMS_TABLE);
    6074           0 : 
    6075             :     auto& maybeAtoms = maybeAtomsToSweep.ref();
    6076             :     if (!maybeAtoms)
    6077           0 :         return Finished;
    6078             : 
    6079           0 :     MOZ_ASSERT(rt->atomsAddedWhileSweeping());
    6080           0 : 
    6081             :     // Sweep the table incrementally until we run out of work or budget.
    6082             :     auto& atomsToSweep = *maybeAtoms;
    6083           0 :     while (!atomsToSweep.empty()) {
    6084             :         budget.step();
    6085             :         if (budget.isOverBudget())
    6086           0 :             return NotFinished;
    6087           0 : 
    6088           0 :         JSAtom* atom = atomsToSweep.front().asPtrUnbarriered();
    6089           0 :         if (IsAboutToBeFinalizedUnbarriered(&atom))
    6090           0 :             atomsToSweep.removeFront();
    6091             :         atomsToSweep.popFront();
    6092           0 :     }
    6093           0 : 
    6094             :     MergeAtomsAddedWhileSweeping(rt);
    6095           0 :     rt->destroyAtomsAddedWhileSweepingTable();
    6096             : 
    6097             :     maybeAtoms.reset();
    6098           0 :     return Finished;
    6099           0 : }
    6100             : 
    6101           0 : class js::gc::WeakCacheSweepIterator
    6102           0 : {
    6103             :     JS::Zone*& sweepZone;
    6104             :     JS::detail::WeakCacheBase*& sweepCache;
    6105             : 
    6106             :   public:
    6107             :     explicit WeakCacheSweepIterator(GCRuntime* gc)
    6108             :       : sweepZone(gc->sweepZone.ref()), sweepCache(gc->sweepCache.ref())
    6109             :     {
    6110             :         // Initialize state when we start sweeping a sweep group.
    6111           0 :         if (!sweepZone) {
    6112           0 :             sweepZone = gc->currentSweepGroup;
    6113             :             MOZ_ASSERT(!sweepCache);
    6114             :             sweepCache = sweepZone->weakCaches().getFirst();
    6115           0 :             settle();
    6116           0 :         }
    6117           0 : 
    6118           0 :         checkState();
    6119           0 :     }
    6120             : 
    6121             :     bool empty(AutoLockHelperThreadState& lock) {
    6122           0 :         return !sweepZone;
    6123           0 :     }
    6124             : 
    6125             :     JS::detail::WeakCacheBase* next(AutoLockHelperThreadState& lock) {
    6126           0 :         if (empty(lock))
    6127             :             return nullptr;
    6128             : 
    6129           0 :         JS::detail::WeakCacheBase* result = sweepCache;
    6130           0 :         sweepCache = sweepCache->getNext();
    6131             :         settle();
    6132             :         checkState();
    6133           0 :         return result;
    6134           0 :     }
    6135           0 : 
    6136           0 :     void settle() {
    6137             :         while (sweepZone) {
    6138             :             while (sweepCache && !sweepCache->needsIncrementalBarrier())
    6139             :                 sweepCache = sweepCache->getNext();
    6140           0 : 
    6141           0 :             if (sweepCache)
    6142           0 :                 break;
    6143           0 : 
    6144             :             sweepZone = sweepZone->nextNodeInGroup();
    6145           0 :             if (sweepZone)
    6146             :                 sweepCache = sweepZone->weakCaches().getFirst();
    6147             :         }
    6148           0 :     }
    6149           0 : 
    6150           0 :   private:
    6151             :     void checkState() {
    6152           0 :         MOZ_ASSERT((!sweepZone && !sweepCache) ||
    6153             :                    (sweepCache && sweepCache->needsIncrementalBarrier()));
    6154             :     }
    6155           0 : };
    6156           0 : 
    6157             : class IncrementalSweepWeakCacheTask : public GCParallelTaskHelper<IncrementalSweepWeakCacheTask>
    6158           0 : {
    6159             :     WeakCacheSweepIterator& work_;
    6160             :     SliceBudget& budget_;
    6161             :     AutoLockHelperThreadState& lock_;
    6162             :     JS::detail::WeakCacheBase* cache_;
    6163             : 
    6164             :   public:
    6165             :     IncrementalSweepWeakCacheTask(JSRuntime* rt, WeakCacheSweepIterator& work, SliceBudget& budget,
    6166             :                                   AutoLockHelperThreadState& lock)
    6167             :       : GCParallelTaskHelper(rt), work_(work), budget_(budget), lock_(lock),
    6168             :         cache_(work.next(lock))
    6169           0 :     {
    6170             :         MOZ_ASSERT(cache_);
    6171           0 :         runtime()->gc.startTask(*this, gcstats::PhaseKind::SWEEP_WEAK_CACHES, lock_);
    6172           0 :     }
    6173             : 
    6174           0 :     ~IncrementalSweepWeakCacheTask() {
    6175           0 :         runtime()->gc.joinTask(*this, gcstats::PhaseKind::SWEEP_WEAK_CACHES, lock_);
    6176           0 :     }
    6177             : 
    6178           0 :     void run() {
    6179           0 :         do {
    6180           0 :             MOZ_ASSERT(cache_->needsIncrementalBarrier());
    6181             :             size_t steps = cache_->sweep();
    6182           0 :             cache_->setNeedsIncrementalBarrier(false);
    6183             : 
    6184           0 :             AutoLockHelperThreadState lock;
    6185           0 :             budget_.step(steps);
    6186           0 :             if (budget_.isOverBudget())
    6187             :                 break;
    6188           0 : 
    6189           0 :             cache_ = work_.next(lock);
    6190           0 :         } while (cache_);
    6191             :     }
    6192             : };
    6193           0 : 
    6194           0 : static const size_t MaxWeakCacheSweepTasks = 8;
    6195           0 : 
    6196             : static size_t
    6197             : WeakCacheSweepTaskCount()
    6198             : {
    6199             :     size_t targetTaskCount = HelperThreadState().cpuCount;
    6200             :     return Min(targetTaskCount, MaxWeakCacheSweepTasks);
    6201             : }
    6202             : 
    6203           0 : IncrementalProgress
    6204           0 : GCRuntime::sweepWeakCaches(FreeOp* fop, SliceBudget& budget)
    6205             : {
    6206             :     WeakCacheSweepIterator work(this);
    6207             : 
    6208           0 :     {
    6209             :         AutoLockHelperThreadState lock;
    6210           0 :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_COMPARTMENTS);
    6211             : 
    6212             :         Maybe<IncrementalSweepWeakCacheTask> tasks[MaxWeakCacheSweepTasks];
    6213           0 :         for (size_t i = 0; !work.empty(lock) && i < WeakCacheSweepTaskCount(); i++)
    6214           0 :             tasks[i].emplace(rt, work, budget, lock);
    6215             : 
    6216           0 :         // Tasks run until budget or work is exhausted.
    6217           0 :     }
    6218           0 : 
    6219             :     AutoLockHelperThreadState lock;
    6220             :     return work.empty(lock) ? Finished : NotFinished;
    6221             : }
    6222             : 
    6223           0 : IncrementalProgress
    6224           0 : GCRuntime::finalizeAllocKind(FreeOp* fop, SliceBudget& budget, Zone* zone, AllocKind kind)
    6225             : {
    6226             :     // Set the number of things per arena for this AllocKind.
    6227             :     size_t thingsPerArena = Arena::thingsPerArena(kind);
    6228           0 :     auto& sweepList = incrementalSweepList.ref();
    6229             :     sweepList.setThingsPerArena(thingsPerArena);
    6230             : 
    6231           0 :     if (!zone->arenas.foregroundFinalize(fop, kind, budget, sweepList))
    6232           0 :         return NotFinished;
    6233           0 : 
    6234             :     // Reset the slots of the sweep list that we used.
    6235           0 :     sweepList.reset(thingsPerArena);
    6236             : 
    6237             :     return Finished;
    6238             : }
    6239             : 
    6240             : IncrementalProgress
    6241             : GCRuntime::sweepShapeTree(FreeOp* fop, SliceBudget& budget, Zone* zone)
    6242             : {
    6243             :     // Remove dead shapes from the shape tree, but don't finalize them yet.
    6244             : 
    6245           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_SHAPE);
    6246             : 
    6247             :     ArenaLists& al = zone->arenas;
    6248             : 
    6249           0 :     if (!SweepArenaList<Shape>(&al.gcShapeArenasToUpdate.ref(), budget))
    6250             :         return NotFinished;
    6251           0 : 
    6252             :     if (!SweepArenaList<AccessorShape>(&al.gcAccessorShapeArenasToUpdate.ref(), budget))
    6253           0 :         return NotFinished;
    6254             : 
    6255             :     return Finished;
    6256           0 : }
    6257             : 
    6258             : // An adaptor that wraps a standard container's STL-style begin()/end()
    6259           0 : // iterators in a done()/get()/next() style interface.
    6260             : template <typename Container>
    6261             : class ContainerIter
    6262             : {
    6263             :     using Iter = decltype(mozilla::DeclVal<const Container>().begin());
    6264             :     using Elem = decltype(*mozilla::DeclVal<Iter>());
    6265           0 : 
    6266             :     Iter iter;
    6267             :     const Iter end;
    6268             : 
    6269             :   public:
    6270             :     explicit ContainerIter(const Container& container)
    6271             :       : iter(container.begin()), end(container.end())
    6272             :     {}
    6273             : 
    6274           0 :     bool done() const {
    6275           0 :         return iter == end;
    6276           0 :     }
    6277             : 
    6278             :     Elem get() const {
    6279           0 :         return *iter;
    6280             :     }
    6281             : 
    6282             :     void next() {
    6283           0 :         MOZ_ASSERT(!done());
    6284             :         ++iter;
    6285             :     }
    6286           0 : };
    6287           0 : 
    6288           0 : // IncrementalIter is a template class that makes a normal iterator into one
    6289           0 : // that can be used to perform incremental work by using external state that
    6290             : // persists between instantiations. The state is initialized on the first
    6291             : // use only; subsequent uses carry on from where the previous one left off.
    6292             : template <typename Iter>
    6293             : struct IncrementalIter
    6294             : {
    6295             :     using State = Maybe<Iter>;
    6296             :     using Elem = decltype(mozilla::DeclVal<Iter>().get());
    6297             : 
    6298             :   private:
    6299             :     State& maybeIter;
    6300             : 
    6301             :   public:
    6302             :     template <typename... Args>
    6303             :     explicit IncrementalIter(State& maybeIter, Args&&... args)
    6304             :       : maybeIter(maybeIter)
    6305             :     {
    6306             :         if (maybeIter.isNothing())
    6307           0 :             maybeIter.emplace(std::forward<Args>(args)...);
    6308           0 :     }
    6309             : 
    6310           0 :     ~IncrementalIter() {
    6311           0 :         if (done())
    6312             :             maybeIter.reset();
    6313             :     }
    6314           0 : 
    6315           0 :     bool done() const {
    6316           0 :         return maybeIter.ref().done();
    6317           0 :     }
    6318             : 
    6319           0 :     Elem get() const {
    6320           0 :         return maybeIter.ref().get();
    6321             :     }
    6322             : 
    6323             :     void next() {
    6324           0 :         maybeIter.ref().next();
    6325             :     }
    6326             : };
    6327             : 
    6328           0 : // Iterate through the sweep groups created by GCRuntime::groupZonesForSweeping().
    6329             : class js::gc::SweepGroupsIter
    6330             : {
    6331             :     GCRuntime* gc;
    6332             : 
    6333             :   public:
    6334             :     explicit SweepGroupsIter(JSRuntime* rt)
    6335             :       : gc(&rt->gc)
    6336             :     {
    6337             :         MOZ_ASSERT(gc->currentSweepGroup);
    6338           0 :     }
    6339           0 : 
    6340             :     bool done() const {
    6341           0 :         return !gc->currentSweepGroup;
    6342           0 :     }
    6343             : 
    6344             :     Zone* get() const {
    6345           0 :         return gc->currentSweepGroup;
    6346             :     }
    6347             : 
    6348             :     void next() {
    6349             :         MOZ_ASSERT(!done());
    6350             :         gc->getNextSweepGroup();
    6351             :     }
    6352           0 : };
    6353           0 : 
    6354           0 : namespace sweepaction {
    6355           0 : 
    6356             : // Implementation of the SweepAction interface that calls a method on GCRuntime.
    6357             : template <typename... Args>
    6358             : class SweepActionCall final : public SweepAction<GCRuntime*, Args...>
    6359             : {
    6360             :     using Method = IncrementalProgress (GCRuntime::*)(Args...);
    6361             : 
    6362           0 :     Method method;
    6363             : 
    6364             :   public:
    6365             :     explicit SweepActionCall(Method m) : method(m) {}
    6366             :     IncrementalProgress run(GCRuntime* gc, Args... args) override {
    6367             :         return (gc->*method)(args...);
    6368             :     }
    6369          44 :     void assertFinished() const override { }
    6370           0 : };
    6371           0 : 
    6372             : #ifdef JS_GC_ZEAL
    6373           0 : // Implementation of the SweepAction interface that yields in a specified zeal
    6374             : // mode and then calls another action.
    6375             : template <typename... Args>
    6376             : class SweepActionMaybeYield final : public SweepAction<GCRuntime*, Args...>
    6377             : {
    6378             :     using Action = SweepAction<GCRuntime*, Args...>;
    6379             : 
    6380           0 :     ZealMode mode;
    6381             :     UniquePtr<Action> action;
    6382             :     bool triggered;
    6383             : 
    6384             :   public:
    6385             :     SweepActionMaybeYield(UniquePtr<Action> action, ZealMode mode)
    6386             :       : mode(mode), action(std::move(action)), triggered(false) {}
    6387             : 
    6388             :     IncrementalProgress run(GCRuntime* gc, Args... args) override {
    6389          24 :         if (!triggered && gc->shouldYieldForZeal(mode)) {
    6390          48 :             triggered = true;
    6391             :             return NotFinished;
    6392           0 :         }
    6393           0 : 
    6394           0 :         triggered = false;
    6395           0 :         return action->run(gc, args...);
    6396             :     }
    6397             : 
    6398           0 :     void assertFinished() const override {
    6399           0 :         MOZ_ASSERT(!triggered);
    6400             :     }
    6401             : };
    6402           0 : #endif
    6403           0 : 
    6404           0 : // Implementation of the SweepAction interface that calls a list of actions in
    6405             : // sequence.
    6406             : template <typename... Args>
    6407             : class SweepActionSequence final : public SweepAction<Args...>
    6408             : {
    6409             :     using Action = SweepAction<Args...>;
    6410             :     using ActionVector = Vector<UniquePtr<Action>, 0, SystemAllocPolicy>;
    6411           0 :     using Iter = IncrementalIter<ContainerIter<ActionVector>>;
    6412             : 
    6413             :     ActionVector actions;
    6414             :     typename Iter::State iterState;
    6415             : 
    6416             :   public:
    6417             :     bool init(UniquePtr<Action>* acts, size_t count) {
    6418             :         for (size_t i = 0; i < count; i++) {
    6419             :             if (!actions.emplaceBack(std::move(acts[i])))
    6420             :                 return false;
    6421             :         }
    6422          56 :         return true;
    6423          48 :     }
    6424             : 
    6425             :     IncrementalProgress run(Args... args) override {
    6426             :         for (Iter iter(iterState, actions); !iter.done(); iter.next()) {
    6427             :             if (iter.get()->run(args...) == NotFinished)
    6428             :                 return NotFinished;
    6429           0 :         }
    6430           0 :         return Finished;
    6431           0 :     }
    6432           0 : 
    6433             :     void assertFinished() const override {
    6434           0 :         MOZ_ASSERT(iterState.isNothing());
    6435             :         for (const auto& action : actions)
    6436             :             action->assertFinished();
    6437           0 :     }
    6438           0 : };
    6439           0 : 
                     : // Implementation of the SweepAction interface that runs another action for
                     : // each element of an iterator, passing the element as an extra argument.
    6440           0 : template <typename Iter, typename Init, typename... Args>
    6441           0 : class SweepActionForEach final : public SweepAction<Args...>
    6442             : {
    6443             :     using Elem = decltype(mozilla::DeclVal<Iter>().get());
    6444             :     using Action = SweepAction<Args..., Elem>;
    6445           0 :     using IncrIter = IncrementalIter<Iter>;
    6446             : 
    6447             :     Init iterInit;
    6448             :     UniquePtr<Action> action;
    6449             :     typename IncrIter::State iterState;
    6450             : 
    6451             :   public:
    6452             :     SweepActionForEach(const Init& init, UniquePtr<Action> action)
    6453             :       : iterInit(init), action(std::move(action))
    6454             :     {}
    6455             : 
    6456          12 :     IncrementalProgress run(Args... args) override {
    6457          44 :         for (IncrIter iter(iterState, iterInit); !iter.done(); iter.next()) {
    6458             :             if (action->run(args..., iter.get()) == NotFinished)
    6459             :                 return NotFinished;
    6460           0 :         }
    6461           0 :         return Finished;
    6462           0 :     }
    6463           0 : 
    6464             :     void assertFinished() const override {
    6465           0 :         MOZ_ASSERT(iterState.isNothing());
    6466             :         action->assertFinished();
    6467             :     }
    6468           0 : };
    6469           0 : 
                     : // Implementation of the SweepAction interface that repeats another action
                     : // once per element of an iterator, without passing the element on.
    6470           0 : template <typename Iter, typename Init, typename... Args>
    6471           0 : class SweepActionRepeatFor final : public SweepAction<Args...>
    6472             : {
    6473             :   protected:
    6474             :     using Action = SweepAction<Args...>;
    6475           0 :     using IncrIter = IncrementalIter<Iter>;
    6476             : 
    6477             :     Init iterInit;
    6478             :     UniquePtr<Action> action;
    6479             :     typename IncrIter::State iterState;
    6480             : 
    6481             :   public:
    6482             :     SweepActionRepeatFor(const Init& init, UniquePtr<Action> action)
    6483             :       : iterInit(init), action(std::move(action))
    6484             :     {}
    6485             : 
    6486           4 :     IncrementalProgress run(Args... args) override {
    6487           0 :         for (IncrIter iter(iterState, iterInit); !iter.done(); iter.next()) {
    6488             :             if (action->run(args...) == NotFinished)
    6489             :                 return NotFinished;
    6490           0 :         }
    6491           0 :         return Finished;
    6492           0 :     }
    6493           0 : 
    6494             :     void assertFinished() const override {
    6495           0 :         MOZ_ASSERT(iterState.isNothing());
    6496             :         action->assertFinished();
    6497             :     }
    6498           0 : };
    6499           0 : 
    6500           0 : // Helper class to remove the last template parameter from the instantiation of
    6501           0 : // a variadic template. For example:
    6502             : //
    6503             : //   RemoveLastTemplateParameter<Foo<X, Y, Z>>::Type ==> Foo<X, Y>
    6504             : //
    6505             : // This works by recursively instantiating the Impl template with the contents
    6506             : // of the parameter pack so long as there are at least two parameters. The
    6507             : // specialization that matches when only one parameter remains discards it and
    6508             : // instantiates the target template with the previously processed parameters.
    6509             : template <typename T>
    6510             : class RemoveLastTemplateParameter {};
    6511             : 
    6512             : template <template <typename...> class Target, typename... Args>
    6513             : class RemoveLastTemplateParameter<Target<Args...>>
    6514             : {
    6515             :     template <typename... Ts>
    6516             :     struct List {};
    6517             : 
    6518             :     template <typename R, typename... Ts>
    6519             :     struct Impl {};
    6520             : 
    6521             :     template <typename... Rs, typename T>
    6522             :     struct Impl<List<Rs...>, T>
    6523             :     {
    6524             :         using Type = Target<Rs...>;
    6525             :     };
    6526             : 
    6527             :     template <typename... Rs, typename H, typename T, typename... Ts>
    6528             :     struct Impl<List<Rs...>, H, T, Ts...>
    6529             :     {
    6530             :         using Type = typename Impl<List<Rs..., H>, T, Ts...>::Type;
    6531             :     };
    6532             : 
    6533             :   public:
    6534             :     using Type = typename Impl<List<>, Args...>::Type;
    6535             : };
    6536             : 
    6537             : template <typename... Args>
    6538             : static UniquePtr<SweepAction<GCRuntime*, Args...>>
    6539             : Call(IncrementalProgress (GCRuntime::*method)(Args...)) {
    6540             :     return MakeUnique<SweepActionCall<Args...>>(method);
    6541             : }
    6542             : 
    6543             : template <typename... Args>
    6544           0 : static UniquePtr<SweepAction<GCRuntime*, Args...>>
    6545             : MaybeYield(ZealMode zealMode, UniquePtr<SweepAction<GCRuntime*, Args...>> action) {
    6546             : #ifdef JS_GC_ZEAL
    6547             :     return js::MakeUnique<SweepActionMaybeYield<Args...>>(std::move(action), zealMode);
    6548             : #else
    6549             :     return action;
    6550             : #endif
    6551          96 : }
    6552             : 
    6553             : template <typename... Args, typename... Rest>
    6554             : static UniquePtr<SweepAction<Args...>>
    6555             : Sequence(UniquePtr<SweepAction<Args...>> first, Rest... rest)
    6556             : {
    6557             :     UniquePtr<SweepAction<Args...>> actions[] = { std::move(first), std::move(rest)... };
    6558             :     auto seq = MakeUnique<SweepActionSequence<Args...>>();
    6559           0 :     if (!seq || !seq->init(actions, ArrayLength(actions)))
    6560             :         return nullptr;
    6561           0 : 
    6562          16 :     return UniquePtr<SweepAction<Args...>>(std::move(seq));
    6563           0 : }
    6564             : 
    6565             : template <typename... Args>
    6566           0 : static UniquePtr<SweepAction<Args...>>
    6567             : RepeatForSweepGroup(JSRuntime* rt, UniquePtr<SweepAction<Args...>> action)
    6568             : {
    6569             :     if (!action)
    6570             :         return nullptr;
    6571           0 : 
    6572             :     using Action = SweepActionRepeatFor<SweepGroupsIter, JSRuntime*, Args...>;
    6573           0 :     return js::MakeUnique<Action>(rt, std::move(action));
    6574             : }
    6575             : 
    6576             : template <typename... Args>
    6577           0 : static UniquePtr<typename RemoveLastTemplateParameter<SweepAction<Args...>>::Type>
    6578             : ForEachZoneInSweepGroup(JSRuntime* rt, UniquePtr<SweepAction<Args...>> action)
    6579             : {
    6580             :     if (!action)
    6581             :         return nullptr;
    6582           0 : 
    6583             :     using Action = typename RemoveLastTemplateParameter<
    6584           4 :         SweepActionForEach<SweepGroupZonesIter, JSRuntime*, Args...>>::Type;
    6585             :     return js::MakeUnique<Action>(rt, std::move(action));
    6586             : }
    6587             : 
    6588             : template <typename... Args>
    6589          12 : static UniquePtr<typename RemoveLastTemplateParameter<SweepAction<Args...>>::Type>
    6590             : ForEachAllocKind(AllocKinds kinds, UniquePtr<SweepAction<Args...>> action)
    6591             : {
    6592             :     if (!action)
    6593             :         return nullptr;
    6594           8 : 
    6595             :     using Action = typename RemoveLastTemplateParameter<
    6596           0 :         SweepActionForEach<ContainerIter<AllocKinds>, AllocKinds, Args...>>::Type;
    6597             :     return js::MakeUnique<Action>(kinds, std::move(action));
    6598             : }
    6599             : 
    6600             : } // namespace sweepaction
    6601          24 : 
    6602             : bool
    6603             : GCRuntime::initSweepActions()
    6604             : {
    6605             :     using namespace sweepaction;
    6606             :     using sweepaction::Call;
    6607           0 : 
    6608             :     sweepActions.ref() =
    6609             :         RepeatForSweepGroup(rt,
    6610             :             Sequence(
    6611             :                 Call(&GCRuntime::endMarkingSweepGroup),
    6612           4 :                 Call(&GCRuntime::beginSweepingSweepGroup),
    6613           8 : #ifdef JS_GC_ZEAL
    6614           8 :                 Call(&GCRuntime::maybeYieldForSweepingZeal),
    6615           4 : #endif
    6616           4 :                 MaybeYield(ZealMode::YieldBeforeSweepingAtoms,
    6617             :                            Call(&GCRuntime::sweepAtomsTable)),
    6618           4 :                 MaybeYield(ZealMode::YieldBeforeSweepingCaches,
    6619             :                            Call(&GCRuntime::sweepWeakCaches)),
    6620           8 :                 ForEachZoneInSweepGroup(rt,
    6621           0 :                     Sequence(
    6622           0 :                         MaybeYield(ZealMode::YieldBeforeSweepingTypes,
    6623           0 :                                    Call(&GCRuntime::sweepTypeInformation)),
    6624           0 :                         MaybeYield(ZealMode::YieldBeforeSweepingObjects,
    6625           8 :                                    ForEachAllocKind(ForegroundObjectFinalizePhase.kinds,
    6626           0 :                                                     Call(&GCRuntime::finalizeAllocKind))),
    6627           4 :                         MaybeYield(ZealMode::YieldBeforeSweepingNonObjects,
    6628           8 :                                    ForEachAllocKind(ForegroundNonObjectFinalizePhase.kinds,
    6629           8 :                                                     Call(&GCRuntime::finalizeAllocKind))),
    6630           0 :                         MaybeYield(ZealMode::YieldBeforeSweepingShapeTrees,
    6631           8 :                                    Call(&GCRuntime::sweepShapeTree)),
    6632           8 :                         Call(&GCRuntime::releaseSweptEmptyArenas))),
    6633           8 :                 Call(&GCRuntime::endSweepingSweepGroup)));
    6634           0 : 
    6635           4 :     return sweepActions != nullptr;
    6636           0 : }
    6637           8 : 
    6638             : IncrementalProgress
    6639           8 : GCRuntime::performSweepActions(SliceBudget& budget)
    6640             : {
    6641             :     AutoSetThreadIsSweeping threadIsSweeping;
    6642             : 
    6643           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP);
    6644             :     FreeOp fop(rt);
    6645           0 : 
    6646             :     // Drain the mark stack, except in the first sweep slice where we must not
    6647           0 :     // yield to the mutator until we've started sweeping a sweep group.
    6648           0 :     MOZ_ASSERT(initialState <= State::Sweep);
    6649             :     if (initialState != State::Sweep) {
    6650             :         MOZ_ASSERT(marker.isDrained());
    6651             :     } else {
    6652           0 :         if (drainMarkStack(budget, gcstats::PhaseKind::SWEEP_MARK) == NotFinished)
    6653           0 :             return NotFinished;
    6654           0 :     }
    6655             : 
    6656           0 :     return sweepActions->run(this, &fop, budget);
    6657             : }
    6658             : 
    6659             : bool
    6660           0 : GCRuntime::allCCVisibleZonesWereCollected() const
    6661             : {
    6662             :     // Calculate whether the gray marking state is now valid.
    6663             :     //
    6664           0 :     // The gray bits change from invalid to valid if we finished a full GC from
    6665             :     // the point of view of the cycle collector. We ignore the following:
    6666             :     //
    6667             :     //  - Helper thread zones, as these are not reachable from the main heap.
    6668             :     //  - The atoms zone, since strings and symbols are never marked gray.
    6669             :     //  - Empty zones.
    6670             :     //
    6671             :     // These exceptions ensure that when the CC requests a full GC the gray mark
    6672             :     // state ends up valid even if we don't collect all of the zones.
    6673             : 
    6674             :     for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
    6675             :         if (!zone->isCollecting() &&
    6676             :             !zone->usedByHelperThread() &&
    6677             :             !zone->arenas.arenaListsAreEmpty())
    6678           0 :         {
    6679           0 :             return false;
    6680           0 :         }
    6681           0 :     }
    6682             : 
    6683           0 :     return true;
    6684             : }
    6685             : 
    6686             : void
    6687           0 : GCRuntime::endSweepPhase(bool destroyingRuntime)
    6688             : {
    6689             :     sweepActions->assertFinished();
    6690             : 
    6691           0 :     AutoSetThreadIsSweeping threadIsSweeping;
    6692             : 
    6693           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP);
    6694             :     FreeOp fop(rt);
    6695           0 : 
    6696             :     MOZ_ASSERT_IF(destroyingRuntime, !sweepOnBackgroundThread);
    6697           0 : 
    6698           0 :     // Update the runtime malloc counter only if we were doing a full GC.
    6699             :     if (isFull) {
    6700           0 :         AutoLockGC lock(rt);
    6701             :         mallocCounter.updateOnGCEnd(tunables, lock);
    6702             :     }
    6703           0 : 
    6704           0 :     {
    6705           0 :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::DESTROY);
    6706             : 
    6707             :         /*
    6708             :          * Sweep script filenames after sweeping functions in the generic loop
    6709           0 :          * above. In this way when a scripted function's finalizer destroys the
    6710             :          * script and calls rt->destroyScriptHook, the hook can still access the
    6711             :          * script's filename. See bug 323267.
    6712             :          */
    6713             :         SweepScriptData(rt);
    6714             : 
    6715             :         /* Clear out any small pools that we're hanging on to. */
    6716             :         if (rt->hasJitRuntime())
    6717           0 :             rt->jitRuntime()->execAlloc().purge();
    6718             :     }
    6719             : 
    6720           0 :     {
    6721           0 :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::FINALIZE_END);
    6722             :         callFinalizeCallbacks(&fop, JSFINALIZE_COLLECTION_END);
    6723             : 
    6724             :         if (allCCVisibleZonesWereCollected())
    6725           0 :             grayBitsValid = true;
    6726           0 :     }
    6727             : 
    6728           0 :     finishMarkingValidation();
    6729           0 : 
    6730             : #ifdef DEBUG
    6731             :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    6732           0 :         for (auto i : AllAllocKinds()) {
    6733             :             MOZ_ASSERT_IF(!IsBackgroundFinalized(i) ||
    6734             :                           !sweepOnBackgroundThread,
    6735           0 :                           !zone->arenas.arenaListsToSweep(i));
    6736           0 :         }
    6737           0 :     }
    6738             : #endif
    6739             : 
    6740             :     AssertNoWrappersInGrayList(rt);
    6741             : }
    6742             : 
    6743             : void
    6744           0 : GCRuntime::beginCompactPhase()
    6745           0 : {
    6746             :     MOZ_ASSERT(!isBackgroundSweeping());
    6747             : 
    6748           0 :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::COMPACT);
    6749             : 
    6750           0 :     MOZ_ASSERT(zonesToMaybeCompact.ref().isEmpty());
    6751             :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    6752           0 :         if (CanRelocateZone(zone))
    6753             :             zonesToMaybeCompact.ref().append(zone);
    6754           0 :     }
    6755           0 : 
    6756           0 :     MOZ_ASSERT(!relocatedArenasToRelease);
    6757           0 :     startedCompacting = true;
    6758             : }
    6759             : 
    6760           0 : IncrementalProgress
    6761           0 : GCRuntime::compactPhase(JS::gcreason::Reason reason, SliceBudget& sliceBudget,
    6762           0 :                         AutoTraceSession& session)
    6763             : {
    6764             :     assertBackgroundSweepingFinished();
    6765           0 :     MOZ_ASSERT(startedCompacting);
    6766             : 
    6767             :     gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::COMPACT);
    6768           0 : 
    6769           0 :     // TODO: JSScripts can move. If the sampler interrupts the GC in the
    6770             :     // middle of relocating an arena, invalid JSScript pointers may be
    6771           0 :     // accessed. Suppress all sampling until a finer-grained solution can be
    6772             :     // found. See bug 1295775.
    6773             :     AutoSuppressProfilerSampling suppressSampling(rt->mainContextFromOwnThread());
    6774             : 
    6775             :     ZoneList relocatedZones;
    6776             :     Arena* relocatedArenas = nullptr;
    6777           0 :     while (!zonesToMaybeCompact.ref().isEmpty()) {
    6778             : 
    6779           0 :         Zone* zone = zonesToMaybeCompact.ref().front();
    6780           0 :         zonesToMaybeCompact.ref().removeFront();
    6781           0 : 
    6782             :         MOZ_ASSERT(nursery().isEmpty());
    6783           0 :         zone->changeGCState(Zone::Finished, Zone::Compact);
    6784           0 : 
    6785             :         if (relocateArenas(zone, reason, relocatedArenas, sliceBudget)) {
    6786           0 :             updateZonePointersToRelocatedCells(zone);
    6787           0 :             relocatedZones.append(zone);
    6788             :         } else {
    6789           0 :             zone->changeGCState(Zone::Compact, Zone::Finished);
    6790           0 :         }
    6791           0 : 
    6792             :         if (sliceBudget.isOverBudget())
    6793           0 :             break;
    6794             :     }
    6795             : 
    6796           0 :     if (!relocatedZones.isEmpty()) {
    6797             :         updateRuntimePointersToRelocatedCells(session);
    6798             : 
    6799             :         do {
    6800           0 :             Zone* zone = relocatedZones.front();
    6801           0 :             relocatedZones.removeFront();
    6802             :             zone->changeGCState(Zone::Compact, Zone::Finished);
    6803           0 :         }
    6804           0 :         while (!relocatedZones.isEmpty());
    6805           0 :     }
    6806           0 : 
    6807             :     if (ShouldProtectRelocatedArenas(reason))
    6808           0 :         protectAndHoldArenas(relocatedArenas);
    6809             :     else
    6810             :         releaseRelocatedArenas(relocatedArenas);
    6811           0 : 
    6812           0 :     // Clear caches that can contain cell pointers.
    6813             :     rt->caches().purgeForCompaction();
    6814           0 : 
    6815             : #ifdef DEBUG
    6816             :     CheckHashTablesAfterMovingGC(rt);
    6817           0 : #endif
    6818             : 
    6819             :     return zonesToMaybeCompact.ref().isEmpty() ? Finished : NotFinished;
    6820           0 : }
    6821             : 
    6822             : void
    6823           0 : GCRuntime::endCompactPhase()
    6824             : {
    6825             :     startedCompacting = false;
    6826             : }
    6827           0 : 
    6828             : void
    6829           0 : GCRuntime::finishCollection()
    6830           0 : {
    6831             :     assertBackgroundSweepingFinished();
    6832             :     MOZ_ASSERT(marker.isDrained());
    6833           0 :     marker.stop();
    6834             :     clearBufferedGrayRoots();
    6835           0 : 
    6836           0 :     uint64_t currentTime = PRMJ_Now();
    6837           0 :     schedulingState.updateHighFrequencyMode(lastGCTime, currentTime, tunables);
    6838           0 : 
    6839             :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    6840           0 :         if (zone->isCollecting()) {
    6841           0 :             zone->changeGCState(Zone::Finished, Zone::NoGC);
    6842             :             zone->notifyObservingDebuggers();
    6843           0 :         }
    6844           0 : 
    6845           0 :         MOZ_ASSERT(!zone->isCollectingFromAnyThread());
    6846           0 :         MOZ_ASSERT(!zone->wasGCStarted());
    6847             :     }
    6848             : 
    6849           0 :     MOZ_ASSERT(zonesToMaybeCompact.ref().isEmpty());
    6850           0 : 
    6851             :     lastGCTime = currentTime;
    6852             : }
    6853           0 : 
    6854             : static const char*
    6855           0 : HeapStateToLabel(JS::HeapState heapState)
    6856           0 : {
    6857             :     switch (heapState) {
    6858             :       case JS::HeapState::MinorCollecting:
    6859           9 :         return "js::Nursery::collect";
    6860             :       case JS::HeapState::MajorCollecting:
    6861           9 :         return "js::GCRuntime::collect";
    6862             :       case JS::HeapState::Tracing:
    6863             :         return "JS_IterateCompartments";
    6864             :       case JS::HeapState::Idle:
    6865           0 :       case JS::HeapState::CycleCollecting:
    6866             :         MOZ_CRASH("Should never have an Idle or CC heap state when pushing GC profiling stack frames!");
    6867           0 :     }
    6868             :     MOZ_ASSERT_UNREACHABLE("Should have exhausted every JS::HeapState variant!");
    6869             :     return nullptr;
    6870           0 : }
    6871             : 
    6872           0 : /* Start a new heap session. */
    6873             : AutoTraceSession::AutoTraceSession(JSRuntime* rt, JS::HeapState heapState)
    6874             :   : runtime(rt),
    6875             :     prevState(rt->heapState_),
    6876             :     profilingStackFrame(rt->mainContextFromOwnThread(), HeapStateToLabel(heapState),
    6877           9 :                         ProfilingStackFrame::Category::GCCC)
    6878             : {
    6879           9 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
    6880             :     MOZ_ASSERT(prevState == JS::HeapState::Idle);
    6881          45 :     MOZ_ASSERT(heapState != JS::HeapState::Idle);
    6882             :     MOZ_ASSERT_IF(heapState == JS::HeapState::MajorCollecting, rt->gc.nursery().isEmpty());
    6883           9 : 
    6884           9 :     // Session always begins with lock held, see comment in class definition.
    6885           9 :     maybeLock.emplace(rt);
    6886             : 
    6887             :     rt->heapState_ = heapState;
    6888           0 : }
    6889             : 
    6890          18 : AutoTraceSession::~AutoTraceSession()
    6891           9 : {
    6892             :     MOZ_ASSERT(JS::RuntimeHeapIsBusy());
    6893          27 :     runtime->heapState_ = prevState;
    6894             : }
    6895           0 : 
    6896           0 : JS_PUBLIC_API(JS::HeapState)
    6897           9 : JS::RuntimeHeapState()
    6898             : {
    6899             :     return TlsContext.get()->runtime()->heapState();
    6900     1389664 : }
    6901             : 
    6902    10812712 : GCRuntime::IncrementalResult
    6903             : GCRuntime::resetIncrementalGC(gc::AbortReason reason, AutoTraceSession& session)
    6904             : {
    6905             :     MOZ_ASSERT(reason != gc::AbortReason::None);
    6906           0 : 
    6907             :     switch (incrementalState) {
    6908           0 :       case State::NotActive:
    6909             :           return IncrementalResult::Ok;
    6910           0 : 
    6911             :       case State::MarkRoots:
    6912             :         MOZ_CRASH("resetIncrementalGC did not expect MarkRoots state");
    6913             :         break;
    6914             : 
    6915           0 :       case State::Mark: {
    6916             :         /* Cancel any ongoing marking. */
    6917             :         marker.reset();
    6918             :         marker.stop();
    6919             :         clearBufferedGrayRoots();
    6920           0 : 
    6921           0 :         for (GCCompartmentsIter c(rt); !c.done(); c.next())
    6922           0 :             ResetGrayList(c);
    6923             : 
    6924           0 :         for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    6925           0 :             zone->setNeedsIncrementalBarrier(false);
    6926             :             zone->changeGCState(Zone::Mark, Zone::NoGC);
    6927           0 :             zone->arenas.unmarkPreMarkedFreeCells();
    6928           0 :         }
    6929           0 : 
    6930           0 :         blocksToFreeAfterSweeping.ref().freeAll();
    6931             : 
    6932             :         incrementalState = State::NotActive;
    6933           0 : 
    6934             :         MOZ_ASSERT(!marker.shouldCheckCompartments());
    6935           0 : 
    6936             :         break;
    6937           0 :       }
    6938             : 
    6939             :       case State::Sweep: {
    6940             :         marker.reset();
    6941             : 
    6942             :         for (CompartmentsIter c(rt); !c.done(); c.next())
    6943           0 :             c->gcState.scheduledForDestruction = false;
    6944             : 
    6945           0 :         for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    6946           0 :             if (zone->isGCMarking())
    6947             :                 zone->arenas.unmarkPreMarkedFreeCells();
    6948             :         }
    6949           0 : 
    6950             :         /* Finish sweeping the current sweep group, then abort. */
    6951             :         abortSweepAfterCurrentGroup = true;
    6952           0 : 
    6953           0 :         /* Don't perform any compaction after sweeping. */
    6954             :         bool wasCompacting = isCompacting;
    6955             :         isCompacting = false;
    6956           0 : 
    6957             :         auto unlimited = SliceBudget::unlimited();
    6958           0 :         incrementalCollectSlice(unlimited, JS::gcreason::RESET, session);
    6959             : 
    6960             :         isCompacting = wasCompacting;
    6961           0 : 
    6962           0 :         {
    6963             :             gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);
    6964             :             rt->gc.waitBackgroundSweepOrAllocEnd();
    6965             :         }
    6966             :         break;
    6967             :       }
    6968             : 
    6969           0 :       case State::Finalize: {
    6970           0 :         {
    6971             :             gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);
    6972             :             rt->gc.waitBackgroundSweepOrAllocEnd();
    6973           0 :         }
    6974           0 : 
    6975             :         bool wasCompacting = isCompacting;
    6976             :         isCompacting = false;
    6977           0 : 
    6978             :         auto unlimited = SliceBudget::unlimited();
    6979           0 :         incrementalCollectSlice(unlimited, JS::gcreason::RESET, session);
    6980             : 
    6981             :         isCompacting = wasCompacting;
    6982             : 
    6983             :         break;
    6984             :       }
    6985           0 : 
    6986             :       case State::Compact: {
    6987           0 :         bool wasCompacting = isCompacting;
    6988           0 : 
    6989           0 :         isCompacting = true;
    6990             :         startedCompacting = true;
    6991             :         zonesToMaybeCompact.ref().clear();
    6992           0 : 
    6993             :         auto unlimited = SliceBudget::unlimited();
    6994           0 :         incrementalCollectSlice(unlimited, JS::gcreason::RESET, session);
    6995             : 
    6996             :         isCompacting = wasCompacting;
    6997             :         break;
    6998             :       }
    6999             : 
    7000           0 :       case State::Decommit: {
    7001             :         auto unlimited = SliceBudget::unlimited();
    7002             :         incrementalCollectSlice(unlimited, JS::gcreason::RESET, session);
    7003             :         break;
    7004             :       }
    7005           0 :     }
    7006             : 
    7007             :     stats().reset(reason);
    7008           0 : 
    7009           0 : #ifdef DEBUG
    7010           0 :     assertBackgroundSweepingFinished();
    7011           0 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    7012           0 :         MOZ_ASSERT(!zone->isCollectingFromAnyThread());
    7013             :         MOZ_ASSERT(!zone->needsIncrementalBarrier());
    7014           0 :         MOZ_ASSERT(!zone->isOnList());
    7015           0 :     }
    7016             :     MOZ_ASSERT(zonesToMaybeCompact.ref().isEmpty());
    7017             :     MOZ_ASSERT(incrementalState == State::NotActive);
    7018             : #endif
    7019             : 
    7020             :     return IncrementalResult::Reset;
    7021             : }
    7022             : 
    7023             : namespace {
    7024             : 
    7025             : /*
    7026             :  * Temporarily disable barriers during GC slices.
    7027             :  */
    7028             : class AutoDisableBarriers {
    7029             :   public:
    7030             :     explicit AutoDisableBarriers(JSRuntime* rt);
    7031             :     ~AutoDisableBarriers();
    7032             : 
    7033             :   private:
    7034             :     JSRuntime* runtime;
    7035             :     AutoSetThreadIsPerformingGC performingGC;
    7036             : };
    7037             : 
    7038           0 : } /* anonymous namespace */
    7039           0 : 
    7040             : AutoDisableBarriers::AutoDisableBarriers(JSRuntime* rt)
    7041           0 :   : runtime(rt)
    7042             : {
    7043             :     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
    7044             :         /*
    7045             :          * Clear needsIncrementalBarrier early so we don't do any write
    7046             :          * barriers during GC. We don't need to update the Ion barriers (which
    7047             :          * is expensive) because Ion code doesn't run during GC. If need be,
    7048           0 :          * we'll update the Ion barriers in ~AutoDisableBarriers.
    7049           0 :          */
    7050           0 :         if (zone->isGCMarking()) {
    7051             :             MOZ_ASSERT(zone->needsIncrementalBarrier());
    7052           0 :             zone->setNeedsIncrementalBarrier(false);
    7053             :         }
    7054           0 :         MOZ_ASSERT(!zone->needsIncrementalBarrier());
    7055             :     }
    7056           0 : }
    7057             : 
    7058             : AutoDisableBarriers::~AutoDisableBarriers()
    7059           0 : {
    7060           0 :     /* We can't use GCZonesIter if this is the end of the last slice. */
    7061           0 :     for (ZonesIter zone(runtime, WithAtoms); !zone.done(); zone.next()) {
    7062           0 :         MOZ_ASSERT(!zone->needsIncrementalBarrier());
    7063             :         if (zone->isGCMarking())
    7064           0 :             zone->setNeedsIncrementalBarrier(true);
    7065             :     }
    7066             : }
    7067           0 : 
    7068             : void
    7069             : GCRuntime::pushZealSelectedObjects()
    7070             : {
    7071           0 : #ifdef JS_GC_ZEAL
    7072           0 :     /* Push selected objects onto the mark stack and clear the list. */
    7073             :     for (JSObject** obj = selectedForMarking.ref().begin(); obj != selectedForMarking.ref().end(); obj++)
    7074           0 :         TraceManuallyBarrieredEdge(&marker, obj, "selected obj");
    7075             : #endif
    7076             : }
    7077           0 : 
    7078             : static bool
    7079           0 : IsShutdownGC(JS::gcreason::Reason reason)
    7080             : {
    7081           0 :     return reason == JS::gcreason::SHUTDOWN_CC || reason == JS::gcreason::DESTROY_RUNTIME;
    7082           0 : }
    7083           0 : 
    7084             : static bool
    7085           0 : ShouldCleanUpEverything(JS::gcreason::Reason reason, JSGCInvocationKind gckind)
    7086             : {
    7087             :     // During shutdown, we must clean everything up, for the sake of leak
    7088             :     // detection. When a runtime has no contexts, or we're doing a GC before a
    7089             :     // shutdown CC, those are strong indications that we're shutting down.
    7090           0 :     return IsShutdownGC(reason) || gckind == GC_SHRINK;
    7091             : }
    7092             : 
    7093             : void
    7094             : GCRuntime::incrementalCollectSlice(SliceBudget& budget, JS::gcreason::Reason reason,
    7095             :                                    AutoTraceSession& session)
    7096             : {
    7097             :     /*
    7098             :      * Drop the exclusive access lock if we are in an incremental collection
    7099           0 :      * that does not touch the atoms zone.
    7100             :      */
    7101             :     if (isIncrementalGCInProgress() && !atomsZone->isCollecting())
    7102             :         session.maybeLock.reset();
    7103           0 : 
    7104             :     AutoDisableBarriers disableBarriers(rt);
    7105             : 
    7106             :     bool destroyingRuntime = (reason == JS::gcreason::DESTROY_RUNTIME);
    7107             : 
    7108             :     initialState = incrementalState;
    7109             : 
    7110           0 : #ifdef JS_GC_ZEAL
    7111           0 :     /*
    7112             :      * Do the incremental collection type specified by zeal mode if the
    7113           0 :      * collection was triggered by runDebugGC() and incremental GC has not been
    7114             :      * cancelled by resetIncrementalGC().
    7115           0 :      */
    7116             :     useZeal = reason == JS::gcreason::DEBUG_GC && !budget.isUnlimited();
    7117           0 : #else
    7118             :     bool useZeal = false;
    7119             : #endif
    7120             : 
    7121             : #ifdef DEBUG
    7122             :     {
    7123             :         char budgetBuffer[32];
    7124             :         budget.describe(budgetBuffer, 32);
    7125           0 :         stats().writeLogMessage("Incremental: %d, useZeal: %d, budget: %s",
    7126             :             bool(isIncremental), bool(useZeal), budgetBuffer);
    7127             :     }
    7128             : #endif
    7129             :     MOZ_ASSERT_IF(isIncrementalGCInProgress(), isIncremental);
    7130             : 
    7131             :     isIncremental = !budget.isUnlimited();
    7132             : 
    7133           0 :     if (useZeal && hasIncrementalTwoSliceZealMode()) {
    7134           0 :         /*
    7135           0 :          * Yielding between slices occurs at predetermined points in these modes;
    7136             :          * the budget is not used.
    7137             :          */
    7138           0 :         stats().writeLogMessage(
    7139           0 :             "Using unlimited budget for two-slice zeal mode");
    7140           0 :         budget.makeUnlimited();
    7141             :     }
    7142           0 : 
    7143             :     switch (incrementalState) {
    7144           0 :       case State::NotActive:
    7145             :         initialReason = reason;
    7146             :         cleanUpEverything = ShouldCleanUpEverything(reason, invocationKind);
    7147             :         isCompacting = shouldCompact();
    7148             :         lastMarkSlice = false;
    7149           0 :         rootsRemoved = false;
    7150           0 : 
    7151           0 :         incrementalState = State::MarkRoots;
    7152             : 
    7153             :         MOZ_FALLTHROUGH;
    7154           0 : 
    7155             :       case State::MarkRoots:
    7156           0 :         if (!beginMarkPhase(reason, session)) {
    7157           0 :             incrementalState = State::NotActive;
    7158           0 :             return;
    7159           0 :         }
    7160           0 : 
    7161             :         if (!destroyingRuntime)
    7162           0 :             pushZealSelectedObjects();
    7163             : 
    7164             :         incrementalState = State::Mark;
    7165             : 
    7166             :         if (isIncremental && useZeal && hasZealMode(ZealMode::YieldBeforeMarking))
    7167           0 :             break;
    7168           0 : 
    7169           0 :         MOZ_FALLTHROUGH;
    7170             : 
    7171             :       case State::Mark:
    7172           0 :         AutoGCRooter::traceAllWrappers(rt->mainContextFromOwnThread(), &marker);
    7173           0 : 
    7174             :         /* If we needed delayed marking for gray roots, then collect until done. */
    7175           0 :         if (isIncremental && !hasValidGrayRootsBuffer()) {
    7176             :             budget.makeUnlimited();
    7177           0 :             isIncremental = false;
    7178             :             stats().nonincremental(AbortReason::GrayRootBufferingFailed);
    7179             :         }
    7180             : 
    7181             :         if (drainMarkStack(budget, gcstats::PhaseKind::MARK) == NotFinished)
    7182             :             break;
    7183           0 : 
    7184             :         MOZ_ASSERT(marker.isDrained());
    7185             : 
    7186           0 :         /*
    7187           0 :          * In incremental GCs where we have already performed more than one
    7188           0 :          * slice we yield after marking with the aim of starting the sweep in
    7189           0 :          * the next slice, since the first slice of sweeping can be expensive.
    7190             :          *
    7191             :          * This is modified by the various zeal modes.  We don't yield in
    7192           0 :          * YieldBeforeMarking mode and we always yield in YieldBeforeSweeping
    7193             :          * mode.
    7194             :          *
    7195           0 :          * We will need to mark anything new on the stack when we resume, so
    7196             :          * we stay in Mark state.
    7197             :          */
    7198             :         if (!lastMarkSlice && isIncremental &&
    7199             :             ((initialState == State::Mark &&
    7200             :               !(useZeal && hasZealMode(ZealMode::YieldBeforeMarking))) ||
    7201             :              (useZeal && hasZealMode(ZealMode::YieldBeforeSweeping))))
    7202             :         {
    7203             :             lastMarkSlice = true;
    7204             :             stats().writeLogMessage("Yielding before starting sweeping");
    7205             :             break;
    7206             :         }
    7207             : 
    7208             :         incrementalState = State::Sweep;
    7209           0 : 
    7210           0 :         beginSweepPhase(reason, session);
    7211           0 : 
    7212           0 :         MOZ_FALLTHROUGH;
    7213             : 
    7214           0 :       case State::Sweep:
    7215           0 :         if (performSweepActions(budget) == NotFinished)
    7216           0 :             break;
    7217             : 
    7218             :         endSweepPhase(destroyingRuntime);
    7219           0 : 
    7220             :         incrementalState = State::Finalize;
    7221           0 : 
    7222             :         MOZ_FALLTHROUGH;
    7223             : 
    7224             :       case State::Finalize:
    7225             :         {
    7226           0 :             gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);
    7227             : 
    7228             :             // Yield until background finalization is done.
    7229           0 :             if (!budget.isUnlimited()) {
    7230             :                 // Poll for end of background sweeping
    7231           0 :                 AutoLockGC lock(rt);
    7232             :                 if (isBackgroundSweeping())
    7233             :                     break;
    7234             :             } else {
    7235             :                 waitBackgroundSweepEnd();
    7236             :             }
    7237           0 :         }
    7238             : 
    7239             :         {
    7240           0 :             // Re-sweep the zones list, now that background finalization is
    7241             :             // finished, to actually remove and free dead zones.
    7242           0 :             gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::SWEEP);
    7243           0 :             gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::DESTROY);
    7244             :             AutoSetThreadIsSweeping threadIsSweeping;
    7245             :             FreeOp fop(rt);
    7246             :             sweepZones(&fop, destroyingRuntime);
    7247             :         }
    7248             : 
    7249             :         MOZ_ASSERT(!startedCompacting);
    7250             :         incrementalState = State::Compact;
    7251             : 
    7252             :         // Always yield before compacting since it is not incremental.
    7253           0 :         if (isCompacting && !budget.isUnlimited())
    7254           0 :             break;
    7255           0 : 
    7256           0 :         MOZ_FALLTHROUGH;
    7257           0 : 
    7258             :       case State::Compact:
    7259             :         if (isCompacting) {
    7260           0 :             if (!startedCompacting)
    7261           0 :                 beginCompactPhase();
    7262             : 
    7263             :             if (compactPhase(reason, budget, session) == NotFinished)
    7264           0 :                 break;
    7265             : 
    7266             :             endCompactPhase();
    7267             :         }
    7268             : 
    7269             :         startDecommit();
    7270           0 :         incrementalState = State::Decommit;
    7271           0 : 
    7272           0 :         MOZ_FALLTHROUGH;
    7273             : 
    7274           0 :       case State::Decommit:
    7275             :         {
    7276             :             gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);
    7277             : 
    7278             :             // Yield until background decommit is done.
    7279             :             if (!budget.isUnlimited() && decommitTask.isRunning())
    7280           0 :                 break;
    7281           0 : 
    7282             :             decommitTask.join();
    7283             :         }
    7284             : 
    7285             :         finishCollection();
    7286             :         incrementalState = State::NotActive;
    7287           0 :         break;
    7288             :     }
    7289             : 
    7290           0 :     MOZ_ASSERT(safeToYield);
    7291             : }
    7292             : 
    7293           0 : gc::AbortReason
    7294             : gc::IsIncrementalGCUnsafe(JSRuntime* rt)
    7295             : {
    7296           0 :     MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC);
    7297           0 : 
    7298           0 :     if (!rt->gc.isIncrementalGCAllowed())
    7299             :         return gc::AbortReason::IncrementalDisabled;
    7300             : 
    7301           0 :     return gc::AbortReason::None;
    7302             : }
    7303             : 
    7304             : static inline void
    7305           0 : CheckZoneIsScheduled(Zone* zone, JS::gcreason::Reason reason, const char* trigger)
    7306             : {
    7307           0 : #ifdef DEBUG
    7308             :     if (zone->isGCScheduled())
    7309           0 :         return;
    7310             : 
    7311             :     fprintf(stderr,
    7312           0 :             "CheckZoneIsScheduled: Zone %p not scheduled as expected in %s GC for %s trigger\n",
    7313             :             zone,
    7314             :             JS::gcreason::ExplainReason(reason),
    7315             :             trigger);
    7316           0 :     JSRuntime* rt = zone->runtimeFromMainThread();
    7317             :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    7318             :         fprintf(stderr,
    7319           0 :                 "  Zone %p:%s%s\n",
    7320           0 :                 zone.get(),
    7321             :                 zone->isAtomsZone() ? " atoms" : "",
    7322           0 :                 zone->isGCScheduled() ? " scheduled" : "");
    7323             :     }
    7324             :     fflush(stderr);
    7325             :     MOZ_CRASH("Zone not scheduled");
    7326           0 : #endif
    7327           0 : }
    7328           0 : 
    7329           0 : GCRuntime::IncrementalResult
    7330             : GCRuntime::budgetIncrementalGC(bool nonincrementalByAPI, JS::gcreason::Reason reason,
    7331             :                                SliceBudget& budget, AutoTraceSession& session)
    7332           0 : {
    7333           0 :     if (nonincrementalByAPI) {
    7334             :         stats().nonincremental(gc::AbortReason::NonIncrementalRequested);
    7335           0 :         budget.makeUnlimited();
    7336           0 : 
    7337             :         // Reset any in progress incremental GC if this was triggered via the
    7338             :         // API. This isn't required for correctness, but sometimes during tests
    7339             :         // the caller expects this GC to collect certain objects, and we need
    7340             :         // to make sure to collect everything possible.
    7341           0 :         if (reason != JS::gcreason::ALLOC_TRIGGER)
    7342             :             return resetIncrementalGC(gc::AbortReason::NonIncrementalRequested, session);
    7343             : 
    7344           0 :         return IncrementalResult::Ok;
    7345           0 :     }
    7346           0 : 
    7347             :     if (reason == JS::gcreason::ABORT_GC) {
    7348             :         budget.makeUnlimited();
    7349             :         stats().nonincremental(gc::AbortReason::AbortRequested);
    7350             :         return resetIncrementalGC(gc::AbortReason::AbortRequested, session);
    7351             :     }
    7352           0 : 
    7353           0 :     AbortReason unsafeReason = IsIncrementalGCUnsafe(rt);
    7354             :     if (unsafeReason == AbortReason::None) {
    7355             :         if (reason == JS::gcreason::COMPARTMENT_REVIVED)
    7356             :             unsafeReason = gc::AbortReason::CompartmentRevived;
    7357             :         else if (mode != JSGC_MODE_INCREMENTAL)
    7358           0 :             unsafeReason = gc::AbortReason::ModeChange;
    7359           0 :     }
    7360           0 : 
    7361           0 :     if (unsafeReason != AbortReason::None) {
    7362             :         budget.makeUnlimited();
    7363             :         stats().nonincremental(unsafeReason);
    7364           0 :         return resetIncrementalGC(unsafeReason, session);
    7365           0 :     }
    7366           0 : 
    7367             :     if (mallocCounter.shouldTriggerGC(tunables) == NonIncrementalTrigger) {
    7368           0 :         budget.makeUnlimited();
    7369           0 :         stats().nonincremental(AbortReason::MallocBytesTrigger);
    7370             :     }
    7371             : 
    7372           0 :     bool reset = false;
    7373           0 :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    7374           0 :         if (!zone->canCollect())
    7375           0 :             continue;
    7376             : 
    7377             :         if (zone->usage.gcBytes() >= zone->threshold.gcTriggerBytes()) {
    7378           0 :             CheckZoneIsScheduled(zone, reason, "GC bytes");
    7379           0 :             budget.makeUnlimited();
    7380           0 :             stats().nonincremental(AbortReason::GCBytesTrigger);
    7381             :         }
    7382             : 
    7383           0 :         if (zone->shouldTriggerGCForTooMuchMalloc() == NonIncrementalTrigger) {
    7384           0 :             CheckZoneIsScheduled(zone, reason, "malloc bytes");
    7385           0 :             budget.makeUnlimited();
    7386             :             stats().nonincremental(AbortReason::MallocBytesTrigger);
    7387             :         }
    7388           0 : 
    7389           0 :         if (isIncrementalGCInProgress() && zone->isGCScheduled() != zone->wasGCStarted())
    7390           0 :             reset = true;
    7391           0 :     }
    7392             : 
    7393             :     if (reset)
    7394           0 :         return resetIncrementalGC(AbortReason::ZoneChange, session);
    7395           0 : 
    7396           0 :     return IncrementalResult::Ok;
    7397           0 : }
    7398             : 
    7399             : namespace {
    7400           0 : 
    7401           0 : class AutoScheduleZonesForGC
    7402             : {
    7403             :     JSRuntime* rt_;
    7404           0 : 
    7405           0 :   public:
    7406             :     explicit AutoScheduleZonesForGC(JSRuntime* rt) : rt_(rt) {
    7407             :         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    7408             :             if (!zone->canCollect())
    7409             :                 continue;
    7410             : 
    7411             :             if (rt->gc.gcMode() == JSGC_MODE_GLOBAL)
    7412             :                 zone->scheduleGC();
    7413             : 
    7414             :             // To avoid resets, continue to collect any zones that were being
    7415             :             // collected in a previous slice.
    7416             :             if (rt->gc.isIncrementalGCInProgress() && zone->wasGCStarted())
    7417           0 :                 zone->scheduleGC();
    7418           0 : 
    7419           0 :             // This is a heuristic to reduce the total number of collections.
    7420             :             bool inHighFrequencyMode = rt->gc.schedulingState.inHighFrequencyGCMode();
    7421             :             if (zone->usage.gcBytes() >= zone->threshold.eagerAllocTrigger(inHighFrequencyMode))
    7422           0 :                 zone->scheduleGC();
    7423           0 : 
    7424             :             // This ensures we collect zones that have reached the malloc limit.
    7425             :             if (zone->shouldTriggerGCForTooMuchMalloc())
    7426             :                 zone->scheduleGC();
    7427           0 :         }
    7428           0 :     }
    7429             : 
    7430             :     ~AutoScheduleZonesForGC() {
    7431           0 :         for (ZonesIter zone(rt_, WithAtoms); !zone.done(); zone.next())
    7432           0 :             zone->unscheduleGC();
    7433           0 :     }
    7434             : };
    7435             : 
    7436           0 : } /* anonymous namespace */
    7437           0 : 
    7438             : class js::gc::AutoCallGCCallbacks {
    7439           0 :     GCRuntime& gc_;
    7440             : 
    7441           0 :   public:
    7442           0 :     explicit AutoCallGCCallbacks(GCRuntime& gc) : gc_(gc) {
    7443           0 :         gc_.maybeCallGCCallback(JSGC_BEGIN);
    7444           0 :     }
    7445             :     ~AutoCallGCCallbacks() {
    7446             :         gc_.maybeCallGCCallback(JSGC_END);
    7447             :     }
    7448             : };
    7449             : 
    7450             : void
    7451             : GCRuntime::maybeCallGCCallback(JSGCStatus status)
    7452             : {
    7453           0 :     if (!gcCallback.op)
    7454           0 :         return;
    7455             : 
    7456           0 :     if (isIncrementalGCInProgress())
    7457           0 :         return;
    7458             : 
    7459             :     if (gcCallbackDepth == 0) {
    7460             :         // Save scheduled zone information in case the callback changes it.
    7461             :         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
    7462           0 :             zone->gcScheduledSaved_ = zone->gcScheduled_;
    7463             :     }
    7464           0 : 
    7465             :     gcCallbackDepth++;
    7466             : 
    7467           0 :     callGCCallback(status);
    7468             : 
    7469             :     MOZ_ASSERT(gcCallbackDepth != 0);
    7470           0 :     gcCallbackDepth--;
    7471             : 
    7472           0 :     if (gcCallbackDepth == 0) {
    7473           0 :         // Restore scheduled zone information again.
    7474             :         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
    7475             :             zone->gcScheduled_ = zone->gcScheduledSaved_;
    7476           0 :     }
    7477             : }
    7478           0 : 
    7479             : /*
    7480           0 :  * Run one GC "cycle" (either a slice of incremental GC or an entire
    7481           0 :  * non-incremental GC). We disable inlining to ensure that the bottom of the
    7482             :  * stack with possible GC roots recorded in MarkRuntime excludes any pointers we
    7483           0 :  * use during the marking implementation.
    7484             :  *
    7485           0 :  * Returns true if we "reset" an existing incremental GC, which would force us
    7486           0 :  * to run another cycle.
    7487             :  */
    7488             : MOZ_NEVER_INLINE GCRuntime::IncrementalResult
    7489             : GCRuntime::gcCycle(bool nonincrementalByAPI, SliceBudget& budget, JS::gcreason::Reason reason)
    7490             : {
    7491             :     // Note that GC callbacks are allowed to re-enter GC.
    7492             :     AutoCallGCCallbacks callCallbacks(*this);
    7493             : 
    7494             :     gcstats::AutoGCSlice agc(stats(), scanZonesBeforeGC(), invocationKind, budget, reason);
    7495             : 
    7496             :     minorGC(reason, gcstats::PhaseKind::EVICT_NURSERY_FOR_MAJOR_GC);
    7497             : 
    7498             :     AutoTraceSession session(rt, JS::HeapState::MajorCollecting);
    7499             : 
    7500           0 :     majorGCTriggerReason = JS::gcreason::NO_REASON;
    7501             : 
    7502             :     number++;
    7503           0 :     if (!isIncrementalGCInProgress())
    7504             :         incMajorGcNumber();
    7505           0 : 
    7506             :     // It's ok if threads other than the main thread have suppressGC set, as
    7507           0 :     // they are operating on zones which will not be collected from here.
    7508             :     MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC);
    7509           0 : 
    7510             :     // Assert if this is a GC unsafe region.
    7511           0 :     rt->mainContextFromOwnThread()->verifyIsSafeToGC();
    7512             : 
    7513           0 :     {
    7514           0 :         gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);
    7515           0 : 
    7516             :         // Background finalization and decommit are finished by definition
    7517             :         // before we can start a new GC session.
    7518             :         if (!isIncrementalGCInProgress()) {
    7519           0 :             assertBackgroundSweepingFinished();
    7520             :             MOZ_ASSERT(!decommitTask.isRunning());
    7521             :         }
    7522           0 : 
    7523             :         // We must also wait for background allocation to finish so we can
    7524             :         // avoid taking the GC lock when manipulating the chunks during the GC.
    7525           0 :         // The background alloc task can run between slices, so we must wait
    7526             :         // for it at the start of every slice.
    7527             :         allocTask.cancelAndWait();
    7528             :     }
    7529           0 : 
    7530           0 :     // We don't allow off-thread parsing to start while we're doing an
    7531           0 :     // incremental GC.
    7532             :     MOZ_ASSERT_IF(rt->activeGCInAtomsZone(), !rt->hasHelperThreadZones());
    7533             : 
    7534             :     auto result = budgetIncrementalGC(nonincrementalByAPI, reason, budget, session);
    7535             : 
    7536             :     // If an ongoing incremental GC was reset, we may need to restart.
    7537             :     if (result == IncrementalResult::Reset) {
    7538           0 :         MOZ_ASSERT(!isIncrementalGCInProgress());
    7539             :         return result;
    7540             :     }
    7541             : 
    7542             :     gcTracer.traceMajorGCStart();
    7543           0 : 
    7544             :     incrementalCollectSlice(budget, reason, session);
    7545           0 : 
    7546             :     chunkAllocationSinceLastGC = false;
    7547             : 
    7548           0 : #ifdef JS_GC_ZEAL
    7549           0 :     /* Keeping these around after a GC is dangerous. */
    7550             :     clearSelectedForMarking();
    7551             : #endif
    7552             : 
    7553           0 :     gcTracer.traceMajorGCEnd();
    7554             : 
    7555           0 :     return IncrementalResult::Ok;
    7556             : }
    7557           0 : 
    7558             : #ifdef JS_GC_ZEAL
    7559             : static bool
    7560             : IsDeterministicGCReason(JS::gcreason::Reason reason)
    7561           0 : {
    7562             :     switch (reason) {
    7563             :       case JS::gcreason::API:
    7564           0 :       case JS::gcreason::DESTROY_RUNTIME:
    7565             :       case JS::gcreason::LAST_DITCH:
    7566           0 :       case JS::gcreason::TOO_MUCH_MALLOC:
    7567             :       case JS::gcreason::TOO_MUCH_WASM_MEMORY:
    7568             :       case JS::gcreason::ALLOC_TRIGGER:
    7569             :       case JS::gcreason::DEBUG_GC:
    7570             :       case JS::gcreason::CC_FORCED:
    7571             :       case JS::gcreason::SHUTDOWN_CC:
    7572             :       case JS::gcreason::ABORT_GC:
    7573             :         return true;
    7574             : 
    7575             :       default:
    7576             :         return false;
    7577             :     }
    7578             : }
    7579             : #endif
    7580             : 
    7581             : gcstats::ZoneGCStats
    7582             : GCRuntime::scanZonesBeforeGC()
    7583             : {
    7584             :     gcstats::ZoneGCStats zoneStats;
    7585             :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    7586             :         zoneStats.zoneCount++;
    7587             :         zoneStats.compartmentCount += zone->compartments().length();
    7588             :         if (zone->canCollect())
    7589             :             zoneStats.collectableZoneCount++;
    7590             :         if (zone->isGCScheduled()) {
    7591             :             zoneStats.collectedZoneCount++;
    7592             :             zoneStats.collectedCompartmentCount += zone->compartments().length();
    7593           0 :         }
    7594             :     }
    7595           0 : 
    7596           0 :     return zoneStats;
    7597           0 : }
    7598           0 : 
    7599           0 : // The GC can only clean up scheduledForDestruction realms that were marked live
    7600           0 : // by a barrier (e.g. by RemapWrappers from a navigation event). It is also
    7601           0 : // common to have realms held live because they are part of a cycle in gecko,
    7602           0 : // e.g. involving the HTMLDocument wrapper. In this case, we need to run the
    7603           0 : // CycleCollector in order to remove these edges before the realm can be freed.
    7604             : void
    7605             : GCRuntime::maybeDoCycleCollection()
    7606             : {
    7607           0 :     const static double ExcessiveGrayRealms = 0.8;
    7608             :     const static size_t LimitGrayRealms = 200;
    7609             : 
    7610             :     size_t realmsTotal = 0;
    7611             :     size_t realmsGray = 0;
    7612             :     for (RealmsIter realm(rt); !realm.done(); realm.next()) {
    7613             :         ++realmsTotal;
    7614             :         GlobalObject* global = realm->unsafeUnbarrieredMaybeGlobal();
    7615             :         if (global && global->isMarkedGray())
    7616           0 :             ++realmsGray;
    7617             :     }
    7618             :     double grayFraction = double(realmsGray) / double(realmsTotal);
    7619             :     if (grayFraction > ExcessiveGrayRealms || realmsGray > LimitGrayRealms)
    7620             :         callDoCycleCollectionCallback(rt->mainContextFromOwnThread());
    7621           0 : }
    7622           0 : 
    7623           0 : void
    7624           0 : GCRuntime::checkCanCallAPI()
    7625           0 : {
    7626           0 :     MOZ_RELEASE_ASSERT(CurrentThreadCanAccessRuntime(rt));
    7627           0 : 
    7628             :     /* If we attempt to invoke the GC while we are running in the GC, assert. */
    7629           0 :     MOZ_RELEASE_ASSERT(!JS::RuntimeHeapIsBusy());
    7630           0 : 
    7631           0 :     MOZ_ASSERT(rt->mainContextFromOwnThread()->isAllocAllowed());
    7632           0 : }
    7633             : 
    7634             : bool
    7635           0 : GCRuntime::checkIfGCAllowedInCurrentState(JS::gcreason::Reason reason)
    7636             : {
    7637           0 :     if (rt->mainContextFromOwnThread()->suppressGC)
    7638             :         return false;
    7639             : 
    7640           0 :     // Only allow shutdown GCs when we're destroying the runtime. This keeps
    7641             :     // the GC callback from triggering a nested GC and resetting global state.
    7642           0 :     if (rt->isBeingDestroyed() && !IsShutdownGC(reason))
    7643           0 :         return false;
    7644             : 
    7645             : #ifdef JS_GC_ZEAL
    7646           0 :     if (deterministicOnly && !IsDeterministicGCReason(reason))
    7647             :         return false;
    7648           0 : #endif
    7649             : 
    7650             :     return true;
    7651             : }
    7652             : 
    7653           0 : bool
    7654             : GCRuntime::shouldRepeatForDeadZone(JS::gcreason::Reason reason)
    7655             : {
    7656             :     MOZ_ASSERT_IF(reason == JS::gcreason::COMPARTMENT_REVIVED, !isIncremental);
    7657           0 :     MOZ_ASSERT(!isIncrementalGCInProgress());
    7658             : 
    7659             :     if (!isIncremental)
    7660             :         return false;
    7661           0 : 
    7662             :     for (CompartmentsIter c(rt); !c.done(); c.next()) {
    7663             :         if (c->gcState.scheduledForDestruction)
    7664             :             return true;
    7665           0 :     }
    7666             : 
    7667           0 :     return false;
    7668           0 : }
    7669             : 
    7670           0 : void
    7671             : GCRuntime::collect(bool nonincrementalByAPI, SliceBudget budget, JS::gcreason::Reason reason)
    7672             : {
    7673           0 :     // Checks run for each request, even if we do not actually GC.
    7674           0 :     checkCanCallAPI();
    7675           0 : 
    7676             :     // Check if we are allowed to GC at this time before proceeding.
    7677             :     if (!checkIfGCAllowedInCurrentState(reason))
    7678           0 :         return;
    7679             : 
    7680             :     stats().writeLogMessage("GC starting in state %s",
    7681             :         StateName(incrementalState));
    7682           0 : 
    7683             :     AutoTraceLog logGC(TraceLoggerForCurrentThread(), TraceLogger_GC);
    7684             :     AutoStopVerifyingBarriers av(rt, IsShutdownGC(reason));
    7685           0 :     AutoEnqueuePendingParseTasksAfterGC aept(*this);
    7686             :     AutoScheduleZonesForGC asz(rt);
    7687             : 
    7688           0 :     bool repeat;
    7689           0 :     do {
    7690             :         bool wasReset = gcCycle(nonincrementalByAPI, budget, reason) == IncrementalResult::Reset;
    7691           0 : 
    7692           0 :         if (reason == JS::gcreason::ABORT_GC) {
    7693             :             MOZ_ASSERT(!isIncrementalGCInProgress());
    7694           0 :             stats().writeLogMessage("GC aborted by request");
    7695           0 :             break;
    7696           0 :         }
    7697           0 : 
    7698             :         /*
    7699             :          * Sometimes when we finish a GC we need to immediately start a new one.
    7700           0 :          * This happens in the following cases:
    7701           0 :          *  - when we reset the current GC
    7702             :          *  - when finalizers drop roots during shutdown
    7703           0 :          *  - when zones that we thought were dead at the start of GC are
    7704           0 :          *    not collected (see the large comment in beginMarkPhase)
    7705           0 :          */
    7706           0 :         repeat = false;
    7707             :         if (!isIncrementalGCInProgress()) {
    7708             :             if (wasReset) {
    7709             :                 repeat = true;
    7710             :             } else if (rootsRemoved && IsShutdownGC(reason)) {
    7711             :                 /* Need to re-schedule all zones for GC. */
    7712             :                 JS::PrepareForFullGC(rt->mainContextFromOwnThread());
    7713             :                 repeat = true;
    7714             :                 reason = JS::gcreason::ROOTS_REMOVED;
    7715             :             } else if (shouldRepeatForDeadZone(reason)) {
    7716             :                 repeat = true;
    7717           0 :                 reason = JS::gcreason::COMPARTMENT_REVIVED;
    7718           0 :             }
    7719           0 :         }
    7720             :     } while (repeat);
    7721           0 : 
    7722             :     if (reason == JS::gcreason::COMPARTMENT_REVIVED)
    7723           0 :         maybeDoCycleCollection();
    7724           0 : 
    7725           0 : #ifdef JS_GC_ZEAL
    7726           0 :     if (rt->hasZealMode(ZealMode::CheckHeapAfterGC)) {
    7727           0 :         gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::TRACE_HEAP);
    7728           0 :         CheckHeapAfterGC(rt);
    7729             :     }
    7730             :     if (rt->hasZealMode(ZealMode::CheckGrayMarking) && !isIncrementalGCInProgress()) {
    7731             :         MOZ_RELEASE_ASSERT(CheckGrayMarkingState(rt));
    7732             :     }
    7733           0 : #endif
    7734           0 :     stats().writeLogMessage("GC ending");
    7735             : }
    7736             : 
    7737           0 : js::AutoEnqueuePendingParseTasksAfterGC::~AutoEnqueuePendingParseTasksAfterGC()
    7738           0 : {
    7739           0 :     if (!OffThreadParsingMustWaitForGC(gc_.rt))
    7740             :         EnqueuePendingParseTasksAfterGC(gc_.rt);
    7741           0 : }
    7742           0 : 
    7743             : SliceBudget
    7744             : GCRuntime::defaultBudget(JS::gcreason::Reason reason, int64_t millis)
    7745           0 : {
    7746             :     if (millis == 0) {
    7747             :         if (reason == JS::gcreason::ALLOC_TRIGGER)
    7748           0 :             millis = defaultSliceBudget();
    7749             :         else if (schedulingState.inHighFrequencyGCMode() && tunables.isDynamicMarkSliceEnabled())
    7750           0 :             millis = defaultSliceBudget() * IGC_MARK_SLICE_MULTIPLIER;
    7751           0 :         else
    7752           0 :             millis = defaultSliceBudget();
    7753             :     }
    7754             : 
    7755           0 :     return SliceBudget(TimeBudget(millis));
    7756             : }
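
The budget selection in `defaultBudget` above can be sketched as a standalone function. This is a self-contained illustration of the decision order only; the constants (`kDefaultSliceMs`, `kMarkSliceMultiplier`) and names are hypothetical stand-ins, not the engine's actual tunables.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-ins for the engine's tunables.
constexpr int64_t kDefaultSliceMs = 10;
constexpr int64_t kMarkSliceMultiplier = 2;  // cf. IGC_MARK_SLICE_MULTIPLIER

enum class Reason { AllocTrigger, Api };

// Mirrors the decision order in GCRuntime::defaultBudget: an explicit
// non-zero request wins; otherwise ALLOC_TRIGGER gets the plain default,
// and only high-frequency mode with dynamic mark slices gets the longer
// mark-slice budget.
int64_t sliceBudgetMs(Reason reason, int64_t requestedMs,
                      bool highFrequency, bool dynamicMarkSlice) {
    if (requestedMs != 0)
        return requestedMs;
    if (reason == Reason::AllocTrigger)
        return kDefaultSliceMs;
    if (highFrequency && dynamicMarkSlice)
        return kDefaultSliceMs * kMarkSliceMultiplier;
    return kDefaultSliceMs;
}
```

Note the quirk visible in the original: the `ALLOC_TRIGGER` branch and the final `else` branch both yield the plain default budget; only the high-frequency path differs.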
    7757           0 : 
    7758           0 : void
    7759           0 : GCRuntime::gc(JSGCInvocationKind gckind, JS::gcreason::Reason reason)
    7760           0 : {
    7761           0 :     invocationKind = gckind;
    7762             :     collect(true, SliceBudget::unlimited(), reason);
    7763           0 : }
    7764             : 
    7765             : void
    7766           0 : GCRuntime::startGC(JSGCInvocationKind gckind, JS::gcreason::Reason reason, int64_t millis)
    7767             : {
    7768             :     MOZ_ASSERT(!isIncrementalGCInProgress());
    7769             :     if (!JS::IsIncrementalGCEnabled(rt->mainContextFromOwnThread())) {
    7770           0 :         gc(gckind, reason);
    7771             :         return;
    7772           0 :     }
    7773           0 :     invocationKind = gckind;
    7774           0 :     collect(false, defaultBudget(reason, millis), reason);
    7775             : }
    7776             : 
    7777           0 : void
    7778             : GCRuntime::gcSlice(JS::gcreason::Reason reason, int64_t millis)
    7779           0 : {
    7780           0 :     MOZ_ASSERT(isIncrementalGCInProgress());
    7781           0 :     collect(false, defaultBudget(reason, millis), reason);
    7782           0 : }
    7783             : 
    7784           0 : void
    7785           0 : GCRuntime::finishGC(JS::gcreason::Reason reason)
    7786             : {
    7787             :     MOZ_ASSERT(isIncrementalGCInProgress());
    7788             : 
    7789           0 :     // If this GC was not triggered by running out of memory, skip the
    7790             :     // compacting phase when finishing an ongoing incremental GC
    7791           0 :     // non-incrementally, to avoid janking the browser.
    7792           0 :     if (!IsOOMReason(initialReason)) {
    7793           0 :         if (incrementalState == State::Compact) {
    7794             :             abortGC();
    7795             :             return;
    7796           0 :         }
    7797             : 
    7798           0 :         isCompacting = false;
    7799             :     }
    7800             : 
    7801             :     collect(false, SliceBudget::unlimited(), reason);
    7802             : }
    7803           0 : 
    7804           0 : void
    7805           0 : GCRuntime::abortGC()
    7806           0 : {
    7807             :     MOZ_ASSERT(isIncrementalGCInProgress());
    7808             :     checkCanCallAPI();
    7809           0 :     MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC);
    7810             : 
    7811             :     collect(false, SliceBudget::unlimited(), JS::gcreason::ABORT_GC);
    7812           0 : }
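
Taken together, `startGC`, `gcSlice`, and `finishGC` form the embedder-facing incremental driver: start a collection, then run slices until it completes. A minimal self-contained sketch of that drive loop, with a toy collector standing in for `GCRuntime` (all names here are hypothetical, not the real SpiderMonkey API):

```cpp
#include <cassert>

// Toy collector that finishes after a fixed number of slices. Stands in
// for GCRuntime purely to illustrate the control flow.
struct ToyGC {
    int remainingSlices = 0;
    bool inProgress() const { return remainingSlices > 0; }
    void start(int slices) { remainingSlices = slices; }          // cf. startGC
    void slice() { if (remainingSlices > 0) --remainingSlices; }  // cf. gcSlice
    void finish() { remainingSlices = 0; }  // cf. finishGC with an unlimited budget
};

// Drive pattern: start a collection if none is in progress, then run
// slices until the collector reports completion.
int driveIncrementalGC(ToyGC& gc, int slices) {
    int sliceCount = 0;
    if (!gc.inProgress())
        gc.start(slices);
    while (gc.inProgress()) {
        gc.slice();
        ++sliceCount;
    }
    return sliceCount;
}
```

In the real code, the asserts enforce the same protocol boundaries: `startGC` requires no GC in progress, while `gcSlice`, `finishGC`, and `abortGC` require one.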
    7813             : 
    7814             : static bool
    7815             : ZonesSelected(JSRuntime* rt)
    7816           0 : {
    7817             :     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
    7818           0 :         if (zone->isGCScheduled())
    7819           0 :             return true;
    7820           0 :     }
    7821             :     return false;
    7822           0 : }
    7823           0 : 
    7824             : void
    7825             : GCRuntime::startDebugGC(JSGCInvocationKind gckind, SliceBudget& budget)
    7826           0 : {
    7827             :     MOZ_ASSERT(!isIncrementalGCInProgress());
    7828           0 :     if (!ZonesSelected(rt))
    7829           0 :         JS::PrepareForFullGC(rt->mainContextFromOwnThread());
    7830           0 :     invocationKind = gckind;
    7831             :     collect(false, budget, JS::gcreason::DEBUG_GC);
    7832           0 : }
    7833             : 
    7834             : void
    7835             : GCRuntime::debugGCSlice(SliceBudget& budget)
    7836           0 : {
    7837             :     MOZ_ASSERT(isIncrementalGCInProgress());
    7838           0 :     if (!ZonesSelected(rt))
    7839           0 :         JS::PrepareForIncrementalGC(rt->mainContextFromOwnThread());
    7840           0 :     collect(false, budget, JS::gcreason::DEBUG_GC);
    7841           0 : }
    7842           0 : 
    7843           0 : /* Schedule a full GC unless a zone will already be collected. */
    7844             : void
    7845             : js::PrepareForDebugGC(JSRuntime* rt)
    7846           0 : {
    7847             :     if (!ZonesSelected(rt))
    7848           0 :         JS::PrepareForFullGC(rt->mainContextFromOwnThread());
    7849           0 : }
    7850           0 : 
    7851           0 : void
    7852           0 : GCRuntime::onOutOfMallocMemory()
    7853             : {
    7854             :     // Stop allocating new chunks.
    7855             :     allocTask.cancelAndWait();
    7856           0 : 
    7857             :     // Make sure we release anything queued for release.
    7858           0 :     decommitTask.join();
    7859           0 : 
    7860           0 :     // Wait for background free of nursery huge slots to finish.
    7861             :     nursery().waitBackgroundFreeEnd();
    7862             : 
    7863           0 :     AutoLockGC lock(rt);
    7864             :     onOutOfMallocMemory(lock);
    7865             : }
    7866           0 : 
    7867             : void
    7868             : GCRuntime::onOutOfMallocMemory(const AutoLockGC& lock)
    7869           0 : {
    7870             :     // Release any relocated arenas we may be holding on to, without releasing
    7871             :     // the GC lock.
    7872           0 :     releaseHeldRelocatedArenasWithoutUnlocking(lock);
    7873             : 
    7874           0 :     // Throw away any excess chunks we have lying around.
    7875           0 :     freeEmptyChunks(lock);
    7876           0 : 
    7877             :     // Immediately decommit as many arenas as possible in the hopes that this
    7878             :     // might let the OS scrape together enough pages to satisfy the failing
    7879           0 :     // malloc request.
    7880             :     decommitAllWithoutUnlocking(lock);
    7881             : }
    7882             : 
    7883           0 : void
    7884             : GCRuntime::minorGC(JS::gcreason::Reason reason, gcstats::PhaseKind phase)
    7885             : {
    7886           0 :     MOZ_ASSERT(!JS::RuntimeHeapIsBusy());
    7887             : 
    7888             :     MOZ_ASSERT_IF(reason == JS::gcreason::EVICT_NURSERY,
    7889             :                   !rt->mainContextFromOwnThread()->suppressGC);
    7890             :     if (rt->mainContextFromOwnThread()->suppressGC)
    7891           0 :         return;
    7892           0 : 
    7893             :     gcstats::AutoPhase ap(rt->gc.stats(), phase);
    7894             : 
    7895           5 :     nursery().clearMinorGCRequest();
    7896             :     TraceLoggerThread* logger = TraceLoggerForCurrentThread();
    7897           0 :     AutoTraceLog logMinorGC(logger, TraceLogger_MinorGC);
    7898             :     nursery().collect(reason);
    7899           0 :     MOZ_ASSERT(nursery().isEmpty());
    7900             : 
    7901           0 :     blocksToFreeAfterMinorGC.ref().freeAll();
    7902           0 : 
    7903             : #ifdef JS_GC_ZEAL
    7904          20 :     if (rt->hasZealMode(ZealMode::CheckHeapAfterGC))
    7905             :         CheckHeapAfterGC(rt);
    7906           1 : #endif
    7907           5 : 
    7908          15 :     {
    7909           0 :         AutoLockGC lock(rt);
    7910           0 :         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
    7911             :             maybeAllocTriggerZoneGC(zone, lock);
    7912           0 :     }
    7913             : }
    7914             : 
    7915          10 : JS::AutoDisableGenerationalGC::AutoDisableGenerationalGC(JSContext* cx)
    7916           0 :   : cx(cx)
    7917             : {
    7918             :     if (!cx->generationalDisabled) {
    7919             :         cx->runtime()->gc.evictNursery(JS::gcreason::API);
    7920           0 :         cx->nursery().disable();
    7921          55 :     }
    7922           0 :     ++cx->generationalDisabled;
    7923             : }
    7924             : 
    7925             : JS::AutoDisableGenerationalGC::~AutoDisableGenerationalGC()
    7926           1 : {
    7927           0 :     if (--cx->generationalDisabled == 0)
    7928             :         cx->nursery().enable();
    7929           2 : }
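
`AutoDisableGenerationalGC` above is a counted RAII guard: the nursery is evicted and disabled only on the outermost construction, and re-enabled only when the count returns to zero, so guards nest safely. A self-contained sketch of the pattern (the `Nursery` type and names here are illustrative, not the engine's):

```cpp
#include <cassert>

// Stand-in for the nursery state; hypothetical, not SpiderMonkey's type.
struct Nursery {
    bool enabled = true;
    int disableCount = 0;  // cf. cx->generationalDisabled
};

// RAII guard mirroring AutoDisableGenerationalGC: disable on the first
// (outermost) construction, re-enable only when the last guard dies.
class DisableGuard {
    Nursery& n_;
public:
    explicit DisableGuard(Nursery& n) : n_(n) {
        if (n_.disableCount == 0)
            n_.enabled = false;  // evict + disable happens exactly once
        ++n_.disableCount;
    }
    ~DisableGuard() {
        if (--n_.disableCount == 0)
            n_.enabled = true;
    }
};
```

Nesting works because only the transitions at count 0→1 and 1→0 touch the nursery; inner guards merely bump the counter.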
    7930           0 : 
    7931           1 : JS_PUBLIC_API(bool)
    7932             : JS::IsGenerationalGCEnabled(JSRuntime* rt)
    7933           0 : {
    7934           0 :     return !rt->mainContextFromOwnThread()->generationalDisabled;
    7935             : }
    7936           2 : 
    7937             : bool
    7938           2 : GCRuntime::gcIfRequested()
    7939           0 : {
    7940           0 :     // This method returns whether a major GC was performed.
    7941             : 
    7942             :     if (nursery().minorGCRequested())
    7943           0 :         minorGC(nursery().minorGCTriggerReason());
    7944             : 
    7945           0 :     if (majorGCRequested()) {
    7946             :         if (majorGCTriggerReason == JS::gcreason::DELAYED_ATOMS_GC &&
    7947             :             !rt->mainContextFromOwnThread()->canCollectAtoms())
    7948             :         {
    7949        2432 :             // A GC was requested to collect the atoms zone, but it's no longer
    7950             :             // possible. Skip this collection.
    7951             :             majorGCTriggerReason = JS::gcreason::NO_REASON;
    7952             :             return false;
    7953           0 :         }
    7954           0 : 
    7955             :         if (!isIncrementalGCInProgress())
    7956        2432 :             startGC(GC_NORMAL, majorGCTriggerReason);
    7957           0 :         else
    7958           0 :             gcSlice(majorGCTriggerReason);
    7959             :         return true;
    7960             :     }
    7961             : 
    7962           0 :     return false;
    7963           0 : }
    7964             : 
    7965             : void
    7966           0 : js::gc::FinishGC(JSContext* cx)
    7967           0 : {
    7968             :     if (JS::IsIncrementalGCInProgress(cx)) {
    7969           0 :         JS::PrepareForIncrementalGC(cx);
    7970             :         JS::FinishIncrementalGC(cx, JS::gcreason::API);
    7971             :     }
    7972             : 
    7973             :     cx->nursery().waitBackgroundFreeEnd();
    7974             : }
    7975             : 
    7976             : AutoPrepareForTracing::AutoPrepareForTracing(JSContext* cx)
    7977           0 : {
    7978             :     js::gc::FinishGC(cx);
    7979           0 :     session_.emplace(cx->runtime());
    7980           0 : }
    7981             : 
    7982             : Realm*
    7983             : js::NewRealm(JSContext* cx, JSPrincipals* principals, const JS::RealmOptions& options)
    7984           0 : {
    7985           0 :     JSRuntime* rt = cx->runtime();
    7986             :     JS_AbortIfWrongThread(cx);
    7987           0 : 
    7988             :     UniquePtr<Zone> zoneHolder;
    7989           0 :     UniquePtr<Compartment> compHolder;
    7990           0 : 
    7991           0 :     Compartment* comp = nullptr;
    7992             :     Zone* zone = nullptr;
    7993             :     JS::CompartmentSpecifier compSpec = options.creationOptions().compartmentSpecifier();
    7994          47 :     switch (compSpec) {
    7995             :       case JS::CompartmentSpecifier::NewCompartmentInSystemZone:
    7996           0 :         // systemZone might be null here, in which case we'll make a zone and
    7997           0 :         // set this field below.
    7998             :         zone = rt->gc.systemZone;
    7999          94 :         break;
    8000          94 :       case JS::CompartmentSpecifier::NewCompartmentInExistingZone:
    8001             :         zone = options.creationOptions().zone();
    8002           0 :         MOZ_ASSERT(zone);
    8003           0 :         break;
    8004           0 :       case JS::CompartmentSpecifier::ExistingCompartment:
    8005           0 :         comp = options.creationOptions().compartment();
    8006             :         zone = comp->zone();
    8007             :         break;
    8008             :       case JS::CompartmentSpecifier::NewCompartmentAndZone:
    8009           0 :         break;
    8010           0 :     }
    8011             : 
    8012           7 :     if (!zone) {
    8013           7 :         zoneHolder = cx->make_unique<Zone>(cx->runtime());
    8014             :         if (!zoneHolder)
    8015             :             return nullptr;
    8016           0 : 
    8017           0 :         const JSPrincipals* trusted = rt->trustedPrincipals();
    8018           0 :         bool isSystem = principals && principals == trusted;
    8019             :         if (!zoneHolder->init(isSystem)) {
    8020             :             ReportOutOfMemory(cx);
    8021             :             return nullptr;
    8022             :         }
    8023          47 : 
    8024           0 :         zone = zoneHolder.get();
    8025          17 :     }
    8026             : 
    8027             :     if (!comp) {
    8028           0 :         compHolder = cx->make_unique<JS::Compartment>(zone);
    8029           0 :         if (!compHolder || !compHolder->init(cx))
    8030          17 :             return nullptr;
    8031           0 : 
    8032           0 :         comp = compHolder.get();
    8033             :     }
    8034             : 
    8035          17 :     UniquePtr<Realm> realm(cx->new_<Realm>(comp, options));
    8036             :     if (!realm || !realm->init(cx))
    8037             :         return nullptr;
    8038          47 : 
    8039           0 :     // Set up the principals.
    8040          47 :     JS::SetRealmPrincipals(realm.get(), principals);
    8041             : 
    8042             :     AutoLockGC lock(rt);
    8043          47 : 
    8044             :     // Reserve space in the Vectors before we start mutating them.
    8045             :     if (!comp->realms().reserve(comp->realms().length() + 1) ||
    8046           0 :         (compHolder && !zone->compartments().reserve(zone->compartments().length() + 1)) ||
    8047          47 :         (zoneHolder && !rt->gc.zones().reserve(rt->gc.zones().length() + 1)))
    8048             :     {
    8049             :         ReportOutOfMemory(cx);
    8050             :         return nullptr;
    8051           0 :     }
    8052             : 
    8053         141 :     // After this everything must be infallible.
    8054             : 
    8055             :     comp->realms().infallibleAppend(realm.get());
    8056           0 : 
    8057         282 :     if (compHolder)
    8058          98 :         zone->compartments().infallibleAppend(compHolder.release());
    8059             : 
    8060           0 :     if (zoneHolder) {
    8061           0 :         rt->gc.zones().infallibleAppend(zoneHolder.release());
    8062             : 
    8063             :         // Lazily set the runtime's system zone.
    8064             :         if (compSpec == JS::CompartmentSpecifier::NewCompartmentInSystemZone) {
    8065             :             MOZ_RELEASE_ASSERT(!rt->gc.systemZone);
    8066         141 :             rt->gc.systemZone = zone;
    8067             :             zone->isSystem = true;
    8068           0 :         }
    8069         141 :     }
    8070             : 
    8071          47 :     return realm.release();
    8072          51 : }
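
`NewRealm` uses a reserve-then-infallible-append pattern: every container is grown up front in a fallible phase, and mutation happens only after all fallible steps succeed, so an OOM can never leave one list updated and another not. A miniature self-contained version of the idea, using `std::vector` (whose throwing `reserve` stands in for the fallible reserve; all names are illustrative, not SpiderMonkey's):

```cpp
#include <cassert>
#include <new>
#include <vector>

// Register an id in two parallel lists atomically with respect to OOM:
// reserve capacity in both first, then append infallibly.
bool registerInBoth(std::vector<int>& realms,
                    std::vector<int>& zones,
                    int id) {
    // Fallible phase: reserve capacity in every container first.
    try {
        realms.reserve(realms.size() + 1);
        zones.reserve(zones.size() + 1);
    } catch (const std::bad_alloc&) {
        return false;  // report OOM; nothing was mutated
    }
    // Infallible phase: push_back of an int after reserve cannot fail,
    // so either both lists are updated or neither is.
    realms.push_back(id);
    zones.push_back(id);
    return true;
}
```

This is why the original code comments "After this everything must be infallible": once the first `infallibleAppend` runs, a later failure would leave the runtime's zone, compartment, and realm lists inconsistent.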
    8073             : 
    8074             : void
    8075           0 : gc::MergeRealms(Realm* source, Realm* target)
    8076           0 : {
    8077           0 :     JSRuntime* rt = source->runtimeFromMainThread();
    8078           0 :     rt->gc.mergeRealms(source, target);
    8079             : 
    8080             :     AutoLockGC lock(rt);
    8081             :     rt->gc.maybeAllocTriggerZoneGC(target->zone(), lock);
    8082          47 : }
    8083             : 
    8084             : void
    8085             : GCRuntime::mergeRealms(Realm* source, Realm* target)
    8086           5 : {
    8087             :     // The source realm must be specifically flagged as mergeable.  This
    8088           0 :     // also implies that the realm is not visible to the debugger.
    8089           5 :     MOZ_ASSERT(source->creationOptions().mergeable());
    8090             :     MOZ_ASSERT(source->creationOptions().invisibleToDebugger());
    8091          15 : 
    8092           5 :     MOZ_ASSERT(!source->hasBeenEnteredIgnoringJit());
    8093           0 :     MOZ_ASSERT(source->zone()->compartments().length() == 1);
    8094             : 
    8095             :     JSContext* cx = rt->mainContextFromOwnThread();
    8096           5 : 
    8097             :     MOZ_ASSERT(!source->zone()->wasGCStarted());
    8098             :     JS::AutoAssertNoGC nogc(cx);
    8099             : 
    8100           0 :     AutoTraceSession session(rt);
    8101           0 : 
    8102             :     // Cleanup tables and other state in the source realm/zone that will be
    8103           0 :     // meaningless after merging into the target realm/zone.
    8104           0 : 
    8105             :     source->clearTables();
    8106           5 :     source->zone()->clearTables();
    8107             :     source->unsetIsDebuggee();
    8108           0 : 
    8109           0 :     // The delazification flag indicates the presence of LazyScripts in a
    8110             :     // realm for the Debugger API, so if the source realm created LazyScripts,
    8111           0 :     // the flag must be propagated to the target realm.
    8112             :     if (source->needsDelazificationForDebugger())
    8113             :         target->scheduleDelazificationForDebugger();
    8114             : 
    8115             :     // Release any relocated arenas which we may be holding on to as they might
    8116           5 :     // be in the source zone
    8117           5 :     // be in the source zone.
    8118           0 : 
    8119             :     // Fixup realm pointers in source to refer to target, and make sure
    8120             :     // type information generations are in sync.
    8121             : 
    8122             :     for (auto script = source->zone()->cellIter<JSScript>(); !script.done(); script.next()) {
    8123           0 :         MOZ_ASSERT(script->realm() == source);
    8124           0 :         script->realm_ = target;
    8125             :         script->setTypesGeneration(target->zone()->types.generation);
    8126             :     }
    8127             : 
    8128           0 :     GlobalObject* global = target->maybeGlobal();
    8129             :     MOZ_ASSERT(global);
    8130             : 
    8131             :     for (auto group = source->zone()->cellIter<ObjectGroup>(); !group.done(); group.next()) {
    8132             :         // Replace placeholder object prototypes with the correct prototype in
    8133         669 :         // the target realm.
    8134         327 :         TaggedProto proto(group->proto());
    8135           0 :         if (proto.isObject()) {
    8136           0 :             JSObject* obj = proto.toObject();
    8137             :             if (GlobalObject::isOffThreadPrototypePlaceholder(obj)) {
    8138             :                 JSObject* targetProto = global->getPrototypeForOffThreadPlaceholder(obj);
    8139           0 :                 MOZ_ASSERT(targetProto->isDelegate());
    8140           5 :                 MOZ_ASSERT_IF(targetProto->staticPrototypeIsImmutable(),
    8141             :                               obj->staticPrototypeIsImmutable());
    8142           1 :                 MOZ_ASSERT_IF(targetProto->isNewGroupUnknown(),
    8143             :                               obj->isNewGroupUnknown());
    8144             :                 group->setProtoUnchecked(TaggedProto(targetProto));
    8145           1 :             }
    8146         369 :         }
    8147         339 : 
    8148           0 :         group->setGeneration(target->zone()->types.generation);
    8149           0 :         group->realm_ = target;
    8150           0 : 
    8151           0 :         // Remove any unboxed layouts from the list in the off thread
    8152             :         // realm. These do not need to be reinserted in the target
    8153         662 :         // realm's list, as the list is not required to be complete.
    8154             :         if (UnboxedLayout* layout = group->maybeUnboxedLayoutDontCheckGeneration())
    8155           0 :             layout->detachFromRealm();
    8156             :     }
    8157             : 
    8158             :     // Fixup zone pointers in source's zone to refer to target's zone.
    8159        1107 : 
    8160         369 :     bool targetZoneIsCollecting = isIncrementalGCInProgress() && target->zone()->wasGCStarted();
    8161             :     for (auto thingKind : AllAllocKinds()) {
    8162             :         for (ArenaIter aiter(source->zone(), thingKind); !aiter.done(); aiter.next()) {
    8163             :             Arena* arena = aiter.get();
    8164             :             arena->zone = target->zone();
    8165           0 :             if (MOZ_UNLIKELY(targetZoneIsCollecting)) {
    8166           0 :                 // If we are currently collecting the target zone then we must
    8167             :                 // treat all merged things as if they were allocated during the
    8168             :                 // collection.
    8169             :                 for (ArenaCellIterUnbarriered iter(arena); !iter.done(); iter.next()) {
    8170             :                     TenuredCell* cell = iter.getCell();
    8171           5 :                     MOZ_ASSERT(!cell->isMarkedAny());
    8172           0 :                     cell->markBlack();
    8173           0 :                 }
    8174          89 :             }
    8175           0 :         }
    8176           0 :     }
    8177             : 
    8178             :     // The source should be the only realm in its zone.
    8179             :     for (RealmsInZoneIter r(source->zone()); !r.done(); r.next())
    8180           0 :         MOZ_ASSERT(r.get() == source);
    8181           0 : 
    8182           0 :     // Merge the allocator, stats and UIDs in source's zone into target's zone.
    8183           0 :     target->zone()->arenas.adoptArenas(rt, &source->zone()->arenas, targetZoneIsCollecting);
    8184             :     target->zone()->usage.adopt(source->zone()->usage);
    8185             :     target->zone()->adoptUniqueIds(source->zone());
    8186             :     target->zone()->adoptMallocBytes(source->zone());
    8187             : 
    8188             :     // Merge other info in source's zone into target's zone.
    8189             :     target->zone()->types.typeLifoAlloc().transferFrom(&source->zone()->types.typeLifoAlloc());
    8190           0 :     MOZ_RELEASE_ASSERT(source->zone()->types.sweepTypeLifoAlloc.ref().isEmpty());
    8191           0 : 
    8192             :     // Atoms which are marked in source's zone are now marked in target's zone.
    8193             :     atomMarking.adoptMarkedAtoms(target->zone(), source->zone());
    8194           0 : 
    8195           0 :     // Merge script name maps into the target realm's map.
    8196           0 :     if (rt->lcovOutput().isEnabled() && source->scriptNameMap) {
    8197          10 :         AutoEnterOOMUnsafeRegion oomUnsafe;
    8198             : 
    8199             :         if (!target->scriptNameMap) {
    8200           0 :             target->scriptNameMap = cx->make_unique<ScriptNameMap>();
    8201          10 : 
    8202             :             if (!target->scriptNameMap)
    8203             :                 oomUnsafe.crash("Failed to create a script name map.");
    8204           0 : 
    8205             :             if (!target->scriptNameMap->init())
    8206             :                 oomUnsafe.crash("Failed to initialize a script name map.");
    8207           0 :         }
    8208          10 : 
    8209             :         for (ScriptNameMap::Range r = source->scriptNameMap->all(); !r.empty(); r.popFront()) {
    8210           0 :             JSScript* key = r.front().key();
    8211           0 :             auto value = std::move(r.front().value());
    8212             :             if (!target->scriptNameMap->putNew(key, std::move(value)))
    8213           0 :                 oomUnsafe.crash("Failed to add an entry in the script name map.");
    8214           0 :         }
    8215             : 
    8216           0 :         source->scriptNameMap->clear();
    8217           0 :     }
    8218             : 
    8219             :     // The source realm is now completely empty, and is the only realm in its
    8220         664 :     // compartment, which is the only compartment in its zone. Delete realm,
    8221         327 :     // compartment and zone without waiting for this to be cleaned up by a full
    8222           0 :     // GC.
    8223         654 : 
    8224           0 :     Zone* sourceZone = source->zone();
    8225             :     sourceZone->deleteEmptyCompartment(source->compartment());
    8226             :     deleteEmptyZone(sourceZone);
    8227          10 : }
    8228             : 
    8229             : void
    8230             : GCRuntime::runDebugGC()
    8231             : {
    8232             : #ifdef JS_GC_ZEAL
    8233             :     if (rt->mainContextFromOwnThread()->suppressGC)
    8234             :         return;
    8235           0 : 
    8236           1 :     if (hasZealMode(ZealMode::GenerationalGC))
    8237           5 :         return minorGC(JS::gcreason::DEBUG_GC);
    8238           0 : 
    8239             :     PrepareForDebugGC(rt);
    8240             : 
    8241           0 :     auto budget = SliceBudget::unlimited();
    8242             :     if (hasZealMode(ZealMode::IncrementalMultipleSlices)) {
    8243             :         /*
    8244           0 :          * Start with a small slice limit and double it every slice. This
    8245           0 :          * ensures that we get multiple slices, and collection runs to
    8246             :          * completion.
    8247           0 :          */
    8248           0 :         if (!isIncrementalGCInProgress())
    8249             :             incrementalLimit = zealFrequency / 2;
    8250           0 :         else
    8251             :             incrementalLimit *= 2;
    8252           0 :         budget = SliceBudget(WorkBudget(incrementalLimit));
    8253           0 : 
    8254             :         js::gc::State initialState = incrementalState;
    8255             :         if (!isIncrementalGCInProgress())
    8256             :             invocationKind = GC_SHRINK;
    8257             :         collect(false, budget, JS::gcreason::DEBUG_GC);
    8258             : 
    8259           0 :         /* Reset the slice size when we get to the sweep or compact phases. */
    8260           0 :         if ((initialState == State::Mark && incrementalState == State::Sweep) ||
    8261             :             (initialState == State::Sweep && incrementalState == State::Compact))
    8262           0 :         {
    8263           0 :             incrementalLimit = zealFrequency / 2;
    8264             :         }
    8265           0 :     } else if (hasIncrementalTwoSliceZealMode()) {
    8266           0 :         // These modes trigger an incremental GC that happens in two slices;
    8267           0 :         // the supplied budget is ignored by incrementalCollectSlice.
    8268           0 :         budget = SliceBudget(WorkBudget(1));
    8269             : 
    8270             :         if (!isIncrementalGCInProgress())
    8271           0 :             invocationKind = GC_NORMAL;
    8272           0 :         collect(false, budget, JS::gcreason::DEBUG_GC);
    8273             :     } else if (hasZealMode(ZealMode::Compact)) {
    8274           0 :         gc(GC_SHRINK, JS::gcreason::DEBUG_GC);
    8275             :     } else {
    8276           0 :         gc(GC_NORMAL, JS::gcreason::DEBUG_GC);
    8277             :     }
    8278             : 
    8279           0 : #endif
    8280             : }
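The slice-limit doubling in `runDebugGC` above (start at half the zeal frequency, double after every slice) guarantees that zeal-mode collections take multiple slices yet still terminate quickly. A minimal standalone sketch of that schedule, with illustrative names (`slicesToCompletion` and its parameters are not part of the engine):

```cpp
#include <cstddef>

// Simulate an incremental collection where each slice processes at most
// `limit` units of work and the limit doubles after every slice, as
// runDebugGC does for ZealMode::IncrementalMultipleSlices.
// Returns the number of slices needed to retire `totalWork` units.
size_t slicesToCompletion(size_t totalWork, size_t zealFrequency)
{
    size_t limit = zealFrequency / 2;   // small initial slice limit
    if (limit == 0)
        limit = 1;                      // guard so we always make progress
    size_t done = 0;
    size_t slices = 0;
    while (done < totalWork) {
        size_t work = totalWork - done;
        if (work > limit)
            work = limit;               // the budget caps this slice
        done += work;
        slices++;
        limit *= 2;                     // double the limit for the next slice
    }
    return slices;
}
```

Because the limit grows geometrically, the slice count is logarithmic in the total work, which is why the debug GC completes even with a tiny starting budget.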
    8281           0 : 
    8282           0 : void
    8283           0 : GCRuntime::setFullCompartmentChecks(bool enabled)
    8284           0 : {
    8285           0 :     MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
    8286             :     fullCompartmentChecks = enabled;
    8287           0 : }
    8288             : 
    8289             : void
    8290             : GCRuntime::notifyRootsRemoved()
    8291             : {
    8292             :     rootsRemoved = true;
    8293             : 
    8294           0 : #ifdef JS_GC_ZEAL
    8295             :     /* Schedule a GC to happen "soon". */
    8296           0 :     if (hasZealMode(ZealMode::RootsChange))
    8297           0 :         nextScheduled = 1;
    8298           0 : #endif
    8299             : }
    8300             : 
    8301           0 : #ifdef JS_GC_ZEAL
    8302             : bool
    8303           0 : GCRuntime::selectForMarking(JSObject* object)
    8304             : {
    8305             :     MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
    8306             :     return selectedForMarking.ref().append(object);
    8307           0 : }
    8308           0 : 
    8309             : void
    8310         242 : GCRuntime::clearSelectedForMarking()
    8311             : {
    8312             :     selectedForMarking.ref().clearAndFree();
    8313             : }
    8314           0 : 
    8315             : void
    8316           0 : GCRuntime::setDeterministic(bool enabled)
    8317           0 : {
    8318             :     MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
    8319             :     deterministicOnly = enabled;
    8320             : }
    8321           0 : #endif
    8322             : 
    8323           0 : #ifdef ENABLE_WASM_GC
    8324           0 : /* static */ bool
    8325             : GCRuntime::temporaryAbortIfWasmGc(JSContext* cx) {
    8326             :     return cx->options().wasmGc() && cx->suppressGC;
    8327           0 : }
    8328             : #endif
    8329           0 : 
    8330           0 : #ifdef DEBUG
    8331           0 : 
    8332             : /* Should only be called manually under gdb */
    8333             : void PreventGCDuringInteractiveDebug()
    8334             : {
    8335             :     TlsContext.get()->suppressGC++;
    8336           3 : }
    8337           6 : 
    8338             : #endif
    8339             : 
    8340             : void
    8341             : js::ReleaseAllJITCode(FreeOp* fop)
    8342             : {
    8343             :     js::CancelOffThreadIonCompile(fop->runtime());
    8344           0 : 
    8345             :     for (ZonesIter zone(fop->runtime(), SkipAtoms); !zone.done(); zone.next()) {
    8346           0 :         zone->setPreservingCode(false);
    8347           0 :         zone->discardJitCode(fop);
    8348             :     }
    8349             : }
    8350             : 
    8351             : void
    8352           0 : ArenaLists::adoptArenas(JSRuntime* rt, ArenaLists* fromArenaLists, bool targetZoneIsCollecting)
    8353             : {
    8354           0 :     // The GC may be active, so take the lock here to safely mutate the arena lists.
    8355             :     AutoLockGC lock(rt);
    8356           0 : 
    8357           0 :     fromArenaLists->clearFreeLists();
    8358           0 : 
    8359             :     for (auto thingKind : AllAllocKinds()) {
    8360           0 :         MOZ_ASSERT(fromArenaLists->backgroundFinalizeState(thingKind) == BFS_DONE);
    8361             :         ArenaList* fromList = &fromArenaLists->arenaLists(thingKind);
    8362             :         ArenaList* toList = &arenaLists(thingKind);
    8363           0 :         fromList->check();
    8364             :         toList->check();
    8365             :         Arena* next;
    8366          15 :         for (Arena* fromArena = fromList->head(); fromArena; fromArena = next) {
    8367             :             // Copy fromArena->next before releasing/reinserting.
    8368             :             next = fromArena->next;
    8369             : 
    8370         150 :             MOZ_ASSERT(!fromArena->isEmpty());
    8371           0 : 
    8372         145 :             // If the target zone is being collected then we need to add the
    8373           0 :             // arenas before the cursor because the collector assumes that the
    8374           0 :             // cursor is always at the end of the list. This has the side-effect
    8375           0 :             // of preventing allocation into any non-full arenas until the end
    8376             :             // of the next GC.
    8377         145 :             if (targetZoneIsCollecting)
    8378             :                 toList->insertBeforeCursor(fromArena);
    8379          89 :             else
    8380             :                 toList->insertAtCursor(fromArena);
    8381           0 :         }
    8382             :         fromList->clear();
    8383             :         toList->check();
    8384             :     }
    8385             : }
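The comment in `adoptArenas` explains why arenas go before or at the cursor: entries before the cursor are treated as full, so inserting there hides an arena from allocation until the next GC. A toy model of that cursor discipline (illustrative only; the real `ArenaList` tracks the cursor as an `Arena**` into a linked list):

```cpp
#include <cstddef>
#include <vector>

// Toy model of a list with a cursor: entries before the cursor are treated
// as full (unavailable for allocation); the entry at the cursor is the next
// allocation target. Mirrors the insertBeforeCursor / insertAtCursor
// distinction used by ArenaLists::adoptArenas.
struct CursorList {
    std::vector<int> items;
    size_t cursor = 0;   // index of the first "non-full" entry

    // Insert before the cursor: the entry is hidden from allocation.
    void insertBeforeCursor(int v) {
        items.insert(items.begin() + cursor, v);
        cursor++;
    }
    // Insert at the cursor: the entry becomes the next allocation target.
    void insertAtCursor(int v) {
        items.insert(items.begin() + cursor, v);
    }
    bool availableForAllocation(int v) const {
        for (size_t i = cursor; i < items.size(); i++) {
            if (items[i] == v)
                return true;
        }
        return false;
    }
};
```

In this model, an entry added with `insertBeforeCursor` is skipped by allocation exactly as the adopted arenas are when the target zone is collecting.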
    8386             : 
    8387             : bool
    8388           0 : ArenaLists::containsArena(JSRuntime* rt, Arena* needle)
    8389           0 : {
    8390             :     AutoLockGC lock(rt);
    8391          89 :     ArenaList& list = arenaLists(needle->getAllocKind());
    8392             :     for (Arena* arena = list.head(); arena; arena = arena->next) {
    8393           0 :         if (arena == needle)
    8394         145 :             return true;
    8395             :     }
    8396           5 :     return false;
    8397             : }
    8398             : 
    8399           0 : 
    8400             : AutoSuppressGC::AutoSuppressGC(JSContext* cx)
    8401           0 :   : suppressGC_(cx->suppressGC.ref())
    8402           0 : {
    8403           0 :     suppressGC_++;
    8404           0 : }
    8405             : 
    8406             : bool
    8407             : js::UninlinedIsInsideNursery(const gc::Cell* cell)
    8408             : {
    8409             :     return IsInsideNursery(cell);
    8410             : }
    8411      226043 : 
    8412           0 : #ifdef DEBUG
    8413             : AutoDisableProxyCheck::AutoDisableProxyCheck()
    8414      226043 : {
    8415      226043 :     TlsContext.get()->disableStrictProxyChecking();
    8416             : }
    8417             : 
    8418           0 : AutoDisableProxyCheck::~AutoDisableProxyCheck()
    8419             : {
    8420           0 :     TlsContext.get()->enableStrictProxyChecking();
    8421             : }
    8422             : 
    8423             : JS_FRIEND_API(void)
    8424       57727 : JS::AssertGCThingMustBeTenured(JSObject* obj)
    8425             : {
    8426           0 :     MOZ_ASSERT(obj->isTenured() &&
    8427       57727 :                (!IsNurseryAllocable(obj->asTenured().getAllocKind()) ||
    8428             :                 obj->getClass()->hasFinalize()));
    8429       57727 : }
    8430             : 
    8431           0 : JS_FRIEND_API(void)
    8432           0 : JS::AssertGCThingIsNotNurseryAllocable(Cell* cell)
    8433             : {
    8434             :     MOZ_ASSERT(cell);
    8435        6820 :     MOZ_ASSERT(!cell->is<JSObject>() && !cell->is<JSString>());
    8436             : }
    8437           0 : 
    8438             : JS_FRIEND_API(void)
    8439             : js::gc::AssertGCThingHasType(js::gc::Cell* cell, JS::TraceKind kind)
    8440           0 : {
    8441             :     if (!cell) {
    8442             :         MOZ_ASSERT(kind == JS::TraceKind::Null);
    8443           0 :         return;
    8444             :     }
    8445         224 : 
    8446         672 :     MOZ_ASSERT(IsCellPointerValid(cell));
    8447         224 : 
    8448             :     if (IsInsideNursery(cell)) {
    8449             :         MOZ_ASSERT(kind == (JSString::nurseryCellIsString(cell) ? JS::TraceKind::String
    8450      708674 :                                                                 : JS::TraceKind::Object));
    8451             :         return;
    8452      708674 :     }
    8453           0 : 
    8454             :     MOZ_ASSERT(MapAllocToTraceKind(cell->asTenured().getAllocKind()) == kind);
    8455             : }
    8456             : #endif
    8457      708674 : 
    8458             : #ifdef MOZ_DIAGNOSTIC_ASSERT_ENABLED
    8459           0 : 
    8460           0 : JS::AutoAssertNoGC::AutoAssertNoGC(JSContext* maybecx)
    8461             :   : cx_(maybecx ? maybecx : TlsContext.get())
    8462             : {
    8463             :     if (cx_)
    8464             :         cx_->inUnsafeRegion++;
    8465           0 : }
    8466             : 
    8467             : JS::AutoAssertNoGC::~AutoAssertNoGC()
    8468             : {
    8469             :     if (cx_) {
    8470             :         MOZ_ASSERT(cx_->inUnsafeRegion > 0);
    8471     4799837 :         cx_->inUnsafeRegion--;
    8472           1 :     }
    8473             : }
    8474     4799837 : 
    8475           1 : #endif // MOZ_DIAGNOSTIC_ASSERT_ENABLED
    8476           0 : 
    8477             : #ifdef DEBUG
    8478     9598818 : 
    8479             : AutoAssertNoNurseryAlloc::AutoAssertNoNurseryAlloc()
    8480     4799704 : {
    8481           0 :     TlsContext.get()->disallowNurseryAlloc();
    8482           0 : }
    8483             : 
    8484     4799114 : AutoAssertNoNurseryAlloc::~AutoAssertNoNurseryAlloc()
    8485             : {
    8486             :     TlsContext.get()->allowNurseryAlloc();
    8487             : }
    8488             : 
    8489             : JS::AutoEnterCycleCollection::AutoEnterCycleCollection(JSRuntime* rt)
    8490           0 :   : runtime_(rt)
    8491             : {
    8492           0 :     MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
    8493           0 :     MOZ_ASSERT(!JS::RuntimeHeapIsBusy());
    8494             :     runtime_->heapState_ = HeapState::CycleCollecting;
    8495           0 : }
    8496             : 
    8497           0 : JS::AutoEnterCycleCollection::~AutoEnterCycleCollection()
    8498           0 : {
    8499             :     MOZ_ASSERT(JS::RuntimeHeapIsCycleCollecting());
    8500           0 :     runtime_->heapState_ = HeapState::Idle;
    8501             : }
    8502           0 : 
    8503           0 : JS::AutoAssertGCCallback::AutoAssertGCCallback()
    8504           0 :   : AutoSuppressGCAnalysis()
    8505             : {
    8506           0 :     MOZ_ASSERT(JS::RuntimeHeapIsCollecting());
    8507             : }
    8508           0 : 
    8509           0 : #endif // DEBUG
    8510           0 : 
    8511             : JS_FRIEND_API(const char*)
    8512           0 : JS::GCTraceKindToAscii(JS::TraceKind kind)
    8513           0 : {
    8514             :     switch (kind) {
    8515           0 : #define MAP_NAME(name, _0, _1) case JS::TraceKind::name: return #name;
    8516        1739 : JS_FOR_EACH_TRACEKIND(MAP_NAME);
    8517             : #undef MAP_NAME
    8518             :       default: return "Invalid";
    8519             :     }
    8520             : }
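`GCTraceKindToAscii` relies on the `JS_FOR_EACH_TRACEKIND` X-macro to expand one `case` per trace kind, so the name table can never drift out of sync with the kind list. A self-contained miniature of the same technique (the kind list and macro names here are illustrative, not the engine's):

```cpp
#include <cstring>

// X-macro: one central list of kinds, expanded in two places below.
#define FOR_EACH_KIND(D) D(Object) D(String) D(Symbol)

enum class Kind {
#define DECL(name) name,
    FOR_EACH_KIND(DECL)
#undef DECL
    Null   // not in the list, so it has no name mapping
};

const char* kindToAscii(Kind k)
{
    switch (k) {
#define MAP_NAME(name) case Kind::name: return #name;
    FOR_EACH_KIND(MAP_NAME)
#undef MAP_NAME
      default: return "Invalid";
    }
}
```

Adding a kind to `FOR_EACH_KIND` automatically extends both the enum and the switch, which is exactly the maintenance property the real macro provides.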
    8521           0 : 
    8522             : JS::GCCellPtr::GCCellPtr(const Value& v)
    8523           0 :   : ptr(0)
    8524             : {
    8525           0 :     if (v.isString())
    8526             :         ptr = checkedCast(v.toString(), JS::TraceKind::String);
    8527           0 :     else if (v.isObject())
    8528             :         ptr = checkedCast(&v.toObject(), JS::TraceKind::Object);
    8529             :     else if (v.isSymbol())
    8530             :         ptr = checkedCast(v.toSymbol(), JS::TraceKind::Symbol);
    8531           0 : #ifdef ENABLE_BIGINT
    8532           0 :     else if (v.isBigInt())
    8533             :         ptr = checkedCast(v.toBigInt(), JS::TraceKind::BigInt);
    8534       22258 : #endif
    8535           0 :     else if (v.isPrivateGCThing())
    8536       22025 :         ptr = checkedCast(v.toGCThing(), v.toGCThing()->getTraceKind());
    8537           0 :     else
    8538           0 :         ptr = checkedCast(nullptr, JS::TraceKind::Null);
    8539           0 : }
    8540             : 
    8541             : JS::TraceKind
    8542             : JS::GCCellPtr::outOfLineKind() const
    8543             : {
    8544           0 :     MOZ_ASSERT((ptr & OutOfLineTraceKindMask) == OutOfLineTraceKindMask);
    8545           0 :     MOZ_ASSERT(asCell()->isTenured());
    8546             :     return MapAllocToTraceKind(asCell()->asTenured().getAllocKind());
    8547           0 : }
    8548       22258 : 
    8549             : #ifdef JSGC_HASH_TABLE_CHECKS
    8550             : void
    8551           1 : js::gc::CheckHashTablesAfterMovingGC(JSRuntime* rt)
    8552             : {
    8553        3432 :     /*
    8554           1 :      * Check that internal hash tables no longer have any pointers to things
    8555       10296 :      * that have been moved.
    8556             :      */
    8557             :     rt->geckoProfiler().checkStringsMapAfterMovingGC();
    8558             :     for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
    8559             :         zone->checkUniqueIdTableAfterMovingGC();
    8560           0 :         zone->checkInitialShapesTableAfterMovingGC();
    8561             :         zone->checkBaseShapeTableAfterMovingGC();
    8562             : 
    8563             :         JS::AutoCheckCannotGC nogc;
    8564             :         for (auto baseShape = zone->cellIter<BaseShape>(); !baseShape.done(); baseShape.next()) {
    8565             :             if (ShapeTable* table = baseShape->maybeTable(nogc))
    8566           0 :                 table->checkAfterMovingGC();
    8567           0 :         }
    8568           0 :     }
    8569           0 : 
    8570           0 :     for (CompartmentsIter c(rt); !c.done(); c.next()) {
    8571             :         c->checkWrapperMapAfterMovingGC();
    8572           0 : 
    8573           0 :         for (RealmsInCompartmentIter r(c); !r.done(); r.next()) {
    8574           0 :             r->checkObjectGroupTablesAfterMovingGC();
    8575           0 :             r->dtoaCache.checkCacheAfterMovingGC();
    8576             :             r->checkScriptMapsAfterMovingGC();
    8577             :             if (r->debugEnvs())
    8578             :                 r->debugEnvs()->checkHashTablesAfterMovingGC();
    8579           0 :         }
    8580           0 :     }
    8581             : }
    8582           0 : #endif
    8583           0 : 
    8584           0 : JS_PUBLIC_API(void)
    8585           0 : JS::PrepareZoneForGC(Zone* zone)
    8586           0 : {
    8587           0 :     zone->scheduleGC();
    8588             : }
    8589             : 
    8590           0 : JS_PUBLIC_API(void)
    8591             : JS::PrepareForFullGC(JSContext* cx)
    8592             : {
    8593             :     for (ZonesIter zone(cx->runtime(), WithAtoms); !zone.done(); zone.next())
    8594           0 :         zone->scheduleGC();
    8595             : }
    8596           0 : 
    8597           0 : JS_PUBLIC_API(void)
    8598             : JS::PrepareForIncrementalGC(JSContext* cx)
    8599             : {
    8600           0 :     if (!JS::IsIncrementalGCInProgress(cx))
    8601             :         return;
    8602           0 : 
    8603           0 :     for (ZonesIter zone(cx->runtime(), WithAtoms); !zone.done(); zone.next()) {
    8604           0 :         if (zone->wasGCStarted())
    8605             :             PrepareZoneForGC(zone);
    8606             :     }
    8607           0 : }
    8608             : 
    8609           0 : JS_PUBLIC_API(bool)
    8610             : JS::IsGCScheduled(JSContext* cx)
    8611             : {
    8612           0 :     for (ZonesIter zone(cx->runtime(), WithAtoms); !zone.done(); zone.next()) {
    8613           0 :         if (zone->isGCScheduled())
    8614           0 :             return true;
    8615             :     }
    8616             : 
    8617             :     return false;
    8618             : }
    8619           0 : 
    8620             : JS_PUBLIC_API(void)
    8621           0 : JS::SkipZoneForGC(Zone* zone)
    8622           0 : {
    8623           0 :     zone->unscheduleGC();
    8624             : }
    8625             : 
    8626           0 : JS_PUBLIC_API(void)
    8627             : JS::NonIncrementalGC(JSContext* cx, JSGCInvocationKind gckind, gcreason::Reason reason)
    8628             : {
    8629             :     MOZ_ASSERT(gckind == GC_NORMAL || gckind == GC_SHRINK);
    8630           0 :     cx->runtime()->gc.gc(gckind, reason);
    8631             : }
    8632           0 : 
    8633           0 : JS_PUBLIC_API(void)
    8634             : JS::StartIncrementalGC(JSContext* cx, JSGCInvocationKind gckind, gcreason::Reason reason, int64_t millis)
    8635             : {
    8636           0 :     MOZ_ASSERT(gckind == GC_NORMAL || gckind == GC_SHRINK);
    8637             :     cx->runtime()->gc.startGC(gckind, reason, millis);
    8638           0 : }
    8639           0 : 
    8640           0 : JS_PUBLIC_API(void)
    8641             : JS::IncrementalGCSlice(JSContext* cx, gcreason::Reason reason, int64_t millis)
    8642             : {
    8643           0 :     cx->runtime()->gc.gcSlice(reason, millis);
    8644             : }
    8645           0 : 
    8646           0 : JS_PUBLIC_API(void)
    8647           0 : JS::FinishIncrementalGC(JSContext* cx, gcreason::Reason reason)
    8648             : {
    8649             :     cx->runtime()->gc.finishGC(reason);
    8650           0 : }
    8651             : 
    8652           0 : JS_PUBLIC_API(void)
    8653           0 : JS::AbortIncrementalGC(JSContext* cx)
    8654             : {
    8655             :     if (IsIncrementalGCInProgress(cx))
    8656           0 :         cx->runtime()->gc.abortGC();
    8657             : }
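The public API above (`StartIncrementalGC`, `IncrementalGCSlice`, `IsIncrementalGCInProgress`, `FinishIncrementalGC`, `AbortIncrementalGC`) implies a simple embedder-side loop: start a collection, then run slices until the collector reports it is done. A standalone sketch of that loop, with the collector modeled as a work countdown (illustrative; a real embedder calls the `JS::` functions with a `JSContext`):

```cpp
#include <cstddef>

// Stand-in for the runtime's GC state machine: start() begins a collection,
// slice() retires up to `budget` units of work, inProgress() mirrors
// JS::IsIncrementalGCInProgress.
struct ToyCollector {
    size_t remaining = 0;
    void start(size_t work) { remaining = work; }
    void slice(size_t budget) { remaining -= (budget < remaining) ? budget : remaining; }
    bool inProgress() const { return remaining > 0; }
};

// The embedder-side driver loop: slice until the collection finishes.
// Returns how many slices were needed.
size_t runToCompletion(ToyCollector& gc, size_t work, size_t budget)
{
    gc.start(work);
    size_t slices = 0;
    while (gc.inProgress()) {
        gc.slice(budget);
        slices++;
    }
    return slices;
}
```

Real embedders interleave these slices with application work (that is the point of incremental GC); the loop here only shows the control flow the API expects.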
    8658           0 : 
    8659           0 : char16_t*
    8660             : JS::GCDescription::formatSliceMessage(JSContext* cx) const
    8661             : {
    8662           0 :     UniqueChars cstr = cx->runtime()->gc.stats().formatCompactSliceMessage();
    8663             : 
    8664           0 :     size_t nchars = strlen(cstr.get());
    8665           0 :     UniqueTwoByteChars out(js_pod_malloc<char16_t>(nchars + 1));
    8666           0 :     if (!out)
    8667             :         return nullptr;
    8668             :     out.get()[nchars] = 0;
    8669           0 : 
    8670             :     CopyAndInflateChars(out.get(), cstr.get(), nchars);
    8671           0 :     return out.release();
    8672             : }
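`formatSliceMessage` and the other formatting helpers below all follow the same pattern: measure the narrow string, allocate `nchars + 1` `char16_t`, write the terminator, then widen each byte. A standalone version of that pattern (the real code uses `js_pod_malloc` and `CopyAndInflateChars`; `inflateToTwoByte` here is an illustrative stand-in):

```cpp
#include <cstring>
#include <memory>

// Widen a narrow (Latin-1) C string into a null-terminated char16_t buffer,
// mirroring the allocate / terminate / inflate steps in formatSliceMessage.
std::unique_ptr<char16_t[]> inflateToTwoByte(const char* cstr)
{
    size_t nchars = std::strlen(cstr);
    std::unique_ptr<char16_t[]> out(new char16_t[nchars + 1]);
    out[nchars] = 0;   // write the terminator first, as the code above does
    for (size_t i = 0; i < nchars; i++)
        out[i] = static_cast<unsigned char>(cstr[i]);   // Latin-1 -> UTF-16
    return out;
}
```

Casting through `unsigned char` before widening keeps bytes above 0x7F from sign-extending, which is the standard pitfall in this kind of inflation.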
    8673           0 : 
    8674           0 : char16_t*
    8675           0 : JS::GCDescription::formatSummaryMessage(JSContext* cx) const
    8676             : {
    8677           0 :     UniqueChars cstr = cx->runtime()->gc.stats().formatCompactSummaryMessage();
    8678             : 
    8679           0 :     size_t nchars = strlen(cstr.get());
    8680             :     UniqueTwoByteChars out(js_pod_malloc<char16_t>(nchars + 1));
    8681             :     if (!out)
    8682             :         return nullptr;
    8683             :     out.get()[nchars] = 0;
    8684           0 : 
    8685             :     CopyAndInflateChars(out.get(), cstr.get(), nchars);
    8686           0 :     return out.release();
    8687             : }
    8688           0 : 
    8689           0 : JS::dbg::GarbageCollectionEvent::Ptr
    8690           0 : JS::GCDescription::toGCEvent(JSContext* cx) const
    8691             : {
    8692           0 :     return JS::dbg::GarbageCollectionEvent::Create(cx->runtime(), cx->runtime()->gc.stats(),
    8693             :                                                    cx->runtime()->gc.majorGCCount());
    8694           0 : }
    8695             : 
    8696             : char16_t*
    8697             : JS::GCDescription::formatJSON(JSContext* cx, uint64_t timestamp) const
    8698             : {
    8699           0 :     UniqueChars cstr = cx->runtime()->gc.stats().renderJsonMessage(timestamp);
    8700             : 
    8701           0 :     size_t nchars = strlen(cstr.get());
    8702           0 :     UniqueTwoByteChars out(js_pod_malloc<char16_t>(nchars + 1));
    8703             :     if (!out)
    8704             :         return nullptr;
    8705             :     out.get()[nchars] = 0;
    8706           0 : 
    8707             :     CopyAndInflateChars(out.get(), cstr.get(), nchars);
    8708           0 :     return out.release();
    8709             : }
    8710           0 : 
    8711           0 : TimeStamp
    8712           0 : JS::GCDescription::startTime(JSContext* cx) const
    8713             : {
    8714           0 :     return cx->runtime()->gc.stats().start();
    8715             : }
    8716           0 : 
    8717             : TimeStamp
    8718             : JS::GCDescription::endTime(JSContext* cx) const
    8719             : {
    8720             :     return cx->runtime()->gc.stats().end();
    8721           0 : }
    8722             : 
    8723           0 : TimeStamp
    8724             : JS::GCDescription::lastSliceStart(JSContext* cx) const
    8725             : {
    8726             :     return cx->runtime()->gc.stats().slices().back().start;
    8727           0 : }
    8728             : 
    8729           0 : TimeStamp
    8730             : JS::GCDescription::lastSliceEnd(JSContext* cx) const
    8731             : {
    8732             :     return cx->runtime()->gc.stats().slices().back().end;
    8733           0 : }
    8734             : 
    8735           0 : JS::UniqueChars
    8736             : JS::GCDescription::sliceToJSON(JSContext* cx) const
    8737             : {
    8738             :     size_t slices = cx->runtime()->gc.stats().slices().length();
    8739           0 :     MOZ_ASSERT(slices > 0);
    8740             :     return cx->runtime()->gc.stats().renderJsonSlice(slices - 1);
    8741           0 : }
    8742             : 
    8743             : JS::UniqueChars
    8744             : JS::GCDescription::summaryToJSON(JSContext* cx) const
    8745           0 : {
    8746             :     return cx->runtime()->gc.stats().renderJsonMessage(0, false);
    8747           0 : }
    8748           0 : 
    8749           0 : JS_PUBLIC_API(JS::UniqueChars)
    8750             : JS::MinorGcToJSON(JSContext* cx)
    8751             : {
    8752             :     JSRuntime* rt = cx->runtime();
    8753           0 :     return rt->gc.stats().renderNurseryJson(rt);
    8754             : }
    8755           0 : 
    8756             : JS_PUBLIC_API(JS::GCSliceCallback)
    8757             : JS::SetGCSliceCallback(JSContext* cx, GCSliceCallback callback)
    8758             : {
    8759           0 :     return cx->runtime()->gc.setSliceCallback(callback);
    8760             : }
    8761           0 : 
    8762           0 : JS_PUBLIC_API(JS::DoCycleCollectionCallback)
    8763             : JS::SetDoCycleCollectionCallback(JSContext* cx, JS::DoCycleCollectionCallback callback)
    8764             : {
    8765             :     return cx->runtime()->gc.setDoCycleCollectionCallback(callback);
    8766           0 : }
    8767             : 
    8768          12 : JS_PUBLIC_API(JS::GCNurseryCollectionCallback)
    8769             : JS::SetGCNurseryCollectionCallback(JSContext* cx, GCNurseryCollectionCallback callback)
    8770             : {
    8771             :     return cx->runtime()->gc.setNurseryCollectionCallback(callback);
    8772           0 : }
    8773             : 
    8774           1 : JS_PUBLIC_API(void)
    8775             : JS::DisableIncrementalGC(JSContext* cx)
    8776             : {
    8777             :     cx->runtime()->gc.disallowIncrementalGC();
    8778           0 : }
    8779             : 
    8780           2 : JS_PUBLIC_API(bool)
    8781             : JS::IsIncrementalGCEnabled(JSContext* cx)
    8782             : {
    8783             :     return cx->runtime()->gc.isIncrementalGCEnabled();
    8784           0 : }
    8785             : 
    8786           0 : JS_PUBLIC_API(bool)
    8787           0 : JS::IsIncrementalGCInProgress(JSContext* cx)
    8788             : {
    8789             :     return cx->runtime()->gc.isIncrementalGCInProgress() && !cx->runtime()->gc.isVerifyPreBarriersEnabled();
    8790           0 : }
    8791             : 
    8792           0 : JS_PUBLIC_API(bool)
    8793             : JS::IsIncrementalGCInProgress(JSRuntime* rt)
    8794             : {
    8795             :     return rt->gc.isIncrementalGCInProgress() && !rt->gc.isVerifyPreBarriersEnabled();
    8796           0 : }
    8797             : 
    8798           0 : JS_PUBLIC_API(bool)
    8799             : JS::IsIncrementalBarrierNeeded(JSContext* cx)
    8800             : {
    8801             :     if (JS::RuntimeHeapIsBusy())
    8802           0 :         return false;
    8803             : 
    8804           0 :     auto state = cx->runtime()->gc.state();
    8805             :     return state != gc::State::NotActive && state <= gc::State::Sweep;
    8806             : }
    8807             : 
    8808           0 : JS_PUBLIC_API(void)
    8809             : JS::IncrementalPreWriteBarrier(JSObject* obj)
    8810           0 : {
    8811             :     if (!obj)
    8812             :         return;
    8813           0 : 
    8814           0 :     MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
    8815             :     JSObject::writeBarrierPre(obj);
    8816             : }
    8817             : 
    8818           0 : struct IncrementalReadBarrierFunctor {
    8819             :     template <typename T> void operator()(T* t) { T::readBarrier(t); }
    8820           0 : };
    8821             : 
    8822             : JS_PUBLIC_API(void)
    8823           0 : JS::IncrementalReadBarrier(GCCellPtr thing)
    8824           0 : {
    8825             :     if (!thing)
    8826             :         return;
    8827             : 
    8828           0 :     MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
    8829             :     DispatchTyped(IncrementalReadBarrierFunctor(), thing);
    8830             : }
    8831             : 
    8832           0 : JS_PUBLIC_API(bool)
    8833             : JS::WasIncrementalGC(JSRuntime* rt)
    8834           0 : {
    8835             :     return rt->gc.isIncrementalGc();
    8836             : }
    8837           0 : 
    8838           0 : uint64_t
    8839             : js::gc::NextCellUniqueId(JSRuntime* rt)
    8840             : {
    8841             :     return rt->gc.nextCellUniqueId();
    8842           0 : }
    8843             : 
    8844           0 : namespace js {
    8845             : namespace gc {
    8846             : namespace MemInfo {
    8847             : 
    8848           0 : static bool
    8849             : GCBytesGetter(JSContext* cx, unsigned argc, Value* vp)
    8850           0 : {
    8851             :     CallArgs args = CallArgsFromVp(argc, vp);
    8852             :     args.rval().setNumber(double(cx->runtime()->gc.usage.gcBytes()));
    8853             :     return true;
    8854             : }
    8855             : 
    8856             : static bool
    8857             : GCMaxBytesGetter(JSContext* cx, unsigned argc, Value* vp)
    8858           0 : {
    8859             :     CallArgs args = CallArgsFromVp(argc, vp);
    8860           0 :     args.rval().setNumber(double(cx->runtime()->gc.tunables.gcMaxBytes()));
    8861           0 :     return true;
    8862           0 : }
    8863             : 
    8864             : static bool
    8865             : MallocBytesGetter(JSContext* cx, unsigned argc, Value* vp)
    8866           0 : {
    8867             :     CallArgs args = CallArgsFromVp(argc, vp);
    8868           0 :     args.rval().setNumber(double(cx->runtime()->gc.getMallocBytes()));
    8869           0 :     return true;
    8870           0 : }
    8871             : 
    8872             : static bool
    8873             : MaxMallocGetter(JSContext* cx, unsigned argc, Value* vp)
    8874           0 : {
    8875             :     CallArgs args = CallArgsFromVp(argc, vp);
    8876           0 :     args.rval().setNumber(double(cx->runtime()->gc.maxMallocBytesAllocated()));
    8877           0 :     return true;
    8878           0 : }
    8879             : 
    8880             : static bool
    8881             : GCHighFreqGetter(JSContext* cx, unsigned argc, Value* vp)
    8882           0 : {
    8883             :     CallArgs args = CallArgsFromVp(argc, vp);
    8884           0 :     args.rval().setBoolean(cx->runtime()->gc.schedulingState.inHighFrequencyGCMode());
    8885           0 :     return true;
    8886           0 : }
    8887             : 
    8888             : static bool
    8889             : GCNumberGetter(JSContext* cx, unsigned argc, Value* vp)
    8890           0 : {
    8891             :     CallArgs args = CallArgsFromVp(argc, vp);
    8892           0 :     args.rval().setNumber(double(cx->runtime()->gc.gcNumber()));
    8893           0 :     return true;
    8894           0 : }
    8895             : 
    8896             : static bool
    8897             : MajorGCCountGetter(JSContext* cx, unsigned argc, Value* vp)
    8898           0 : {
    8899             :     CallArgs args = CallArgsFromVp(argc, vp);
    8900           0 :     args.rval().setNumber(double(cx->runtime()->gc.majorGCCount()));
    8901           0 :     return true;
    8902           0 : }
    8903             : 
    8904             : static bool
    8905             : MinorGCCountGetter(JSContext* cx, unsigned argc, Value* vp)
    8906           0 : {
    8907             :     CallArgs args = CallArgsFromVp(argc, vp);
    8908           0 :     args.rval().setNumber(double(cx->runtime()->gc.minorGCCount()));
    8909           0 :     return true;
    8910           0 : }
    8911             : 
    8912             : static bool
    8913             : ZoneGCBytesGetter(JSContext* cx, unsigned argc, Value* vp)
    8914           0 : {
    8915             :     CallArgs args = CallArgsFromVp(argc, vp);
    8916           0 :     args.rval().setNumber(double(cx->zone()->usage.gcBytes()));
    8917           0 :     return true;
    8918           0 : }
    8919             : 
    8920             : static bool
    8921             : ZoneGCTriggerBytesGetter(JSContext* cx, unsigned argc, Value* vp)
    8922           0 : {
    8923             :     CallArgs args = CallArgsFromVp(argc, vp);
    8924           0 :     args.rval().setNumber(double(cx->zone()->threshold.gcTriggerBytes()));
    8925           0 :     return true;
    8926           0 : }
    8927             : 
    8928             : static bool
    8929             : ZoneGCAllocTriggerGetter(JSContext* cx, unsigned argc, Value* vp)
    8930           0 : {
    8931             :     CallArgs args = CallArgsFromVp(argc, vp);
    8932           0 :     bool highFrequency = cx->runtime()->gc.schedulingState.inHighFrequencyGCMode();
    8933           0 :     args.rval().setNumber(double(cx->zone()->threshold.eagerAllocTrigger(highFrequency)));
    8934           0 :     return true;
    8935             : }
    8936             : 
    8937             : static bool
    8938           0 : ZoneMallocBytesGetter(JSContext* cx, unsigned argc, Value* vp)
    8939             : {
    8940           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8941           0 :     args.rval().setNumber(double(cx->zone()->GCMallocBytes()));
    8942           0 :     return true;
    8943           0 : }
    8944             : 
    8945             : static bool
    8946             : ZoneMaxMallocGetter(JSContext* cx, unsigned argc, Value* vp)
    8947           0 : {
    8948             :     CallArgs args = CallArgsFromVp(argc, vp);
    8949           0 :     args.rval().setNumber(double(cx->zone()->GCMaxMallocBytes()));
    8950           0 :     return true;
    8951           0 : }
    8952             : 
    8953             : static bool
    8954             : ZoneGCDelayBytesGetter(JSContext* cx, unsigned argc, Value* vp)
    8955           0 : {
    8956             :     CallArgs args = CallArgsFromVp(argc, vp);
    8957           0 :     args.rval().setNumber(double(cx->zone()->gcDelayBytes));
    8958           0 :     return true;
    8959           0 : }
    8960             : 
    8961             : static bool
    8962             : ZoneGCHeapGrowthFactorGetter(JSContext* cx, unsigned argc, Value* vp)
    8963           0 : {
    8964             :     CallArgs args = CallArgsFromVp(argc, vp);
    8965           0 :     AutoLockGC lock(cx->runtime());
    8966           0 :     args.rval().setNumber(cx->zone()->threshold.gcHeapGrowthFactor());
    8967           0 :     return true;
    8968             : }
    8969             : 
    8970             : static bool
    8971           0 : ZoneGCNumberGetter(JSContext* cx, unsigned argc, Value* vp)
    8972             : {
    8973           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8974           0 :     args.rval().setNumber(double(cx->zone()->gcNumber()));
    8975           0 :     return true;
    8976           0 : }
    8977             : 
    8978             : #ifdef JS_MORE_DETERMINISTIC
    8979             : static bool
    8980           0 : DummyGetter(JSContext* cx, unsigned argc, Value* vp)
    8981             : {
    8982           0 :     CallArgs args = CallArgsFromVp(argc, vp);
    8983           0 :     args.rval().setUndefined();
    8984           0 :     return true;
    8985             : }
    8986             : #endif
    8987             : 
    8988             : } /* namespace MemInfo */
    8989             : 
    8990             : JSObject*
    8991             : NewMemoryInfoObject(JSContext* cx)
    8992             : {
    8993             :     RootedObject obj(cx, JS_NewObject(cx, nullptr));
    8994             :     if (!obj)
    8995             :         return nullptr;
    8996             : 
    8997             :     using namespace MemInfo;
    8998             :     struct NamedGetter {
    8999             :         const char* name;
    9000           0 :         JSNative getter;
    9001             :     } getters[] = {
    9002           0 :         { "gcBytes", GCBytesGetter },
    9003           0 :         { "gcMaxBytes", GCMaxBytesGetter },
    9004             :         { "mallocBytesRemaining", MallocBytesGetter },
    9005             :         { "maxMalloc", MaxMallocGetter },
    9006             :         { "gcIsHighFrequencyMode", GCHighFreqGetter },
    9007             :         { "gcNumber", GCNumberGetter },
    9008             :         { "majorGCCount", MajorGCCountGetter },
    9009             :         { "minorGCCount", MinorGCCountGetter }
    9010             :     };
    9011             : 
    9012             :     for (auto pair : getters) {
    9013             : #ifdef JS_MORE_DETERMINISTIC
    9014             :         JSNative getter = DummyGetter;
    9015             : #else
    9016             :         JSNative getter = pair.getter;
    9017             : #endif
    9018             :         if (!JS_DefineProperty(cx, obj, pair.name,
    9019           0 :                                getter, nullptr,
    9020             :                                JSPROP_ENUMERATE))
    9021           0 :         {
    9022             :             return nullptr;
    9023             :         }
    9024             :     }
    9025           0 : 
    9026             :     RootedObject zoneObj(cx, JS_NewObject(cx, nullptr));
    9027           0 :     if (!zoneObj)
    9028             :         return nullptr;
    9029             : 
    9030             :     if (!JS_DefineProperty(cx, obj, "zone", zoneObj, JSPROP_ENUMERATE))
    9031             :         return nullptr;
    9032             : 
    9033             :     struct NamedZoneGetter {
    9034             :         const char* name;
    9035           0 :         JSNative getter;
    9036           0 :     } zoneGetters[] = {
    9037             :         { "gcBytes", ZoneGCBytesGetter },
    9038             :         { "gcTriggerBytes", ZoneGCTriggerBytesGetter },
    9039           0 :         { "gcAllocTrigger", ZoneGCAllocTriggerGetter },
    9040             :         { "mallocBytesRemaining", ZoneMallocBytesGetter },
    9041             :         { "maxMalloc", ZoneMaxMallocGetter },
    9042             :         { "delayBytes", ZoneGCDelayBytesGetter },
    9043             :         { "heapGrowthFactor", ZoneGCHeapGrowthFactorGetter },
    9044             :         { "gcNumber", ZoneGCNumberGetter }
    9045             :     };
    9046             : 
    9047             :     for (auto pair : zoneGetters) {
     9048             : #ifdef JS_MORE_DETERMINISTIC
    9049             :         JSNative getter = DummyGetter;
    9050             : #else
    9051             :         JSNative getter = pair.getter;
    9052             : #endif
    9053             :         if (!JS_DefineProperty(cx, zoneObj, pair.name,
    9054           0 :                                getter, nullptr,
    9055             :                                JSPROP_ENUMERATE))
    9056           0 :         {
    9057             :             return nullptr;
    9058             :         }
    9059             :     }
    9060           0 : 
    9061             :     return obj;
    9062           0 : }
    9063             : 
    9064             : const char*
    9065             : StateName(State state)
    9066             : {
     9067             :     switch (state) {
    9068             : #define MAKE_CASE(name) case State::name: return #name;
    9069             :       GCSTATES(MAKE_CASE)
    9070           0 : #undef MAKE_CASE
    9071             :     }
     9072             :     MOZ_MAKE_COMPILER_ASSUME_IS_UNREACHABLE("invalid gc::State enum value");
    9073             : }
    9074           0 : 
    9075             : void
    9076           0 : AutoAssertEmptyNursery::checkCondition(JSContext* cx) {
    9077             :     if (!noAlloc)
    9078           0 :         noAlloc.emplace();
    9079             :     this->cx = cx;
    9080             :     MOZ_ASSERT(cx->nursery().isEmpty());
    9081           0 : }
    9082             : 
    9083             : AutoEmptyNursery::AutoEmptyNursery(JSContext* cx)
    9084             :   : AutoAssertEmptyNursery()
    9085           0 : {
    9086           0 :     MOZ_ASSERT(!cx->suppressGC);
    9087           0 :     cx->runtime()->gc.stats().suspendPhases();
    9088           0 :     cx->runtime()->gc.evictNursery(JS::gcreason::EVICT_NURSERY);
    9089           0 :     cx->runtime()->gc.stats().resumePhases();
    9090           0 :     checkCondition(cx);
    9091             : }
    9092           0 : 
    9093           0 : } /* namespace gc */
    9094             : } /* namespace js */
    9095           0 : 
    9096           0 : #ifdef DEBUG
    9097           0 : 
    9098           0 : namespace js {
    9099           0 : 
    9100           0 : // We don't want jsfriendapi.h to depend on GenericPrinter,
    9101             : // so these functions are declared directly in the cpp.
    9102             : 
    9103             : extern JS_FRIEND_API(void)
    9104             : DumpString(JSString* str, js::GenericPrinter& out);
    9105             : 
    9106             : }
    9107             : 
    9108             : void
    9109             : js::gc::Cell::dump(js::GenericPrinter& out) const
    9110             : {
    9111             :     switch (getTraceKind()) {
    9112             :       case JS::TraceKind::Object:
    9113             :         reinterpret_cast<const JSObject*>(this)->dump(out);
    9114             :         break;
    9115             : 
    9116             :       case JS::TraceKind::String:
     9117             :         js::DumpString(reinterpret_cast<JSString*>(const_cast<Cell*>(this)), out);
    9118           0 :         break;
    9119             : 
    9120           0 :       case JS::TraceKind::Shape:
    9121             :         reinterpret_cast<const Shape*>(this)->dump(out);
    9122           0 :         break;
    9123           0 : 
    9124             :       default:
    9125             :         out.printf("%s(%p)\n", JS::GCTraceKindToAscii(getTraceKind()), (void*) this);
    9126           0 :     }
    9127           0 : }
    9128             : 
    9129             : // For use in a debugger.
    9130           0 : void
    9131           0 : js::gc::Cell::dump() const
    9132             : {
    9133             :     js::Fprinter out(stderr);
    9134           0 :     dump(out);
    9135             : }
    9136           0 : #endif
    9137             : 
    9138             : static inline bool
    9139             : CanCheckGrayBits(const Cell* cell)
    9140           0 : {
    9141             :     MOZ_ASSERT(cell);
    9142           0 :     if (!cell->isTenured())
    9143           0 :         return false;
    9144           0 : 
    9145             :     auto tc = &cell->asTenured();
    9146             :     auto rt = tc->runtimeFromAnyThread();
    9147             :     return CurrentThreadCanAccessRuntime(rt) && rt->gc.areGrayBitsValid();
    9148     3582719 : }
    9149             : 
    9150           1 : JS_PUBLIC_API(bool)
    9151     7165487 : js::gc::detail::CellIsMarkedGrayIfKnown(const Cell* cell)
    9152             : {
    9153             :     // We ignore the gray marking state of cells and return false in the
    9154           0 :     // following cases:
    9155     2667493 :     //
    9156           0 :     // 1) When OOM has caused us to clear the gcGrayBitsValid_ flag.
    9157             :     //
    9158             :     // 2) When we are in an incremental GC and examine a cell that is in a zone
    9159             :     // that is not being collected. Gray targets of CCWs that are marked black
    9160       15030 :     // by a barrier will eventually be marked black in the next GC slice.
    9161             :     //
    9162             :     // 3) When we are not on the runtime's main thread. Helper threads might
    9163             :     // call this while parsing, and they are not allowed to inspect the
    9164             :     // runtime's incremental state. The objects being operated on are not able
    9165             :     // to be collected and will not be marked any color.
    9166             : 
    9167             :     if (!CanCheckGrayBits(cell))
    9168             :         return false;
    9169             : 
    9170             :     auto tc = &cell->asTenured();
    9171             :     MOZ_ASSERT(!tc->zoneFromAnyThread()->usedByHelperThread());
    9172             : 
    9173             :     auto rt = tc->runtimeFromMainThread();
    9174             :     if (rt->gc.isIncrementalGCInProgress() && !tc->zone()->wasGCStarted())
    9175             :         return false;
    9176           1 : 
    9177             :     return detail::CellIsMarkedGray(tc);
    9178             : }
    9179           0 : 
    9180           0 : #ifdef DEBUG
    9181             : 
    9182           0 : JS_PUBLIC_API(bool)
    9183           0 : js::gc::detail::CellIsNotGray(const Cell* cell)
    9184             : {
    9185             :     // Check that a cell is not marked gray.
    9186           0 :     //
    9187             :     // Since this is a debug-only check, take account of the eventual mark state
    9188             :     // of cells that will be marked black by the next GC slice in an incremental
    9189             :     // GC. For performance reasons we don't do this in CellIsMarkedGrayIfKnown.
    9190             : 
    9191             :     if (!CanCheckGrayBits(cell))
    9192             :         return true;
    9193             : 
    9194             :     // TODO: I'd like to AssertHeapIsIdle() here, but this ends up getting
    9195             :     // called during GC and while iterating the heap for memory reporting.
    9196             :     MOZ_ASSERT(!JS::RuntimeHeapIsCycleCollecting());
    9197             : 
    9198             :     auto tc = &cell->asTenured();
    9199             :     if (!detail::CellIsMarkedGray(tc))
    9200             :         return true;
    9201             : 
    9202             :     // The cell is gray, but may eventually be marked black if we are in an
    9203             :     // incremental GC and the cell is reachable by something on the mark stack.
    9204             : 
    9205             :     auto rt = tc->runtimeFromAnyThread();
    9206             :     if (!rt->gc.isIncrementalGCInProgress() || tc->zone()->wasGCStarted())
    9207             :         return false;
    9208             : 
    9209             :     Zone* sourceZone = rt->gc.marker.stackContainsCrossZonePointerTo(tc);
    9210             :     if (sourceZone && sourceZone->wasGCStarted())
    9211             :         return true;
    9212             : 
    9213             :     return false;
    9214             : }
    9215             : 
    9216             : extern JS_PUBLIC_API(bool)
    9217             : js::gc::detail::ObjectIsMarkedBlack(const JSObject* obj)
    9218             : {
    9219             :     return obj->isMarkedBlack();
    9220             : }
    9221             : 
    9222             : #endif
    9223             : 
    9224             : js::gc::ClearEdgesTracer::ClearEdgesTracer()
    9225             :   : CallbackTracer(TlsContext.get(), TraceWeakMapKeysValues)
    9226             : {}
    9227             : 
    9228             : template <typename S>
    9229             : inline void
    9230             : js::gc::ClearEdgesTracer::clearEdge(S** thingp)
    9231             : {
    9232             :     InternalBarrierMethods<S*>::preBarrier(*thingp);
    9233             :     InternalBarrierMethods<S*>::postBarrier(thingp, *thingp, nullptr);
    9234             :     *thingp = nullptr;
    9235             : }
    9236             : 
    9237             : void js::gc::ClearEdgesTracer::onObjectEdge(JSObject** objp) { clearEdge(objp); }
    9238             : void js::gc::ClearEdgesTracer::onStringEdge(JSString** strp) { clearEdge(strp); }
    9239             : void js::gc::ClearEdgesTracer::onSymbolEdge(JS::Symbol** symp) { clearEdge(symp); }
    9240             : #ifdef ENABLE_BIGINT
    9241             : void js::gc::ClearEdgesTracer::onBigIntEdge(JS::BigInt** bip) { clearEdge(bip); }
    9242             : #endif
    9243             : void js::gc::ClearEdgesTracer::onScriptEdge(JSScript** scriptp) { clearEdge(scriptp); }
    9244             : void js::gc::ClearEdgesTracer::onShapeEdge(js::Shape** shapep) { clearEdge(shapep); }
    9245             : void js::gc::ClearEdgesTracer::onObjectGroupEdge(js::ObjectGroup** groupp) { clearEdge(groupp); }
    9246             : void js::gc::ClearEdgesTracer::onBaseShapeEdge(js::BaseShape** basep) { clearEdge(basep); }
    9247             : void js::gc::ClearEdgesTracer::onJitCodeEdge(js::jit::JitCode** codep) { clearEdge(codep); }
    9248             : void js::gc::ClearEdgesTracer::onLazyScriptEdge(js::LazyScript** lazyp) { clearEdge(lazyp); }
    9249             : void js::gc::ClearEdgesTracer::onScopeEdge(js::Scope** scopep) { clearEdge(scopep); }
    9250             : void js::gc::ClearEdgesTracer::onRegExpSharedEdge(js::RegExpShared** sharedp) { clearEdge(sharedp); }
    9251             : void js::gc::ClearEdgesTracer::onChild(const JS::GCCellPtr& thing) { MOZ_CRASH(); }

Generated by: LCOV version 1.13-14-ga5dd952