MOMENT CONDITIONS FOR RANDOM COEFFICIENT AR(∞) UNDER NON-NEGATIVITY ASSUMPTIONS

We consider random coefficient autoregressive models of infinite order (AR(∞)) under the assumption of non-negativity of the coefficients. We develop novel methods yielding sufficient or necessary conditions for finiteness of moments, based on combinatorial expressions of first and second moments. The methods based on first moments recover previous sufficient conditions by Doukhan and Wintenberger (2008) in our setting. The second moment method provides in particular a necessary and sufficient condition for finiteness of second moments which is different, but shown to be equivalent to the classical criterion of Nicholls and Quinn (1982) in the case of finite order. We further illustrate our results through two examples.

Let (A_j)_{j⩾1} be a sequence of non-negative random variables and B a non-negative random variable. We allow for arbitrary dependencies among them. Consider the recurrence

(0.1) X_t = Σ_{j⩾1} A_{t,j} X_{t−j} + B_t, t ∈ Z,

where ((A_{t,j})_{j⩾1}, B_t)_{t∈Z} is an iid sequence with generic element ((A_j)_{j⩾1}, B).
In the case where there exists p ∈ N such that A_j = 0 for all j > p, this recurrence is known in the literature under the name random coefficient autoregressive model of p-th order, symbolically AR(p) (Buraczewski et al., 2016, Section 4.4.9), or also random difference equation of p-th order in Kesten (1973). It is natural to call the general recurrence the random coefficient autoregressive model of infinite order, symbolically AR(∞).
Using a backward iteration starting from zero (Diaconis and Freedman (1999)), one can construct a non-anticipative stationary solution of (0.1), whose marginal distribution is the law of the following random variable:

(0.2) X̃ := Σ_{n⩾0} Σ_{j_1,…,j_n⩾1} Ã_{0,j_1} Ã_{j_1,j_2} ⋯ Ã_{j_1+⋯+j_{n−1},j_n} B_{−(j_1+⋯+j_n)}, where Ã_{t,j} := A_{−t,j}, t ∈ Z, j ⩾ 1.

By non-negativity, this quantity is always well-defined but may be infinite. The aim of this paper is to find explicit necessary and sufficient conditions for the existence of moments of X̃. We do not address here the question of whether the solution to (0.1) thus constructed is unique and refer to Doukhan and Wintenberger (2008) for results in this direction. As usual, we will always assume in the sequel that the random variables B_t satisfy P(B_t = 0) < 1, t ∈ Z, to avoid any degeneracy.
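To make the construction concrete, here is a minimal simulation sketch. The coefficient family A_{t,j} = β^j Z_t with iid standard exponential Z_t and B_t = 1 is a hypothetical choice for illustration only (it is of the same form as the example of Section 4.2); by monotonicity, the law of the forward iteration started from zero increases to the stationary marginal:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_iterate(n_steps=300, beta=0.3):
    """One trajectory of (0.1) started from zero (X_s = 0 for s <= 0).

    Hypothetical coefficient family, for illustration only:
    A_{t,j} = beta**j * Z_t with iid standard exponential Z_t, and B_t = 1.
    By non-negativity, the law of X_t increases towards the stationary
    marginal X of (0.2).
    """
    X = np.zeros(n_steps + 1)              # X[0] plays the role of X_0 = 0
    Z = rng.exponential(size=n_steps + 1)
    for t in range(1, n_steps + 1):
        j = np.arange(1, t + 1)            # only lags reaching back to time 0
        X[t] = Z[t] * np.dot(beta ** j, X[t - j]) + 1.0
    return X[-1]

samples = [forward_iterate() for _ in range(200)]
print("Monte Carlo mean of X:", np.mean(samples))
```

For this toy family, Σ_{j⩾1} E[A_j] = β/(1−β) = 3/7, so the printed mean should be close to E[B]/(1 − 3/7) = 1.75.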
The finite order case AR(p), p < ∞. Let us recall what is known in the finite order case, following Kesten (1973) and the recent book by Buraczewski et al. (2016). When there exists p ∈ N such that A_{1,j} = 0 for all j > p, the recursion (0.1) turns into the finite order AR(p) equation

X_t = Σ_{j=1}^p A_{t,j} X_{t−j} + B_t, t ∈ Z.

One typically rephrases it as a random difference equation of first order on p × p matrices: define A_t as the p × p companion matrix whose first row is (A_{t,1}, …, A_{t,p}) and which has ones on the subdiagonal, B_t := (B_t, 0, …, 0)ᵀ and X_t := (X_t, X_{t−1}, …, X_{t−p+1})ᵀ, with all non-appearing entries equal to zero in the definition of A_t. We also denote by A the matrix defined correspondingly with (A_1, …, A_p) instead of (A_{t,1}, …, A_{t,p}) and denote B accordingly. Then (0.1) is equivalent to

(0.3) X_t = A_t X_{t−1} + B_t, t ∈ Z.

This equation has been studied by many authors starting from Kesten (1973). Let us recall how it is typically solved. First, if one seeks stationary solutions of (0.1) one needs to assume that the top Lyapunov exponent of the matrix A is negative, i.e.,

(0.4) lim_{n→∞} (1/n) E[log ∥A_n A_{n−1} ⋯ A_1∥] < 0,

where we can of course choose ∥·∥ to be any norm. Next, one defines the following function (Buraczewski et al., 2016, (4.4.35)):

h(α) := lim_{n→∞} (1/n) log E[∥A_n A_{n−1} ⋯ A_1∥^α], α ⩾ 0

(existence of the limit follows from a submultiplicativity argument). The tail behavior of the stationary solution (0.2) depends on the form of this function. Most of the literature works under the assumption that there exist 0 < α < α′ such that h(α) = 0, h(α′) < ∞ and E[B^{α′}] < ∞. In this case, and under a further non-lattice assumption, there exists a stationary solution with generic element X satisfying the distributional equation

(0.6) X =_d A X + B,

with (A, B) independent of X. Furthermore, ∥X∥ is power law tailed of order α, i.e. there exists C > 0 such that P(∥X∥ > x) ∼ C x^{−α} as x → ∞ (we use the notation f(x) ∼ g(x), x → ∞, to mean that the two real-valued functions f and g satisfy lim_{x→∞} f(x)/g(x) = 1); see Buraczewski et al. (2016). One easily deduces the moment properties

E[∥X∥^θ] < ∞ for 0 < θ < α and E[∥X∥^θ] = ∞ for θ ⩾ α.

In other words, using the convexity of the function h, the condition h(θ) < 0 is necessary and sufficient for the existence of moments of order θ > 0 for the AR(p) model. The aim of the paper is to extend such necessary and sufficient conditions when p = ∞.
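Neither h nor the Lyapunov exponent is explicit in general, but both can be estimated by simulation. The sketch below assumes an AR(2) coefficient law (A_1, A_2) = (0.4U, 0.3V) with independent uniforms U, V (a purely illustrative choice, not from the text) and estimates (0.4) and h(α) by Monte Carlo, rescaling the matrix products to avoid numerical underflow:

```python
import numpy as np

rng = np.random.default_rng(1)

def companion(a):
    """p x p companion matrix: first row a = (A_1, ..., A_p), subdiagonal ones."""
    p = len(a)
    M = np.zeros((p, p))
    M[0, :] = a
    M[1:, :-1] = np.eye(p - 1)
    return M

def log_norm_product(n):
    """log ||A_n ... A_1|| for one draw of iid companion matrices.

    Hypothetical AR(2) coefficient law, illustration only:
    (A_1, A_2) = (0.4 U, 0.3 V), U, V independent Uniform(0,1).
    """
    P, logscale = np.eye(2), 0.0
    for _ in range(n):
        P = companion(np.array([0.4, 0.3]) * rng.uniform(size=2)) @ P
        s = np.abs(P).max()                # rescale to avoid underflow
        P, logscale = P / s, logscale + np.log(s)
    return logscale + np.log(np.linalg.norm(P, 1))

def h_estimate(alpha, n=100, reps=2000):
    """Monte Carlo estimate of h(alpha) = lim (1/n) log E[||A_n...A_1||^alpha]."""
    logs = alpha * np.array([log_norm_product(n) for _ in range(reps)])
    m = logs.max()                          # log-mean-exp for stability
    return (m + np.log(np.exp(logs - m).mean())) / n

print("top Lyapunov exponent ~", np.mean([log_norm_product(100) for _ in range(500)]) / 100)
print("h(1) ~", h_estimate(1.0))
```

Both printed values are negative for this choice, consistent with the existence of a stationary solution with a finite first moment.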
The root equation h(α) = 0 is impossible to solve explicitly for general stochastic matrices A. Even the simpler top Lyapunov condition (0.4) rarely turns into an explicit condition on the law of the random vector (A_1, …, A_p). The specificity of the matrix A in the AR(p) model allows Nicholls and Quinn (1982) to provide an explicit sufficient condition for the existence of a stationary solution with finite second moment, in the case of centered coefficients. Denoting by ρ the spectral radius and by ⊗ the tensor product, the sufficient condition for (0.4) is

(0.7) ρ(E[A ⊗ A]) < 1.

Nicholls and Quinn (1982, Corollary 2.2.2) also show that this assumption is necessary and sufficient for having finite moments of order 2 when B has finite moments of order 2 and is independent of A. Thus the condition ρ(E[A ⊗ A]) < 1 is equivalent to the condition h(2) < 0 in the case of centered coefficients B_t. Pham (1986) extended the equivalence to a more general setting including non-negative coefficients as considered here. In this article, we are interested in similarly explicit conditions when p = ∞, in the case of non-negative coefficients.
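For the same illustrative coefficient law as above, the condition (0.7) can be checked directly, since the tensor product is available as np.kron:

```python
import numpy as np

rng = np.random.default_rng(2)

def draw_A():
    """One draw of the AR(2) companion matrix.

    Same hypothetical coefficient law as above (illustration only):
    (A_1, A_2) = (0.4 U, 0.3 V), U, V independent Uniform(0,1).
    """
    a1, a2 = np.array([0.4, 0.3]) * rng.uniform(size=2)
    return np.array([[a1, a2], [1.0, 0.0]])

# Estimate E[A (x) A] by Monte Carlo and check the criterion (0.7).
EAA = np.mean([np.kron(A, A) for A in (draw_A() for _ in range(20_000))], axis=0)
rho = max(abs(np.linalg.eigvals(EAA)))
print("rho(E[A (x) A]) ~", rho, "(finite second moment iff < 1)")
```

For this choice the printed value is below 1, so the stationary solution has a finite second moment.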
Back to AR(∞). Very little is known in the infinite order case p = ∞, when the recursion (0.1) is satisfied with P(A_{t,j} > 0) > 0 for infinitely many j ⩾ 1. In the language of autoregressive processes, this model is denoted by AR(∞) and is an example of an autoregressive process of "infinite memory". Doukhan and Wintenberger (2008) studied non-linear processes with infinite memory, which include the AR(∞) as a special case and are defined by

X_t = F(X_{t−1}, X_{t−2}, …; ξ_t), t ∈ Z,

where the iid process (ξ_t) can be taken equal to ((A_{t,j})_j, B_t). They gave sufficient conditions for finite moments of order θ using a coupling approach in L^θ, θ ⩾ 1. In particular, for the model (0.1), the contraction condition (3.2) in Doukhan and Wintenberger (2008), for θ ⩾ 1, turns into the condition φ_1(θ) < 0, where

(0.8) φ_1(θ) := log Σ_{j⩾1} E[A_j^θ]^{1/θ}.

In this case, the result of Doukhan and Wintenberger (2008) ensures the existence of a stationary solution X admitting finite moments of order θ. We are not aware of any other explicit condition appearing in the literature for the general AR(∞) model.
Relation with the smoothing transform. Equation (0.6), or rather its restriction to the first coordinate, looks similar to an equation known in the literature as the fixed point equation of the (non-homogeneous) smoothing transform. This is the following distributional equation (the unknown being the law of the random variable Y):

(0.9) Y =_d Σ_{j⩾1} A_j Y^{(j)} + B,

where (Y^{(j)})_{j⩾1} are iid copies of the non-negative random variable Y, independent of ((A_j)_{j⩾1}, B). Define

(0.10) ϕ_1(θ) := log Σ_{j⩾1} E[A_j^θ].

It is known that a solution to (0.9) exists if ϕ_1(0) > 0 and, for some θ ∈ (0, 1], E[B^θ] < ∞ and ϕ_1(θ) < 0. Furthermore, this criterion is close to being optimal (Buraczewski et al., 2016, Section 5.2.4). In this case, if there exists α ⩾ θ such that ϕ_1(α) = 0 and some non-arithmeticity condition is met, then P(Y > x) ∼ C x^{−α} as x → ∞, for some constant C ∈ (0, ∞). Note that the characteristic exponent α does not depend on the dependencies between the A_j's, only on the marginal laws.
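Both ϕ_1 and φ_1 are elementary to evaluate once the marginal moments E[A_j^θ] are known. A minimal sketch, assuming the illustrative family A_j = β^j Z with Z standard exponential (so that E[A_j^θ] = β^{jθ} Γ(1+θ); a family of the same form as the example of Section 4.2):

```python
import numpy as np
from math import gamma

def phi1(theta, moment, n_max=500):
    """(0.10): phi_1(theta) = log sum_{j>=1} E[A_j^theta], truncated at n_max."""
    return np.log(sum(moment(j, theta) for j in range(1, n_max)))

def varphi1(theta, moment, n_max=500):
    """(0.8): varphi_1(theta) = log sum_{j>=1} E[A_j^theta]**(1/theta)."""
    return np.log(sum(moment(j, theta) ** (1.0 / theta) for j in range(1, n_max)))

# Hypothetical family A_j = beta**j * Z with Z standard exponential,
# so that E[A_j^theta] = beta**(j*theta) * Gamma(1 + theta).
beta = 0.3
moment = lambda j, theta: beta ** (j * theta) * gamma(1 + theta)

for theta in (0.5, 1.0, 2.0):
    print(theta, phi1(theta, moment), varphi1(theta, moment))
```

The printout illustrates that ϕ_1 and φ_1 coincide at θ = 1, a fact used repeatedly below.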
Description of our results. Our results provide moment properties for the non-negative random variable X̃ defined in (0.2) under conditions involving moments of the (non-negative) random coefficients A_{t,j} or their generic elements A_j, j ⩾ 1. Recall that X̃ is distributed as the (marginal distribution of the) stationary solution of the recurrence (0.1). Our first result is an extension of the above-mentioned sufficiency condition from Doukhan and Wintenberger (2008) to the case θ ⩽ 1. More precisely, we prove that ϕ_1(θ) < 0 is a sufficient condition for E[X̃^θ] < ∞ when 0 < θ ⩽ 1. Then we show that ϕ_1(θ) < 0 is a necessary condition for E[X̃^θ] < ∞ when θ ⩾ 1. Finally, we also show that φ_1(θ) < 0 is a necessary condition for E[X̃^θ] < ∞ when 0 < θ < 1. Since the functions ϕ_1 and φ_1 coincide at θ = 1, we deduce that ϕ_1(1) = φ_1(1) < 0 is a necessary and sufficient condition for the existence of moments of order 1 when E[B] < ∞. We thus obtain necessary conditions as well as sufficient conditions for any θ > 0, and these are asymptotically sharp when θ is close to one. The special role of θ = 1 is not surprising in view of the linearity of the model and of the expectation.
Our main contributions are necessary conditions and sufficient conditions for moments that improve on the previous ones for θ ⩾ 2 and for some values of θ in the interval [1, 2]. Furthermore, they are asymptotically sharp when θ is close to 2. To do so, we introduce in Section 2 two other functions ϕ_2 and φ_2 that have the following properties (we omit some extra conditions below to simplify the presentation):
• for 0 < θ < 2, the conditions ϕ_2(θ) < 0 and φ_2(θ) < 0 are sufficient and necessary, respectively, for the existence of moments E[X̃^θ] < ∞;
• for θ > 2, the conditions ϕ_2(θ) < 0 and φ_2(θ) < 0 are necessary and sufficient, respectively, for the existence of moments E[X̃^θ] < ∞.
The statement above is only a partial summary omitting extra conditions in some cases. In particular, conditions on the functions ϕ_1 and φ_1 are required for ensuring the existence of moments; see Theorem 2.1 for a precise statement. The functions ϕ_1, φ_1, ϕ_2 and φ_2 do not depend on the law of the non-negative random variables B_t. In fact, our sufficient or necessary conditions for finiteness of moments of X̃ apply both in situations where X̃ has a heavier tail than B_t (under moment conditions on B_t) and in situations where the moment properties of X̃ and B_t are similar. In the latter case, one could also apply Theorem 3.1 of Hult and Samorodnitsky (2008) to obtain the precise tail properties of X̃ under moment conditions on ((A_{t,j})_{j⩾1})_{t∈Z} and regular variation of the tail of B_t.
We furthermore verify that the necessary and sufficient condition ϕ_2(θ) = φ_2(θ) < 0 coincides with the classical one (0.7) of Nicholls and Quinn (1982) in the case θ = 2 when p < ∞, although it is of a very different form. Indeed, our approach uses linearity together with combinatorics on pairs instead of a matrix representation like that of Nicholls and Quinn (1982). The equivalence stated in Theorem 3.1 extends the classical necessary and sufficient condition (0.7) to the stationary solution of the AR(p) model (0.3).
We mention that our approach also covers models of the form

(0.11) X_t = Σ_{j⩾1} A_{t−j,j} X_{t−j} + B_t, t ∈ Z,

with ((A_{t,j})_{j⩾1}, B_t)_{t∈Z} defined as above, assuming that the B_t's are independent of the A_{t,j}'s; here the coefficient of X_{t−j} is generated at time t − j (this is assumption (A2) of Section 1). The non-anticipative solution of (0.11) is a predictable process with respect to the natural filtration. The volatility process (σ_t²) of a GARCH(1,1) model ε_t = Z_t σ_t for iid (Z_t) is a typical example. It satisfies the recursion

σ_t² = ω + (α Z_{t−1}² + β) σ_{t−1}², where ω > 0, α ⩾ 0 and 0 < β < 1.

After expanding, this turns into a recursion of infinite order of the form (0.11):

σ_t² = ω/(1−β) + Σ_{j⩾1} α β^{j−1} Z_{t−j}² σ_{t−j}².

We check that for the volatility process of a GARCH(1,1) model our conditions provide necessary and sufficient conditions for moments of finite order.
In particular, we recover the optimal second-order moment condition of the GARCH(1,1) volatility using our approach. Notice that infinite memory models (AR and ARCH) are necessary for writing the invertible form (depending only on the past observations ε_{t−1}, ε_{t−2}, …) of some finite memory models such as ARMA or GARCH.
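To illustrate the expansion, the sketch below iterates the GARCH(1,1) volatility recursion directly and compares the result with a truncation of its infinite order form; the parameter values ω = 0.1, α = 0.1, β = 0.8 and Gaussian innovations are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)

# GARCH(1,1): sigma2_t = omega + (alpha*Z_{t-1}^2 + beta) * sigma2_{t-1}.
# Expanding yields the infinite-order form
#   sigma2_t = omega * sum_{k>=0} prod_{i=1..k} (alpha*Z_{t-i}^2 + beta).
# Parameter values are illustrative only.
omega, alpha, beta = 0.1, 0.1, 0.8
Z = rng.standard_normal(10_000)

# Direct iteration of the Markov recursion.
sigma2 = omega / (1 - alpha - beta)        # start at the stationary mean
for z in Z:
    sigma2 = omega + (alpha * z**2 + beta) * sigma2

# Truncated expansion using the last m innovations.
m = 2_000
prod = np.cumprod(alpha * Z[-1: -m - 1: -1] ** 2 + beta)
expansion = omega * (1.0 + prod.sum())

print(sigma2, expansion)   # the two values agree up to the truncation error
```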
There is a natural extension of some of our methods to higher moments of order k ⩾ 3. However, while this easily yields necessary criteria for the finiteness of moments, we were not able to obtain simple sufficient criteria in that case. See Section 2.1 for details.
Finally, notice also that sufficient conditions for finiteness of moments for some non-linear models (such as the Galton-Watson process with immigration, see Basrak et al. (2013)) can be deduced from Theorem 2.1 using a stochastic domination argument similar to the one in Doukhan and Wintenberger (2008).

First moment - comparison to smoothing transform
As in the introduction, let (A_j)_{j⩾1} be a sequence of non-negative random variables and B a non-negative random variable. We allow for arbitrary dependencies among them (we only require that B is independent of the family (A_j)_{j⩾1} under assumption (A2) below). In the remainder of the paper, we let (X_t)_{t⩾0} be the solution to the equation (0.1) starting from zero, i.e.

(1.12) X_t = Σ_{j⩾1} A_{t,j} X_{t−j} + B_t for t ⩾ 1, and X_t = 0 for t ⩽ 0,

where the family ((A_{t,j})_{j⩾1}, B_t)_{t∈Z} satisfies one of the following two assumptions:
(A1) ((A_{t,j})_{j⩾1}, B_t)_{t∈Z} are iid copies of the random sequence ((A_j)_{j⩾1}, B), or
(A2) ((A_{t+j,j})_{j⩾1})_{t∈Z} are iid copies of the random sequence (A_j)_{j⩾1}. Furthermore, B is independent of the family (A_j)_{j⩾1}.
Under (A2), we set A′_{t,j} := A_{t+j,j} for any t ∈ Z and j ⩾ 1, so that A_{t,j} = A′_{t−j,j} for any t ∈ Z and j ⩾ 1. Note that under (A1) or (A2), the sequence of random vectors ((A_{t,j})_{j⩾1})_{t∈Z} is stationary in t, i.e. translation-invariant in law. Under (A1) it is also reversible, i.e. invariant in law under the change of index t ↦ −t; under (A2), however, it is not in general. In order to express the distribution of the solution of (1.12) using backward iterates, we therefore define

(1.13) Ã_{t,j} := A_{−t,j}, t ∈ Z, j ⩾ 1.

Remark 1.1. Given t, t′ ∈ Z and j, j′ ⩾ 1, one readily checks that Ã_{t,j} and Ã_{t′,j′} are independent if t ≠ t′ under (A1), respectively if t + j ≠ t′ + j′ under (A2), with obvious generalizations for larger collections of random variables. Moreover, Ã_{t,j} and B_{−t′} are independent if t ≠ t′ under (A1), and for all t, t′ under (A2).
The following result is the main result of this section.

Theorem 1.2. Let θ > 0.
(1) As t → ∞, X_t converges in law to a (possibly infinite) limit X, and X =_d X̃ with X̃ defined in (0.2).
(2) Assume ϕ_1(θ) < 0 and E[B^θ] < ∞ for some θ ∈ (0, 1]. Then E[X^θ] ⩽ E[B^θ]/(1 − e^{ϕ_1(θ)}) < ∞; in particular, this bound involves only the marginal laws of the A_j and does not depend on the dependence among the A_j.
(3) Let θ ⩾ 1. If ϕ_1(θ) ⩾ 0 or E[B^θ] = ∞, then E[X^θ] = ∞.
(4) Assume φ_1(θ) < 0 and E[B^θ] < ∞ for some θ ⩾ 1. Then E[X^θ] < ∞.
(5) Let 0 < θ ⩽ 1. If φ_1(θ) ⩾ 0 or E[B^θ] = ∞, then E[X^θ] = ∞.
Proof. We begin by proving 1. Unravelling the recurrence (1.12), we get

X_t = Σ_{n⩾0} Σ_{j_1,…,j_n⩾1, j_1+⋯+j_n⩽t−1} A_{t,j_1} A_{t−j_1,j_2} ⋯ A_{t−j_1−⋯−j_{n−1},j_n} B_{t−(j_1+⋯+j_n)}.

Using stationarity in t and (1.13), we get

X_t =_d X̃_t := Σ_{n⩾0} Σ_{j_1,…,j_n⩾1, j_1+⋯+j_n⩽t−1} Ã_{0,j_1} Ã_{j_1,j_2} ⋯ Ã_{j_1+⋯+j_{n−1},j_n} B_{−(j_1+⋯+j_n)}.

By non-negativity of the A's, the sequence X̃_t is non-decreasing in t and therefore converges almost surely as t → ∞ to the (possibly infinite) limit X̃ defined in (0.2), namely,

X̃ = Σ_{n⩾0} Σ_{j_1,…,j_n⩾1} Ã_{0,j_1} Ã_{j_1,j_2} ⋯ Ã_{j_1+⋯+j_{n−1},j_n} B_{−(j_1+⋯+j_n)}.

Since X_t =_d X̃_t for every t, this shows that X_t converges in law to a (possibly infinite) limit X. Furthermore, X =_d X̃. We now prove 2., showing that, with θ from the assumption, i.e. θ ∈ (0, 1] and ϕ_1(θ) < 0, E[X^θ] = E[X̃^θ] < ∞ (note that this implies in particular that X̃ is finite almost surely). By subadditivity of the function x ↦ x^θ, we have

X̃^θ ⩽ Σ_{n⩾0} Σ_{j_1,…,j_n⩾1} (Ã_{0,j_1} ⋯ Ã_{j_1+⋯+j_{n−1},j_n})^θ B_{−(j_1+⋯+j_n)}^θ.

By independence (see Remark 1.1),

E[X̃^θ] ⩽ Σ_{n⩾0} Σ_{j_1,…,j_n⩾1} E[A_{j_1}^θ] ⋯ E[A_{j_n}^θ] E[B^θ] = E[B^θ] Σ_{n⩾0} e^{n ϕ_1(θ)} = E[B^θ]/(1 − e^{ϕ_1(θ)}) < ∞,

since ϕ_1(θ) < 0 by hypothesis. This finishes the proof of 2. The proof of 3. is similar. Let θ ⩾ 1 and ϕ_1(θ) ⩾ 0 or E[B^θ] = ∞. By superadditivity of the function x ↦ x^θ, we have, similarly as above,

E[X̃^θ] ⩾ E[B^θ] Σ_{n⩾0} e^{n ϕ_1(θ)} = ∞.

The assertion 4. follows from Doukhan and Wintenberger (2008) in the case where (A1) is satisfied. We provide here an alternative proof that also works under (A2). Using the Minkowski inequality (θ ⩾ 1) and by independence (see Remark 1.1),

E[X̃^θ]^{1/θ} ⩽ Σ_{n⩾0} Σ_{j_1,…,j_n⩾1} E[A_{j_1}^θ]^{1/θ} ⋯ E[A_{j_n}^θ]^{1/θ} E[B^θ]^{1/θ} = E[B^θ]^{1/θ} Σ_{n⩾0} e^{n φ_1(θ)} < ∞,

since φ_1(θ) < 0 by hypothesis. The proof of 5. is similar to the preceding one, replacing the Minkowski inequality with the reverse Minkowski inequality holding for 0 < θ ⩽ 1 and non-negative random variables Y_1, Y_2, …:

E[(Σ_i Y_i)^θ]^{1/θ} ⩾ Σ_i E[Y_i^θ]^{1/θ}.

The proof of point 5. then follows along the same lines as the proof of point 4. □
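The geometric-series bound E[X^θ] ⩽ E[B^θ]/(1 − e^{ϕ_1(θ)}) from the proof of point 2 can be probed numerically. A sketch under the same hypothetical family A_{t,j} = β^j Z_t (iid exponential Z_t, B_t = 1) as before, now with β = 0.1 so that ϕ_1(1/2) < 0:

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(4)
beta, theta = 0.1, 0.5

def draw_X(n=200):
    """Forward iteration of (1.12); hypothetical family, illustration only:
    A_{t,j} = beta**j * Z_t with iid exponential Z_t, and B_t = 1."""
    X = np.zeros(n + 1)
    for t in range(1, n + 1):
        j = np.arange(1, t + 1)
        X[t] = rng.exponential() * np.dot(beta ** j, X[t - j]) + 1.0
    return X[-1]

# exp(phi_1(theta)) = beta**theta * Gamma(1 + theta) / (1 - beta**theta)
e_phi1 = beta**theta * gamma(1 + theta) / (1 - beta**theta)
bound = 1.0 / (1.0 - e_phi1)               # E[B^theta] = 1 here
mc = np.mean([draw_X() ** theta for _ in range(500)])
print("E[X^theta] ~", mc, " bound:", bound)
```

The Monte Carlo estimate sits comfortably below the bound, as it should.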

Second moment - a combinatorial formula
From this section on, we let X be the random variable from the statement of Theorem 1.2 and X̃ the random variable from (0.2). We now calculate the second moment E[X²]. For this, it is useful to introduce some notation, which allows us to express the second moment and related quantities in a compact way. Define the set of finite increasing integer-valued sequences starting at zero:

T := {t = (t_0, …, t_n) : n ∈ N_0, 0 = t_0 < t_1 < ⋯ < t_n}.

The trivial sequence is denoted by 0 = (0). For t = (t_0, …, t_n) ∈ T, we write n(t) = n and Ã_t = Ã_{t_0, t_1−t_0} ⋯ Ã_{t_{n−1}, t_n−t_{n−1}}, with Ã_0 = 1 by convention. We also denote B̃_t := B_{−t_n}. With this notation, we have

(2.14) X̃ = Σ_{t∈T} Ã_t B̃_t, and hence (2.15) E[X̃²] = Σ_{(s,t)∈T×T} E[Ã_s B̃_s Ã_t B̃_t].

Define the concatenation of a finite number of sequences t¹, …, t^k ∈ T by

t¹ ⋯ t^k := (t¹_0, …, t¹_{n(t¹)}, t¹_{n(t¹)} + t²_1, …, t¹_{n(t¹)} + t²_{n(t²)}, …).

Now define the following two sets of pairs of elements in T (identifying a sequence t ∈ T with the set {t_0, …, t_n} of its entries):

C := {(s, t) ∈ T × T : s_{n(s)} = t_{n(t)} > 0 and s ∩ t = {0, s_{n(s)}}},
O := {(s, t) ∈ T × T : s ∩ t = {0}}.

We call the pairs in C and O closed and open, respectively. Note that the trivial pair is open by definition: (0, 0) ∈ O. It is easy to see that any pair (s, t) ∈ T × T can be written as a concatenation of a finite number of closed pairs and an open pair: for every (s, t) ∈ T × T there exist k ∈ N_0, (s¹, t¹), …, (s^k, t^k) ∈ C and (s^{k+1}, t^{k+1}) ∈ O such that s = s¹ ⋯ s^k s^{k+1} and t = t¹ ⋯ t^k t^{k+1}. This corresponds to splitting the pair (s, t) at the points where the two sequences intersect, see Figure 1 for an illustration. Since closed pairs only intersect at the start and end points and open pairs only at the starting point, this decomposition is unique.
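The combinatorics of closed pairs can be explored by brute force. The following sketch enumerates all pairs (s, t) with a given common endpoint m and counts the closed ones; each interior point of {1, …, m−1} must lie in s only, in t only, or in neither, so the count should equal 3^{m−1}. This is precisely the counting used again in Section 4.2:

```python
from itertools import chain, combinations

def sequences_ending_at(m):
    """All t in T (increasing sequences starting at 0) with last point m."""
    interior = range(1, m)
    subsets = chain.from_iterable(combinations(interior, k) for k in range(m))
    return [(0, *sub, m) for sub in subsets]

def is_closed(s, t):
    """Closed pair: common positive endpoint, intersection = {0, endpoint}."""
    return s[-1] == t[-1] > 0 and set(s) & set(t) == {0, s[-1]}

for m in range(1, 7):
    seqs = sequences_ending_at(m)
    closed = sum(is_closed(s, t) for s in seqs for t in seqs)
    print(m, closed, 3 ** (m - 1))  # brute-force count vs 3^(m-1)
```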
In the same spirit as the functions ϕ_1 and φ_1 from the last section, we now introduce two functions ϕ_2 and φ_2, defined as follows:

(2.16) ϕ_2(θ) := log Σ_{(s,t)∈C} E[(Ã_s Ã_t)^{θ/2}],
(2.17) φ_2(θ) := log Σ_{(s,t)∈C} E[(Ã_s Ã_t)^{θ/2}]^{2/θ}.

Note that we have ϕ_2(θ) ⩾ ϕ_1(θ) for all θ > 0, because the set C contains the closed pairs ((0, i), (0, i)) for each i ⩾ 1 and summing over these pairs only yields ϕ_1(θ).
Theorem 2.1. Let θ > 0.
(1) If either θ ⩾ 2 and ϕ_2(θ) ⩾ 0, or 0 < θ ⩽ 2 and φ_2(θ) ⩾ 0, then E[X^θ] = ∞.
(2) Assume the following:
(a) either: θ ⩽ 2, ϕ_1(θ/2) < 0 and ϕ_2(θ) < 0, or: θ ⩾ 2, φ_1(θ/2) < 0 and φ_2(θ) < 0;
(b) Σ_{i,j⩾1} E[(A_i A_j)^{θ/2}]^{(2/θ)∧1} < ∞ and Σ_{i⩾1} E[(A_i B)^{θ/2}]^{(2/θ)∧1} < ∞;
(c) E[B^θ] < ∞.
Then E[X^θ] < ∞.

Proof. We start with proving the first statement (showing that E[X^θ] = ∞). We first consider the case θ = 2 and assume that ϕ_2(2) = φ_2(2) ⩾ 0. We wish to show that E[X²] = E[X̃²] = ∞. By the decomposition of arbitrary pairs (s, t) into a concatenation of closed pairs and an open pair, we obtain from (2.15) the following expression:

(2.18) E[X̃²] = Σ_{k⩾0} Σ_{(s¹,t¹),…,(s^k,t^k)∈C} Σ_{(s^{k+1},t^{k+1})∈O} E[Ã_s B̃_s Ã_t B̃_t], where s = s¹ ⋯ s^{k+1} and t = t¹ ⋯ t^{k+1}.

We now claim the following:

(2.19) E[Ã_s B̃_s Ã_t B̃_t] = Π_{i=1}^k E[Ã_{s^i} Ã_{t^i}] · E[Ã_{s^{k+1}} B̃_{s^{k+1}} Ã_{t^{k+1}} B̃_{t^{k+1}}].

To see this, set τ_0 = 0 and for i = 1, …, k: τ_i := τ_{i−1} + s^i_{n(s^i)} = τ_{i−1} + t^i_{n(t^i)}, where the equality comes from the definition of closed pairs. In Figure 1, the τ_i correspond to the abscissae of the dotted lines. We now write Ã_s B̃_s Ã_t B̃_t = Π_1 ⋯ Π_k Π_{k+1}, where Π_i collects the factors coming from the i-th pieces of s and t (with Π_{k+1} containing moreover the B-factors). Now note that for every i = 1, …, k, Π_i is a product of terms Ã_{t,j} with t ∈ {τ_{i−1}, …, τ_i − 1} and t + j ∈ {τ_{i−1}+1, …, τ_i}, so that, by Remark 1.1, the blocks Π_1, …, Π_{k+1} are independent both under (A1) and under (A2). Furthermore, by stationarity, E[Π_i] = E[Ã_{s^i} Ã_{t^i}] for i = 1, …, k, and E[Π_{k+1}] = E[Ã_{s^{k+1}} B̃_{s^{k+1}} Ã_{t^{k+1}} B̃_{t^{k+1}}]. This proves (2.19). Plugging (2.19) into (2.18) yields

(2.20) E[X̃²] = Σ_{k⩾0} (Σ_{(s,t)∈C} E[Ã_s Ã_t])^k Σ_{(s,t)∈O} E[Ã_s B̃_s Ã_t B̃_t] = Σ_{k⩾0} e^{k ϕ_2(2)} Σ_{(s,t)∈O} E[Ã_s B̃_s Ã_t B̃_t].

Since the open sum is bounded below by the contribution of the trivial pair (0, 0), which equals E[B²] > 0, the assumption ϕ_2(2) ⩾ 0 forces the geometric series, and hence E[X̃²], to be infinite. This proves the first statement of the theorem in the case θ = 2. Now assume θ ⩾ 2 and ϕ_2(θ) ⩾ 0. We use superadditivity of the function x ↦ x^{θ/2} and (2.14) to bound

(2.21) X̃^{θ/2} ⩾ Σ_{t∈T} Ã_t^{θ/2} B̃_t^{θ/2},

and then use the case θ = 2 with A_{t,j} and B_t replaced by A_{t,j}^{θ/2} and B_t^{θ/2} for all t, j. Now assume that θ ⩽ 2 and φ_2(θ) ⩾ 0. Similarly to the proof of (2.20), but using moreover the reverse Minkowski inequality, we get

E[X̃^θ]^{2/θ} ⩾ Σ_{k⩾0} e^{k φ_2(θ)} Σ_{(s,t)∈O} E[(Ã_s B̃_s Ã_t B̃_t)^{θ/2}]^{2/θ}.

This yields the first statement of the theorem in the case θ ⩽ 2 and thus finishes the proof of the first statement. We now turn to the proof of the second statement (finiteness of E[X^θ]). Again, we first consider the case θ = 2 (note that ϕ_1(1) = φ_1(1) and ϕ_2(2) = φ_2(2) by definition). By assumption (a), the geometric series in (2.20) is finite. It suffices to show that Σ_{(s,t)∈O} E[Ã_s B̃_s Ã_t B̃_t] is finite as well.
We separately consider the cases (A1) and (A2). We start with the simpler case (A2). Let (s, t) ∈ O. By definition, Ã_s is a product of terms Ã_{t,j} with t + j ∈ S := {s_1, …, s_{n(s)}}. Similarly, Ã_t is a product of terms Ã_{t,j} with t + j ∈ T := {t_1, …, t_{n(t)}}. By definition of O, the sets S and T are disjoint. Hence, under assumption (A2), Ã_s and Ã_t are independent, see Remark 1.1. Thus, using the non-negativity of the A's and the independence assumptions,

Σ_{(s,t)∈O} E[Ã_s B̃_s Ã_t B̃_t] = Σ_{(s,t)∈O} E[Ã_s] E[Ã_t] E[B̃_s B̃_t] ⩽ E[B²] (Σ_{t∈T} E[Ã_t])² = E[B²] (1 − e^{ϕ_1(1)})^{−2}.

By assumptions (a) and (c), we have ϕ_1(1) < 0 and E[B²] < ∞. Therefore, the last line in the above display is finite, which finishes the proof in the case (A2).
We now treat the more delicate case (A1). Let again (s, t) ∈ O. We decompose s and t according to their first jump (if it exists): s = (i)s′ and t = (j)t′, with i, j ∈ N_0 and s′, t′ ∈ T, and with the short-hand notation (i)s′ := (0, i)s′ for i ⩾ 1, while i := 0 and s′ := 0 when s = 0. Using (A1) and the definition of O (which forces i ≠ j when both are positive), we get, setting apart the time-zero factors,

E[Ã_s B̃_s Ã_t B̃_t] = E[A_i A_j] E[B]² E[Ã_{s′}] E[Ã_{t′}] for i, j ⩾ 1, i ≠ j,

with the analogous identities E[A_i B] E[B] E[Ã_{s′}] when t = 0 (and symmetrically), and E[B²] when s = t = 0. Furthermore, the map (s, t) ↦ (i, j, s′, t′) is obviously injective. Hence, by non-negativity of the A's and the independence assumptions, we get

(2.22) Σ_{(s,t)∈O} E[Ã_s B̃_s Ã_t B̃_t] ⩽ [E[B²] + 2 E[B] E[X′] Σ_{i⩾1} E[A_i B] + (E[B] E[X′])² Σ_{i,j⩾1, i≠j} E[A_i A_j]],

where X′ = Σ_{t∈T} Ã_t is the generic element of a solution of the equation (1.12) with B_t = 1 a.s. for all t ∈ Z. By hypothesis (c), we have E[B²] < ∞. Using hypotheses (a) and (b), one gets finiteness of the term in brackets on the RHS of (2.22). Finiteness of E[X′] follows from the hypothesis ϕ_1(1) < 0 and Theorem 1.2 (2). Altogether, this yields Σ_{(s,t)∈O} E[Ã_s B̃_s Ã_t B̃_t] < ∞, which was to be proven.
We now treat the case θ ⩽ 2. By subadditivity, we have

E[X̃^θ] ⩽ E[(X̃^{(θ)})²], where X̃^{(θ)} := Σ_{t∈T} Ã_t^{θ/2} B̃_t^{θ/2},

and X^{(θ)} is a generic element of the non-anticipative stationary solution of

X_t = Σ_{j⩾1} A_{t,j}^{θ/2} X_{t−j} + B_t^{θ/2},

and the general result for θ ⩽ 2 follows from the one for θ = 2 applied to this alternative AR model.
The case θ ⩾ 2 on the other hand is treated by following the proof for θ = 2, but starting from the reverse inequality of (2.21), which holds by Minkowski's inequality, instead of (2.20). □

2.1. Extension to higher moments. It is natural to try to extend the above methods to obtain necessary conditions and sufficient conditions for finiteness of higher moments. One could define for every k ⩾ 2 two sets of closed and open k-tuples as follows (in the definition below, we identify t = (t_0, …, t_n) ∈ T with the set {t_0, …, t_n} and note that we have 0 ∈ t for every t ∈ T):

C^{(k)} := {(t¹, …, t^k) ∈ T^k : t¹_{n(t¹)} = ⋯ = t^k_{n(t^k)} > 0 and ∩_{i=1}^k t^i = {0, t¹_{n(t¹)}}},
O^{(k)} := {(t¹, …, t^k) ∈ T^k : ∩_{i=1}^k t^i = {0}}.

One can then again uniquely write any k-tuple (t¹, …, t^k) ∈ T^k as a concatenation of a finite number of closed tuples and one open tuple. Moreover, an analogue of (2.20) holds: assuming that B = 1 a.s. for simplicity, we have

(2.23) E[X̃^k] = Σ_{n⩾0} (Σ_{(t¹,…,t^k)∈C^{(k)}} E[Ã_{t¹} ⋯ Ã_{t^k}])^n Σ_{(t¹,…,t^k)∈O^{(k)}} E[Ã_{t¹} ⋯ Ã_{t^k}].

A natural guess for a sufficient condition for the finiteness of E[X^k] is that

Σ_{(t¹,…,t^k)∈C^{(k)}} E[Ã_{t¹} ⋯ Ã_{t^k}] < 1,

plus probably some additional conditions on moments of lower order. However, we were not able to prove this. Indeed, while it is immediate that the above condition is necessary for the finiteness of E[X^k] (for, otherwise, the sum over n on the right-hand side of (2.23) is infinite), we were not able to prove sufficiency. The problem is caused by the term on the right-hand side of (2.23) involving O^{(k)}: we were not able to give simple hypotheses under which this term is finite, as it is not clear how to generalize the proof for k = 2 to general k. Indeed, while in the case k = 2, Ã_s and Ã_t are independent for (s, t) ∈ O^{(2)} (at least under assumption (A2)), this is not the case anymore for k > 2. We believe that the extension of our methods to higher moments is an interesting question for further research and hope to be able to address it in the future.

Second moment - comparison to a classical criterion
There exists an extensive statistics literature on the random coefficient autoregressive equation (0.1) in the case p < ∞, see for instance the book of Nicholls and Quinn (1982). Here we consider autoregressive equations with a non-negative noise B_t, t ∈ Z, under assumption (A1), a setting included in the extension of Nicholls and Quinn (1982) due to Pham (1986). The classical criterion (0.7) is necessary and sufficient for the stationary solution to have a finite second moment. In this section, we show that this criterion is actually equivalent to the criterion from Theorem 2.1.
Throughout the section, we are in the context from the beginning of Section 1, i.e. we consider (1.12) with non-negative coefficients ((A_{t,j})_j, B_t)_{t∈Z} and we assume that (A1) holds, i.e. ((A_{t,j})_j, B_t)_{t∈Z} are iid copies of ((A_j)_{j⩾1}, B). We further assume that the equation is of finite order p, i.e.
(p) There exists p < ∞ such that A_{t,j} = 0 a.s. for all t ∈ Z and all j > p.
For simplicity, we will furthermore assume the following:
(+) A_1 > 0 and A_p > 0 almost surely.
Recall the definition of the matrices

(3.24) A := the p × p companion matrix with first row (A_1, A_2, …, A_p), ones on the subdiagonal and zeros elsewhere.

We denote by ⊗ the tensor (or Kronecker) product of matrices. Recall that ⊗ is bilinear and associative and that it satisfies the mixed product property with respect to the usual matrix product:

(M_1 ⊗ M_2)(M_3 ⊗ M_4) = (M_1 M_3) ⊗ (M_2 M_4).

For a square matrix M, we further denote by ρ(M) its spectral radius, i.e. the largest absolute value of its eigenvalues. We furthermore denote by I the identity matrix of dimension p.
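The mixed product property is easy to sanity-check numerically; np.kron implements the tensor product ⊗. A minimal sketch on random 3 × 3 matrices:

```python
import numpy as np

rng = np.random.default_rng(5)

# Check (M1 (x) M2)(M3 (x) M4) = (M1 M3) (x) (M2 M4) on random matrices.
M1, M2, M3, M4 = (rng.standard_normal((3, 3)) for _ in range(4))
lhs = np.kron(M1, M2) @ np.kron(M3, M4)
rhs = np.kron(M1 @ M3, M2 @ M4)
print(np.allclose(lhs, rhs))  # True

# Consequence used below: for iid copies A_1, ..., A_k of A,
# E[(A_1 ... A_k) (x) (A_1 ... A_k)] equals the k-th matrix power of E[A (x) A].
```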
Theorem 3.1. Assume (A1), (p) and (+). Then the condition ϕ_2(2) = φ_2(2) < 0 is equivalent to the classical condition (0.7), i.e. ρ(E[A ⊗ A]) < 1.

Proof. The statement of the theorem is independent of B, hence we can assume with no loss of generality that B = 1 a.s. By Theorem 2.1, we have E[X̃²] < ∞ if and only if ϕ_2(2) < 0 (for B = 1, hypothesis (c) is trivial, hypothesis (b) reduces to the finiteness of the entries of E[A ⊗ A], and ϕ_1(1) < 0 is guaranteed in the relevant regime by Lemma 3.2 below, since for companion matrices ϕ_1(1) < 0 is equivalent to ρ(E[A]) < 1), where we recall that X̃ is defined in (0.2). It follows from this definition that

X̃ =_d e_1ᵀ Σ_{k⩾0} A_1 A_2 ⋯ A_k e_1, e_1 := (1, 0, …, 0)ᵀ,

where the equality in law follows from (A1). Hence, defining the matrix M := Σ_{k⩾0} E[A ⊗ A]^k and denoting by (M_{ij})_{i,j=1,…,p²} its coordinates, we have, by the mixed product property and independence,

Σ_{k⩾0} E[(e_1ᵀ A_1 ⋯ A_k e_1)²] = Σ_{k⩾0} (e_1 ⊗ e_1)ᵀ E[A ⊗ A]^k (e_1 ⊗ e_1) = M_{11},

so that E[X̃²] is finite if and only if M_{11} is. We now claim that M_{11} is finite if and only if ρ(E[A ⊗ A]) < 1. To see this, we first express the matrix M as follows: suppose first that ρ(E[A ⊗ A]) < 1. Then the Neumann series M = Σ_{k⩾0} E[A ⊗ A]^k = (I ⊗ I − E[A ⊗ A])^{−1} converges, so all the above series are finite. Thus all the entries of M are finite, in particular M_{11}.
On the other hand, suppose ρ(E[A ⊗ A]) ⩾ 1; we will show that the series defining M_{11} is infinite. Note that, almost surely, the matrix A is irreducible and aperiodic by assumption (+). Hence, A^k > 0 entrywise for k large enough, i.e. the matrix is almost surely primitive in the language of Seneta (2006). Since (A ⊗ A)^k = A^k ⊗ A^k, it follows that A ⊗ A is almost surely primitive as well, hence E[A ⊗ A] is primitive by non-negativity. We can therefore apply the strong version of the Perron-Frobenius theorem for primitive matrices (see Theorem 1.2 of Seneta (2006)) to the matrix E[A ⊗ A]. It follows that there exist right and left eigenvectors u and v, respectively, with positive entries, normalized so that vᵀu = 1, associated to the largest eigenvalue ρ := ρ(E[A ⊗ A]) ⩾ 1. One deduces that, entrywise,

E[A ⊗ A]^m = ρ^m u vᵀ (1 + o(1)), m → ∞.

Since all the entries of u vᵀ are positive, this yields

(E[A ⊗ A]^m)_{11} ⩾ c ρ^m ⩾ c > 0

for some constant c > 0 and every m large enough, so that M_{11} = Σ_{m⩾0} (E[A ⊗ A]^m)_{11} = ∞, which implies that E[X̃²] = ∞. This proves the result. □

The following lemma was needed in the proof of Theorem 3.1 above and is included here for completeness.
Lemma 3.2. Let A be a random m × m companion matrix as in (3.24) with non-negative entries, m ⩾ 1, such that the entries of E[A] are finite and E[A] is irreducible. Then ρ(E[A ⊗ A]) ⩾ ρ(E[A])².

Proof. Applying the Perron-Frobenius theorem to the irreducible companion matrix E[A], the spectral radius λ = ρ(E[A]) is the eigenvalue associated to the eigenvector with positive entries v = (λ^{m−1}, λ^{m−2}, …, λ, 1)ᵀ. Then (A ⊗ A)(v ⊗ v) = (Av) ⊗ (Av) is a vector of R^{m²} whose entries are affine functions of the entries of A except the first one, which is equal to (A_{1•} v)², where A_{1•} denotes the first row of A. Jensen's inequality then yields, entrywise,

E[A ⊗ A](v ⊗ v) ⩾ (E[A]v) ⊗ (E[A]v) = λ² (v ⊗ v).

Since v ⊗ v has positive entries, this implies ρ(E[A ⊗ A]) ⩾ λ² = ρ(E[A])². □

4.1. A finite memory case. We now calculate E[A ⊗ A] in this example. Its characteristic polynomial is easily calculated to be of the form (X + a) P(X), where P(X) is a polynomial of degree 3. Hence, the eigenvalues of the matrix E[A ⊗ A] are −a as well as the roots of the degree-3 polynomial P(X). Recall that a ∈ (0, 1). The smallest root of the polynomial P is then greater than −1 and, furthermore, the largest root is greater than or equal to 1 if and only if P(1) ⩽ 0. Hence, ρ(E[A ⊗ A]) < 1 if and only if P(1) > 0, which is exactly (4.25), as expected.
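Lemma 3.2 lends itself to a quick numerical sanity check. The sketch below assumes a hypothetical law for the companion matrix (uniformly distributed first-row coefficients with fixed weights; this choice is not taken from the examples above) and estimates both spectral radii by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(6)

def companion(a):
    """Companion matrix as in (3.24): first row a, ones on the subdiagonal."""
    p = len(a)
    M = np.zeros((p, p))
    M[0, :] = a
    M[1:, :-1] = np.eye(p - 1)
    return M

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

# Hypothetical coefficient law: A_j = c_j * U_j with U_j iid Uniform(0,1).
c = np.array([0.5, 0.2, 0.1])
draws = [companion(c * rng.uniform(size=3)) for _ in range(20_000)]
EA = np.mean(draws, axis=0)
EAA = np.mean([np.kron(A, A) for A in draws], axis=0)
print(spectral_radius(EAA), ">=", spectral_radius(EA) ** 2)  # Lemma 3.2
```

The inequality explains why, for finite p, the second moment condition ρ(E[A ⊗ A]) < 1 automatically implies the first moment condition ρ(E[A]) < 1, i.e. ϕ_1(1) < 0.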

4.2. A degenerate infinite memory case. We consider the following example motivated by the GARCH(1,1) model, which also admits an ARCH(∞) representation. Consider the infinite memory recursion

(4.26) X_t = 1 + Σ_{j⩾1} β^j Z_{t−j+1} X_{t−j}, t ∈ Z,

where β ∈ (0, 1) and (Z_t)_{t∈Z} is an iid sequence of copies of a non-negative random variable Z. Setting A_{t,j} := β^j Z_{t−j+1}, j ⩾ 1, t ∈ Z, and B_t := 1, we see that (4.26) is of the form (0.1) with this choice of A_{t,j}. Furthermore, assumption (A2) is verified, since A_{t+j,j} = β^j Z_{t+1}.
On the other hand, it is well-known and can easily be checked that the stationary solution of the above equation, if it exists, also satisfies the Markov equation

(4.27) X_t = β(1 + Z_t) X_{t−1} + (1 − β).

As before, denote by X the limit in law of X_t as t → ∞. From the latter recursion, one can obtain that, for every θ > 0,

(4.28) E[X^θ] < ∞ if and only if E[β^θ (1 + Z)^θ] < 1.

In fact, it is known that there exists C > 0 such that P(X > x) ∼ C/x^α as x → ∞ for α > 0 satisfying the equation E[β^α (1 + Z)^α] = 1, see Buraczewski et al. (2016) for more details. Comparing the necessary and sufficient condition (4.28) with the conditions obtained by computing the functions ϕ_1, φ_1, ϕ_2 and φ_2, respectively defined in (0.10), (0.8), (2.16) and (2.17), we thus get an explicit benchmark for our conditions. The functions ϕ_1 and φ_1 are easily calculated from (4.26), which yields

ϕ_1(θ) = log(E[Z^θ] β^θ / (1 − β^θ)) and φ_1(θ) = log(E[Z^θ]^{1/θ} β / (1 − β)).

For ϕ_2, denote by C_m the set of closed pairs with common endpoint m. Fix m ⩾ 1 and let (s, t) ∈ C_m. By definition, we have

E[(Ã_s Ã_t)^{θ/2}] = β^{θm} E[Z^θ] E[Z^{θ/2}]^{n(s)+n(t)−2}.

Note that the exponent j = n(s) + n(t) − 2 is equal to the number of points in {1, …, m − 1} which are contained in either s or t. There are (m−1 choose j) ways to choose these points and, given the choice of these points, there are 2^j ways to distribute them among s and t. It follows that

Σ_{(s,t)∈C_m} E[(Ã_s Ã_t)^{θ/2}] = β^{θm} E[Z^θ] (1 + 2 E[Z^{θ/2}])^{m−1}.

Summing over m yields

e^{ϕ_2(θ)} = β^θ E[Z^θ] / (1 − β^θ(1 + 2 E[Z^{θ/2}])), provided β^θ(1 + 2 E[Z^{θ/2}]) < 1.

We obtain that ϕ_2(θ) < 0 if and only if β^θ E[(1 + Z^{θ/2})²] < 1; note that at θ = 2 this reduces to β² E[(1 + Z)²] < 1, in agreement with (4.28). A similar argument shows that φ_2(θ) < 0 if and only if β²(1 + 2 E[Z^{θ/2}]^{2/θ} + E[Z^θ]^{2/θ}) < 1. With the explicit expressions of ϕ_1, φ_1, ϕ_2 and φ_2 at hand, we can now compare the conditions for finiteness of E[X^θ] provided by Theorem 1.2 and Theorem 2.1 with the condition (4.28). To do this, for every θ ∈ (0, 3], we consider the critical value β_θ, such that E[X^θ] is finite for β < β_θ and infinite for β > β_θ (the existence and uniqueness of β_θ is easily seen by monotonicity in β of the involved functions). Our theorems provide upper and lower bounds in the phases θ ∈ (0, 1], θ ∈ [1, 2] and θ ∈ [2, 3], which are furthermore sharp for θ ∈ {1, 2}. In Figure 2, we compare these bounds with the exact value for β_θ obtained from the equation log(E[β^θ(1 + Z)^θ]) = 0 with Z being χ²₁-distributed. One can notice that the use of second moment methods greatly improves the quality of the bounds obtained by first moment methods as soon as θ > 1.2.
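The comparison behind Figure 2 is easy to reproduce numerically. The sketch below computes, for Z being χ²₁-distributed (moments estimated by Monte Carlo), the exact critical value from (4.28) together with the thresholds at which ϕ_1 and ϕ_2 vanish, using the explicit expressions derived above:

```python
import numpy as np

rng = np.random.default_rng(7)
Z = rng.chisquare(1, size=1_000_000)   # chi^2_1 innovations, Monte Carlo moments

def beta_exact(theta):
    """Critical beta from E[beta^theta (1+Z)^theta] = 1, cf. (4.28)."""
    return np.mean((1 + Z) ** theta) ** (-1 / theta)

def beta_phi1(theta):
    """beta at which phi_1(theta) = 0 (first moment method)."""
    return (1 + np.mean(Z ** theta)) ** (-1 / theta)

def beta_phi2(theta):
    """beta at which phi_2(theta) = 0 (second moment method)."""
    return np.mean((1 + Z ** (theta / 2)) ** 2) ** (-1 / theta)

for theta in (0.5, 1.0, 1.5, 2.0):
    print(theta, beta_phi1(theta), beta_phi2(theta), beta_exact(theta))
```

The printout shows the first moment threshold matching the exact value at θ = 1 and the second moment threshold matching it at θ = 2, in line with the sharpness discussion above.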