‘An understanding of the remarkable properties of the Poisson process is
essential for anyone interested in the mathematical theory of probability or
in its many fields of application. This book is a lucid and thorough account,
rigorous but not pedantic, and accessible to any reader familiar with modern mathematics at first-degree level. Its publication is most welcome.’
— J. F. C. Kingman, University of Bristol
‘I have always considered the Poisson process to be a cornerstone of applied
probability. This excellent book demonstrates that it is a whole world in and
of itself. The text is exciting and indispensable to anyone who works in this
field.’
— Dietrich Stoyan, TU Bergakademie Freiberg
‘Last and Penrose’s Lectures on the Poisson Process constitutes a splendid
addition to the monograph literature on point processes. While emphasising the Poisson and related processes, their mathematical approach also
covers the basic theory of random measures and various applications,
especially to stochastic geometry. They assume a sound grounding in
measure-theoretic probability, which is well summarised in two appendices (on measure and probability theory). Abundant exercises conclude
each of the twenty-two “lectures” which include examples illustrating their
“course” material. It is a first-class complement to John Kingman’s essay
on the Poisson process.’
— Daryl Daley, University of Melbourne
‘Pick n points uniformly and independently in a cube of volume n in
Euclidean space. The limit of these random configurations as n → ∞
is the Poisson process. This book, written by two of the foremost experts
on point processes, gives a masterful overview of the Poisson process and
some of its relatives. Classical tenets of the theory, like thinning properties
and Campbell’s formula, are followed by modern developments, such as
Liggett’s extra heads theorem, Fock space, permanental processes and the
Boolean model. Numerous exercises throughout the book challenge readers and bring them to the edge of current theory.’
— Yuval Peres, Principal Researcher, Microsoft Research,
and Foreign Associate, National Academy of Sciences

Lectures on the Poisson Process
The Poisson process, a core object in modern probability, enjoys a richer theory than is
sometimes appreciated. This volume develops the theory in the setting of a general
abstract measure space, establishing basic results and properties as well as certain
advanced topics in the stochastic analysis of the Poisson process. Also discussed are
applications and related topics in stochastic geometry, including stationary point
processes, the Boolean model, the Gilbert graph, stable allocations and hyperplane
processes. Comprehensive, rigorous, and self-contained, this text is ideal for graduate
courses or for self-study, with a substantial number of exercises for each chapter.
Mathematical prerequisites, mainly a sound knowledge of measure-theoretic
probability, are kept in the background, but are reviewed comprehensively in an
appendix. The authors are well-known researchers in probability theory, especially
stochastic geometry. Their approach is informed both by their research and by their
extensive experience in teaching at undergraduate and graduate levels.
GÜNTER LAST is Professor of Stochastics at the Karlsruhe Institute of Technology.
He is a distinguished probabilist with particular expertise in stochastic geometry, point
processes and random measures. He has coauthored a research monograph on marked
point processes on the line as well as two textbooks on general mathematics. He has
given many invited talks on his research worldwide.
MATHEW PENROSE is Professor of Probability at the University of Bath. He is an
internationally leading researcher in stochastic geometry and applied probability and
is the author of the influential monograph Random Geometric Graphs. He received the
Friedrich Wilhelm Bessel Research Award from the Humboldt Foundation in 2008,
and has held visiting positions as guest lecturer in New Delhi, Karlsruhe, San Diego,
Birmingham and Lille.

INSTITUTE OF MATHEMATICAL STATISTICS TEXTBOOKS

Editorial Board
D. R. Cox (University of Oxford)
B. Hambly (University of Oxford)
S. Holmes (Stanford University)
J. Wellner (University of Washington)

IMS Textbooks give introductory accounts of topics of current concern suitable for
advanced courses at master’s level, for doctoral students and for individual study. They
are typically shorter than a fully developed textbook, often arising from material
created for a topical course. Lengths of 100–290 pages are envisaged. The books
typically contain exercises.
Other Books in the Series
1. Probability on Graphs, by Geoffrey Grimmett
2. Stochastic Networks, by Frank Kelly and Elena Yudovina
3. Bayesian Filtering and Smoothing, by Simo Särkkä
4. The Surprising Mathematics of Longest Increasing Subsequences, by Dan Romik
5. Noise Sensitivity of Boolean Functions and Percolation, by Christophe Garban and Jeffrey E. Steif
6. Core Statistics, by Simon N. Wood
7. Lectures on the Poisson Process, by Günter Last and Mathew Penrose

Lectures on the Poisson Process
GÜNTER LAST
Karlsruhe Institute of Technology
MATHEW PENROSE
University of Bath

University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India

79 Anson Road, #06–04/06, Singapore 079906
Cambridge University Press is part of the University of Cambridge.
It furthers the University’s mission by disseminating knowledge in the pursuit of
education, learning, and research at the highest international levels of excellence.
www.cambridge.org
Information on this title: www.cambridge.org/9781107088016
DOI: 10.1017/9781316104477
© Günter Last and Mathew Penrose 2018

This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2018
Printed in the United States of America by Sheridan Books, Inc.
A catalogue record for this publication is available from the British Library.
Library of Congress Cataloging-in-Publication Data
Names: Last, Günter, author. | Penrose, Mathew, author.
Title: Lectures on the Poisson process / Günter Last, Karlsruhe Institute of
Technology, Mathew Penrose, University of Bath.
Description: Cambridge : Cambridge University Press, 2018. | Series:
Institute of Mathematical Statistics textbooks | Includes bibliographical
references and index.
Identifiers: LCCN 2017027687 | ISBN 9781107088016
Subjects: LCSH: Poisson processes. | Stochastic processes. | Probabilities.
Classification: LCC QA274.42 .L36 2018 | DDC 519.2/4–dc23
LC record available at https://lccn.loc.gov/2017027687
ISBN 978-1-107-08801-6 Hardback
ISBN 978-1-107-45843-7 Paperback
Cambridge University Press has no responsibility for the persistence or accuracy
of URLs for external or third-party internet websites referred to in this publication
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.

To Our Families

Contents

Preface
List of Symbols

1 Poisson and Other Discrete Distributions
1.1 The Poisson Distribution
1.2 Relationships Between Poisson and Binomial Distributions
1.3 The Poisson Limit Theorem
1.4 The Negative Binomial Distribution
1.5 Exercises

2 Point Processes
2.1 Fundamentals
2.2 Campbell’s Formula
2.3 Distribution of a Point Process
2.4 Point Processes on Metric Spaces
2.5 Exercises

3 Poisson Processes
3.1 Definition of the Poisson Process
3.2 Existence of Poisson Processes
3.3 Laplace Functional of the Poisson Process
3.4 Exercises

4 The Mecke Equation and Factorial Measures
4.1 The Mecke Equation
4.2 Factorial Measures and the Multivariate Mecke Equation
4.3 Janossy Measures
4.4 Factorial Moment Measures
4.5 Exercises

5 Mappings, Markings and Thinnings
5.1 Mappings and Restrictions
5.2 The Marking Theorem
5.3 Thinnings
5.4 Exercises

6 Characterisations of the Poisson Process
6.1 Borel Spaces
6.2 Simple Point Processes
6.3 Rényi’s Theorem
6.4 Completely Orthogonal Point Processes
6.5 Turning Distributional into Almost Sure Identities
6.6 Exercises

7 Poisson Processes on the Real Line
7.1 The Interval Theorem
7.2 Marked Poisson Processes
7.3 Record Processes
7.4 Polar Representation of Homogeneous Poisson Processes
7.5 Exercises

8 Stationary Point Processes
8.1 Stationarity
8.2 The Pair Correlation Function
8.3 Local Properties
8.4 Ergodicity
8.5 A Spatial Ergodic Theorem
8.6 Exercises

9 The Palm Distribution
9.1 Definition and Basic Properties
9.2 The Mecke–Slivnyak Theorem
9.3 Local Interpretation of Palm Distributions
9.4 Voronoi Tessellations and the Inversion Formula
9.5 Exercises

10 Extra Heads and Balanced Allocations
10.1 The Extra Head Problem
10.2 The Point-Optimal Gale–Shapley Algorithm
10.3 Existence of Balanced Allocations
10.4 Allocations with Large Appetite
10.5 The Modified Palm Distribution
10.6 Exercises

11 Stable Allocations
11.1 Stability
11.2 The Site-Optimal Gale–Shapley Allocation
11.3 Optimality of the Gale–Shapley Algorithms
11.4 Uniqueness of Stable Allocations
11.5 Moment Properties
11.6 Exercises

12 Poisson Integrals
12.1 The Wiener–Itô Integral
12.2 Higher Order Wiener–Itô Integrals
12.3 Poisson U-Statistics
12.4 Poisson Hyperplane Processes
12.5 Exercises

13 Random Measures and Cox Processes
13.1 Random Measures
13.2 Cox Processes
13.3 The Mecke Equation for Cox Processes
13.4 Cox Processes on Metric Spaces
13.5 Exercises

14 Permanental Processes
14.1 Definition and Uniqueness
14.2 The Stationary Case
14.3 Moments of Gaussian Random Variables
14.4 Construction of Permanental Processes
14.5 Janossy Measures of Permanental Cox Processes
14.6 One-Dimensional Marginals of Permanental Cox Processes
14.7 Exercises

15 Compound Poisson Processes
15.1 Definition and Basic Properties
15.2 Moments of Symmetric Compound Poisson Processes
15.3 Poisson Representation of Completely Random Measures
15.4 Compound Poisson Integrals
15.5 Exercises

16 The Boolean Model and the Gilbert Graph
16.1 Capacity Functional
16.2 Volume Fraction and Covering Property
16.3 Contact Distribution Functions
16.4 The Gilbert Graph
16.5 The Point Process of Isolated Nodes
16.6 Exercises

17 The Boolean Model with General Grains
17.1 Capacity Functional
17.2 Spherical Contact Distribution Function and Covariance
17.3 Identifiability of Intensity and Grain Distribution
17.4 Exercises

18 Fock Space and Chaos Expansion
18.1 Difference Operators
18.2 Fock Space Representation
18.3 The Poincaré Inequality
18.4 Chaos Expansion
18.5 Exercises

19 Perturbation Analysis
19.1 A Perturbation Formula
19.2 Power Series Representation
19.3 Additive Functions of the Boolean Model
19.4 Surface Density of the Boolean Model
19.5 Mean Euler Characteristic of a Planar Boolean Model
19.6 Exercises

20 Covariance Identities
20.1 Mehler’s Formula
20.2 Two Covariance Identities
20.3 The Harris–FKG Inequality
20.4 Exercises

21 Normal Approximation
21.1 Stein’s Method
21.2 Normal Approximation via Difference Operators
21.3 Normal Approximation of Linear Functionals
21.4 Exercises

22 Normal Approximation in the Boolean Model
22.1 Normal Approximation of the Volume
22.2 Normal Approximation of Additive Functionals
22.3 Central Limit Theorems
22.4 Exercises

Appendix A Some Measure Theory
A.1 General Measure Theory
A.2 Metric Spaces
A.3 Hausdorff Measures and Additive Functionals
A.4 Measures on the Real Half-Line
A.5 Absolutely Continuous Functions

Appendix B Some Probability Theory
B.1 Fundamentals
B.2 Mean Ergodic Theorem
B.3 The Central Limit Theorem and Stein’s Equation
B.4 Conditional Expectations
B.5 Gaussian Random Fields

Appendix C Historical Notes

References
Index

Preface

The Poisson process generates point patterns in a purely random manner.
It plays a fundamental role in probability theory and its applications, and
enjoys a rich and beautiful theory. While many of the applications involve
point processes on the line, or more generally in Euclidean space, many
others do not. Fortunately, one can develop much of the theory in the abstract setting of a general measurable space.
We have prepared the present volume so as to provide a modern textbook
on the general Poisson process. Despite its importance, there are not many
monographs or graduate texts with the Poisson process as their main point
of focus, for example by comparison with the topic of Brownian motion.
This is probably due to a viewpoint that the theory of Poisson processes
on its own is too insubstantial to merit such a treatment. Such a viewpoint
now seems out of date, especially in view of recent developments in the
stochastic analysis of the Poisson process. We also extend our remit to topics in stochastic geometry, which is concerned with mathematical models
for random geometric structures [4, 5, 23, 45, 123, 126, 147]. The Poisson
process is fundamental to stochastic geometry, and the applications areas
discussed in this book lie largely in this direction, reflecting the taste and
expertise of the authors. In particular, we discuss Voronoi tessellations, stable allocations, hyperplane processes, the Boolean model and the Gilbert
graph.
Besides stochastic geometry, there are many other fields of application
of the Poisson process. These include Lévy processes [10, 83], Brownian
excursion theory [140], queueing networks [6, 149], and Poisson limits in
extreme value theory [139]. Although we do not cover these topics here,
we hope nevertheless that this book will be a useful resource for people
working in these and related areas.
This book is intended to be a basis for graduate courses or seminars on
the Poisson process. It might also serve as an introduction to point process
theory. Each chapter is supposed to cover material that can be presented
(at least in principle) in a single lecture. In practice, it may not always be
possible to get through an entire chapter in one lecture; however, in most
chapters the most essential material is presented in the early part of the
chapter, and the later part could feasibly be left as background reading if
necessary. While it is recommended to read the earlier chapters in a linear
order at least up to Chapter 5, there is some scope for the reader to pick
and choose from the later chapters. For example, a reader more interested
in stochastic geometry could look at Chapters 8–11 and 16–17. A reader
wishing to focus on the general abstract theory of Poisson processes could
look at Chapters 6, 7, 12, 13 and 18–21. A reader wishing initially to take
on slightly easier material could look at Chapters 7–9, 13 and 15–17.
The book divides loosely into three parts. In the first part we develop
basic results on the Poisson process in the general setting. In the second
part we introduce models and results of stochastic geometry, most but not
all of which are based on the Poisson process, and which are most naturally
developed in the Euclidean setting. Chapters 8, 9, 10, 16, 17 and 22 are devoted exclusively to stochastic geometry while other chapters use stochastic geometry models for illustrating the theory. In the third part we return
to the general setting and describe more advanced results on the stochastic
analysis of the Poisson process.
Our treatment requires a sound knowledge of measure-theoretic probability theory. However, specific knowledge of stochastic processes is not
assumed. Since the focus is always on the probabilistic structure, technical
issues of measure theory are kept in the background, whenever possible.
Some basic facts from measure and probability theory are collected in the
appendices.
When treating a classical and central subject of probability theory, a certain overlap with other books is inevitable. Much of the material of the earlier chapters, for instance, can also be found (in a slightly more restricted
form) in the highly recommended book [75] by J.F.C. Kingman. Further
results on Poisson processes, as well as on general random measures and
point processes, are presented in the monographs [6, 23, 27, 53, 62, 63,
69, 88, 107, 134, 139]. The recent monograph Kallenberg [65] provides
an excellent systematic account of the modern theory of random measures.
Comments on the early history of the Poisson process, on the history of
the main results presented in this book and on the literature are given in
Appendix C.
In preparing this manuscript we have benefited from comments on earlier versions from Daryl Daley, Fabian Gieringer, Christian Hirsch, Daniel
Hug, Olav Kallenberg, Paul Keeler, Martin Möhle, Franz Nestmann, Jim Pitman, Matthias Schulte, Tomasz Rolski, Dietrich Stoyan, Christoph Thäle, Hermann Thorisson and Hans Zessin, for which we are most grateful.
Thanks are due to Franz Nestmann for producing the figures. We also wish
to thank Olav Kallenberg for making available to us an early version of his
monograph [65].
Günter Last
Mathew Penrose

List of Symbols

Z = {0, 1, −1, 2, −2, . . .}    set of integers
N = {1, 2, 3, 4, . . .}    set of positive integers
N0 = {0, 1, 2, . . .}    set of non-negative integers
N̄ = N ∪ {∞}    extended set of positive integers
N̄0 = N0 ∪ {∞}    extended set of non-negative integers
R = (−∞, ∞), R+ = [0, ∞)    real line (resp. non-negative real half-line)
R̄ = R ∪ {−∞, ∞}    extended real line
R̄+ = R+ ∪ {∞} = [0, ∞]    extended half-line
R(X), R+(X)    R-valued (resp. R+-valued) measurable functions on X
R̄(X), R̄+(X)    R̄-valued (resp. R̄+-valued) measurable functions on X
u+, u−    positive and negative part of an R̄-valued function u
a ∧ b, a ∨ b    minimum (resp. maximum) of a, b ∈ R̄
1{·}    indicator function
a⊕ := 1{a ≠ 0} a−1    generalised inverse of a ∈ R
card A = |A|    number of elements of a set A
[n]    {1, . . . , n}
Σn    group of permutations of [n]
Πn, Π∗n    set of all partitions (resp. subpartitions) of [n]
(n)k = n · · · (n − k + 1)    descending factorial
δx    Dirac measure at the point x
N<∞(X) ≡ N<∞    set of all finite counting measures on X
N(X) ≡ N    set of all countable sums of measures from N<∞
Nl(X), Ns(X)    set of all locally finite (resp. simple) measures in N(X)
Nls(X) := Nl(X) ∩ Ns(X)    set of all locally finite and simple measures in N(X)
x ∈ μ    short for μ{x} = μ({x}) > 0, μ ∈ N
νB    restriction of a measure ν to a measurable set B
B(X)    Borel σ-field on a metric space X
Xb    bounded Borel subsets of a metric space X
Rd    Euclidean space of dimension d ∈ N
Bd := B(Rd)    Borel σ-field on Rd
λd    Lebesgue measure on (Rd, Bd)
‖·‖    Euclidean norm on Rd
⟨·, ·⟩    Euclidean scalar product on Rd
Cd, C(d)    compact (resp. non-empty compact) subsets of Rd
Kd, K(d)    compact (resp. non-empty compact) convex subsets of Rd
Rd    convex ring in Rd (finite unions of convex sets)
K + x, K − x    translation of K ⊂ Rd by x (resp. −x)
K ⊕ L    Minkowski sum of K, L ⊂ Rd
V0, . . . , Vd    intrinsic volumes
φi = ∫ Vi(K) Q(dK)    i-th mean intrinsic volume of a typical grain
B(x, r)    closed ball with centre x and radius r ≥ 0
κd = λd(Bd)    volume of the unit ball in Rd
<    strict lexicographical order on Rd
l(B)    lexicographic minimum of a non-empty finite set B ⊂ Rd
(Ω, F, P)    probability space
E[X]    expectation of a random variable X
Var[X]    variance of a random variable X
Cov[X, Y]    covariance between random variables X and Y
Lη    Laplace functional of a random measure η
=d, →d    equality (resp. convergence) in distribution

1 Poisson and Other Discrete Distributions

The Poisson distribution arises as a limit of the binomial distribution. This
chapter contains a brief discussion of some of its fundamental properties as
well as the Poisson limit theorem for null arrays of integer-valued random
variables. The chapter also discusses the binomial and negative binomial
distributions.

1.1 The Poisson Distribution
A random variable X is said to have a binomial distribution Bi(n, p) with parameters n ∈ N0 := {0, 1, 2, . . .} and p ∈ [0, 1] if

P(X = k) = Bi(n, p; k) := (n choose k) p^k (1 − p)^{n−k},  k = 0, . . . , n,  (1.1)

where 0^0 := 1. In the case n = 1 this is the Bernoulli distribution with parameter p. If X1, . . . , Xn are independent random variables with such a Bernoulli distribution, then their sum has a binomial distribution, that is

X1 + · · · + Xn =d X,  (1.2)

where X has the distribution Bi(n, p) and where =d denotes equality in distribution. It follows that the expectation and variance of X are given by

E[X] = np,  Var[X] = np(1 − p).  (1.3)

A random variable X is said to have a Poisson distribution Po(γ) with parameter γ ≥ 0 if

P(X = k) = Po(γ; k) := (γ^k / k!) e^{−γ},  k ∈ N0.  (1.4)

If γ = 0, then P(X = 0) = 1, since we take 0^0 = 1. Also we allow γ = ∞; in this case we put P(X = ∞) = 1, so Po(∞; k) = 0 for k ∈ N0.
The Poisson distribution arises as a limit of binomial distributions as follows. Let pn ∈ [0, 1], n ∈ N, be a sequence satisfying npn → γ as n → ∞, with γ ∈ (0, ∞). Then, for k ∈ {0, . . . , n},

(n choose k) pn^k (1 − pn)^{n−k} = ((npn)^k / k!) · ((n)k / n^k) · (1 − pn)^{−k} · (1 − npn/n)^n → (γ^k / k!) e^{−γ},  (1.5)

as n → ∞, where

(n)k := n(n − 1) · · · (n − k + 1)  (1.6)

is the k-th descending factorial (of n), with (n)0 interpreted as 1.
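The convergence in (1.5) is easy to check numerically. Below is a minimal Python sketch (the choice γ = 2 and the truncation of the sums at k = 50 are arbitrary); it prints the total variation distance between Bi(n, γ/n) and Po(γ), which shrinks as n grows.

    import math

    def binom_pmf(n, p, k):
        # Bi(n, p; k) from (1.1); math.comb(n, k) = 0 for k > n
        return math.comb(n, k) * p**k * (1 - p)**(n - k)

    def poisson_pmf(gamma, k):
        # Po(gamma; k) from (1.4)
        return gamma**k * math.exp(-gamma) / math.factorial(k)

    gamma = 2.0
    for n in (10, 100, 1000):
        tv = 0.5 * sum(abs(binom_pmf(n, gamma / n, k) - poisson_pmf(gamma, k))
                       for k in range(51))
        print(n, round(tv, 6))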
Suppose X is a Poisson random variable with finite parameter γ. Then its expectation is given by

E[X] = e^{−γ} Σ_{k=0}^∞ k γ^k / k! = e^{−γ} γ Σ_{k=1}^∞ γ^{k−1} / (k − 1)! = γ.  (1.7)

The probability generating function of X (or of Po(γ)) is given by

E[s^X] = e^{−γ} Σ_{k=0}^∞ (γ^k / k!) s^k = e^{−γ} Σ_{k=0}^∞ (γs)^k / k! = e^{γ(s−1)},  s ∈ [0, 1].  (1.8)

It follows that the Laplace transform of X (or of Po(γ)) is given by

E[e^{−tX}] = exp[−γ(1 − e^{−t})],  t ≥ 0.  (1.9)

Formula (1.8) is valid for each s ∈ R and (1.9) is valid for each t ∈ R. A calculation similar to (1.8) shows that the factorial moments of X are given by

E[(X)k] = γ^k,  k ∈ N0,  (1.10)

where (0)0 := 1 and (0)k := 0 for k ≥ 1. Equation (1.10) implies that

Var[X] = E[X^2] − E[X]^2 = E[(X)2] + E[X] − E[X]^2 = γ.  (1.11)
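The factorial moment formula (1.10) can be checked against simulation; a short sketch using NumPy (sample size and γ are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    gamma = 3.0
    x = rng.poisson(gamma, size=10**6)

    for k in range(5):
        # descending factorial (X)_k = X(X-1)...(X-k+1), with (X)_0 = 1, cf. (1.6)
        desc = np.ones(len(x))
        for j in range(k):
            desc *= x - j
        print(k, desc.mean(), gamma**k)   # E[(X)_k] should be close to gamma^k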

We continue with a characterisation of the Poisson distribution.

Proposition 1.1 An N0-valued random variable X has distribution Po(γ) if and only if, for every function f : N0 → R+, we have

E[X f(X)] = γ E[f(X + 1)].  (1.12)

Proof By a similar calculation to (1.7) and (1.8) we obtain for any function f : N0 → R+ that (1.12) holds. Conversely, if (1.12) holds for all such functions f, then we can make the particular choice f := 1{k} for k ∈ N, to obtain the recursion

k P(X = k) = γ P(X = k − 1).

This recursion has (1.4) as its only (probability) solution, so the result follows. □
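The characterising identity (1.12) can likewise be probed by simulation; in the sketch below the test function f is an arbitrary choice of ours.

    import numpy as np

    rng = np.random.default_rng(1)
    gamma = 2.5
    x = rng.poisson(gamma, size=10**6)

    def f(k):
        # an arbitrary non-negative test function on N0
        return np.sqrt(k + 1.0)

    lhs = (x * f(x)).mean()         # estimates E[X f(X)]
    rhs = gamma * f(x + 1).mean()   # estimates gamma E[f(X + 1)]
    print(lhs, rhs)                 # the two estimates should nearly agree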

1.2 Relationships Between Poisson and Binomial Distributions

The next result says that if X and Y are independent Poisson random variables, then X + Y is also Poisson and the conditional distribution of X given X + Y is binomial:

Proposition 1.2 Let X and Y be independent with distributions Po(γ) and Po(δ), respectively, with 0 < γ + δ < ∞. Then X + Y has distribution Po(γ + δ) and

P(X = k | X + Y = n) = Bi(n, γ/(γ + δ); k),  n ∈ N0, k = 0, . . . , n.

Proof For n ∈ N0 and k ∈ {0, . . . , n},

P(X = k, X + Y = n) = P(X = k, Y = n − k) = e^{−γ} (γ^k / k!) · e^{−δ} (δ^{n−k} / (n − k)!)
  = e^{−(γ+δ)} ((γ + δ)^n / n!) (n choose k) (γ/(γ + δ))^k (δ/(γ + δ))^{n−k}
  = Po(γ + δ; n) Bi(n, γ/(γ + δ); k),

and the assertions follow. □
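An empirical illustration of Proposition 1.2: condition simulated pairs on X + Y = n and compare the conditional frequencies with Bi(n, γ/(γ + δ); ·). All parameters below are illustrative choices.

    import math
    import numpy as np

    rng = np.random.default_rng(2)
    gamma, delta, n = 1.5, 2.5, 4
    x = rng.poisson(gamma, size=10**6)
    y = rng.poisson(delta, size=10**6)

    cond = x[x + y == n]            # sample of X conditioned on X + Y = n
    p = gamma / (gamma + delta)
    for k in range(n + 1):
        emp = (cond == k).mean()
        theo = math.comb(n, k) * p**k * (1 - p)**(n - k)   # Bi(n, p; k)
        print(k, round(emp, 4), round(theo, 4))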

Let Z be an N0-valued random variable and let Z1, Z2, . . . be a sequence of independent random variables that have a Bernoulli distribution with parameter p ∈ [0, 1]. If Z and (Zn)n≥1 are independent, then the random variable

X := Σ_{j=1}^Z Zj  (1.13)

is called a p-thinning of Z, where we set X := 0 if Z = 0. This means that the conditional distribution of X given Z = n is binomial with parameters n and p.

The following partial converse of Proposition 1.2 is a noteworthy property of the Poisson distribution.

Proposition 1.3 Let p ∈ [0, 1]. Let Z have a Poisson distribution with parameter γ ≥ 0 and let X be a p-thinning of Z. Then X and Z − X are independent and Poisson distributed with parameters pγ and (1 − p)γ, respectively.

Proof We may assume that γ > 0. The result follows once we have shown that

P(X = m, Z − X = n) = Po(pγ; m) Po((1 − p)γ; n),  m, n ∈ N0.  (1.14)

Since the conditional distribution of X given Z = m + n is binomial with parameters m + n and p, we have

P(X = m, Z − X = n) = P(Z = m + n) P(X = m | Z = m + n)
  = (e^{−γ} γ^{m+n} / (m + n)!) (m+n choose m) p^m (1 − p)^n
  = ((pγ)^m / m!) e^{−pγ} · (((1 − p)γ)^n / n!) e^{−(1−p)γ},

and (1.14) follows. □
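Proposition 1.3 can be illustrated by simulating the p-thinning (1.13) directly (parameters below are arbitrary): the thinned and residual counts have the predicted Poisson means and are empirically uncorrelated.

    import numpy as np

    rng = np.random.default_rng(3)
    gamma, p = 4.0, 0.3
    z = rng.poisson(gamma, size=10**6)
    x = rng.binomial(z, p)          # a p-thinning of Z, cf. (1.13)

    print(x.mean(), x.var(), p * gamma)        # X ~ Po(p*gamma): mean = variance
    print((z - x).mean(), (1 - p) * gamma)     # Z - X ~ Po((1-p)*gamma)
    print(np.corrcoef(x, z - x)[0, 1])         # near 0, as independence predicts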

1.3 The Poisson Limit Theorem

The next result generalises (1.5) to sums of Bernoulli variables with unequal parameters, among other things.

Proposition 1.4 Suppose for n ∈ N that mn ∈ N and that Xn,1, . . . , Xn,mn are independent random variables taking values in N0. Let pn,i := P(Xn,i ≥ 1) and assume that

lim_{n→∞} max_{1≤i≤mn} pn,i = 0.  (1.15)

Assume further that λn := Σ_{i=1}^{mn} pn,i → γ as n → ∞, where γ > 0, and that

lim_{n→∞} Σ_{i=1}^{mn} P(Xn,i ≥ 2) = 0.  (1.16)

Let Xn := Σ_{i=1}^{mn} Xn,i. Then for k ∈ N0 we have

lim_{n→∞} P(Xn = k) = Po(γ; k).  (1.17)

Proof Let X′n,i := 1{Xn,i ≥ 1} = min{Xn,i, 1} and X′n := Σ_{i=1}^{mn} X′n,i. Since X′n,i ≠ Xn,i if and only if Xn,i ≥ 2, we have

P(Xn ≠ X′n) ≤ Σ_{i=1}^{mn} P(Xn,i ≥ 2).

By assumption (1.16) we can assume without restriction of generality that X′n,i = Xn,i for all n ∈ N and i ∈ {1, . . . , mn}. Moreover, it is no loss of generality to assume for each (n, i) that pn,i < 1. We then have

P(Xn = k) = Σ_{1≤i1<i2<···<ik≤mn} pn,i1 pn,i2 · · · pn,ik (Π_{j=1}^{mn} (1 − pn,j)) / ((1 − pn,i1) · · · (1 − pn,ik)).  (1.18)

Let μn := max_{1≤i≤mn} pn,i. Since Σ_{j=1}^{mn} pn,j² ≤ λn μn → 0 as n → ∞, we have

log Π_{j=1}^{mn} (1 − pn,j) = Σ_{j=1}^{mn} (−pn,j + O(pn,j²)) → −γ as n → ∞,  (1.19)

where the function O(·) satisfies lim sup_{r→0} |r|^{−1} |O(r)| < ∞. Also,

inf_{1≤i1<i2<···<ik≤mn} (1 − pn,i1) · · · (1 − pn,ik) ≥ (1 − μn)^k → 1 as n → ∞.  (1.20)

Finally, with Σ*_{i1,...,ik} denoting summation over all ordered k-tuples of distinct elements of {1, 2, . . . , mn}, we have

Σ_{1≤i1<i2<···<ik≤mn} pn,i1 pn,i2 · · · pn,ik = (1/k!) Σ*_{i1,...,ik} pn,i1 pn,i2 · · · pn,ik,

and

0 ≤ (Σ_{i=1}^{mn} pn,i)^k − Σ*_{i1,...,ik} pn,i1 pn,i2 · · · pn,ik ≤ (k choose 2) (Σ_{i=1}^{mn} pn,i²) (Σ_{j=1}^{mn} pn,j)^{k−2},

which tends to zero as n → ∞. Therefore

k! Σ_{1≤i1<i2<···<ik≤mn} pn,i1 pn,i2 · · · pn,ik → γ^k as n → ∞.  (1.21)

The result follows from (1.18) by using (1.19), (1.20) and (1.21). □



1.4 The Negative Binomial Distribution

A random element Z of N0 is said to have a negative binomial distribution with parameters r > 0 and p ∈ (0, 1] if

P(Z = n) = (Γ(n + r) / (Γ(n + 1) Γ(r))) (1 − p)^n p^r,  n ∈ N0,  (1.22)

where the Gamma function Γ : (0, ∞) → (0, ∞) is defined by

Γ(a) := ∫_0^∞ t^{a−1} e^{−t} dt,  a > 0.  (1.23)

(In particular Γ(a) = (a − 1)! for a ∈ N.) This can be seen to be a probability distribution by Taylor expansion of (1 − x)^{−r} evaluated at x = 1 − p. The probability generating function of Z is given by

E[s^Z] = p^r (1 − s + sp)^{−r},  s ∈ [0, 1].  (1.24)

For r ∈ N, such a Z may be interpreted as the number of failures before the r-th success in a sequence of independent Bernoulli trials. In the special case r = 1 we get the geometric distribution

P(Z = n) = (1 − p)^n p,  n ∈ N0.  (1.25)

Another interesting special case is r = 1/2. In this case

P(Z = n) = ((2n − 1)!! / (2^n n!)) (1 − p)^n p^{1/2},  n ∈ N0,  (1.26)

where we recall the definition (B.6) for (2n − 1)!!. This follows from the fact that Γ(n + 1/2) = (2n − 1)!! 2^{−n} √π, n ∈ N0.

The negative binomial distribution arises as a mixture of Poisson distributions. To explain this, we need to introduce the Gamma distribution with shape parameter a > 0 and scale parameter b > 0. This is a probability measure on R+ with Lebesgue density

x → b^a Γ(a)^{−1} x^{a−1} e^{−bx}  (1.27)

on R+. If a random variable Y has this distribution, then one says that Y is Gamma distributed with shape parameter a and scale parameter b. In this case Y has Laplace transform

E[e^{−tY}] = (b / (b + t))^a,  t ≥ 0.  (1.28)

In the case a = 1 we obtain the exponential distribution with parameter b. Exercise 1.11 asks the reader to prove the following result.

Proposition 1.5 Suppose that the random variable Y ≥ 0 is Gamma distributed with shape parameter a > 0 and scale parameter b > 0. Let Z be an N0-valued random variable such that the conditional distribution of Z given Y is Po(Y). Then Z has a negative binomial distribution with parameters a and b/(b + 1).


1.5 Exercises
Exercise 1.1 Prove equation (1.10).
Exercise 1.2 Let X be a random variable taking values in N0. Assume that there is a γ ≥ 0 such that E[(X)k] = γ^k for all k ∈ N0. Show that X has a Poisson distribution. (Hint: Derive the Taylor series for g(s) := E[s^X] at s0 = 1.)

Exercise 1.3 Confirm Proposition 1.3 by showing that

E[s^X t^{Z−X}] = e^{pγ(s−1)} e^{(1−p)γ(t−1)},  s, t ∈ [0, 1],

using a direct computation and Proposition B.4.
Exercise 1.4 (Generalisation of Proposition 1.2) Let m ∈ N and suppose that X1, . . . , Xm are independent random variables with Poisson distributions Po(γ1), . . . , Po(γm), respectively. Show that X := X1 + · · · + Xm is Poisson distributed with parameter γ := γ1 + · · · + γm. Assuming γ > 0, show moreover for any k ∈ N that

P(X1 = k1, . . . , Xm = km | X = k) = (k! / (k1! · · · km!)) (γ1/γ)^{k1} · · · (γm/γ)^{km}  (1.29)

for k1 + · · · + km = k. This is a multinomial distribution with parameters k and γ1/γ, . . . , γm/γ.

Exercise 1.5 (Generalisation of Proposition 1.3) Let m ∈ N and suppose that Zn, n ∈ N, is a sequence of independent random vectors in Rm with common distribution given by P(Z1 = ei) = pi, i ∈ {1, . . . , m}, where ei is the i-th unit vector in Rm and p1 + · · · + pm = 1. Let Z have a Poisson distribution with parameter γ, independent of (Z1, Z2, . . .). Show that the components of the random vector X := Σ_{j=1}^Z Zj are independent and Poisson distributed with parameters p1γ, . . . , pmγ.
Exercise 1.6 (Bivariate extension of Proposition 1.4) Let γ > 0, δ ≥ 0. Suppose for n ∈ N that mn ∈ N and for 1 ≤ i ≤ mn that pn,i, qn,i ∈ [0, 1) with Σ_{i=1}^{mn} pn,i → γ and Σ_{i=1}^{mn} qn,i → δ, and max_{1≤i≤mn} max{pn,i, qn,i} → 0 as n → ∞. Suppose for n ∈ N that (Xn, Yn) = Σ_{i=1}^{mn} (Xn,i, Yn,i), where each (Xn,i, Yn,i) is a random 2-vector whose components are Bernoulli distributed with parameters pn,i, qn,i, respectively, and satisfy Xn,i Yn,i = 0 almost surely. Assume the random vectors (Xn,i, Yn,i), 1 ≤ i ≤ mn, are independent. Prove that Xn, Yn are asymptotically (as n → ∞) distributed as a pair of independent Poisson variables with parameters γ, δ, i.e. for k, ℓ ∈ N0,

lim_{n→∞} P(Xn = k, Yn = ℓ) = e^{−(γ+δ)} (γ^k / k!) (δ^ℓ / ℓ!).

Exercise 1.7 (Probability of a Poisson variable being even) Suppose X is Poisson distributed with parameter γ > 0. Using the fact that the probability generating function (1.8) extends to s = −1, verify the identity P(X/2 ∈ Z) = (1 + e^{−2γ})/2. For k ∈ N with k ≥ 3, using the fact that the probability generating function (1.8) extends to a k-th complex root of unity, find a closed-form formula for P(X/k ∈ Z).

Exercise 1.8 Let γ > 0, and suppose X is Poisson distributed with parameter γ. Suppose f : N → R+ is such that E[f(X)^{1+ε}] < ∞ for some ε > 0. Show that E[f(X + k)] < ∞ for any k ∈ N.

Exercise 1.9 Let 0 < γ < γ′. Give an example of a random vector (X, Y) with X Poisson distributed with parameter γ and Y Poisson distributed with parameter γ′, such that Y − X is not Poisson distributed. (Hint: First consider a pair X′, Y′ such that Y′ − X′ is Poisson distributed, and then modify finitely many of the values of their joint probability mass function.)

Exercise 1.10 Suppose n ∈ N and set [n] := {1, . . . , n}. Suppose that Z is a uniform random permutation of [n], that is a random element of the space Σn of all bijective mappings from [n] to [n] such that P(Z = π) = 1/n! for each π ∈ Σn. For a ∈ R let ⌈a⌉ := min{k ∈ Z : k ≥ a}. Let γ ∈ [0, 1] and let Xn := card{i ∈ [⌈γn⌉] : Z(i) = i} be the number of fixed points of Z among the first ⌈γn⌉ integers. Show that the distribution of Xn converges to Po(γ), that is

lim_{n→∞} P(Xn = k) = e^{−γ} γ^k / k!,  k ∈ N0.

(Hint: Establish an explicit formula for P(Xn = k), starting with the case k = 0.)
Exercise 1.11 Prove Proposition 1.5.
Exercise 1.12 Let γ > 0 and δ > 0. Find a random vector (X, Y) such
that X, Y and X + Y are Poisson distributed with parameter γ, δ and γ + δ,
respectively, but X and Y are not independent.

2 Point Processes

A point process is a random collection of at most countably many points,
possibly with multiplicities. This chapter defines this concept for an arbitrary measurable space and provides several criteria for equality in distribution.

2.1 Fundamentals
The idea of a point process is that of a random, at most countable, collection Z of points in some space X. A good example to think of is the
d-dimensional Euclidean space Rd . Ignoring measurability issues for the
moment, we might think of Z as a mapping ω → Z(ω) from Ω into the system of countable subsets of X, where (Ω, F , P) is an underlying probability
space. Then Z can be identified with the family of mappings
ω → η(ω, B) := card(Z(ω) ∩ B),  B ⊂ X,

counting the number of points that Z has in B. (We write card A for the
number of elements of a set A.) Clearly, for any fixed ω ∈ Ω the mapping
η(ω, ·) is a measure, namely the counting measure supported by Z(ω). It
turns out to be a mathematically fruitful idea to define point processes as
random counting measures.
To give the general definition of a point process let (X, X) be a measurable space. Let N<∞(X) ≡ N<∞ denote the space of all measures μ on X such that μ(B) ∈ N0 := N ∪ {0} for all B ∈ X, and let N(X) ≡ N be the space of all measures that can be written as a countable sum of measures from N<∞. A trivial example of an element of N is the zero measure 0 that is identically zero on X. A less trivial example is the Dirac measure δx at a point x ∈ X given by δx(B) := 1B(x). More generally, any (finite or infinite) sequence (xn)_{n=1}^k of elements of X, where k ∈ N̄ := N ∪ {∞} is the number of terms in the sequence, can be used to define a measure

μ = Σ_{n=1}^k δxn.  (2.1)

Then μ ∈ N and

μ(B) = Σ_{n=1}^k 1B(xn),  B ∈ X.

More generally we have, for any measurable f : X → [0, ∞], that

∫ f dμ = Σ_{n=1}^k f(xn).  (2.2)

We can allow for k = 0 in (2.1). In this case μ is the zero measure. The points x1, x2, . . . are not assumed to be pairwise distinct. If xi = xj for some i, j ≤ k with i ≠ j, then μ is said to have multiplicities. In fact, the multiplicity of xi is the number card{j ≤ k : xj = xi}. Any μ of the form (2.1) is interpreted as a counting measure with possible multiplicities.

In general one cannot guarantee that any μ ∈ N can be written in the form (2.1); see Exercise 2.5. Fortunately, only weak assumptions on (X, X) and μ are required to achieve this; see e.g. Corollary 6.5. Moreover, large parts of the theory can be developed without imposing further assumptions on (X, X), other than that it be a measurable space.

A measure ν on X is said to be s-finite if ν is a countable sum of finite measures. By definition, each element of N is s-finite. We recall that a measure ν on X is said to be σ-finite if there is a sequence Bm ∈ X, m ∈ N, such that ∪m Bm = X and ν(Bm) < ∞ for all m ∈ N. Clearly every σ-finite measure is s-finite. Any N0-valued σ-finite measure is in N. In contrast to σ-finite measures, any countable sum of s-finite measures is again s-finite. If k = ∞ and the points xn in (2.1) are all the same, then this measure μ is not σ-finite. The counting measure on R (supported by R) is an example of a measure with values in N̄0 := N̄ ∪ {0} that is not s-finite. Exercise 6.10 gives an example of an s-finite N̄0-valued measure that is not in N.
Let N(X) ≡ N denote the σ-field generated by the collection of all
subsets of N of the form
{μ ∈ N : μ(B) = k},  B ∈ X, k ∈ N0.

This means that N is the smallest σ-field on N such that μ → μ(B) is
measurable for all B ∈ X.


Definition 2.1 A point process on X is a random element η of (N, N),
that is a measurable mapping η : Ω → N.
If η is a point process on X and B ∈ X, then we denote by η(B) the mapping ω → η(ω, B) := η(ω)(B). By the definitions of η and the σ-field N these are random variables taking values in N̄0, that is

{η(B) = k} ≡ {ω ∈ Ω : η(ω, B) = k} ∈ F,  B ∈ X, k ∈ N̄0.  (2.3)

Conversely, a mapping η : Ω → N is a point process if (2.3) holds. In this
case we call η(B) the number of points of η in B. Note that the mapping
(ω, B) → η(ω, B) is a kernel from Ω to X (see Section A.1) with the additional property that η(ω, ·) ∈ N for each ω ∈ Ω.
Example 2.2 Let X be a random element in X. Then

η := δX

is a point process. Indeed, the required measurability property follows from

{η(B) = k} = {X ∈ B} if k = 1,  {X ∉ B} if k = 0,  and ∅ otherwise.
The above one-point process can be generalised as follows.

Example 2.3 Let Q be a probability measure on X and suppose that X1, . . . , Xm are independent random elements in X with distribution Q. Then

η := δX1 + · · · + δXm

is a point process on X. Because

P(η(B) = k) = (m choose k) Q(B)^k (1 − Q(B))^{m−k},  k = 0, . . . , m,

η is referred to as a binomial process with sample size m and sampling distribution Q.
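A binomial process is straightforward to simulate. The sketch below takes X = [0, 1]² with Q uniform (both illustrative choices) and records η(B) for a test set B; by Example 2.3 these counts are Bi(m, Q(B)) distributed, so by (1.3) their mean is m Q(B).

    import numpy as np

    rng = np.random.default_rng(5)
    m, reps = 100, 10**4
    counts = np.empty(reps)
    for r in range(reps):
        pts = rng.random((m, 2))    # X_1, ..., X_m iid from Q = uniform on [0,1]^2
        # eta(B) for B = [0, 0.5]^2, so Q(B) = 0.25
        counts[r] = ((pts[:, 0] <= 0.5) & (pts[:, 1] <= 0.5)).sum()

    print(counts.mean(), m * 0.25)  # empirical mean vs m Q(B)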
In this example, the random measure η can be written as a sum of Dirac measures, and we formalise the class of point processes having this property in the following definition. Here and later we say that two point processes η and η′ are almost surely equal if there is an A ∈ F with P(A) = 1 such that η(ω) = η′(ω) for each ω ∈ A.


Definition 2.4 We shall refer to a point process η on X as a proper point process if there exist random elements X1, X2, . . . in X and an N̄0-valued random variable κ such that almost surely

η = Σ_{n=1}^κ δXn.  (2.4)

In the case κ = 0 this is interpreted as the zero measure on X.
The motivation for this terminology is that the intuitive notion of a point
process is that of a (random) set of points, rather than an integer-valued
measure. A proper point process is one which can be interpreted as a countable (random) set of points in X (possibly with repetitions), thereby better
fitting this intuition.
The class of proper point processes is very large. Indeed, we shall see
later that if X is a Borel subspace of a complete separable metric space,
then any locally finite point process on X (see Definition 2.13) is proper,
and that, for general (X, X), if η is a Poisson point process on X there
is a proper point process on X having the same distribution as η (these
concepts will be defined in due course); see Corollary 6.5 and Corollary
3.7. Exercise 2.5 shows, however, that not all point processes are proper.

2.2 Campbell’s Formula
A first characteristic of a point process is the mean number of points lying
in an arbitrary measurable set:
Definition 2.5 The intensity measure of a point process η on X is the
measure λ defined by
λ(B) := E[η(B)],  B ∈ X.  (2.5)

It follows from basic properties of expectation that the intensity measure
of a point process is indeed a measure.
Example 2.6 The intensity measure of a binomial process with sample size m and sampling distribution Q is given by

λ(B) = E[Σ_{k=1}^m 1{Xk ∈ B}] = Σ_{k=1}^m P(Xk ∈ B) = m Q(B).

Independence of the random variables X1, . . . , Xm is not required for this calculation.


Let R̄ := [−∞, ∞] and R̄+ := [0, ∞]. Let us denote by R(X) (resp. R̄(X)) the set of all measurable functions u : X → R (resp. u : X → R̄). Let R+(X) (resp. R̄+(X)) be the set of all those u ∈ R(X) (resp. u ∈ R̄(X)) with u ≥ 0. Given u ∈ R̄(X), define the functions u+, u− ∈ R̄+(X) by u+(x) := max{u(x), 0} and u−(x) := max{−u(x), 0}, x ∈ X. Then u(x) = u+(x) − u−(x). We recall from measure theory (see Section A.1) that, for any measure ν on X, the integral ∫ u dν ≡ ∫ u(x) ν(dx) of u ∈ R̄(X) with respect to ν is defined as

∫ u(x) ν(dx) ≡ ∫ u dν := ∫ u+ dν − ∫ u− dν

whenever this expression is not of the form ∞ − ∞. Otherwise we use here the convention ∫ u(x) ν(dx) := 0. We often write

ν(u) := ∫ u(x) ν(dx),

so that ν(B) = ν(1B) for any B ∈ X. If η is a point process, then η(u) ≡ ∫ u dη denotes the mapping ω → ∫ u(x) η(ω, dx).
Proposition 2.7 (Campbell’s formula) Let η be a point process on (X, X) with intensity measure λ. Let u ∈ R̄(X). Then ∫ u(x) η(dx) is a random variable. Moreover,

E[∫ u(x) η(dx)] = ∫ u(x) λ(dx)  (2.6)

whenever u ≥ 0 or ∫ |u(x)| λ(dx) < ∞.

Proof If u(x) = 1B(x) for some B ∈ X then ∫ u(x) η(dx) = η(B) and both assertions are true by definition. By standard techniques of measure theory (linearity and monotone convergence) this can be extended, first to measurable simple functions and then to arbitrary u ∈ R̄+(X).

Let u ∈ R̄(X). We have just seen that η(u+) and η(u−) are random variables, so that η(u) is a random variable too. Assume that ∫ |u(x)| λ(dx) < ∞. Then the first part of the proof shows that η(u+) and η(u−) both have a finite expectation and that

E[η(u)] = E[η(u+)] − E[η(u−)] = λ(u+) − λ(u−) = λ(u).

This concludes the proof. □
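Campbell’s formula can be checked numerically for the binomial process of Example 2.3, whose intensity measure is m Q (Example 2.6). A sketch with u(x) = x1 + x2 on X = [0, 1]² and Q uniform (all concrete choices are ours):

    import numpy as np

    rng = np.random.default_rng(6)
    m, reps = 50, 10**4

    def u(pts):
        # test function u(x) = x1 + x2 on X = [0,1]^2
        return pts[:, 0] + pts[:, 1]

    # Monte Carlo estimate of E[ integral of u with respect to eta ]
    est = np.mean([u(rng.random((m, 2))).sum() for _ in range(reps)])

    # Campbell (2.6): integral of u with respect to lambda = m * (1/2 + 1/2)
    print(est, m * 1.0)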




2.3 Distribution of a Point Process

In accordance with the terminology of probability theory (see Section B.1), the distribution of a point process η on X is the probability measure Pη on (N, N) given by A → P(η ∈ A). If η′ is another point process with the same distribution, we write η =d η′.

The following device is a powerful tool for analysing point processes. We use the convention e^{−∞} := 0.

Definition 2.8 The Laplace (or characteristic) functional of a point process η on X is the mapping Lη : R+(X) → [0, 1] defined by

Lη(u) := E[exp(−∫ u(x) η(dx))],  u ∈ R+(X).
Example 2.9 Let η be the binomial process of Example 2.3. Then, for u ∈ R+(X),

Lη(u) = E[exp(−Σ_{k=1}^m u(Xk))] = E[Π_{k=1}^m exp[−u(Xk)]]
  = Π_{k=1}^m E[exp[−u(Xk)]] = (∫ exp[−u(x)] Q(dx))^m.
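The product formula of Example 2.9 is also easy to confirm by Monte Carlo; a sketch for the same uniform binomial process, with an arbitrary test function u:

    import numpy as np

    rng = np.random.default_rng(7)
    m, reps = 20, 10**5

    def u(pts):
        # an arbitrary function u >= 0 on X = [0,1]^2
        return 0.5 * (pts[:, 0] + pts[:, 1])

    # Monte Carlo estimate of L_eta(u) = E[exp(- integral of u d eta)]
    mc = np.mean([np.exp(-u(rng.random((m, 2))).sum()) for _ in range(reps)])

    # Example 2.9: ( integral of exp(-u) dQ )^m, estimated from a fresh sample
    q = np.exp(-u(rng.random((10**6, 2)))).mean() ** m
    print(mc, q)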

The following proposition characterises equality in distribution for point processes. It shows, in particular, that the Laplace functional of a point process determines its distribution.

Proposition 2.10 For point processes η and η′ on X the following assertions are equivalent:

(i) η =d η′;
(ii) (η(B1), . . . , η(Bm)) =d (η′(B1), . . . , η′(Bm)) for all m ∈ N and all pairwise disjoint B1, . . . , Bm ∈ X;
(iii) Lη(u) = Lη′(u) for all u ∈ R+(X);
(iv) for all u ∈ R+(X), η(u) =d η′(u) as random variables in R̄+.
Proof First we prove that (i) implies (iv). Given u ∈ R+(X), define the function gu : N → R̄+ by μ → ∫ u dμ. By Proposition 2.7 (or a direct check based on first principles), gu is a measurable function. Also,

Pη(u)(·) = P(η(u) ∈ ·) = P(η ∈ gu^{−1}(·)),

and likewise for η′. So if η =d η′ then also η(u) =d η′(u).

Next we show that (iv) implies (iii). For any R̄+-valued random variable Y we have E[exp(−Y)] = ∫ e^{−y} PY(dy), which is determined by the distribution PY. Hence, if (iv) holds,

Lη(u) = E[exp(−η(u))] = E[exp(−η′(u))] = Lη′(u)

for all u ∈ R+(X), so (iii) holds.

Assume now that (iii) holds and consider a simple function of the form u = c1 1B1 + · · · + cm 1Bm, where m ∈ N, B1, . . . , Bm ∈ X and c1, . . . , cm ∈ (0, ∞). Then

Lη(u) = E[exp(−Σ_{j=1}^m cj η(Bj))] = P̂(η(B1),...,η(Bm))(c1, . . . , cm),  (2.7)

where for any measure μ on [0, ∞]^m we write μ̂ for its multivariate Laplace transform. Since a finite measure on R+^m is determined by its Laplace transform (this follows from Proposition B.4), we can conclude that the restriction of P(η(B1),...,η(Bm)) (a measure on [0, ∞]^m) to (0, ∞)^m is the same as the restriction of P(η′(B1),...,η′(Bm)) to (0, ∞)^m. Then, using the fact that P(η(B1),...,η(Bm)) and P(η′(B1),...,η′(Bm)) are probability measures on [0, ∞]^m, by forming suitable complements we obtain P(η(B1),...,η(Bm)) = P(η′(B1),...,η′(Bm)) (these details are left to the reader). In other words, (iii) implies (ii).

Finally we assume (ii) and prove (i). Let m ∈ N and B1, . . . , Bm ∈ X, not necessarily pairwise disjoint. Let C1, . . . , Cn be the atoms of the field generated by B1, . . . , Bm; see Section A.1. For each i ∈ {1, . . . , m} there exists Ji ⊂ {1, . . . , n} such that Bi = ∪_{j∈Ji} Cj. (Note that Ji = ∅ if Bi = ∅.) Let D1, . . . , Dm ⊂ N0. Then

P(η(B1) ∈ D1, . . . , η(Bm) ∈ Dm) = ∫ 1{Σ_{j∈J1} kj ∈ D1, . . . , Σ_{j∈Jm} kj ∈ Dm} P(η(C1),...,η(Cn))(d(k1, . . . , kn)).

Therefore Pη and Pη′ coincide on the system H consisting of all sets of the form

{μ ∈ N : μ(B1) ∈ D1, . . . , μ(Bm) ∈ Dm},

where m ∈ N, B1, . . . , Bm ∈ X and D1, . . . , Dm ⊂ N0. Clearly H is a π-system; that is, it is closed under pairwise intersections. Moreover, the smallest σ-field σ(H) containing H is the full σ-field N. Hence (i) follows from the fact that a probability measure is determined by its values on a generating π-system; see Theorem A.5. □

Point Processes

16

2.4 Point Processes on Metric Spaces
Let us now assume that X is a metric space with metric ρ; see Section A.2.
Then it is always to be understood that X is the Borel σ-field B(X) of X.
In particular, the singleton {x} is in X for all x ∈ X. If ν is a measure on X
then we often write ν{x} := ν({x}). If ν{x} = 0 for all x ∈ X, then ν is said
to be diffuse. Moreover, if μ ∈ N(X) then we write x ∈ μ if μ({x}) > 0.
A set B ⊂ X is said to be bounded if it is empty or its diameter
d(B) := sup{ρ(x, y) : x, y ∈ B}
is finite.
Definition 2.11 Suppose that X is a metric space. The system of bounded
measurable subsets of X is denoted by Xb . A measure ν on X is said to be
locally finite if ν(B) < ∞ for every B ∈ Xb . Let Nl (X) denote the set of all
locally finite elements of N(X) and let Nl (X) := {A ∩ Nl (X) : A ∈ N(X)}.
Fix some x0 ∈ X. Then any bounded set B is contained in the closed ball
B(x0 , r) = {x ∈ X : ρ(x, x0 ) ≤ r} for sufficiently large r > 0. In fact, if
B  ∅, then we can take, for instance, r := d(B) + ρ(x1 , x0 ) for some x1 ∈ B.
Note that B(x0 , n) ↑ X as n → ∞. Hence a measure ν on X is locally finite
if and only if ν(B(x0 , n)) < ∞ for each n ∈ N. In particular, the set Nl (X) is
measurable, that is Nl (X) ∈ N(X). Moreover, any locally finite measure is
σ-finite.
Proposition 2.12 Let η and η′ be point processes on a metric space X. Suppose η(u) =d η′(u) for all u ∈ R+(X) such that {u > 0} is bounded. Then η =d η′.

Proof Suppose that

η(u) =d η′(u),  u ∈ R+(X), {u > 0} bounded.  (2.8)

Then Lη(u) = Lη′(u) for any u ∈ R+(X) such that {u > 0} is bounded. Given any v ∈ R+(X), we can choose a sequence un, n ∈ N, of functions in R+(X) such that {un > 0} is bounded for each n, and un ↑ v pointwise. Then, by dominated convergence and (2.8),

Lη(v) = lim_{n→∞} Lη(un) = lim_{n→∞} Lη′(un) = Lη′(v),

so η =d η′ by Proposition 2.10. □

Definition 2.13 A point process η on a metric space X is said to be locally
finite if P(η(B) < ∞) = 1 for every bounded B ∈ X.


If required, we could interpret a locally finite point process η as a random
element of the space (Nl (X), Nl (X)), introduced in Definition 2.11. Indeed,
we can define another point process η̃ by η̃(ω, ·) := η(ω, ·) if the latter is
locally finite and by η̃(ω, ·) := 0 (the zero measure) otherwise. Then η̃ is
a random element of (Nl (X), Nl (X)) that coincides P-almost surely (P-a.s.)
with η.
The reader might have noticed that the proof of Proposition 2.12 has not
really used the metric on X. The proof of the next refinement of this result
(not used later in the book) exploits the metric in an essential way.
Proposition 2.14 Let η and η′ be locally finite point processes on a metric space X. Suppose η(u) =d η′(u) for all continuous u : X → R+ such that {u > 0} is bounded. Then η =d η′.

Proof Let G be the space of continuous functions u : X → R+ such that {u > 0} is bounded. Assume that η(u) =d η′(u) for all u ∈ G. Since G is closed under non-negative linear combinations, it follows, as in the proof that (iii) implies (ii) in Proposition 2.10, that

(η(u1), η(u2), . . . ) =d (η′(u1), η′(u2), . . . ),

first for any finite sequence and then (by Theorem A.5 in Section A.1) for any infinite sequence un ∈ G, n ∈ N. Take a bounded closed set C ⊂ X and, for n ∈ N, define

un(x) := max{1 − n d(x, C), 0},  x ∈ X,

where d(x, C) := inf{ρ(x, y) : y ∈ C} and inf ∅ := ∞. By Exercise 2.8, un ∈ G. Moreover, un ↓ 1C as n → ∞, and since η is locally finite we obtain η(un) → η(C) P-a.s. The same relation holds for η′. It follows that statement (ii) of Proposition 2.10 holds whenever B1, . . . , Bm are closed and bounded, but not necessarily disjoint. Hence, fixing a closed ball C ⊂ X, Pη and Pη′ coincide on the π-system HC consisting of all sets of the form

{μ ∈ Nl : μ(B1 ∩ C) ≤ k1, . . . , μ(Bm ∩ C) ≤ km},  (2.9)

where m ∈ N, B1, . . . , Bm ⊂ X are closed and k1, . . . , km ∈ N0. Another application of Theorem A.5 shows that Pη and Pη′ coincide on σ(HC) and then also on N′l := σ(∪_{i=1}^∞ σ(HBi)), where Bi := B(x0, i) and x0 ∈ X is fixed.

It remains to show that N′l = Nl. Let i ∈ N and let Ni denote the smallest σ-field on Nl containing the sets {μ ∈ Nl : μ(B ∩ Bi) ≤ k} for all closed sets B ⊂ X and each k ∈ N0. Let D be the system of all Borel sets B ⊂ X such that μ → μ(B ∩ Bi) is Ni-measurable. Then D is a Dynkin system containing the π-system of all closed sets, so that the monotone class theorem shows D = X. Therefore σ(HBi) contains {μ ∈ Nl : μ(B ∩ Bi) ≤ k} for all B ∈ X and all k ∈ N0. Letting i → ∞ we see that N′l contains {μ ∈ Nl : μ(B) ≤ k} and therefore every set from Nl. □


2.5 Exercises
Exercise 2.1 Give an example of a point process η on a measurable space (X, X) with intensity measure λ and u ∈ R(X) (violating the condition that u ≥ 0 or ∫ |u(x)| λ(dx) < ∞), such that Campbell’s formula (2.6) fails.

Exercise 2.2 Let X∗ ⊂ X be a π-system generating X. Let η be a point process on X that is σ-finite on X∗, meaning that there is a sequence Cn ∈ X∗, n ∈ N, such that ∪_{n=1}^∞ Cn = X and P(η(Cn) < ∞) = 1 for all n ∈ N. Let η′ be another point process on X and suppose that the equality in Proposition 2.10(ii) holds for all B1, . . . , Bm ∈ X∗ and m ∈ N. Show that η =d η′.

Exercise 2.3 Let η1, η2, . . . be a sequence of point processes and define η := η1 + η2 + · · · , that is η(ω, B) := η1(ω, B) + η2(ω, B) + · · · for all ω ∈ Ω and B ∈ X. Show that η is a point process. (Hint: Prove first that N(X) is closed under countable summation.)

Exercise 2.4 Let η1, η2, . . . be a sequence of proper point processes. Show that η := η1 + η2 + · · · is a proper point process.

Exercise 2.5 Suppose that X = [0, 1]. Find a σ-field X and a measure μ on (X, X) such that μ(X) = 1 and μ(B) ∈ {0, 1} for all B ∈ X, which is not of the form μ = δx for some x ∈ X. (Hint: Take the system of all finite subsets of X as a generator of X.)

Exercise 2.6 Let η be a point process on X with intensity measure λ and let B ∈ X such that λ(B) < ∞. Show that

λ(B) = −(d/dt) Lη(t 1B) |_{t=0}.

Exercise 2.7 Let η be a point process on X. Show for each B ∈ X that

P(η(B) = 0) = lim_{t→∞} Lη(t 1B).

Exercise 2.8 Let (X, ρ) be a metric space. Let C ⊂ X, C ≠ ∅. For x ∈ X let d(x, C) := inf{ρ(x, z) : z ∈ C}. Show that d(·, C) has the Lipschitz property

|d(x, C) − d(y, C)| ≤ ρ(x, y),  x, y ∈ X.

(Hint: Take z ∈ C and bound ρ(x, z) by the triangle inequality.)

3 Poisson Processes

For a Poisson point process the number of points in a given set has a
Poisson distribution. Moreover, the numbers of points in disjoint sets are
stochastically independent. A Poisson process exists on a general s-finite
measure space. Its distribution is characterised by a specific exponential
form of the Laplace functional.

3.1 Definition of the Poisson Process
In this chapter we fix an arbitrary measurable space (X, X). We are now
ready for the definition of the main subject of this volume. Recall that for
γ ∈ [0, ∞], the Poisson distribution Po(γ) was defined at (1.4).
Definition 3.1 Let λ be an s-finite measure on X. A Poisson process with
intensity measure λ is a point process η on X with the following two properties:
(i) For every B ∈ X the distribution of η(B) is Poisson with parameter
λ(B), that is to say P(η(B) = k) = Po(λ(B); k) for all k ∈ N0 .
(ii) For every m ∈ N and all pairwise disjoint sets B1 , . . . , Bm ∈ X the
random variables η(B1 ), . . . , η(Bm ) are independent.
Property (i) of Definition 3.1 is responsible for the name of the Poisson
process. A point process with property (ii) is said to be completely independent. (One also says that η has independent increments or is completely
random.) For a (locally finite) point process without multiplicities and a
diffuse intensity measure (on a complete separable metric space) we shall
see in Chapter 6 that the two defining properties of a Poisson process are
equivalent.
If η is a Poisson process with intensity measure λ then E[η(B)] = λ(B),
so that Definition 3.1 is consistent with Definition 2.5. In particular, if λ = 0
is the zero measure, then P(η(X) = 0) = 1.

Let us first record that for each s-finite λ there is at most one Poisson
process with intensity measure λ, up to equality in distribution.
Proposition 3.2 Let η and η′ be two Poisson processes on X with the same s-finite intensity measure. Then η =d η′.

Proof The result follows from Proposition 2.10. □

3.2 Existence of Poisson Processes
In this section we show by means of an explicit construction that Poisson
processes exist. Before we can do this, we need to deal with the superposition of independent Poisson processes.
Theorem 3.3 (Superposition theorem) Let ηi , i ∈ N, be a sequence of
independent Poisson processes on X with intensity measures λi . Then
η := ∑_{i=1}^∞ ηi    (3.1)
is a Poisson process with intensity measure λ := λ1 + λ2 + · · · .
Proof Exercise 2.3 shows that η is a point process.
For n ∈ N and B ∈ X, we have by Exercise 1.4 that ξn(B) := ∑_{i=1}^n ηi(B) has a Poisson distribution with parameter ∑_{i=1}^n λi(B). Also ξn(B) converges monotonically to η(B), so by continuity of probability, and the fact that Po(γ; j) is continuous in γ for j ∈ N0, for all k ∈ N0 we have
P(η(B) ≤ k) = lim_{n→∞} P(ξn(B) ≤ k) = lim_{n→∞} ∑_{j=0}^k Po(∑_{i=1}^n λi(B); j) = ∑_{j=0}^k Po(∑_{i=1}^∞ λi(B); j),
so that η(B) has the Po(λ(B)) distribution.
Let B1, . . . , Bm ∈ X be pairwise disjoint. Then (ηi(Bj), 1 ≤ j ≤ m, i ∈ N) is a family of independent random variables, so that by the grouping property of independence the random variables ∑_i ηi(B1), . . . , ∑_i ηi(Bm) are independent. Thus η is completely independent. □
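In the simplest case the superposition theorem can be checked numerically; the following sketch (ours) tests only the one-dimensional marginal of (3.1) for a single set B, with illustrative parameters.

```python
import numpy as np

# Sanity check of Theorem 3.3 for a single set B (illustrative sketch):
# if eta_1(B) ~ Po(2) and eta_2(B) ~ Po(3) are independent, then
# eta(B) = eta_1(B) + eta_2(B) should be Po(5); for a Poisson variable
# the mean and the variance agree.
rng = np.random.default_rng(1)
counts = rng.poisson(2.0, 1_000_000) + rng.poisson(3.0, 1_000_000)
print(counts.mean(), counts.var())   # both approx 5.0
```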
Now we construct a Poisson process on (X, X) with arbitrary s-finite
intensity measure. We start by generalising Example 2.3.
Definition 3.4 Let V and Q be probability measures on N0 and X, respectively. Suppose that X1 , X2 , . . . are independent random elements in X
with distribution Q, and let κ be a random variable with distribution V,
independent of (Xn ). Then
η := ∑_{k=1}^κ δ_{X_k}    (3.2)
is called a mixed binomial process with mixing distribution V and sampling
distribution Q.
The following result provides the key for the construction of Poisson
processes.
Proposition 3.5 Let Q be a probability measure on X and let γ ≥ 0.
Suppose that η is a mixed binomial process with mixing distribution Po(γ)
and sampling distribution Q. Then η is a Poisson process with intensity
measure γ Q.
Proof Let κ and (Xn ) be given as in Definition 3.4. To prove property (ii)
of Definition 3.1 it is no loss of generality to assume that B1 , . . . , Bm are
pairwise disjoint measurable subsets of X satisfying ∪_{i=1}^m Bi = X. (Otherwise we can add the complement of this union.) Let k1, . . . , km ∈ N0 and
set k := k1 + · · · + km . Then
P(η(B1) = k1, . . . , η(Bm) = km)
  = P(κ = k) P(∑_{j=1}^k 1{Xj ∈ B1} = k1, . . . , ∑_{j=1}^k 1{Xj ∈ Bm} = km).
Since the second probability on the right is multinomial, this gives
P(η(B1) = k1, . . . , η(Bm) = km) = (γ^k/k!) e^{−γ} · (k!/(k1! · · · km!)) Q(B1)^{k1} · · · Q(Bm)^{km}
  = ∏_{j=1}^m ((γQ(Bj))^{kj}/kj!) e^{−γQ(Bj)}.
Summing over k2 , . . . , km shows that η(B1 ) is Poisson distributed with parameter γ Q(B1 ). A similar statement applies to η(B2 ), . . . , η(Bm ). Therefore
η(B1), . . . , η(Bm) are independent. □
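Proposition 3.5 is effectively a sampling algorithm for Poisson processes with finite intensity measure. The following minimal Python sketch is ours; the helper name and the choice Q = uniform on [0, 1]² are assumptions made purely for illustration.

```python
import numpy as np

def sample_poisson_process(gamma, sample_q, rng):
    """Sample a Poisson process with intensity measure gamma * Q as in
    Proposition 3.5: draw kappa ~ Po(gamma), then kappa independent
    points with distribution Q.  `sample_q(n)` must return n samples of Q."""
    kappa = rng.poisson(gamma)
    return sample_q(kappa)

rng = np.random.default_rng(2)
pts = sample_poisson_process(100.0, lambda n: rng.uniform(size=(n, 2)), rng)
# eta(B) for B = [0, 1/2] x [0, 1] should be Po(50)
print(len(pts), np.sum(pts[:, 0] < 0.5))
```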
Theorem 3.6 (Existence theorem) Let λ be an s-finite measure on X.
Then there exists a Poisson process on X with intensity measure λ.
Proof The result is trivial if λ(X) = 0.
Suppose for now that 0 < λ(X) < ∞. On a suitable probability space,
assume that κ, X1 , X2 , . . . are independent random elements, with κ taking

22

Poisson Processes

values in N0 and each Xi taking values in X, with κ having the Po(λ(X))
distribution and each Xi having λ(·)/λ(X) as its distribution. Here the probability space can be taken to be a suitable product space; see the proof of
Corollary 3.7 below. Let η be the mixed binomial process given by (3.2).
Then, by Proposition 3.5, η is a Poisson process with intensity measure λ,
as required.
Now suppose that λ(X) = ∞. There is a sequence λi , i ∈ N, of measures on (X, X) with strictly positive and finite total measure, such that
λ = ∑_{i=1}^∞ λi. On a suitable (product) probability space, let ηi, i ∈ N, be a sequence of independent Poisson processes with ηi having intensity measure λi. This is possible by the preceding part of the proof. Set η = ∑_{i=1}^∞ ηi.
By the superposition theorem (Theorem 3.3), η is a Poisson process with
intensity measure λ, and the proof is complete. □
A corollary of the preceding proof is that on arbitrary (X, X) every Poisson point process is proper (see Definition 2.4), up to equality in distribution.
Corollary 3.7 Let λ be an s-finite measure on X. Then there is a probability space (Ω, F , P) supporting random elements X1 , X2 , . . . in X and κ
in N0 , such that
η := ∑_{n=1}^κ δ_{X_n}    (3.3)
is a Poisson process with intensity measure λ.
Proof We consider only the case λ(X) = ∞ (the other case is covered by
Proposition 3.5). Take the measures λi , i ∈ N, as in the last part of the proof
of Theorem 3.6. Let γi := λi(X) and Qi := γi⁻¹λi. We shall take (Ω, F , P)
to be the product of spaces (Ωi , Fi , Pi ), i ∈ N, where each (Ωi , Fi , Pi ) is
again an infinite product of probability spaces (Ωi j , Fi j , Pi j ), j ∈ N0 , with
Ωi0 := N0 , Pi0 := Po(γi ) and (Ωi j , Fi j , Pi j ) := (X, X, Qi ) for j ≥ 1. On
this space we can define independent random elements κi , i ∈ N, and Xi j ,
i, j ∈ N, such that κi has distribution Po(γi ) and Xi j has distribution Qi ; see
Theorem B.2. The proof of Theorem 3.6 shows how to define κ, X1 , X2 , . . .
in terms of these random variables in a measurable (algorithmic) way. The
details are left to the reader. □
As a consequence of Corollary 3.7, when checking a statement involving
only the distribution of a Poisson process η, it is no restriction of generality
to assume that η is proper. Exercise 3.9 shows that there are Poisson processes which are not proper. On the other hand, Corollary 6.5 will show
that any suitably regular point process on a Borel subset of a complete
separable metric space is proper.
The next result is a converse to Proposition 3.5.
Proposition 3.8 Let η be a Poisson process on X with intensity measure
λ satisfying 0 < λ(X) < ∞. Then η has the distribution of a mixed binomial
process with mixing distribution Po(λ(X)) and sampling distribution Q :=
λ(X)−1 λ. The conditional distribution P(η ∈ · | η(X) = m), m ∈ N, is that of
a binomial process with sample size m and sampling distribution Q.
Proof Let η′ be a mixed binomial process with mixing distribution Po(λ(X)) and sampling distribution Q. Then η =ᵈ η′ by Propositions 3.5 and 3.2. This is our first assertion. Also, by definition, P(η′ ∈ · | η′(X) = m) has the distribution of a binomial process with sample size m and sampling distribution Q, and by the first assertion so does P(η ∈ · | η(X) = m), yielding the second assertion. □
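The second assertion of Proposition 3.8 is easy to see in simulation as well; the sketch below (ours) conditions a uniform Poisson process on [0, 1] with λ(X) = 10 on having exactly five points.

```python
import numpy as np

# Conditionally on eta(X) = 5, the points should behave like 5 i.i.d.
# Uniform[0,1] samples (Proposition 3.8); we check the mean of a point.
rng = np.random.default_rng(3)
means = []
while len(means) < 2000:
    pts = rng.uniform(size=rng.poisson(10.0))
    if len(pts) == 5:
        means.append(pts.mean())
print(np.mean(means))   # approx 0.5, the mean of Uniform[0,1]
```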
3.3 Laplace Functional of the Poisson Process
The following characterisation of Poisson processes is of great value for
both theory and applications.
Theorem 3.9 Let λ be an s-finite measure on X and let η be a point
process on X. Then η is a Poisson process with intensity measure λ if and
only if

Lη(u) = exp[−∫(1 − e^{−u(x)}) λ(dx)],  u ∈ R+(X).    (3.4)
Proof Assume first that η is a Poisson process with intensity measure λ.
Consider first the simple function u := c1 1B1 + · · · + cm 1Bm , where m ∈ N,
c1, . . . , cm ∈ (0, ∞) and B1, . . . , Bm ∈ X are pairwise disjoint. Then

E[exp(−η(u))] = E[exp(−∑_{i=1}^m ci η(Bi))] = E[∏_{i=1}^m exp(−ci η(Bi))].
The complete independence and the formula (1.9) for the Laplace transform of the Poisson distribution (this also holds for Po(∞)) yield

Lη(u) = ∏_{i=1}^m E[exp(−ci η(Bi))] = ∏_{i=1}^m exp[−λ(Bi)(1 − e^{−ci})]
  = exp[−∑_{i=1}^m λ(Bi)(1 − e^{−ci})] = exp[−∑_{i=1}^m ∫_{Bi} (1 − e^{−u}) dλ].
Since 1 − e^{−u(x)} = 0 for x ∉ B1 ∪ · · · ∪ Bm, this is the right-hand side of (3.4).
For general u ∈ R+ (X), choose simple functions un with un ↑ u as n → ∞.
Then, by monotone convergence (Theorem A.6), η(un ) ↑ η(u) as n → ∞,
and by dominated convergence for expectations the left-hand side of

E[exp(−η(un))] = exp[−∫(1 − e^{−u_n(x)}) λ(dx)]
tends to Lη (u). By monotone convergence again (this time for the integral
with respect to λ), the right-hand side tends to the right-hand side of (3.4).
Assume now that (3.4) holds. Let η′ be a Poisson process with intensity measure λ. (By Theorem 3.6, such an η′ exists.) By the preceding argument, Lη′(u) = Lη(u) for all u ∈ R+(X). Therefore, by Proposition 2.10, η =ᵈ η′; that is, η is a Poisson process with intensity measure λ. □
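Formula (3.4) lends itself to a Monte Carlo check. In the sketch below (ours, with illustrative parameters), X = [0, 1], λ is γ times Lebesgue measure and u(x) = x, so the integral in (3.4) equals ∫₀¹ (1 − e^{−x}) dx = e^{−1}.

```python
import numpy as np

# Monte Carlo check of (3.4) with gamma = 5 and u(x) = x (illustrative).
rng = np.random.default_rng(4)
gamma, n_sims = 5.0, 200_000
vals = np.empty(n_sims)
for i in range(n_sims):
    pts = rng.uniform(size=rng.poisson(gamma))
    vals[i] = np.exp(-pts.sum())           # exp(-eta(u)) for u(x) = x
print(vals.mean())                         # empirical Laplace functional
print(np.exp(-gamma * np.exp(-1.0)))       # exp(-gamma/e), from (3.4)
```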
3.4 Exercises
Exercise 3.1 Use Exercise 1.12 to deduce that there exist a measure space
(X, X, λ) and a point process on X satisfying part (i) but not part (ii) of the
definition of a Poisson process (Definition 3.1).
Exercise 3.2 Show that there exist a measure space (X, X, λ) and a point
process η on X satisfying part (i) of Definition 3.1 and part (ii) of that
definition with ‘independent’ replaced by ‘pairwise independent’, such that
η is not a Poisson point process. In other words, show that we can have
η(B) Poisson distributed for all B ∈ X, and η(A) independent of η(B) for all
disjoint pairs A, B ∈ X, but η(A1 ), . . . , η(Ak ) not mutually independent for
all disjoint A1 , . . . , Ak ∈ X.
Exercise 3.3 Let η be a Poisson process on X with intensity measure λ
and let B ∈ X with 0 < λ(B) < ∞. Suppose B1 , . . . , Bn are sets in X forming
a partition of B. Show for all k1, . . . , kn ∈ N0 and m := ∑_i ki that

P(∩_{i=1}^n {η(Bi) = ki} | η(B) = m) = (m!/(k1! k2! · · · kn!)) ∏_{i=1}^n (λ(Bi)/λ(B))^{ki}.
Exercise 3.4 Let η be a Poisson process on X with s-finite intensity measure λ and let u ∈ R+(X). Use the proof of Theorem 3.9 to show that

E[exp(∫ u(x) η(dx))] = exp[∫(e^{u(x)} − 1) λ(dx)].
Exercise 3.5 Let V be a probability measure on N0 with generating function G_V(s) := ∑_{n=0}^∞ V({n}) s^n, s ∈ [0, 1]. Let η be a mixed binomial process with mixing distribution V and sampling distribution Q. Show that

Lη(u) = G_V(∫ e^{−u} dQ),  u ∈ R+(X).
Assume now that V is a Poisson distribution; show that the preceding formula is consistent with Theorem 3.9.
Exercise 3.6 Let η be a point process on X. Using the convention e−∞ :=
0, the Laplace functional Lη (u) can be defined for any u ∈ R+ (X). Assume
now that η is a Poisson process with intensity measure λ. Use Theorem 3.9
to show that

E[∏_{n=1}^κ u(Xn)] = exp[−∫(1 − u(x)) λ(dx)],    (3.5)
for any measurable u : X → [0, 1], where η is assumed to be given by (3.3).
The left-hand side of (3.5) is called the probability generating functional
of η. It can be defined for any point process (proper or not) by taking the expectation of exp[∫ ln u(x) η(dx)].
Exercise 3.7 Let η be a Poisson process with finite intensity measure λ.
Show for all f ∈ R+(N) that

E[f(η)] = e^{−λ(X)} f(0) + e^{−λ(X)} ∑_{n=1}^∞ (1/n!) ∫ f(δ_{x1} + · · · + δ_{xn}) λ^n(d(x1, . . . , xn)).
Exercise 3.8 Let η be a Poisson process with s-finite intensity measure λ and let f ∈ R+(N) be such that E[f(η)] < ∞. Suppose that η′ is a Poisson process with intensity measure λ′ such that λ′ = λ + ν for some finite measure ν. Show that E[f(η′)] < ∞. (Hint: Use the superposition theorem.)
Exercise 3.9 In the setting of Exercise 2.5, show that there is a probability
measure λ on (X, X) and a Poisson process η with intensity measure λ such
that η is not proper. (Hint: Use Exercise 2.5.)
Exercise 3.10 Let 0 < γ < γ′. Give an example of two Poisson processes η, η′ on (0, 1) with intensity measures γλ1 and γ′λ1, respectively (λ1 denoting Lebesgue measure), such that η ≤ η′ but η′ − η is not a Poisson process. (Hint: Use Exercise 1.9.)
Exercise 3.11 Let η be a Poisson process with intensity measure λ and
let B1 , B2 ∈ X satisfy λ(B1 ) < ∞ and λ(B2 ) < ∞. Show that the covariance
between η(B1 ) and η(B2 ) is given by Cov[η(B1 ), η(B2 )] = λ(B1 ∩ B2 ).
4
The Mecke Equation and Factorial Measures

The Mecke equation provides a way to compute the expectation of integrals, i.e. sums, with respect to a Poisson process, where the integrand can
depend on both the point process and the point in the state space. This functional equation characterises a Poisson process. The Mecke identity can be
extended to integration with respect to factorial measures, i.e. to multiple
sums. Factorial measures can also be used to define the Janossy measures,
thus providing a local description of a general point process. The factorial
moment measures of a point process are defined as the expected factorial
measures. They describe the probability of the occurrence of points in a
finite number of infinitesimally small sets.
4.1 The Mecke Equation
In this chapter we take (X, X) to be an arbitrary measurable space and use
the abbreviation (N, N) := (N(X), N(X)). Let η be a Poisson process on
X with s-finite intensity measure λ and let f ∈ R+ (X × N). The complete
independence of η implies for each x ∈ X that, heuristically speaking, η(dx)
and the restriction η_{{x}^c} of η to X \ {x} are independent. Therefore

E[∫ f(x, η_{{x}^c}) η(dx)] = ∫ E[f(x, η_{{x}^c})] λ(dx),    (4.1)
where we ignore measurability issues. If P(η({x}) = 0) = 1 for each x ∈ X (which is the case if λ is a diffuse measure on a Borel space), then the right-hand side of (4.1) equals ∫ E[f(x, η)] λ(dx). (Exercise 6.11 shows a way to
extend this to an arbitrary intensity measure.) We show that a proper version of the resulting integral identity holds in general and characterises the
Poisson process. This equation is a fundamental tool for analysing the Poisson process and can be used in many specific calculations. In the special
case where X has just a single element, Theorem 4.1 essentially reduces to
an earlier result about the Poisson distribution, namely Proposition 1.1.
Theorem 4.1 (Mecke equation) Let λ be an s-finite measure on X and η
a point process on X. Then η is a Poisson process with intensity measure λ
if and only if

E[∫ f(x, η) η(dx)] = ∫ E[f(x, η + δ_x)] λ(dx)    (4.2)
for all f ∈ R+ (X × N).
Proof Let us start by noting that the mapping (x, μ) → μ + δ x (adding a
point x to the counting measure μ) from X × N to N is measurable. Indeed,
the mapping (x, μ) → μ(B) + 1B (x) is measurable for all B ∈ X.
If η is a Poisson process, then (4.2) is a special case of (4.11) to be
proved in Section 4.2.
Assume now that (4.2) holds for all measurable f ≥ 0. Let B1 , . . . , Bm
be disjoint sets in X with λ(Bi ) < ∞ for each i. For k1 , . . . , km ∈ N0 with
k1 ≥ 1 we define
f(x, μ) = 1_{B1}(x) ∏_{i=1}^m 1{μ(Bi) = ki},  (x, μ) ∈ X × N.
Then

E[∫ f(x, η) η(dx)] = E[η(B1) ∏_{i=1}^m 1{η(Bi) = ki}] = k1 P(∩_{i=1}^m {η(Bi) = ki}),
with the (measure theory) convention ∞ · 0 := 0. On the other hand, we
have for each x ∈ X that
E[ f (x, η + δ x )] = 1B1 (x) P(η(B1 ) = k1 − 1, η(B2 ) = k2 , . . . , η(Bm ) = km )
(with ∞ − 1 := ∞) so that, by (4.2),

k1 P(∩_{i=1}^m {η(Bi) = ki}) = λ(B1) P({η(B1) = k1 − 1} ∩ ∩_{i=2}^m {η(Bi) = ki}).

Assume that P(∩_{i=2}^m {η(Bi) = ki}) > 0 and note that otherwise η(B1) and the event ∩_{i=2}^m {η(Bi) = ki} are independent. Putting

π_k = P(η(B1) = k | ∩_{i=2}^m {η(Bi) = ki}),  k ∈ N0,

we have

k π_k = λ(B1) π_{k−1},  k ∈ N.

Since λ(B1 ) < ∞ this implies π∞ = 0. The only distribution satisfying
this recursion is given by πk = Po(λ(B1 ); k), regardless of k2 , . . . , km ; hence
η(B1) is Po(λ(B1)) distributed, and independent of ∩_{i=2}^m {η(Bi) = ki}. Hence,
by an induction on m, the variables η(B1 ), . . . , η(Bm ) are independent.
For general B ∈ X we still get for all k ∈ N that
k P(η(B) = k) = λ(B) P(η(B) = k − 1).
If λ(B) = ∞ we obtain P(η(B) = k − 1) = 0 and hence P(η(B) = ∞) = 1.
It follows that η has the defining properties of the Poisson process. □
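Both sides of (4.2) can be computed explicitly in small examples. The sketch below (ours, with illustrative choices) uses X = [0, 1], λ = γ times Lebesgue measure and f(x, μ) = x·μ(X); the right-hand side of (4.2) is then γ ∫₀¹ x(γ + 1) dx = γ(γ + 1)/2.

```python
import numpy as np

# Monte Carlo check of the Mecke equation (4.2) (illustrative sketch).
rng = np.random.default_rng(5)
gamma, n_sims = 3.0, 200_000
acc = 0.0
for _ in range(n_sims):
    pts = rng.uniform(size=rng.poisson(gamma))
    acc += pts.sum() * len(pts)       # int f(x, eta) eta(dx), f(x,mu) = x*mu(X)
print(acc / n_sims)                   # left-hand side, approx 6.0
print(gamma * (gamma + 1) / 2)        # right-hand side, exactly 6.0
```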

4.2 Factorial Measures and the Multivariate Mecke Equation
Equation (4.2) admits a useful generalisation involving multiple integration. To formulate this version we consider, for m ∈ N, the m-th power
(X^m, X^m) of (X, X); see Section A.1. Suppose μ ∈ N is given by

μ = ∑_{j=1}^k δ_{x_j}    (4.3)

for some k ∈ N0 and some x1 , x2 , . . . ∈ X (not necessarily distinct) as in
(2.1). Then we define another measure μ^(m) ∈ N(X^m) by

μ^(m)(C) = ∑^{≠}_{i1,...,im ≤ k} 1{(x_{i1}, . . . , x_{im}) ∈ C},  C ∈ X^m,    (4.4)

where the superscript ≠ indicates summation over m-tuples with pairwise different entries and where an empty sum is defined as zero. (In the case k = ∞ this involves only integer-valued indices.) In other words this means that

μ^(m) = ∑^{≠}_{i1,...,im ≤ k} δ_{(x_{i1},...,x_{im})}.    (4.5)

To aid understanding, it is helpful to consider in (4.4) a set C of the special
product form B1 × · · · × Bm . If these sets are pairwise disjoint, then the
right-hand side of (4.4) factorises, yielding
μ^(m)(B1 × · · · × Bm) = ∏_{j=1}^m μ(Bj).    (4.6)

If, on the other hand, B j = B for all j ∈ {1, . . . , m} then, clearly,
μ^(m)(B^m) = μ(B)(μ(B) − 1) · · · (μ(B) − m + 1) = (μ(B))_m.    (4.7)

Therefore μ^(m) is called the m-th factorial measure of μ. For m = 2 and arbitrary B1, B2 ∈ X we obtain from (4.4) that

μ^(2)(B1 × B2) = μ(B1)μ(B2) − μ(B1 ∩ B2),    (4.8)
provided that μ(B1 ∩ B2) < ∞. Otherwise μ^(2)(B1 × B2) = ∞.
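For a finite counting measure, (4.4) is a finite combinatorial sum and can be evaluated literally; here is a small sketch (ours, with ad hoc names).

```python
from itertools import permutations

def factorial_measure(points, m, in_C):
    """mu^(m)(C) for mu = sum of Dirac measures at `points`, following
    (4.4): count m-tuples with pairwise distinct indices lying in C.
    `in_C` is a predicate on m-tuples (an illustrative stand-in for C)."""
    return sum(1 for tup in permutations(points, m) if in_C(tup))

pts = [0.1, 0.2, 0.4, 0.7]
B = lambda x: x < 0.5                # mu(B) = 3
print(factorial_measure(pts, 2, lambda t: B(t[0]) and B(t[1])))
# prints 6 = 3 * 2 = mu(B)(mu(B) - 1), in line with (4.7)
```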
Factorial measures satisfy the following useful recursion:
Lemma 4.2 Let μ ∈ N be given by (4.3) and define μ^(1) := μ. Then, for all m ∈ N,

μ^(m+1) = ∫ [ ∫ 1{(x1, . . . , x_{m+1}) ∈ ·} μ(dx_{m+1}) − ∑_{j=1}^m 1{(x1, . . . , xm, xj) ∈ ·} ] μ^(m)(d(x1, . . . , xm)).    (4.9)

Proof Let m ∈ N and C ∈ X^{m+1}. Then

μ^(m+1)(C) = ∑^{≠}_{i1,...,im ≤ k} ∑_{j ≤ k, j ∉ {i1,...,im}} 1{(x_{i1}, . . . , x_{im}, x_j) ∈ C}.

Here the inner sum equals
∑_{j=1}^k 1{(x_{i1}, . . . , x_{im}, x_j) ∈ C} − ∑_{l=1}^m 1{(x_{i1}, . . . , x_{im}, x_{il}) ∈ C},

where the latter difference is either a non-negative integer (if the first sum
is finite) or ∞ (if the first sum is infinite). This proves the result. □
For a general space (X, X) there is no guarantee that a measure μ ∈ N
can be represented as in (4.3); see Exercise 2.5. Equation (4.9) suggests a
recursive definition of the factorial measures of a general μ ∈ N, without
using a representation as a sum of Dirac measures. The next proposition
confirms this idea.
Proposition 4.3 For any μ ∈ N there is a unique sequence μ^(m) ∈ N(X^m), m ∈ N, satisfying μ^(1) := μ and the recursion (4.9). The mappings μ → μ^(m) are measurable.
The proof of Proposition 4.3 is given in Section A.1 (see Proposition
A.18) and can be skipped without too much loss. It is enough to remember
that μ^(m) can be defined by (4.4), whenever μ is given by (4.3). This follows
from Lemma 4.2 and the fact that the solution of (4.9) must be unique. It
follows by induction that (4.6) and (4.7) remain valid for general μ ∈ N;
see Exercise 4.4.
Let η be a point process on X and let m ∈ N. Proposition 4.3 shows that
η^(m) is a point process on X^m. If η is proper and given as at (2.4), then

η^(m) = ∑^{≠}_{i1,...,im ∈ {1,...,κ}} δ_{(X_{i1},...,X_{im})}.    (4.10)

We continue with the multivariate version of the Mecke equation (4.2).
Theorem 4.4 (Multivariate Mecke equation) Let η be a Poisson process
on X with s-finite intensity measure λ. Then, for every m ∈ N and for every
f ∈ R+(X^m × N),

E[∫ f(x1, . . . , xm, η) η^(m)(d(x1, . . . , xm))]
  = ∫ E[f(x1, . . . , xm, η + δ_{x1} + · · · + δ_{xm})] λ^m(d(x1, . . . , xm)).    (4.11)
Proof By Proposition 4.3, the map μ → μ^(m) is measurable, so that (4.11)
involves only the distribution of η. By Corollary 3.7 we can hence assume
that η is proper and given by (2.4). Let us first assume that λ(X) < ∞.
Then λ = γ Q for some γ ≥ 0 and some probability measure Q on X. By
Proposition 3.5, we can then assume that η is a mixed binomial process as
in Definition 3.4, with κ having the Po(γ) distribution. Let f ∈ R+(X^m × N).
Then we obtain from (4.10) and (2.2) that the left-hand side of (4.11) equals
e^{−γ} ∑_{k=m}^∞ (γ^k/k!) E[∑^{≠}_{i1,...,im ∈ {1,...,k}} f(X_{i1}, . . . , X_{im}, δ_{X1} + · · · + δ_{Xk})]
  = e^{−γ} ∑_{k=m}^∞ (γ^k/k!) ∑^{≠}_{i1,...,im ∈ {1,...,k}} E[f(X_{i1}, . . . , X_{im}, δ_{X1} + · · · + δ_{Xk})],    (4.12)

where we have used first independence of κ and (Xn ) and then the fact that
we can perform integration and summation in any order we want (since
f ≥ 0). Let us denote by y = (y1, . . . , ym) a generic element of X^m. Since the Xi are independent with distribution Q, the expression (4.12) equals

e^{−γ} ∑_{k=m}^∞ (γ^k (k)_m/k!) ∫ E[f(y, ∑_{i=1}^{k−m} δ_{Xi} + ∑_{j=1}^m δ_{yj})] Q^m(dy)
  = e^{−γ} γ^m ∑_{k=m}^∞ (γ^{k−m}/(k − m)!) ∫ E[f(y, ∑_{i=1}^{k−m} δ_{Xi} + ∑_{j=1}^m δ_{yj})] Q^m(dy)
  = ∫ E[f(y1, . . . , ym, η + δ_{y1} + · · · + δ_{ym})] λ^m(d(y1, . . . , ym)),
where we have again used the mixed binomial representation. This proves
(4.11) for finite λ.
Now suppose λ(X) = ∞. As in the proof of Theorem 3.6 we can then
assume that η = ∑_i ηi, where ηi are independent proper Poisson processes
with intensity measures λi each having finite total measure. By the grouping property of independence, the point processes

ξi := ∑_{j≤i} ηj,  χi := ∑_{j≥i+1} ηj
are independent for each i ∈ N. By (4.10) we have ξi^(m) ↑ η^(m) as i → ∞.
Hence we can apply monotone convergence (Theorem A.12) to see that the
left-hand side of (4.11) is given by

lim_{i→∞} E[∫ f(x1, . . . , xm, ξi + χi) ξi^(m)(d(x1, . . . , xm))]
  = lim_{i→∞} E[∫ fi(x1, . . . , xm, ξi) ξi^(m)(d(x1, . . . , xm))],    (4.13)

where fi(x1, . . . , xm, μ) := E[f(x1, . . . , xm, μ + χi)], (x1, . . . , xm, μ) ∈ X^m × N.
Setting λ̄i := ∑_{j=1}^i λj, we can now apply the previous result to obtain from Fubini's theorem (Theorem A.13) that the expression (4.13) equals

lim_{i→∞} ∫ E[fi(x1, . . . , xm, ξi + δ_{x1} + · · · + δ_{xm})] (λ̄i)^m(d(x1, . . . , xm))
  = lim_{i→∞} ∫ E[f(x1, . . . , xm, η + δ_{x1} + · · · + δ_{xm})] (λ̄i)^m(d(x1, . . . , xm)).
By (A.7) this is the right-hand side of (4.11). □
Next we formulate another useful version of the multivariate Mecke
equation. For μ ∈ N and x ∈ X we define the measure μ \ δ x ∈ N by
μ \ δ_x := μ − δ_x if μ ≥ δ_x, and μ \ δ_x := μ otherwise.    (4.14)
For x1 , . . . , xm ∈ X, the measure μ \ δ x1 \ · · · \ δ xm ∈ N is defined inductively.
Theorem 4.5 Let η be a proper Poisson process on X with s-finite intensity measure λ and let m ∈ N. Then, for any f ∈ R+(X^m × N),

E[∫ f(x1, . . . , xm, η \ δ_{x1} \ · · · \ δ_{xm}) η^(m)(d(x1, . . . , xm))]
  = ∫ E[f(x1, . . . , xm, η)] λ^m(d(x1, . . . , xm)).    (4.15)
Proof If X is a subspace of a complete separable metric space as in Proposition 6.2, then it is easy to show that (x1 , . . . , xm , μ) → μ \ δ x1 \ · · · \ δ xm
is a measurable mapping from X^m × N_l(X) to N_l(X). In that case, and
if λ is locally finite, (4.15) follows upon applying (4.11) to the function
(x1 , . . . , xm , μ) → f (x1 , . . . , xm , μ \ δ x1 \ · · · \ δ xm ). In the general case we use
that η is proper. Therefore the mapping (ω, x1 , . . . , xm ) → η(ω)\δ x1 \· · ·\δ xm
is measurable, which is enough to make (4.15) a meaningful statement. The
proof can proceed in exactly the same way as the proof of Theorem 4.4. □

4.3 Janossy Measures
The restriction νB of a measure ν on X to a set B ∈ X is a measure on X
defined by
ν_B(B′) := ν(B ∩ B′),  B′ ∈ X.    (4.16)

If η is a point process on X, then so is its restriction ηB . For B ∈ X, m ∈ N
and a measure ν on X we write ν^m_B := (ν_B)^m. For a point process η on X we write η^(m)_B := (η_B)^(m).
Factorial measures can be used to describe the restriction of point processes as follows.
Definition 4.6 Let η be a point process on X, let B ∈ X and m ∈ N.
The Janossy measure of order m of η restricted to B is the measure on X^m defined by

J_{η,B,m} := (1/m!) E[1{η(B) = m} η^(m)_B(·)].    (4.17)

The number Jη,B,0 := P(η(B) = 0) is called the Janossy measure of order 0.
Note that the Janossy measures J_{η,B,m} are symmetric (see (A.17)) and

J_{η,B,m}(X^m) = P(η(B) = m),  m ∈ N.    (4.18)

If P(η(B) < ∞) = 1, then the Janossy measures determine the distribution
of the restriction ηB of η to B:
Theorem 4.7 Let η and η′ be point processes on X. Let B ∈ X and assume that J_{η,B,m} = J_{η′,B,m} for each m ∈ N0. Then

P(η(B) < ∞, η_B ∈ ·) = P(η′(B) < ∞, η′_B ∈ ·).
Proof For notational convenience we assume that B = X. Let m ∈ N and
suppose that μ ∈ N satisfies μ(X) = m. We assert for each A ∈ N that

1{μ ∈ A} = (1/m!) ∫ 1{δ_{x1} + · · · + δ_{xm} ∈ A} μ^(m)(d(x1, . . . , xm)).    (4.19)
Since both sides of (4.19) are finite measures in A, it suffices to prove this
identity for each set A of the form
A = {ν ∈ N : ν(B1 ) = i1 , . . . , ν(Bn ) = in },
where n ∈ N, B1, . . . , Bn ∈ X and i1, . . . , in ∈ N0. Given such a set, let μ′ be defined as in Lemma A.15. Then μ ∈ A if and only if μ′ ∈ A and the right-hand side of (4.19) does not change upon replacing μ by μ′. Hence it
suffices to check (4.19) for finite sums of Dirac measures. This is obvious
from (4.4).
It follows from (4.17) that for all m ∈ N and f ∈ R+(X^m) we have

∫ f dJ_{η,X,m} = (1/m!) E[1{η(X) = m} ∫ f dη^(m)].    (4.20)
From (4.19) and (4.20) we obtain for each A ∈ N that
P(η(X) < ∞, η ∈ A)
  = 1{0 ∈ A} J_{η,X,0} + ∑_{m=1}^∞ ∫ 1{δ_{x1} + · · · + δ_{xm} ∈ A} J_{η,X,m}(d(x1, . . . , xm)),

and hence the assertion. □

Example 4.8 Let η be a Poisson process on X with s-finite intensity measure λ. Let m ∈ N and B ∈ X. By the multivariate Mecke equation (Theorem 4.4) we have for each C ∈ X^m that

J_{η,B,m}(C) = (1/m!) E[1{η(B) = m} η^(m)(B^m ∩ C)]
  = (1/m!) E[∫_C 1{(η + δ_{x1} + · · · + δ_{xm})(B) = m} λ^m_B(d(x1, . . . , xm))].

For x1 , . . . , xm ∈ B we have (η + δ x1 + · · · + δ xm )(B) = m if and only if
η(B) = 0. Therefore we obtain

J_{η,B,m} = (e^{−λ(B)}/m!) λ^m_B,  m ∈ N.    (4.21)
4.4 Factorial Moment Measures
Definition 4.9 For m ∈ N the m-th factorial moment measure of a point
process η is the measure α_m on X^m defined by

α_m(C) := E[η^(m)(C)],  C ∈ X^m.    (4.22)

If the point process η is proper, i.e. given by (2.4), then

α_m(C) = E[∑^{≠}_{i1,...,im ≤ κ} 1{(X_{i1}, . . . , X_{im}) ∈ C}],

and hence for f ∈ R+(X^m) we have that

∫_{X^m} f(x1, . . . , xm) α_m(d(x1, . . . , xm)) = E[∑^{≠}_{i1,...,im ≤ κ} f(X_{i1}, . . . , X_{im})].    (4.23)

The first factorial moment measure of a point process η is just the intensity
measure of Definition 2.5, while the second describes the second order
properties of η. For instance, it follows from (4.8) (and Exercise 4.4 if η is
not proper) that

α_2(B1 × B2) = E[η(B1)η(B2)] − E[η(B1 ∩ B2)],    (4.24)

provided that E[η(B1 ∩ B2 )] < ∞.
Theorem 4.4 has the following immediate consequence:
Corollary 4.10 Given m ∈ N the m-th factorial moment measure of a
Poisson process with s-finite intensity measure λ is λ^m.

Proof Apply (4.11) to the function f(x1, . . . , xm, η) = 1{(x1, . . . , xm) ∈ C} for C ∈ X^m. □
Let η be a point process on X with intensity measure λ and let f, g ∈
L1 (λ) ∩ L2 (λ). By the Cauchy–Schwarz inequality ((A.2) for p = q = 2) we
have f g ∈ L1 (λ) so that Campbell’s formula (Proposition 2.7) shows that
η(| f |) < ∞ and η(| f g|) < ∞ hold almost surely. Therefore it follows from
the case m = 1 of (4.9) that

∫ f(x)g(y) η^(2)(d(x, y)) = η(f)η(g) − η(fg),  P-a.s.

Reordering terms and taking expectations gives

E[η(f)η(g)] = λ(fg) + ∫ f(x)g(y) α_2(d(x, y)),    (4.25)

provided that ∫ |f(x)g(y)| α_2(d(x, y)) < ∞ or f, g ≥ 0. If η is a Poisson
process with s-finite intensity measure λ, then (4.25) and Corollary 4.10
imply the following useful generalisation of Exercise 3.11:

E[η(f)η(g)] = λ(fg) + λ(f)λ(g),  f, g ∈ L1(λ) ∩ L2(λ).    (4.26)

Under certain assumptions the factorial moment measures of a point process determine its distribution. To derive this result we need the following
lemma. We use the conventions e−∞ := 0 and log 0 := −∞.
Lemma 4.11 Let η be a point process on X. Let B ∈ X and assume that
there exists c > 1 such that the factorial moment measures αn of η satisfy
αn (Bn ) ≤ n!cn ,

n ≥ 1.

(4.27)

Let u ∈ R+ (X) and a < c−1 be such that u(x) < a for x ∈ B and u(x) = 0 for
x  B. Then



E exp
log(1 − u(x)) η(dx)
=1+


∞

(−1)n
n!

n=1

u(x1 ) · · · u(xn ) αn (d(x1 , . . . , xn )).

(4.28)

Proof Since u vanishes outside B, we have

P := exp(∫ log(1 − u(x)) η(dx)) = exp(∫ log(1 − u(x)) η_B(dx)).

Hence we can assume that η(X \ B) = 0. Since α1 (B) = E[η(B)] < ∞, we
can also assume that η(B) < ∞. But then we obtain from Exercise 4.6 that
P = ∑_{n=0}^∞ (−1)^n P_n,

where P_0 := 1 and

P_n := (1/n!) ∫ u(x1) · · · u(xn) η^(n)(d(x1, . . . , xn)),
and where we note that η^(n) = 0 if n > η(X); see (4.7). Exercise 4.9 asks
the reader to prove that

∑_{n=0}^{2m−1} (−1)^n P_n ≤ P ≤ ∑_{n=0}^{2m} (−1)^n P_n,  m ≥ 1.    (4.29)

These inequalities show that

|P − ∑_{n=0}^k (−1)^n P_n| ≤ P_k,  k ≥ 1.
It follows that

|E[P] − E[∑_{n=0}^k (−1)^n P_n]| ≤ E[P_k] = (1/k!) ∫ u(x1) · · · u(xk) α_k(d(x1, . . . , xk)),

where we have used the definition of the factorial moment measures. The last term can be bounded by

(a^k/k!) α_k(B^k) ≤ a^k c^k,

which tends to zero as k → ∞. This finishes the proof. □
Proposition 4.12 Let η and η′ be point processes on X with the same factorial moment measures α_n, n ≥ 1. Assume that there is a sequence B_k ∈ X, k ∈ N, with B_k ↑ X and numbers c_k > 0, k ∈ N, such that

α_n(B_k^n) ≤ n! c_k^n,  k, n ∈ N.    (4.30)

Then η =ᵈ η′.
Proof By Proposition 2.10 and monotone convergence it is enough to prove that Lη(v) = Lη′(v) for each bounded v ∈ R+(X) such that there exists a set B ∈ {B_k : k ∈ N} with v(x) = 0 for all x ∉ B. This puts us into the setting of Lemma 4.11. Let v ∈ R+(X) have the upper bound a > 0. For each t ∈ [0, −(log(1 − c⁻¹))/a) we can apply Lemma 4.11 with u := 1 − e^{−tv}. This gives us Lη(tv) = Lη′(tv). Since t → Lη(tv) is analytic on (0, ∞), we obtain Lη(tv) = Lη′(tv) for all t ≥ 0 and, in particular, Lη(v) = Lη′(v). □
4.5 Exercises
Exercise 4.1 Let η be a Poisson process on X with intensity measure λ
and let A ∈ N have P(η ∈ A) = 0. Use the Mecke equation to show that
P(η + δ x ∈ A) = 0 for λ-a.e. x.
Exercise 4.2 Let μ ∈ N be given by (4.3) and let m ∈ N. Show that

μ^(m)(C) = ∫ · · · ∫ 1_C(x1, . . . , xm) (μ − ∑_{j=1}^{m−1} δ_{xj})(dx_m) (μ − ∑_{j=1}^{m−2} δ_{xj})(dx_{m−1}) · · · (μ − δ_{x1})(dx2) μ(dx1),  C ∈ X^m.    (4.31)

This formula involves integrals with respect to signed measures of the form
μ − ν, where μ, ν ∈ N and ν is finite. These integrals are defined as a
difference of integrals in the natural way.
Exercise 4.3 Let μ ∈ N and x ∈ X. Show for all m ∈ N that

∫ [1{(x, x1, . . . , xm) ∈ ·} + · · · + 1{(x1, . . . , xm, x) ∈ ·}] μ^(m)(d(x1, . . . , xm)) + μ^(m+1) = (μ + δ_x)^(m+1).
(Hint: Use Proposition A.18 to reduce to the case μ(X) < ∞ and then
Lemma A.15 to reduce further to the case (4.3) with k ∈ N.)
Exercise 4.4 Let μ ∈ N. Use the recursion (4.9) to show that (4.6), (4.7)
and (4.8) hold.
Exercise 4.5 Let μ ∈ N be given by μ := ∑_{j=1}^k δ_{xj} for some k ∈ N0 and some x1, . . . , xk ∈ X. Let u : X → R be measurable. Show that

∏_{j=1}^k (1 − u(xj)) = 1 + ∑_{n=1}^k ((−1)^n/n!) ∫ u(x1) · · · u(xn) μ^(n)(d(x1, . . . , xn)).
Exercise 4.6 Let μ ∈ N such that μ(X) < ∞ and let u ∈ R+ (X) satisfy
u < 1. Show that

exp(∫ log(1 − u(x)) μ(dx)) = 1 + ∑_{n=1}^∞ ((−1)^n/n!) ∫ ∏_{j=1}^n u(xj) μ^(n)(d(x1, . . . , xn)).
(Hint: If u takes only a finite number of values, then the result follows from
Lemma A.15 and Exercise 4.5.)
Exercise 4.7 (Converse to Theorem 4.4) Let m ∈ N with m > 1. Prove
or disprove that for any σ-finite measure space (X, X, λ), if η is a point
process on X satisfying (4.11) for all f ∈ R+ (Xm × N), then η is a Poisson
process with intensity measure λ. (For m = 1, this is true by Theorem 4.1.)
Exercise 4.8 Give another (inductive) proof of the multivariate Mecke
identity (4.11) using the univariate version (4.2) and the recursion (4.9).
Exercise 4.9 Prove the inequalities (4.29). (Hint: Use induction.)
Exercise 4.10 Let η be a Poisson process on X with intensity measure λ
and let B ∈ X with 0 < λ(B) < ∞. Let U1 , . . . , Un be independent random
elements of X with distribution λ(B)⁻¹ λ(B ∩ ·) and assume that (U1, . . . , Un)
and η are independent. Show that the distribution of η + δU1 + · · · + δUn is
absolutely continuous with respect to P(η ∈ ·) and that μ → λ(B)⁻ⁿ μ^(n)(B^n)
is a version of the density.
5
Mappings, Markings and Thinnings

It was shown in Chapter 3 that an independent superposition of Poisson
processes is again Poisson. The properties of a Poisson process are also
preserved under other operations. A mapping from the state space to another space induces a Poisson process on the new state space. A more intriguing persistence property is the Poisson nature of position-dependent
markings and thinnings of a Poisson process.
5.1 Mappings and Restrictions
Consider two measurable spaces (X, X) and (Y, Y) along with a measurable mapping T : X → Y. For any measure μ on (X, X) we define the image
of μ under T (also known as the push-forward of μ), to be the measure T (μ)
defined by T(μ) = μ ◦ T⁻¹, i.e.

T(μ)(C) := μ(T⁻¹C),  C ∈ Y.    (5.1)

In particular, if η is a point process on X, then for any ω ∈ Ω, T (η(ω)) is a
measure on Y given by

T(η(ω))(C) = η(ω, T⁻¹(C)),  C ∈ Y.    (5.2)

If η is a proper point process, i.e. one given by η = ∑_{n=1}^κ δ_{Xn} as in (2.4), the definition of T(η) implies that

T(η) = ∑_{n=1}^κ δ_{T(Xn)}.    (5.3)

Theorem 5.1 (Mapping theorem) Let η be a point process on X with
intensity measure λ and let T : X → Y be measurable. Then T (η) is a point
process with intensity measure T (λ). If η is a Poisson process, then T (η) is
a Poisson process too.
Proof We first note that T(μ) ∈ N for any μ ∈ N. Indeed, if μ = ∑_{j=1}^∞ μj, then T(μ) = ∑_{j=1}^∞ T(μj). Moreover, if the μj are N0-valued, so are the T(μj).
For any C ∈ Y, T (η)(C) is a random variable and by the definition of the
intensity measure its expectation is

E[T(η)(C)] = E[η(T⁻¹C)] = λ(T⁻¹C) = T(λ)(C).    (5.4)

If η is a Poisson process, then it can be checked directly that T (η) is completely independent (property (ii) of Definition 3.1), and that T (η)(C) has
a Poisson distribution with parameter T (λ)(C) (property (i) of Definition
3.1). □
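A concrete instance of the mapping theorem (ours, for illustration): take η to be a uniform Poisson process on [0, 1] with intensity 50 and T(x) = x², so that T(λ)([0, 1/4]) = λ([0, 1/2]) = 25.

```python
import numpy as np

# T(eta)([0, 1/4]) should be Po(25), since T^{-1}([0, 1/4]) = [0, 1/2].
rng = np.random.default_rng(7)
counts = []
for _ in range(20_000):
    pts = rng.uniform(size=rng.poisson(50.0))
    counts.append(np.sum(pts ** 2 <= 0.25))   # points of T(eta), cf. (5.3)
print(np.mean(counts), np.var(counts))        # both approx 25 (Poisson)
```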
If η is a Poisson process on X then we may discard all of its points
outside a set B ∈ X to obtain another Poisson process. Recall from (4.16)
the definition of the restriction νB of a measure ν on X to a set B ∈ X.
Theorem 5.2 (Restriction theorem) Let η be a Poisson process on X with
s-finite intensity measure λ and let C1 , C2 , . . . ∈ X be pairwise disjoint.
Then ηC1 , ηC2 , . . . are independent Poisson processes with intensity measures λC1 , λC2 , . . . , respectively.
Proof As in the proof of Proposition 3.5, it is no restriction of generality
to assume that the union of the sets Ci is all of X. (If not, add the complement of this union to the sequence (Ci ).) First note that, for each i ∈ N,
ηCi has intensity measure λCi and satisfies the two defining properties of a
Poisson process. By the existence theorem (Theorem 3.6) we can find a sequence ηi , i ∈ N, of independent Poisson processes on a suitable (product)
probability space, with ηi having intensity measure λCi for each i.
By the superposition theorem (Theorem 3.3), the point process η′ := ∑_{i=1}^∞ ηi is a Poisson process with intensity measure λ. Then η =ᵈ η′ by
Proposition 3.2. Hence for any k and any f1 , . . . , fk ∈ R+ (N) we have
E[∏_{i=1}^k fi(η_{Ci})] = E[∏_{i=1}^k fi(η′_{Ci})] = E[∏_{i=1}^k fi(ηi)] = ∏_{i=1}^k E[fi(ηi)].

Taking into account that η_{Ci} =ᵈ ηi for all i ∈ N (Proposition 3.2), we get the result. □

5.2 The Marking Theorem
Suppose that η is a proper point process, i.e. one that can be represented as
in (2.4). Suppose that one wishes to give each of the points Xn a random
mark Yn with values in some measurable space (Y, Y), called the mark
space. Given η, these marks are assumed to be independent, while their
conditional distribution is allowed to depend on the value of Xn but not
on any other information contained in η. This marking procedure yields a
point process ξ on the product space X × Y. Theorem 5.6 will show the
remarkable fact that if η is a Poisson process then so is ξ.
To make the above marking idea precise, let K be a probability kernel
from X to Y, that is a mapping K : X × Y → [0, 1] such that K(x, ·) is
a probability measure for each x ∈ X and K(·, C) is measurable for each
C ∈ Y.
Definition 5.3 Let η = ∑_{n=1}^κ δ_{Xn} be a proper point process on X. Let K be
a probability kernel from X to Y. Let Y1 , Y2 , . . . be random elements in Y
and assume that the conditional distribution of (Yn )n≤m given κ = m ∈ N and
(Xn )n≤m is that of independent random variables with distributions K(Xn , ·),
n ≤ m. Then the point process

ξ := ∑_{n=1}^κ δ_{(Xn,Yn)}    (5.5)
is called a K-marking of η. If there is a probability measure Q on Y such
that K(x, ·) = Q for all x ∈ X, then ξ is called an independent Q-marking
of η.
For the rest of this section we fix a probability kernel K from X to Y.
If the random variables Yn , n ∈ N, in Definition 5.3 exist, then we say that
the underlying probability space (Ω, F , P) supports a K-marking of η. We
now explain how (Ω, F , P) can be modified so as to support a marking. Let
Ω̃ := Ω × Y∞ be equipped with the product σ-field. Define a probability
kernel K̃ from Ω to Y∞ by taking the infinite product
K̃(ω, ·) := ⊗_{n=1}^∞ K(Xn(ω), ·),  ω ∈ Ω.

We denote a generic element of Y∞ by y = (yn )n≥1 . Then

P̃ := ∫∫ 1{(ω, y) ∈ ·} K̃(ω, dy) P(dω)    (5.6)
is a probability measure on Ω̃ that can be used to describe a K-marking of
η. Indeed, for ω̃ = (ω, y) ∈ Ω̃ we can define η̃(ω̃) := η(ω) and, for n ∈ N,
(X̃n (ω̃), Yn (ω̃)) := (Xn (ω), yn ). Then the distribution of (η̃(X), (X̃n )) under P̃
coincides with that of (η(X), (Xn )) under P. Moreover, it is easy to check
that under P̃ the conditional distribution of (Yn )n≤m given η̃(X) = m ∈ N and

(X̃n )n≤m is that of independent random variables with distributions K(X̃n , ·),
n ≤ m. This construction is known as an extension of a given probability
space so as to support further random elements with a given conditional
distribution. In particular, it is no restriction of generality to assume that
our fixed probability space supports a K-marking of η.
The next proposition shows among other things that the distribution of a
K-marking of η is uniquely determined by K and the distribution of η.
Proposition 5.4 Let ξ be a K-marking of a proper point process η on X
as in Definition 5.3. Then the Laplace functional of ξ is given by

Lξ(u) = Lη(u*),  u ∈ R+(X × Y),    (5.7)

where

u*(x) := − log ∫ e^{−u(x,y)} K(x, dy),  x ∈ X.    (5.8)

Proof Recall that N̄0 := N0 ∪ {∞}. For u ∈ R+(X × Y) we have that

Lξ(u) = ∑_{m∈N̄0} E[1{κ = m} exp(−∑_{k=1}^m u(Xk, Yk))]
  = ∑_{m∈N̄0} E[1{κ = m} ∫ exp(−∑_{k=1}^m u(Xk, yk)) ⊗_{k=1}^m K(Xk, dyk)],

where in the case m = 0 empty sums are set to 0 while empty products are set to 1. Therefore

Lξ(u) = ∑_{m∈N̄0} E[1{κ = m} ∏_{k=1}^m ∫ exp[−u(Xk, yk)] K(Xk, dyk)].

Using the function u* defined by (5.8) this means that

Lξ(u) = ∑_{m∈N̄0} E[1{κ = m} ∏_{k=1}^m exp[−u*(Xk)]]
  = ∑_{m∈N̄0} E[1{κ = m} exp(−∑_{k=1}^m u*(Xk))],

which is the right-hand side of the asserted identity (5.7). □
The next result says that the intensity measure of a K-marking of a point
process with intensity measure λ is given by λ ⊗ K, where

(λ ⊗ K)(C) := ∫∫ 1_C(x, y) K(x, dy) λ(dx),  C ∈ X ⊗ Y.    (5.9)
In the case of an independent Q-marking this is the product measure λ ⊗ Q.
If λ and K are s-finite, then so is λ ⊗ K.
Proposition 5.5 Let η be a proper point process on X with intensity measure λ and let ξ be a K-marking of η. Then ξ is a point process on X × Y
with intensity measure λ ⊗ K.
Proof Let C ∈ X ⊗ Y. Similarly to the proof of Proposition 5.4 we have that
E[ξ(C)] = ∑_{m∈N̄0} E[1{κ = m} ∑_{k=1}^m 1{(Xk, Yk) ∈ C}]
  = ∑_{m∈N̄0} E[1{κ = m} ∑_{k=1}^m ∫ 1{(Xk, yk) ∈ C} K(Xk, dyk)].

Using Campbell’s
formula (Proposition 2.7) with u ∈ R+ (X) defined by

u(x) := 1{(x, y) ∈ C} K(x, dy), x ∈ X, we obtain the result.

Now we formulate the previously announced behaviour of Poisson processes under marking.
Theorem 5.6 (Marking theorem) Let ξ be a K-marking of a proper Poisson process η with s-finite intensity measure λ. Then ξ is a Poisson process
with intensity measure λ ⊗ K.
Proof Let u ∈ R+(X × Y). By Proposition 5.4 and Theorem 3.9,

Lξ(u) = exp[−∫(1 − e^{−u*(x)}) λ(dx)]
  = exp[−∫∫(1 − e^{−u(x,y)}) K(x, dy) λ(dx)].

Another application of Theorem 3.9 shows that ξ is a Poisson process. □
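An independent Q-marking is straightforward to simulate. The sketch below (ours) marks a uniform Poisson process on [0, 1] with independent standard normal marks, i.e. K(x, ·) = Q = N(0, 1); the intensity 30 is an illustrative choice.

```python
import numpy as np

# By Theorem 5.6, xi is a Poisson process on [0,1] x R with intensity
# measure 30 * (Lebesgue tensor Q); e.g. xi([0,1] x [0, inf)) ~ Po(15).
rng = np.random.default_rng(8)
xs = rng.uniform(size=rng.poisson(30.0))   # the points X_n
ys = rng.normal(size=xs.shape)             # the marks Y_n ~ Q
xi = np.column_stack([xs, ys])             # xi = sum_n delta_{(X_n, Y_n)}
print(len(xi), np.sum(xi[:, 1] >= 0.0))
```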

Under some technical assumptions we shall see in Proposition 6.16 that
any Poisson process on a product space is a K-marking for some kernel K,
determined by the intensity measure.

5.3 Thinnings
A thinning keeps the points of a point process η with a probability that may
depend on the location and removes them otherwise. Given η, the thinning
decisions are independent for different points. The formal definition can be
based on a special K-marking:

K p (x, ·) := (1 − p(x))δ0 + p(x)δ1 ,

x ∈ X.

If ξ is a K p -marking of a proper point process η, then ξ(· × {1}) is called a
p-thinning of η.

1
0

0

1

We shall use this terminology also in the case where p(x) ≡ p does not
depend on x ∈ X.

[Figure 5.1 Illustration of a marking and a thinning, both based on the same set of marked points. The points on the horizontal axis represent the original point process in the first diagram, and the thinned point process in the second diagram.]

More generally, let pi , i ∈ N, be a sequence of measurable functions
from X to [0, 1] such that

∑_{i=1}^∞ pi(x) = 1,  x ∈ X.    (5.10)

Define a probability kernel K from X to N by
K(x, {i}) := pi(x),  x ∈ X, i ∈ N.    (5.11)

If ξ is a K-marking of a point process η, then ηi := ξ(· × {i}) is a pi thinning of η for every i ∈ N. By Proposition 5.5, ηi has intensity measure
pi (x) λ(dx), where λ is the intensity measure of η. The following generalisation of Proposition 1.3 is consistent with the superposition theorem
(Theorem 3.3).
Theorem 5.8 Let ξ be a K-marking of a proper Poisson process η, where
K is given as in (5.11). Then ηi := ξ(· × {i}), i ∈ N, are independent Poisson
processes.

Proof By Theorem 5.6, ξ is a Poisson process. Hence we can apply Theorem 5.2 with Ci := X × {i} to obtain the result. □
If η p is a p-thinning of a proper point process η then (according to Definitions 2.4 and 5.7) there is an A ∈ F such that P(A) = 1 and η p (ω) ≤ η(ω)
for each ω ∈ A. We can then define a proper point process η − η p by setting
(η − η p )(ω) := η(ω) − η p (ω) for ω ∈ A and (η − η p )(ω) := 0, otherwise.
Corollary 5.9 (Thinning theorem) Let p : X → [0, 1] be measurable and
let η p be a p-thinning of a proper Poisson process η. Then η p and η − η p
are independent Poisson processes.
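The thinning theorem is also easy to see empirically. The sketch below (ours, with illustrative choices) thins a uniform Poisson process on [0, 1] of intensity 100 with the location-dependent retention probability p(x) = x.

```python
import numpy as np

# eta_p and eta - eta_p are independent Poisson processes with intensity
# measures 100*x dx and 100*(1-x) dx (Corollary 5.9); both integrate to 50.
rng = np.random.default_rng(9)
xs = rng.uniform(size=rng.poisson(100.0))
keep = rng.uniform(size=xs.shape) < xs       # retain x with probability x
eta_p, rest = xs[keep], xs[~keep]
print(len(eta_p), len(rest))                 # approx 50 each on average
```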

5.4 Exercises
Exercise 5.1 (Displacement theorem) Let λ be an s-finite measure on
the Euclidean space R^d, let Q be a probability measure on R^d and let the convolution λ ∗ Q be the measure on R^d defined by

(λ ∗ Q)(B) := ∫∫ 1_B(x + y) λ(dx) Q(dy),  B ∈ B(R^d).

Show that λ ∗ Q is s-finite. Let η = ∑_{n=1}^κ δ_{Xn} be a Poisson process with intensity measure λ and let (Yn) be a sequence of independent random vectors with distribution Q that is independent of η. Show that η′ := ∑_{n=1}^κ δ_{Xn+Yn} is
a Poisson process with intensity measure λ ∗ Q.
Exercise 5.2 Let η1 and η2 be independent Poisson processes with intensity measures λ1 and λ2 , respectively. Let p be a Radon–Nikodým derivative of λ1 with respect to λ := λ1 +λ2 . Show that η1 has the same distribution
as a p-thinning of η1 + η2 .
Exercise 5.3 Let ξ1 , . . . , ξn be identically distributed point processes and
let ξ(n) be an n⁻¹-thinning of ξ := ξ1 + · · · + ξn. Show that ξ(n) has the same
intensity measure as ξ1 . Give examples where ξ1 , . . . , ξn are independent
and where ξ(n) and ξ1 have (resp. do not have) the same distribution.
Exercise 5.4 Let p : X → [0, 1] be measurable and let η p be a p-thinning
of a proper point process η. Using Proposition 5.4 or otherwise, show that

L_{η_p}(u) = E[exp(∫ log(1 − p(x) + p(x)e^{−u(x)}) η(dx))],  u ∈ R+(X).
Exercise 5.5 Let η be a proper Poisson process on X with σ-finite intensity measure λ. Let λ′ be a σ-finite measure on X and let ρ := λ + λ′. Let h := dλ/dρ (resp. h′ := dλ′/dρ) be the Radon–Nikodým derivative of λ (resp. λ′) with respect to ρ; see Theorem A.10. Let B := {h > h′} and define p : X → [0, 1] by p(x) := h′(x)/h(x) for x ∈ B and by p(x) := 1, otherwise. Let η′ be a p-thinning of η and let η′′ be a Poisson process with intensity measure 1_{X\B}(x)(h′(x) − h(x)) ρ(dx), independent of η′. Show that η′ + η′′ is a Poisson process with intensity measure λ′.
Exercise 5.6 (Poisson cluster process) Let K be a probability kernel from
X to N(X). Let η be a proper Poisson process on X with intensity measure
λ and let A ∈ F such that P(A) = 1 and such that (2.4) holds on A. Let ξ be
a K-marking of η and define a point process χ on X by setting

χ(ω, B) := ∫ μ(B) ξ(ω, d(x, μ)),  B ∈ X,    (5.12)

for ω ∈ A and χ(ω, ·) := 0, otherwise. Show that χ has intensity measure

λ′(B) = ∫∫ μ(B) K(x, dμ) λ(dx),  B ∈ X.

Show also that the Laplace functional of χ is given by

L_χ(v) = exp[−∫(1 − e^{−μ(v)}) λ̃(dμ)],  v ∈ R+(X),    (5.13)

where λ̃ := ∫ K(x, ·) λ(dx).

Exercise 5.7 Let χ be a Poisson cluster process as in Exercise 5.6 and let
B ∈ X. Combine Exercise 2.7 and (5.13) to show that

P(χ(B) = 0) = exp[−∫ 1{μ(B) > 0} λ̃(dμ)].
Exercise 5.8 Let χ be as in Exercise 5.6 and let B ∈ X. Show that
P( χ(B) < ∞) = 1 if and only if λ̃({μ ∈ N : μ(B) = ∞}) = 0 and

λ̃({μ ∈ N : μ(B) > 0}) < ∞. (Hint: Use P(χ(B) < ∞) = lim_{t↓0} E[e^{−tχ(B)}].)
Exercise 5.9 Let p ∈ [0, 1) and suppose that η p is a p-thinning of a proper
point process η. Let f ∈ R+ (X × N) and show that




E[∫ f(x, η_p) η_p(dx)] = (p/(1 − p)) E[∫ f(x, η_p + δ_x) (η − η_p)(dx)].

6
Characterisations of the Poisson Process
A point process without multiplicities is said to be simple. For locally finite
simple point processes on a metric space without fixed atoms the two defining properties of a Poisson process are equivalent. In fact, Rényi’s theorem
says that in this case even the empty space probabilities suffice to imply
that the point process is Poisson. On the other hand, a weak (pairwise) version of the complete independence property leads to the same conclusion.
A related criterion, based on the factorial moment measures, is also given.

6.1 Borel Spaces
In this chapter we assume (X, X) to be a Borel space in the sense of the
following definition. In the first section we shall show that a large class of
point processes is proper.
Definition 6.1 A Borel space is a measurable space (Y, Y) such that there
is a Borel-measurable bijection ϕ from Y to a Borel subset of the unit
interval [0, 1] with measurable inverse.
A special case arises when X is a Borel subset of a complete separable
metric space (CSMS) and X is the σ-field on X generated by the open sets
in the inherited metric. In this case, (X, X) is called a Borel subspace of
the CSMS; see Section A.2. By Theorem A.19, any Borel subspace X of a
CSMS is a Borel space. In particular, X is then a metric space in its own
right.
Recall that N_{<∞}(X) denotes the set of all finite integer-valued measures on X.
Proposition 6.2 There exist