\chapter{Linear dynamical systems and stability}
In this chapter, we study explicit autonomous first-order ordinary differential equations that are linear, namely equations of the form
\[\dotxx=A\xx\]
where $A$ is an $n\times n$ matrix and $\xx$ is a differentiable function from a real interval to $\R^n$.
Our main goal is to investigate stability. To this end, we first give a brief description of the solution space and explicit forms of its elements; this first section will then support the second, which is devoted to stability.
\section{Solution space}
First of all, using the results of Cauchy and Picard and the particular smoothness of $\xx\mapsto A\xx$, we know that solutions exist locally and are unique for a given initial condition. Since $$\ddt\|\xx\|^2=2\ps{\xx}{\dotxx}=2\ps{\xx}{A\xx}\leq 2\|A\|\|\xx\|^2,$$
we have
$$\|\xx(t)\|^2\leq \|\xx(0)\|^2e^{2\|A\|t} $$
by Grönwall's lemma, so solutions cannot blow up in finite time and are defined on all of $\R$. We note that since the derivative of a solution is just the solution multiplied by a matrix, the derivative is differentiable too, and by induction $\xx$ is $C^\infty$ with $\xx^{(k)}=A^k\xx$.
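As a quick sanity check of the Grönwall bound above, the following sketch (in Python with NumPy and SciPy; the matrix and time horizon are arbitrary choices of ours, not taken from the text) integrates a system numerically and verifies $\|\xx(t)\|^2\leq\|\xx(0)\|^2e^{2\|A\|t}$ along the trajectory.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # arbitrary test matrix
x0 = np.array([1.0, -1.0])

# Integrate x' = Ax numerically on [0, 5].
sol = solve_ivp(lambda t, x: A @ x, (0.0, 5.0), x0, dense_output=True)

op_norm = np.linalg.norm(A, 2)             # operator norm ||A||
for t in np.linspace(0.0, 5.0, 11):
    x = sol.sol(t)
    bound = x0 @ x0 * np.exp(2 * op_norm * t)
    assert x @ x <= bound + 1e-9           # Groenwall bound holds
\end{verbatim}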
The equation is autonomous, so any solution can be reparametrised by a time translation:
if $\dotxx(t)=A\xx(t)$, then $\ddt(\xx(t+\tau))=A\xx(t+\tau)$. That is why we will generally initialise solutions at $t=0$, without loss of generality. Solutions are described by a \emph{flow} $\phi:\R^n\times \R\to\R^n$, where $t\mapsto\phi(\xx_0,t)$ is the solution starting at $\xx_0$ at $t=0$. However, existence theorems do not tell us much about the actual form of the solutions. We now construct them explicitly in this particular case.
First, we observe that the set of solutions is a vector subspace of $C^\infty$. Indeed, for all solutions $\xx$, $\yy$ and a scalar $\a$, \[(\a\xx+\yy)'=\a\dotxx + \dot{\yy}=\a A\xx + A\yy=A(\a\xx+\yy)\] and the identically zero function is trivially in the space. This motivates us to find a basis of this subspace and to understand how to construct it.
\begin{definition}
A collection of $k$ solutions $\xx_1,\dots,\xx_k$ is said to be \emph{linearly independent} or \emph{independent} if $\xx_1(t),\dots,\xx_k(t)$ are linearly independent for all $t$.
\end{definition}
\begin{lemme} \label{lem:indépendance}
For any scalar $\tau$, a collection $\xx_1,\dots,\xx_k$ of solutions is linearly independent if and only if $\xx_1(\tau),\dots,\xx_k(\tau)$ are linearly independent.
\end{lemme}
\begin{proof}
If the solutions are independent, we have the result by definition. Conversely, suppose the positions at time $\tau$ are not independent; then there are scalars $\a_1,\dots,\a_k$, not all zero, such that $\a_1\xx_1(\tau) + \dots + \a_k\xx_k(\tau)=0$. But $\a_1\xx_1 + \dots + \a_k\xx_k$ is a solution vanishing at time $\tau$, and by uniqueness of solutions it coincides with the trivial solution, so $\a_1\xx_1(t) + \dots + \a_k\xx_k(t)=0$ for all $t$.
\end{proof}
This lemma proves that the space of solutions has the same dimension as $\R^n$, \ie $n$.
In order to manipulate solutions, we collect them in a matrix $(\xx_1,\dots,\xx_k)$. In this form we see that we can extend the equation to matrix-valued unknowns:
\[\dot{X}=AX\]
and since
\[\dot{X}=(\dotxx_1,\dots,\dotxx_k)
\text{ and }
AX=(A\xx_1,\dots,A\xx_k),\]
a matrix $X$ is a solution if and only if its columns are vector solutions. We see that for a vector $v\in\R^k$, $Xv$ is a vector solution: $(Xv)' = \dot{X}v = AXv$. This solution can be written as $Xv = v_1\xx_1 + \dots + v_k\xx_k$ and is thus a linear combination of the column solutions of $X$. A matrix solution therefore allows us to easily write new solutions as linear combinations of a collection of solutions. As the dimension of the space of solutions is $n$, exactly $n$ linearly independent solutions suffice to construct all solutions as products with the matrix solution built from them.
\begin{definition}
If a matrix solution $M=(\xx_1,\dots,\xx_n)$ is square ($k=n$) and of full rank, then it is called a \emph{fundamental matrix solution} and $\{\xx_1,\dots,\xx_n\}$ a \emph{fundamental set of solutions}.
\end{definition}
With such an $M$, $M\R^n$ is the whole space of solutions. At time $t=0$, $M(0)v=v_1\xx_1(0) + \dots + v_n\xx_n(0)$, meaning that under the condition $\xx(0)=\xx_0$, the vector $v$ must be chosen as the vector of coordinates of $\xx_0$ in the basis formed by the columns of $M$.
\\ \\
We can now look at the forms the solutions can take. In one dimension, the problem is $\dotx=ax$, and if $a$ is nonzero, nontrivial solutions satisfy $a=(\log|x|)'$, which gives $\log|x(t)|=at+c$ and $x(t)=\pm e^{at+c}=x_0e^{at}$. One easily checks that $e^{at}$ is indeed a solution. The $n$-dimensional case is more complicated. We put the equation in its integral form and derive a pattern by induction (assuming $X(0)=I$ for simplicity):
\begin{IEEEeqnarray*}{rCl}
X(t)
&=& I + \int_0^t AX(s) \dd s
= I + \int_0^t A\bigg(I + \int_0^{s_1} AX(s_2) \dd s_2\bigg) \dd s_1
= I + tA +\int_0^t \int_0^{s_1} A^2X(s_2) \dd s_2 \dd s_1
\\ &=& I + tA + \frac12(tA)^2 + \int_0^t \int_0^{s_1} \int_0^{s_2} A^3X(s_3) \dd s_3 \dd s_2 \dd s_1.
\end{IEEEeqnarray*}
We see the Taylor expansion of $\exp$ appear, generalised to matrices, where we define $A^0=I$ for any matrix $A$. This motivates the following definition:
\begin{definition}
If it converges, we define the \emph{exponential} $e^A$ of a square matrix $A$, and the underlying function $\exp$, as the infinite sum
\[e^A = \sum_{n=0}^\infty \frac{1}{n!}A^n.\]
\end{definition}
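To make the definition concrete, here is a minimal numerical sketch (Python with NumPy/SciPy, a tooling choice of ours) comparing the truncated series with a library implementation of the matrix exponential.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def exp_series(A, terms=30):
    """Truncated series: sum of A^n / n! for n < terms."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n        # builds A^n / n! incrementally
        result = result + term
    return result

A = np.array([[1.0, 3.0], [3.0, 1.0]])
assert np.allclose(exp_series(A), expm(A))
\end{verbatim}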
\begin{lemme} \label{lem:exp}
The exponential of a matrix always converges.
\end{lemme}
\begin{proof}
We use $\|.\|$ for the operator norm on matrices, keeping in mind that all matrix norms are equivalent. By basic properties of this norm,
\[ \sum_{n=0}^N \Big\|\frac{1}{n!}A^n\Big\|
\leq \sum_{n=0}^N \frac{1}{n!}\|A^n\|
\leq \sum_{n=0}^N \frac{1}{n!}\|A\|^n ,\]
and the last expression is a partial sum of the Taylor series of $e^{\|A\|}$, which converges for every value of $\|A\|$. As a result the series is absolutely convergent for the operator norm, and therefore converges.
\end{proof}
\begin{theoreme}
For a matrix $A$ and a scalar $t$, the quantity $e^{At}$ is differentiable with respect to $t$ and has derivative $\dd/{\dd t} (e^{At}) = Ae^{At}$.
\end{theoreme}
\begin{proof}
Each coordinate of the series is a convergent Taylor series in $t$ and is therefore analytic. The theory of analytic functions tells us that such functions are $C^\infty$ and that we may differentiate term by term. We get
\[ \ddt e^{At} =\frac{\dd}{\dd t}\sum_{n=0}^\infty\frac{1}{n!}t^nA^n
= \sum_{n=1}^\infty \frac{1}{(n-1)!}t^{n-1}A^n
= A\sum_{n=0}^\infty \frac{1}{n!}(tA)^n
=Ae^{At}. \]
\end{proof}
All of this tells us that, as expected, $e^{tA}$ is a matrix solution to the linear differential equation.
\begin{corollaire}
The matrix $e^{tA}$ is a fundamental matrix solution, and every solution writes as $\xx(t)=e^{tA}\xx(0)$.
\end{corollaire}
\begin{proof}
We evaluate it at $t=0$ by direct calculation, since the sum becomes finite:
\[e^{0A}=\sum_{n=0}^\infty \frac{1}{n!}(0A)^n = I.\]
So the matrix is nonsingular at $t=0$ and, by \prettyref{lem:indépendance}, stays nonsingular for all time; hence $e^{tA}$ is a fundamental matrix solution, with formula $\xx(t)=e^{tA}\xx_0$.
\end{proof}
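As an illustration of the corollary, the sketch below (Python; the matrix and initial condition are illustrative choices of ours) checks that $e^{tA}\xx_0$ agrees with a direct numerical integration of the system.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, 0.0]])    # rotation generator
x0 = np.array([1.0, 0.0])
t_end = 2.0

numeric = solve_ivp(lambda t, x: A @ x, (0.0, t_end), x0,
                    rtol=1e-10, atol=1e-12).y[:, -1]
exact = expm(t_end * A) @ x0               # fundamental matrix solution
assert np.allclose(numeric, exact, atol=1e-6)
\end{verbatim}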
For matrices that are not of a special form, we cannot directly compute the exponential series, and hence neither the solutions. Special forms of matrices whose exponential is computable are, for example, those whose powers follow a general formula, like diagonal matrices. We know that diagonal matrices are directly related to eigenvalues and eigenvectors. The eigenvectors are the directions in which the matrix acts as in the one-dimensional case and, as often, they are a way to carry one-dimensional results over to higher-dimensional problems.
Let us investigate what happens in these directions, and choose a candidate that looks like a one-dimensional solution, $e^{\l t}\vv$, for a real eigenvalue $\l$ of the matrix $A$ and the corresponding eigenvector~$\vv$. We compute its derivative and obtain
\[\ddt(e^{\l t}\vv) = e^{\l t}\l \vv = e^{\l t}A\vv = A(e^{\l t}\vv).\]
This indeed gives a nontrivial solution, since the eigenvector is nonzero. These kinds of solutions can be seen in the computation of the exponential when we have a basis of eigenvectors: the matrix $A$ is then diagonalisable as $A=PDP^{-1}$, where $D$ is the diagonal matrix of the eigenvalues and $P$ is the nonsingular matrix formed by the eigenvectors. Then
\[ A^n= (PDP^{-1})^n = PDP^{-1}\cdots PDP^{-1} = PD^nP^{-1} \]
and
\[ e^{tA}
= Pe^{tD}P^{-1}
= P\text{diag}(e^{\l_jt})P^{-1}
= (v_1e^{\l_1t},\dots,v_ne^{\l_nt})P^{-1}
\]
which is a fundamental matrix made of the kind of solutions we found before, up to the recombination by $P^{-1}$.
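Numerically, the identity $e^{tA}=Pe^{tD}P^{-1}$ is easy to verify on a diagonalisable example; the sketch below (Python, with a symmetric test matrix of ours) does exactly that.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 3.0], [3.0, 1.0]])   # symmetric, hence diagonalisable
eigvals, P = np.linalg.eig(A)            # columns of P are eigenvectors
t = 0.7
via_eig = P @ np.diag(np.exp(t * eigvals)) @ np.linalg.inv(P)
assert np.allclose(via_eig, expm(t * A))
\end{verbatim}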
This was the simple case. More generally the matrix can have complex eigenvalues with complex eigenvectors, but the following lemma shows us that complex solutions are made of real solutions:
\begin{lemme} \label{lem:complex}
If $\zz=\xx+i\yy$ is a complex solution of real and imaginary parts $\xx$ and $\yy$, then $\xx$ and $\yy$ are (real) solutions.
\end{lemme}
\begin{proof}
This follows from the simple fact that
\[\dotxx+i\dotyy =\dotzz = A\zz = A\xx+iA\yy. \]
As $A$ is real, we can directly read off the real and imaginary parts as $A\xx$ and $A\yy$. Identifying real and imaginary parts on both sides gives $\dotxx=A\xx$ and $\dotyy=A\yy$, which is the result.
\end{proof}
We should therefore take into account the complex solutions produced by complex eigenvalues, because they actually give us new real solutions. We summarise these results in the following theorem.
\begin{theoreme} \label{th:eigensolutions}
For a complex eigenvalue $\s=\a+i\b$ of $A$ with eigenvector $w=u+iv$, we have the real solutions
\[e^{\a t}(\cos(\b t)u - \sin(\b t)v)\]
\[e^{\a t}(\cos(\b t)v + \sin(\b t)u).\]
They are independent if and only if $u$ and $v$ are independent. If $\s$ is real, this gives only one solution, $e^{\a t}u$.
\end{theoreme}
\begin{proof}
First we have that $e^{\s t}w$ is a complex solution, since
\[\ddt(e^{\s t}w) = e^{\s t}\s w = e^{\s t}Aw = A(e^{\s t}w).\]
Then by \prettyref{lem:complex}, the real and imaginary parts are real solutions; we compute them by rewriting the complex solution:
\[e^{\s t}w = e^{\a t}(\cos(\b t) + i\sin(\b t))(u+iv)
= e^{\a t}(\cos(\b t)u - \sin(\b t)v) + ie^{\a t}(\cos(\b t)v + \sin(\b t)u).\]
We recognise the stated solutions in the real and imaginary parts. Evaluating them at $t=0$ gives respectively $u$ and $v$, so the independence follows from \prettyref{lem:indépendance}. Setting $\b=0$, the solutions are $e^{\a t}(\cos(0)u - \sin(0)v) = e^{\a t}u$ and $e^{\a t}v$. But since the eigenvalue is real, the eigenvector is real too, so $v=0$, leaving only one nontrivial solution.
\end{proof}
\begin{remarque}
Note that complex eigenvalues come in conjugate pairs, as do the eigenvectors:
$Aw=\s w$ implies
$$A\overline{w} = \overline{A}\overline{w} = \overline{Aw}= \overline{\s w} = \bar{\s}\bar{w}.$$
However, this will not give us new real solutions by the method of \prettyref{th:eigensolutions} because $$e^{\overline{\s} t}\overline{w}= \overline{e^{\s t}}\overline{w} = \overline{e^{\s t}w}$$
has the same real and imaginary parts as $e^{\s t}w$, up to sign. As a result, a complex eigenvalue gives two solutions, but together with its conjugate the pair still gives only these two solutions; in the end we can hope to find as many independent solutions as we find independent eigenvectors, real or not.
\end{remarque}
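The two real solutions of \prettyref{th:eigensolutions} can be checked numerically; the sketch below (Python, with an illustrative matrix of ours having eigenvalues $-1\pm2i$) verifies $\dotxx=A\xx$ along one of them by a finite difference.
\begin{verbatim}
import numpy as np

A = np.array([[-1.0, -2.0], [2.0, -1.0]])   # eigenvalues -1 +- 2i
lam, W = np.linalg.eig(A)
sigma, w = lam[0], W[:, 0]                  # one complex eigenpair
alpha, beta = sigma.real, sigma.imag
u, v = w.real, w.imag

def x1(t):
    # e^{alpha t} (cos(beta t) u - sin(beta t) v)
    return np.exp(alpha * t) * (np.cos(beta * t) * u - np.sin(beta * t) * v)

h = 1e-6
for t in (0.0, 0.5, 1.3):
    deriv = (x1(t + h) - x1(t - h)) / (2 * h)   # central difference
    assert np.allclose(deriv, A @ x1(t), atol=1e-5)
\end{verbatim}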
Now we have to deal with the case where we do not have a basis of eigenvectors. In this case, some eigenvalue $\l$ has algebraic multiplicity $\m_a$ strictly greater than its geometric multiplicity
$$\m_g=\dim\ker(A-\l I) < \m_a.$$
Such an eigenvalue is called \emph{defective}, and the matrix is also said to be \emph{defective} when it has at least one defective eigenvalue. We use here the concept of generalised eigenvector, which comes from the theory of the Jordan form:
\begin{definition}
A vector $\ww$ is a \emph{generalised eigenvector} of rank $m$ of a matrix $A$, corresponding to an eigenvalue $\l$, if it is a vector (possibly complex if $\l$ is complex) that satisfies
$$(A-\l I)^m\ww=0 \quad\text{and}\quad (A-\l I)^{m-1}\ww\neq0$$
for some $m\in\N^*$.
A \emph{canonical} basis of generalised eigenvectors is a basis of generalised eigenvectors such that for every generalised eigenvector $\ww$ of rank $m$ in the basis and every $0<j<m$, the vector $(A-\l I)^j\ww$ is a generalised eigenvector of rank $m-j$ with respect to $\l$ and belongs to the basis as well.
\end{definition}
\begin{remarque}
As before, a square matrix raised to the power $0$ is the identity matrix. Note that $(A-\l I)^{m-1}\ww\neq0$ implies in particular that $\ww\neq0$. As a consequence, a generalised eigenvector of rank 1 is a usual eigenvector.
\end{remarque}
\begin{theoreme}
A matrix always has a canonical basis of generalised eigenvectors.
\end{theoreme}
\begin{proof}
This is a linear algebra result; see the appendix of \cite{Hir}.
\end{proof}
In terms of computation, we start from $(A-\l I)\vv=0$ to find the eigenvectors. Then we search for some $\ww$ such that $(A-\l I)\ww=\vv$, ensuring that $(A-\l I)^2\ww=(A-\l I)\vv=0$ and hence that $\ww$ is a generalised eigenvector of rank 2. We continue in this way, building a chain of generalised eigenvectors, possibly complex.
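A small computational sketch of this chain construction follows (Python; the defective matrix is the one used in a later example, and the use of a least-squares solver for the singular system is our choice).
\begin{verbatim}
import numpy as np

A = np.array([[-2.0, 1.0], [-1.0, 0.0]])   # defective: double eigenvalue -1
lam = -1.0
M = A - lam * np.eye(2)

v = np.array([1.0, 1.0])                   # eigenvector: M v = 0
assert np.allclose(M @ v, 0)

# Solve (A - lam I) w = v. M is singular, so we use least squares
# and check that the residual actually vanishes.
w, *_ = np.linalg.lstsq(M, v, rcond=None)
assert np.allclose(M @ w, v)               # rank-2 generalised eigenvector
assert np.allclose(M @ (M @ w), 0)         # (A - lam I)^2 w = 0
\end{verbatim}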
We now show that this basis is useful when its elements are used as initial conditions.
\begin{theoreme}[General form of the eigensolutions] \label{th:solutiondegeneree}
For a canonical basis $B$ of generalised eigenvectors, we have a basis of complex solutions of the form $e^{t\l}p_\ww(t)$, where $\l$ is the eigenvalue associated to a generalised eigenvector $\ww$ of rank $m$ in the basis, and $p_\ww$ is a polynomial of degree $m-1$ whose coefficients lie in the generalised eigenspace of $\ww$.
\end{theoreme}
\begin{proof}
One easily checks that the basic property of the real exponential, turning sums into products, holds for matrices provided they commute: $e^{M+N}=e^Me^N$ whenever $MN=NM$. The proof is similar to the real case.
We write $A= \l I + (A-\l I)$, and $\l I$ is diagonal, hence commutes with every matrix. Taking $\ww$ from a canonical basis, we get
\begin{IEEEeqnarray}{rCl} \label{eq:solutioncomplexe}
e^{tA}\ww
&=& e^{t\l I + t(A-\l I)}\ww
=e^{t\l I} e^{t(A-\l I)}\ww
= e^{t\l}\sum_{n=0}^\infty \frac{1}{n!}t^n(A-\l I)^n\ww
= e^{t\l}\sum_{n=0}^{m-1} \frac{1}{n!}t^n\ww_n
\end{IEEEeqnarray}
where the $\ww_n=(A-\l I)^n\ww$ are again generalised eigenvectors, by the definition of a canonical basis. So we indeed have a polynomial in $t$ with generalised eigenvector coefficients. Different choices of $\ww$ give independent solutions, since the generalised eigenvectors are independent and are the initial values of these solutions; the proof is complete.
\end{proof}
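The finite sum \prettyref{eq:solutioncomplexe} can also be verified numerically; the sketch below (Python, same illustrative defective matrix as above, with $m=2$) compares it with $e^{tA}\ww$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, 1.0], [-1.0, 0.0]])   # double defective eigenvalue -1
lam = -1.0
w = np.array([0.0, 1.0])                    # generalised eigenvector, rank 2
N = A - lam * np.eye(2)

for t in (0.3, 1.0, 2.5):
    closed_form = np.exp(lam * t) * (w + t * (N @ w))   # finite sum, m = 2
    assert np.allclose(expm(t * A) @ w, closed_form)
\end{verbatim}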
We now have a complex basis, but we know that eigenvalues and eigenvectors come in conjugate pairs. The same holds for generalised eigenvectors and eigenvalues:
$$(A-\l I)^m\ww=0 \quad\text{and}\quad (A-\l I)^{m-1}\ww\neq0$$
implies
$$(A-\overline{\l} I)^m\overline{\ww}=0 \quad\text{and}\quad (A-\overline{\l} I)^{m-1}\overline{\ww}\neq0.$$
So we will get solutions like \prettyref{eq:solutioncomplexe} in conjugate pairs $e^{tA}\ww$ and $e^{tA}\overline{\ww}$. By summing or subtracting them, we get two new real solutions, which we call \emph{degenerate}:
$$
e^{tA}\ww + e^{tA}\overline{\ww}
= e^{tA}(\ww+\overline{\ww})
= 2e^{tA}\Re(\ww),
$$
$$
e^{tA}\ww - e^{tA}\overline{\ww}
= e^{tA}(\ww-\overline{\ww})
= 2e^{tA}\Im(\ww).
$$
However, the resulting formulas for the real and imaginary parts are not very convenient:
\begin{IEEEeqnarray*}{C}
e^{t\l}\sum_{n=0}^{m-1} \frac{1}{n!}t^n\ww_n \\
= e^{t\a}(\cos(\b t)+i\sin(\b t))\sum_{n=0}^{m-1} \frac{1}{n!}t^n(\Re\ww_n + i\Im\ww_n) \\
= e^{t\a}\Big(\cos(\b t)\sum_{n=0}^{m-1} \frac{1}{n!}t^n\Re\ww_n-\sin(\b t)\sum_{n=0}^{m-1} \frac{1}{n!}t^n\Im\ww_n\Big)
+ ie^{t\a}\Big(\cos(\b t)\sum_{n=0}^{m-1} \frac{1}{n!}t^n\Im\ww_n+\sin(\b t)\sum_{n=0}^{m-1} \frac{1}{n!}t^n\Re\ww_n\Big).
\end{IEEEeqnarray*}
We know that we have a (complex) basis of the complex solution space. The dimension of the $\C$-vector space $\C^n$ is $n$, and since $\R\subset\C$, all real solutions are included in this description. However, taking the real parts of a basis does not a priori give an independent set of real solutions. For this reason, we will stay in the complex space, which presents a simple description thanks to its algebraic closure, and we will deduce from the complex case what we need for the real one. We state a corollary of the preceding theorem, giving all the possible forms the real solutions can take, without giving an explicit formula.
\begin{corollaire} \label{cor:formesolutionlineaire}
All solutions of a linear system have coordinates that are linear combinations of the following functions:
\begin{itemize}
\item $e^{\l t}$
\item $e^{\a t}\cos(\b t)$ , $e^{\a t}\sin(\b t)$
\item $t^je^{\l t}$ , $t^je^{\a t}\cos(\b t)$ , $t^je^{\a t}\sin(\b t)$
\end{itemize}
where $\l$ is a real eigenvalue, $\s=\a+i\b$ is a complex eigenvalue, and $0\leq j< m_a$ is a natural number, with $m_a$ the algebraic multiplicity of $\s$. Note that each item generalises the previous one.
\end{corollaire}
\begin{remarque}
This actually describes the complex space too, since we can rebuild the complex solutions used to derive this corollary by summing the real part and $i$ times the imaginary one.
\end{remarque}
\section{Stability}
In other terms, \prettyref{th:eigensolutions} tells us that each nonzero real eigenvalue gives direction(s) in which the trajectories are straight lines travelled with exponential velocity. Zero eigenvalues give direction(s) in which trajectories are fixed. Non-real eigenvalues give direction(s) in which the trajectories are like ellipses whose size can change exponentially. \prettyref{th:solutiondegeneree} adds other sorts of solutions, polynomials rescaled by an exponential, corresponding to the case of defective eigenvalues.
All these considerations concern only the special directions. Depending on the signs of the real parts of the eigenvalues, these special solutions go very fast to $0$, stay in an orbit, diverge very fast to infinity, or follow a polynomial. This motivates us to see how these special solutions act together, what the asymptotic behaviour of trajectories is, how stable the origin is, and to build a classification from all these factors.
First of all we define the concepts of stability and asymptotic convergence. For this we place ourselves in a more general setting, which will be useful later, with a function $\FF$ and the equation
\[\dotxx = \FF(\xx).\]
We suppose $\FF$ regular enough to have a flow $\phi(\xx_0,t)$ which encapsulates all solutions:
\begin{IEEEeqnarray*}{rCl}
\dot{\phi}(\xx_0,t) &=& \FF(\phi(\xx_0,t)) \\
\phi(\xx_0,0) &=& \xx_0
\end{IEEEeqnarray*}
\begin{definition}
A solution $\xx$ is \emph{Lyapunov stable} or \emph{L-stable} if the continuity of the flow with respect to the initial condition $\xx_0$ is uniform with respect to positive times. Namely, if for all $\e>0$ there exists a $\d>0$ such that $\|\zz-\xx(0)\|<\d$ implies that for all $t\geq0$, $\|\phi(\zz,t)-\xx(t)\|<\e$.
\end{definition}
\begin{definition}
Two solutions $\xx$ and $\yy$ are $\omega$\emph{-attracted} to each other if $\lim_{t\to\infty}\|\yy(t)-\xx(t)\| = 0$. The resulting equivalence class is called the \emph{basin of attraction}. A solution $\xx$ is said to be \emph{$\omega$-attracting} or \emph{attracting} if there is a $\d>0$ such that $\|\zz_0-\xx_0\|<\d$ implies that $\phi(\zz_0,t)$ is in the same basin of attraction as $\xx$. A solution is said to be \emph{globally $\omega$-attracting} (or \emph{globally attracting}) on a set if this set is contained in the basin of attraction. We do not need to specify the set when it is the whole solution space.
\end{definition}
\begin{remarque}
The basin of attraction can be seen as a space of solutions as well as a set of points, namely the initial positions of its elements.
\end{remarque}
\begin{definition}
A solution is \emph{asymptotically stable} if it is L-stable and attracting. It is said to be \emph{globally asymptotically stable} if it is L-stable and globally attracting.
\end{definition}
\begin{remarque} \label{rem:stabilité}
Note that these two notions are distinct. As examples of non-attracting solutions that are L-stable, we can simply take $\FF(\xx)=0$ or, for a nontrivial case, the origin of $\dotx = -y$, $\doty = x$, which produces the circular trajectories $x(t)=\cos(t)$, $y(t)=\sin(t)$.
Conversely, there exist non-L-stable solutions that are attracting. Such a point is the limit of all nearby trajectories, but they always make a detour before converging. To build one, we place ourselves in polar coordinates. We want the trajectories to follow the circle and end at $(1,0)$; for this we make $\theta$ increase and stop at $2\pi$, and $r$ increase and stop at $1$. We could write $(\dot{r},\dot{\theta})=(1-r,2\pi-\theta) = G(r,\theta)$. But if we want $(G_1\cos(G_2),G_1\sin(G_2))$ to be continuous on $\R_+\times \{0\}$, we should write $\dot{r} = r(1-r)$, $\dot{\theta} = \theta(2\pi-\theta)$. And to obtain continuity of the derivative, in order to have a flow, we could write $\dot{r} = r(1-r^2)$, $\dot{\theta} = \theta(2\pi-\theta^2)$. This gives us what we need, but to be able to change coordinates back to Cartesian ones explicitly, we prefer $\dot{\theta} = \sin(\theta/2)^2$. The fact that we started in polar coordinates, and that the two equations are decoupled, actually proves that all solutions do tend to $(1,0)$; but solutions beginning near $(1,0)$ in the upper half-plane pass behind the origin before returning to the fixed point $(1,0)$. This shows that it is attracting but not L-stable. \com{graphic}
\end{remarque}
\begin{lemme}
When $\FF$ is linear, i.e. $\dotxx=A\xx$, all solutions have the same L-stability and attractivity.
\end{lemme}
\begin{proof}
In the linear case, $\phi$ is linear in the first variable; indeed $\phi(\xx_0,t) = e^{tA}\xx_0 =: X(t)\xx_0$. Now for any solution $\xx$ and any initial point $\zz$,
\[ \|\phi(\zz,t)-\xx(t)\| = \|X(t)\zz-X(t)\xx(0)\| = \|X(t)(\zz-\xx_0)\| =\|\phi(\zz-\xx_0,t)-\phi(0,t)\|, \] meaning that L-stability and attractivity are entirely determined by those of the trivial solution $\phi(0,t)=0$.
\end{proof}
\begin{remarque}
Therefore, we can speak of the L-stability and the attractivity of a linear system, meaning that the property applies to all solutions or to none of them, and check it by looking at the trivial solution $0$. In this case the system is L-stable if and only if for all $\e>0$ there exists a $\d>0$ such that $\|\xx_0\|<\d$ implies that for all $t\geq0$, $\|\phi(\xx_0,t)\|<\e$. The system is attracting if and only if there exists a $\d>0$ such that $\|\xx_0\|<\d$ implies that $\phi(\xx_0,t) \to 0$.
\end{remarque}
\begin{theoreme} \label{th:stablecondition}
The linear system is L-stable if and only if each of its solutions is bounded for positive times.
\end{theoreme}
\begin{proof}
Suppose the system is L-stable and, for contradiction, that a solution $\xx$ is not bounded. Let $\d>0$ be the distance given by stability for $\e=1$, so that all solutions starting with norm smaller than $\d$ never leave the unit ball. We can then define another solution $\yy=\d\xx/(2\|\xx(0)\|)$ with $\|\yy(0)\|=\d/2<\d$, so $\yy$ never leaves the unit ball and is bounded. This contradicts the fact that $\xx$ and $\yy$ are proportional, so all solutions must be bounded.
Suppose now that all solutions are bounded. Then the columns of the fundamental matrix $X(t)=e^{tA}$ are bounded, implying that the norm of $X$ is bounded (all norms on finite-dimensional spaces are equivalent). Now we get
\[ \|\phi(\xx_0,t)\|=\|X(t)\xx_0\|
\leq \|X(t)\| \|\xx_0\|
\leq \sup_{t\geq0} \|X(t)\| \|\xx_0\|, \]
which is smaller than any positive $\e$ as soon as
$$\|\xx_0\| < \frac{\e}{\sup_{t\geq0} \|X(t)\|},$$
giving the stability of the system.
\end{proof}
\begin{theoreme}
The linear system is globally asymptotically stable if and only if it is attracting.
\end{theoreme}
\begin{proof}
By definition, global asymptotic stability implies global attractivity, and so in particular attractivity for any radius condition. In the other direction, suppose the system is attracting with radius condition $\d$, \ie $\|\xx_0\|<\d$ implies $\|\phi(\xx_0,t)\|\to0$. Any nontrivial solution $\xx$ can be written $\xx=2\|\xx(0)\|\yy/\d$, where $\yy=\d\xx/(2\|\xx(0)\|)$ is a proportional solution that starts with norm $\d/2<\d$ and is therefore small enough to converge to zero, implying that $\xx$ converges to zero too. Since $\xx$ was arbitrary, the system is globally attracting. Now that all solutions converge to zero, they are all bounded, and by \prettyref{th:stablecondition} the system is actually L-stable. Both conditions, L-stability and global attractivity, are satisfied, so the system is globally asymptotically stable.
\end{proof}
\begin{exemple}[saddle]
Consider the system
$$\begin{cases}
\dotx=x+3y \\
\doty=3x+y
\end{cases}
\text{ rewritten with } \zz=(x,y)^\top: \quad \dotzz=
\begin{pmatrix} 1&3\\3&1 \end{pmatrix}\zz =A\zz
$$
The characteristic polynomial is $p_A(\l)=\det(A-\l I)=(1-\l)^2-9=(-2-\l)(4-\l)$ and has roots $\l_1=-2$, $\l_2=4$. For the eigenvalue -2, $$ A-\l_1I = A+2I= \begin{pmatrix} -3&3\\3&-3 \end{pmatrix} $$
has $\begin{pmatrix}1\\-1\end{pmatrix}$ in the kernel and this is the first eigenvector. The stable associated eigensolution is $$e^{-2t}\begin{pmatrix}1\\-1\end{pmatrix} \xrightarrow[t\to\infty]{} 0.$$
Similarly,
$$ A-\l_2I = A-4I= \begin{pmatrix} 3&3\\3&3 \end{pmatrix} $$
gives the eigenvector $\begin{pmatrix}1\\1\end{pmatrix}$ and the unstable eigensolution $e^{4t}\begin{pmatrix}1\\1\end{pmatrix}\xrightarrow[t\to\infty]{\|.\|}~\infty$. We plot the results:
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{images/saddle.eps}
\caption{Saddle phase plane}
\label{fig:question11.1}
\end{figure}
We see that we have only two straight-line solutions, the eigensolutions. Their norms evolve exponentially, but with time reversed from one to the other. All the other solutions are combinations of those two and look like hyperbolas: they come near the origin and then go back away. The only solutions that go to zero are the multiples of the eigensolution with eigenvalue $-2$. However close to zero we start, the solution eventually goes to infinity if it is not such an eigensolution. There is no stability; such a system is called a saddle, and we will give a general definition later.
\end{exemple}
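The eigencomputations of this example are easy to reproduce numerically; a short sketch (Python) follows.
\begin{verbatim}
import numpy as np

A = np.array([[1.0, 3.0], [3.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)    # approximately 4 and -2 (order may vary)
print(eigvecs)    # columns proportional to (1, 1) and (1, -1)

# Real parts of opposite signs: a saddle.
assert eigvals.real.min() < 0 < eigvals.real.max()
\end{verbatim}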
\begin{exemple}[focus]
Consider the system
$$ \dotzz=
\begin{pmatrix} -4&-2\\3&-11 \end{pmatrix}\zz =A\zz
$$
The characteristic polynomial is $(-4-\l)(-11-\l)+6=(-5-\l)(-10-\l)$ and has roots $\l_1=-10$, $\l_2=-5$. For the eigenvalue $-10$, $$ (A-\l_1I)\begin{pmatrix}1\\3\end{pmatrix}
= \begin{pmatrix}6&-2\\3&-1\end{pmatrix} \begin{pmatrix}1\\3\end{pmatrix} = 0.$$
The associated stable eigensolution is $e^{-10t}\begin{pmatrix}1\\3\end{pmatrix} \xrightarrow[t\to\infty]{}~0$. Similarly,
$$ (A-\l_2I)\begin{pmatrix}2\\1\end{pmatrix}
= \begin{pmatrix} 1&-2\\3&-6 \end{pmatrix} \begin{pmatrix}2\\1\end{pmatrix} = 0$$
gives the eigenvector $\begin{pmatrix}2\\1\end{pmatrix}$ and the second stable eigensolution $e^{-5t}\begin{pmatrix}2\\1\end{pmatrix}\xrightarrow[t\to\infty]{}~0$. We plot the results:
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{images/focus.eps}
\caption{Focus phase plane}
\label{fig:focus}
\end{figure}
Here the two eigensolutions go to zero exponentially, and thus all linear combinations do the same. The system is attracting and hence asymptotically stable. We will later call this situation a focus and, more generally, a stable node.
\end{exemple}
\begin{exemple}[null eigenvalue]
Consider the system
$$ \dotzz=
\begin{pmatrix} -1&-3\\-1&-3 \end{pmatrix}\zz =A\zz
$$
The characteristic polynomial is $(-1-\l)(-3-\l)-3=\l(\l+4)$ and has roots $\l_1=-4$, $\l_2=0$. The eigenvectors are respectively $\begin{pmatrix}1\\1\end{pmatrix}$ and
$\begin{pmatrix}3\\-1\end{pmatrix}$. They give the solutions $e^{-4t}\begin{pmatrix}1\\1\end{pmatrix}$ and $\begin{pmatrix}3\\-1\end{pmatrix}$ (constant solution).
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{images/null_eigenvalue.eps}
\caption{Null eigenvalue phase plane}
\label{fig:null_eigenvalue}
\end{figure}
We see that we have one line where the solutions are fixed, and another where the solutions go exponentially to the origin. As a result the system is L-stable, because solutions do not go away, but it is not attracting or asymptotically stable, because the fixed line contains solutions that never move toward the origin.
\end{exemple}
These are simple examples where the matrix is diagonalisable. When that is not the case, the situation is more complicated because of the generalised eigenvectors:
\begin{exemple}[defective eigenvalue]
Consider the system
$$ \dotzz=
\begin{pmatrix} -2&1\\-1&0 \end{pmatrix}\zz =A\zz.
$$
The characteristic polynomial is $(1+\l)^2$ and has the single root $\l=-1$; $(A+I)=\begin{pmatrix} -1&1\\-1&1 \end{pmatrix}$ gives the eigenvector $\vv=\begin{pmatrix}1\\1\end{pmatrix}$. It is the only one (up to scaling) because $A+I$ is nonzero, hence of rank one, so its kernel is one-dimensional. This gives us the solution $e^{-t}\begin{pmatrix}1\\1\end{pmatrix}$. We search for a generalised eigenvector:
$$(A+I)\ww=\vv \text{ , \ie } \begin{pmatrix} -1&1\\-1&1 \end{pmatrix}\ww= \begin{pmatrix}1\\1\end{pmatrix}.$$
We find $\ww=\begin{pmatrix}0\\1\end{pmatrix}$ and the second solution is $e^{\l t}(\ww+t\vv) = e^{-t}(\begin{pmatrix}0\\1\end{pmatrix}+t\begin{pmatrix}1\\1\end{pmatrix}) = e^{-t}\begin{pmatrix}t\\1+t\end{pmatrix}$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{images/defective.eps}
\caption{Defective eigenvalue phase plane}
\label{fig:defective}
\end{figure}
\end{exemple}
\begin{remarque}
Interested readers might want to have a look at this
\href{https://mathlets.org/mathlets/linear-phase-portraits-matrix-entry/}{interactive phase plane}
\footnote{Copyright © 2009--2015 H. Miller | Powered by WordPress, https://mathlets.org/mathlets/linear-phase-portraits-matrix-entry/},
to see the diversity of possibilities in the plane.
\end{remarque}
\begin{theoreme}
The linear system is
\begin{enumerate}
\item L-stable if and only if all the eigenvalues of the matrix have non-positive real parts, and all those that are defective have negative real parts.
\item globally asymptotically stable if and only if all the eigenvalues of the matrix
have negative real parts.
\end{enumerate}
\end{theoreme}
\begin{proof}
\quad\\
\begin{enumerate}
\item
By \prettyref{th:stablecondition}, we just have to show that the conditions on the eigenvalues are equivalent to the boundedness of all solutions. \prettyref{cor:formesolutionlineaire} gives the possible forms of all solutions: the functions $t^je^{\a t}\cos(\b t)$, $t^je^{\a t}\sin(\b t)$ describe all the possible terms of the linear combinations forming the coordinates.
Since a linear combination is a finite sum, a solution is bounded if all these possible terms are bounded.
If the eigenvalue $\s=\a+i\b$ is non-defective, then only $j=0$ occurs, and the terms are bounded if $\a$ is non-positive. If $\s$ is defective, then $j>0$ occurs and the terms are bounded if $\a$ is negative, because the exponential then dominates $t^j$ for every $j$. The system is then L-stable.
Conversely, if there exists an eigenvalue $\s=\a+i\b$, with eigenvector $w=u+iv$, that has a positive real part, then by \prettyref{th:eigensolutions} there exists a solution $e^{\a t}(\cos(\b t)u-\sin(\b t)v)$, and its norm $e^{\a t}\|\cos(\b t)u-\sin(\b t)v\|$
is unbounded, since $\cos(\b t)u-\sin(\b t)v$ does not converge to zero. If $\s$ is defective and purely imaginary, then \prettyref{th:solutiondegeneree} gives a solution $e^{\s t}p_w(t)$ with $p_w$ of nonzero degree, and $\|p_w(t)\|\to\infty$ when $t\to\infty$ because of the dominant term. There exist unbounded solutions, and the system is not L-stable.
\item
By the previous theorem, we just have to show that the conditions on the eigenvalues are equivalent to the system being attracting.
Following the considerations of the first part, if all eigenvalues have negative real parts, there is always an $e^{\a t}$ factor with $\a<0$ in front of the basis solutions, and they all converge to zero. The system is attracting, hence globally asymptotically stable.
Conversely, if some $\s=\a+i\b$ has a non-negative real part, we have a solution $e^{\a t}p_w(t)$ satisfying $\|e^{\a t}p_w(t)\| \geq \|p_w(t)\|$ for $t\geq0$, which does not go to zero. The system is not attracting, hence not globally asymptotically stable.
\end{enumerate}
\end{proof}
\begin{exemple}
The second condition of the first point seems to describe a very specific situation. Would the theorem hold without this condition, meaning that this situation is algebraically impossible? Indeed, one does not often see real matrices with a defective, purely imaginary eigenvalue. This is perhaps because of the following remarks: it cannot happen in one dimension, because eigenvalues are then real; for a nonzero purely imaginary eigenvalue it cannot happen in two dimensions either, because imaginary eigenvalues come in conjugate pairs, so the two would be opposite, hence distinct and simple. To have a purely imaginary eigenvalue of multiplicity two, we must also have its conjugate with multiplicity two. So we try dimension four. We search for a matrix with characteristic polynomial $(\l^2+1)^2$. The basic block that introduces the eigenvalue $i$ is
$$J=\begin{pmatrix}0&1\\-1&0\end{pmatrix}.$$
We could try $\text{diag}(J,J)$, but this gives a non-defective eigenspace. With a small modification, we find a matrix that is defective:
$$\begin{pmatrix}J&I_2\\0&J\end{pmatrix}.$$
Its only two independent eigenvectors are $(\pm i,1,0,0)^\top$. Hence in this pathological situation, the defective eigensolutions are of the form $e^{it}p(t)=(\cos(t)+i\sin(t))p(t)$, which is neither constant, nor exponential, nor periodic, nor polynomial, nor bounded.
\end{exemple}
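One can confirm numerically that this block matrix is defective and yields unbounded solutions; the sketch below (Python) prints the eigenvalues, the one-dimensional kernel, and the growth of $\|e^{tA}\|$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = np.block([[J, np.eye(2)], [np.zeros((2, 2)), J]])

print(np.linalg.eigvals(A))        # +-i, each with algebraic multiplicity 2
# Each eigenspace is only one-dimensional, so A is defective:
print(np.linalg.matrix_rank(A - 1j * np.eye(4)))   # rank 3, kernel dim 1

# The norm of e^{tA} grows although every Re(lambda) = 0.
for t in (10.0, 100.0, 1000.0):
    print(t, np.linalg.norm(expm(t * A), 2))
\end{verbatim}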
The last theorem is one of the main results of this section: it gives a complete understanding of stability. It motivates the following categorisation of linear systems, taking into account the nature of their stability.
\begin{definition}
For a system $\dotxx=A\xx$, we use the following terminology for the matrix $A$.
\begin{itemize}
\item \emph{stable}: all eigenvalues have negative real parts. ($i\R+\R^*_-$)
\item \emph{unstable}: $-A$ is stable, \ie all eigenvalues have positive real parts. ($i\R+\R^*_+$)
\item \emph{saddle}: all eigenvalues have nonzero real parts, and they do not all have the same sign.
\item \emph{hyperbolic}: stable, unstable or saddle; \ie all eigenvalues have nonzero real parts. ($i\R+\R^*$)
\item \emph{center}: all eigenvalues are purely imaginary. ($i\R^*$)
\item \emph{elliptic}: all eigenvalues are nonzero, purely imaginary, and non-defective.
\item \emph{focus}: all eigenvalues are real, negative, and non-defective.
\item \emph{source}: all eigenvalues are real, positive, and non-defective.
\item \emph{degenerate}: there exists a defective eigenvalue.
\end{itemize}
More generally, we use these terms for a specific eigenvalue, seeing it as a one-by-one matrix, meaning that it must satisfy the corresponding condition (this does not apply to saddles).
\end{definition}
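A rough classifier mirroring part of this terminology can be written in a few lines; the sketch below (Python; the tolerance and the omission of the defectiveness test are simplifications of ours) labels a matrix from its eigenvalues.
\begin{verbatim}
import numpy as np

def classify(A, tol=1e-12):
    """Partial classifier from eigenvalues (defectiveness not tested)."""
    re = np.linalg.eigvals(A).real
    labels = []
    if np.all(re < -tol):
        labels.append("stable")
    if np.all(re > tol):
        labels.append("unstable")
    if np.all(np.abs(re) > tol) and re.min() < 0 < re.max():
        labels.append("saddle")
    if np.all(np.abs(re) > tol):
        labels.append("hyperbolic")
    if np.all(np.abs(re) < tol):
        labels.append("center")
    return labels

print(classify(np.array([[1.0, 3.0], [3.0, 1.0]])))    # saddle, hyperbolic
print(classify(np.array([[-4.0, 5.0], [-5.0, 2.0]])))  # stable, hyperbolic
\end{verbatim}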
\begin{remarque}
The matrix $A$ is unstable (resp.\ a focus) if and only if $-A$ is stable (resp.\ a source).
We point out the similarity between the words ``L-stable'' and ``stable'', which refer respectively to Lyapunov stability and to eigenvalues; do not confuse them. A stable linear system is L-stable, but not conversely, as the elliptic case shows.
\end{remarque}
For example, we present a stable system, with spiral trajectories, resulting from the matrix
$$\begin{pmatrix}
-4 & 5 \\ -5 & 2
\end{pmatrix},$$
which has no real eigenvalues but two complex stable ones. We omit the computations; they are the same as before. In this case the solutions form a convergent spiral, because of the trigonometric functions that appear in the formula and the negativity of the real parts.
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{images/spirale.eps}
\caption{Spiral phase plane}
\label{fig:spirale}
\end{figure}
\begin{corollaire}
\quad
\begin{itemize}
\item Stable (and in particular focus) and elliptic linear systems are L-stable.
\item Source and saddle linear systems are not L-stable, hence not asymptotically stable.
\item Stable (and in particular focus) linear systems are globally asymptotically stable.
\end{itemize}
\end{corollaire}
This classification and these considerations on the eigenvalues show that there exist directions in which the trajectories behave essentially exponentially, namely the directions that are linear combinations of certain generalised eigenvectors. We now present a decomposition of the solution space that tells us where the solutions behave as in a stable, unstable, or center system.
\begin{definition}
For a linear system $\dotxx=A\xx$ of dimension $n$, we define the stable, unstable, and center eigenspaces respectively as follows:
\begin{IEEEeqnarray*}{rCl}
\E^s &=& \Span \{v\in \C^n|\, v\text{ is a generalised eigenvector of the system with eigenvalue }\l \text{ such that }\Re(\l)<0\} \\
\E^u &=& \Span\{v\in \C^n|\, v\text{ is a generalised eigenvector of the system with eigenvalue }\l \text{ such that }\Re(\l)>0\}\\
\E^c &=& \Span\{v\in \C^n|\, v\text{ is a generalised eigenvector of the system with eigenvalue }\l \text{ such that }\Re(\l)=0\}
\end{IEEEeqnarray*}
They are the generalised eigenspaces spanned by the stable, unstable, and center generalised eigenvectors.
In the same spirit, we want to compare them with the directions in which the solutions behave exponentially or not:
\begin{IEEEeqnarray*}{rCl}
V^s &=& \{v\in \C^n|\,\exists a,b>0 :\forall t\geq0,\|e^{At}v\| \leq be^{-at}\}
\\
V^u &=& \{v\in \C^n|\,\exists a,b>0 :\forall t\leq0, \|e^{At}v\| \leq be^{at}\}
\\
V^c &=& \{v\in \C^n|\,\forall a>0,\, e^{At}v e^{-a|t|} \to 0\text{ when }|t|\to\infty\}.
\end{IEEEeqnarray*}
Namely, $V^s$ and $V^u$ are the directions of exponential decay forward, respectively backward, in time, and $V^c$ the directions of subexponential behaviour.
\end{definition}
\begin{remarque}
We may specify the matrix as an argument when dealing with multiple systems, and omit it when the context is clear. Note that they are all subspaces, and that $\E^u(A)=\E^s(-A)$ as well as $V^u(A)=V^s(-A)$. Taking $-A$ instead of $A$ actually corresponds to reversing time.
\end{remarque}
\begin{lemme}
For $\s\in\{s,u,c\}$, we have the equalities $\E^\s=V^\s$, the direct sum $\C^n = \E^s\oplus\E^u\oplus\E^c$, and the $\E^\s$ are invariant in the sense that $\phi(\E^\s\times\R)=\E^\s$, \ie all solutions starting in the space stay in it.
\end{lemme}
\begin{proof}
The invariance of the sets comes from the general form of the eigensolutions presented in \prettyref{th:solutiondegeneree}. Such a solution looks like $e^{\l t}p(t)$, with an eigenvalue $\l$ and a polynomial $p(t)$ whose coefficients lie in the generalised eigenspace of $\l$. It is thus a linear combination of vectors of the same $\E^\s$ for all $t$. The eigensolutions therefore stay in the space, and so does their span $\E^\s$.
\\ \\
The direct sum comes from the linear algebra facts that generalised eigenspaces associated to distinct eigenvalues are independent and that together they span the whole space. The three spaces just group them into three sums: $\E^\s$ is the direct sum of the generalised eigenspaces associated to the eigenvalues of type $\s$.
\\ \\
Now we prove the equalities. First, we show $\E^\s\subset V^\s$. Each generalised eigensolution associated to a generalised eigenvector $\ww$ can be written $e^{\d t}p(t)$ for a polynomial $p$ with complex generalised eigenvector coefficients associated to the same eigenvalue $\d=a+ib$ ($a$, $b$ scalars).
$\E^s$: When $a<0$, we can bound the norm as
$$\|e^{\d t}p(t)\|=e^{at}\|p(t)\| \leq e^{at/2} \max_{s\geq0}\big(\|p(s)\|e^{as/2}\big).$$
The max exists because the continuous expression $\|p(s)\|e^{as/2}\to 0$ when $s\to\infty$, since $a<0$ (by iterating l'Hôpital's rule).
$\E^u$: When $a>0$, we directly obtain $\E^u(A) = \E^s(-A) \subset V^s(-A) = V^u(A)$ by the remark made above.
$\E^c$: When $a=0$, we have
$$\|e^{-\a |t|}e^{\d t}p(t)\|
= e^{-\a |t|}\|p(t)\| \to0 \text{ when }|t|\to\infty$$
for any $\a>0$.
We thus get $\ww \in V^\s$ in each of the three cases. The $V^\s$'s are clearly subspaces, and since $\ww$ was an arbitrary generalised eigenvector, the whole span satisfies $\E^\s\subset V^\s$.
Secondly, we prove the reverse inclusion.
Each vector $v$ of $V^\s$ can be written as a unique sum of vectors of the three $\E^\s$: $v = v_s + v_u + v_c$. We define a new norm on $\C^n$ using this unique decomposition:
$$N(v)= \big\|(\|v_s\|,\|v_u\|,\|v_c\|)\big\|_1 = \|v_s\|+\|v_u\|+\|v_c\|.$$ Absolute homogeneity and positive definiteness are clear, while subadditivity comes from the uniqueness of the decomposition: for vectors $v$ and $w$, we have $v_s+w_s + v_u+w_u + v_c+w_c=v+w$, which forces $(v+w)_\s=v_\s + w_\s$, and then
$$N(v+w) = \|v_s+w_s\|+\|v_u+w_u\|+\|v_c+w_c\| \leq \|v_s\|+\|w_s\|+\|v_u\|+\|w_u\|+\|v_c\|+ \|w_c\|= N(v)+N(w).$$
Now, $N$ is equivalent to any other norm, \ie we have a constant $c$ such that $c^{-1}\|v\|\leq N(v)\leq c\|v\|$.
If $\s=s$, then $N(e^{At}v)\to0$ exponentially when $t\to\infty$. By the definition of $N$, the same must hold separately for the components $v_s$, $v_u$ and $v_c$. But from the general form of eigensolutions we see that unstable and center eigensolutions do not even go to zero: for $\Re(\d)\geq0$ and $t\geq0$,
$$\|e^{\d t}p(t)\|
=e^{\Re(\d)t}\|p(t)\|
\geq e^{\Re(\d)t}\min_{s\geq0}\|p(s)\|\geq \min_{s\geq0}\|p(s)\|>0,$$
where the min exists and is positive because $p(t)$ is the value of a nontrivial solution, hence never vanishes, and $\|p(t)\|$ does not tend to zero. Then actually $v_u=v_c=0$ and $v = v_s\in \E^s$, so $V^s\subset \E^s$. As before we directly conclude that $$V^u(A)=V^s(-A)\subset \E^s(-A)=\E^u(A).$$ The same kind of argument works for $v=v_s+v_u+v_c\in V^c$:
$$
e^{-\a|t|}\|e^{\d t}p(t)\|
=e^{-\a|t|}e^{at}\|p(t)\|
\geq e^{(at/|t|-\a)|t|}\min\|p\|\geq \min\|p\|>0
$$
for a small $0<\a<|a|$, taking positive times if $\d$ is unstable ($a>0$) and negative times if $\d$ is stable ($a<0$). This contradicts the defining decay of $V^c$ unless $v_s=v_u=0$, so $v$ is in $\E^c$.
We have proven both inclusions and obtain the result.
\end{proof}
This result makes the speed of convergence precise in a stable context: it is exponential. It also gives information in the other direction: the speed of convergence of a solution tells us about its composition with respect to the eigenbasis. A solution must be composed purely of stable directions to vanish exponentially; all other (composite) solutions are not exponential.
Now that we understand the complex eigenspaces of solutions well, we can easily deal with the real case:
\begin{corollaire}
We have the equalities $\Re(\E^\s)=\Re(V^\s)$ and $\dim(\Re(\E^\s))=\dim(\E^\s)$, the direct sum
$\R^n = \Re(\E^s)\oplus\Re(\E^u)\oplus\Re(\E^c)$, and the sets $\Re(\E^\s)$ are invariant:
$\phi(\Re(\E^\s)\times\R)=\Re(\E^\s)$.
\end{corollaire}
\begin{proof}
The equality $\Re(\E^\s)=\Re(V^\s)$ is trivial. We have the simple sum
$$\R^n
=\Re(\C^n)
=\Re(\E^s\oplus\E^u\oplus\E^c)
= \Re(\E^s)+\Re(\E^u)+\Re(\E^c).$$
This is actually a direct sum, because the conjugation invariance of the $\E^\s$'s implies that $\Re(\E^\s)\subset\E^\s$. This last inclusion implies that $\dim(\Re(\E^\s))\leq\dim(\E^\s)$, and we have equality because the dimensions must sum to $n$. The invariance of the spaces along solutions comes from the fact that a real solution stays real over time and that $\E^\s$ is invariant.
\end{proof}
With this description, we understand better what an example in $\R^3$ (or in higher dimensions) can look like: there will be planes or lines where the solutions behave exactly as in the $\R^2$ case. For pictures in $\R^3$, see \cite{Rob}, page 34, for instance.
\begin{remarque}
By taking the real parts of the spaces (and likewise the imaginary parts) and summing them, we can reconstruct the whole real space and the induced real solutions. However, note that taking the real part of a basis will not a priori give a real basis. Indeed, the complex basis can use complex coefficients to build all real solutions, while a real basis can only use real coefficients, and the real part of a product is not just the product of the real parts.
\end{remarque}
The eigenspaces seem to act independently of each other, and the general behaviour of solutions within each of them seems somewhat uniform. A change in an eigenvalue can completely change the stability of a system; the flow $e^{At}x$ is continuous in $A$, but how does this continuity behave along $t$ and $x$? We can show that, in the hyperbolic case, the dimensions of the eigenspaces do not change when we choose a second matrix close enough to the first one; in some sense, the general stability structure depends continuously on $A$:
\begin{theoreme}
For $\s=s,u$, the map $A\mapsto \dim\E^\s(A)$ is continuous on the set of hyperbolic matrices and, being integer-valued, locally constant.
\end{theoreme}
\begin{proof}
We treat only the stable eigenspace, since $\dim\E^u(A) =\dim\E^s(-A)$. It is a classical but nontrivial result of analysis that the roots of a polynomial depend continuously on its coefficients; here we show directly that the dimension is given by a formula that is continuous in $A$. Since no eigenvalue lies on the imaginary axis, we can surround all the stable eigenvalues by a positively oriented, simple, closed, rectifiable curve $\g$ that crosses neither an eigenvalue nor the imaginary axis. Since the characteristic polynomial $p_A$ vanishes exactly at the eigenvalues and is holomorphic, the residue theorem gives
$$\frac{1}{2\pi i}\int_{\g} \frac{p_A'}{p_A}
= \dim\E^s(A) .$$
Indeed, if $p_A$ has a root $\l$ of order $k$, the residue of $p'_A/p_A$ at $\l$ is just $k$. Since $p_A$ is a polynomial, the order of a zero is just its multiplicity, which corresponds to the dimension of the generalised eigenspace of this eigenvalue. The space $\E^s$ is the direct sum of the generalised eigenspaces of the stable eigenvalues, whose dimensions sum to its own. Since the coefficients of $p_A$ are polynomials in the coefficients of $A$, they change continuously with $A$. We may let $A$ vary in a small neighbourhood so that $p_A$ still does not vanish on the curve $\g$ (which does not change with $A$). As a result, the formula is well defined around $A$ and is continuous with respect to $p_A$, hence with respect to $A$.
\end{proof}
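The local constancy is easy to observe numerically: counting eigenvalues with negative real parts (with multiplicity) gives $\dim\E^s$, and small perturbations of a hyperbolic matrix do not change the count. A sketch follows (Python; the perturbation size is an arbitrary choice of ours).
\begin{verbatim}
import numpy as np

def dim_stable(A):
    """dim E^s: eigenvalues with negative real part, with multiplicity."""
    return int(np.sum(np.linalg.eigvals(A).real < 0))

A = np.array([[1.0, 3.0], [3.0, 1.0]])           # hyperbolic (saddle)
rng = np.random.default_rng(0)
for _ in range(100):
    B = A + 1e-3 * rng.standard_normal((2, 2))   # small perturbation
    assert dim_stable(B) == dim_stable(A)        # locally constant
\end{verbatim}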
As we see, the stability structure of the system is the same for matrices in a neighbourhood. But more generally, if two matrices $A$ and $B$ are far from each other yet have eigenspaces of the same dimensions, do they share similarities? We introduce the following comparison tool, which matches all the trajectories of both systems in a continuous way.
\begin{definition}
Two flows $\phi$ and $\psi$ on the space $\R^n$ are said to be \emph{topologically conjugate} if there exists a homeomorphism $h:\R^n\to\R^n$ such that
$h\circ \phi_t = \psi_t\circ h$, i.e. the image of a trajectory is a trajectory.
\end{definition}
\begin{lemme} \label{lem:ps}
If a matrix $A$ is stable with maximal real part of eigenvalues $-a<0$, there exists a scalar product $\ps{.}{.}_a$ such that $\ps{x}{Ax}_a<-a\|x\|_a^2$ for all $x\neq0$.
\end{lemme}
\begin{proof}
This is a linear algebra result; see \cite{Hir}, page 145, for a proof.
\end{proof}
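The lemma is stated without a construction; a standard way to produce such a scalar product, sketched below in Python, is to solve a Lyapunov equation for the shifted matrix $A+aI$. This construction is our choice and not necessarily the one of \cite{Hir}: if $(A+aI)^\top P+P(A+aI)=-Q$ with $Q$ positive definite, then $\ps{x}{Ax}_P<-a\|x\|_P^2$ for $x\neq0$.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-4.0, 5.0], [-5.0, 2.0]])   # stable: Re(lambda) = -1
a = 0.5                                    # any a below |max Re(lambda)|
M = A + a * np.eye(2)                      # still stable by choice of a

# Solve M^T P + P M = -Q with Q positive definite.
Q = np.eye(2)
P = solve_continuous_lyapunov(M.T, -Q)
assert np.all(np.linalg.eigvalsh(P) > 0)   # P defines a scalar product

rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.standard_normal(2)
    assert x @ P @ (A @ x) < -a * (x @ P @ x)
\end{verbatim}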
\begin{theoreme}[Hartman and Grobman]
Two stable linear systems on the same space are topologically conjugate.
\end{theoreme}
\begin{proof}
To find a homeomorphism $h$, we first construct one on a subset of the space that contains a point of every nontrivial solution, and then extend it along the trajectories. The idea is to find a kind of sphere that trajectories cross exactly once. \prettyref{lem:ps} exhibits a special scalar product adapted to this problem. We consider two stable matrices $A$, $B$ and their associated linear systems. We let $S_A$ and $S_B$ be the unit spheres associated to the special products $\ps{}{}_A$, $\ps{}{}_B$ of the matrices $A$ and $B$. We show that trajectories cross them exactly once by showing that the chosen norm is strictly monotone along solutions:
$$\ddt \|\xx\|^2_A
= \ddt \ps{\xx}{\xx}_A
= 2\ps{\xx}{\dotxx}_A
= 2\ps{\xx}{A\xx}_A
\leq -2a\|\xx\|^2_A$$
for a scalar $a>0$. Even before applying Grönwall to conclude exponential convergence to zero, we see that $\|\xx\|_A$ is strictly decreasing along nontrivial solutions. Thus $\xx$ passes through $S_A$ exactly once. The same holds for $B$ by symmetry of the situation. We can now define a homeomorphism
\begin{IEEEeqnarray*}{rCl}
h_0:S_A &\to& S_B \\
\xx &\mapsto& \xx/\|\xx\|_B
\end{IEEEeqnarray*}
It is well defined because the image is obviously in $S_B$: $\xx$ is never $0$ on $S_A$, so the denominator never vanishes. Continuity comes from the continuity of the norm. By symmetry of the situation, its inverse should be $g_0:\xx \mapsto \xx/\|\xx\|_A$. Indeed, for $\xx\in S_B$,
$$ (h_0\circ g_0)(\xx) = h_0(\frac{\xx}{\|\xx\|_A})
= \frac{\frac{\xx}{\|\xx\|_A}}{\|\frac{\xx}{\|\xx\|_A}\|_B}
= \frac{\xx}{\|\xx\|_B} =\xx$$
so $h_0\circ g_0 = \text{Id}_{S_B}$ and, by symmetry of the situation, $g_0\circ h_0 = \text{Id}_{S_A}$. The function $h_0$ is a continuous bijection with continuous inverse, hence a homeomorphism.
We now construct the final $h$. We take an initial point, find the intersection of its trajectory with the sphere, map it to the other sphere, and then flow backwards along the new trajectory for as long as we flowed forwards to reach the first sphere. For this we need to know, for any point $\xx\neq0$, the time $\tau(\xx)$ it takes to reach the sphere, \ie such that $e^{A\tau(\xx)}\xx\in S_A$. The value $\tau(\xx)$ is uniquely defined by the choice of the sphere. We set $h$ by
$$h(\xx)
= \begin{cases}
0 \text{ if } \xx=0 \\
e^{-B\tau(\xx)}h_0(e^{A\tau(\xx)}\xx) \text{ else}
\end{cases}
$$
We first prove that it indeed satisfies the relation $h(e^{At}\xx)=e^{Bt}h(\xx)$. We note that $\tau(e^{At}\xx)= \tau(\xx)-t$ because $$\phi(e^{At}\xx,\tau(\xx)-t)
= e^{A(\tau(\xx)-t)}e^{At}\xx
= e^{A\tau(\xx)}\xx \in S_A.$$ When $\xx$ is not the origin, we compute
\begin{IEEEeqnarray*}{rCl}
h(e^{At}\xx)
&=& e^{-B\tau(e^{At}\xx)} h_0(e^{A\tau(e^{At}\xx)}e^{At}\xx) \\
&=& e^{-B(\tau(\xx)-t)} h_0(e^{A(\tau(\xx)-t)}e^{At}\xx)\\
&=& e^{Bt}e^{-B\tau(\xx)} h_0(e^{A\tau(\xx)}\xx) \\
&=& e^{Bt} h(\xx)
\end{IEEEeqnarray*}
Writing $h=h_{AB}$, we show that it has the inverse $g=h_{BA}$. Looking at the definition of $h(\xx)$, we see that
$$e^{B\tau(\xx)}h(\xx) = h_0(e^{A\tau(\xx)}\xx) \in S_B,$$
so the crossing time of $h(\xx)$ for the system $B$ is $\tau(h(\xx))=\tau(\xx)$. We compute, for all $\xx\neq0$,
\begin{IEEEeqnarray*}{rCl}
g\circ h(\xx)
&=& e^{-A\tau(h(\xx))}g_0(e^{B\tau(h(\xx))}h(\xx)) \\
&=& e^{-A\tau(\xx)}g_0(e^{B\tau(\xx)}h(\xx)) \\
&=& e^{-A\tau(\xx)}g_0(e^{B\tau(\xx)}e^{-B\tau(\xx)}h_0(e^{A\tau(\xx)}\xx))\\
&=& e^{-A\tau(\xx)}g_0(h_0(e^{A\tau(\xx)}\xx)) \\
&=& e^{-A\tau(\xx)}e^{A\tau(\xx)}\xx \\
&=& x
\end{IEEEeqnarray*}
and $g\circ h= \text{Id}$ away from the origin; since $h(0)=g(0)=0$, in fact $g\circ h= \text{Id}_{\R^n}$. By symmetry of the situation, $h\circ g= \text{Id}_{\R^n}$ too, and $h$ is a bijection with inverse $g$.
It remains to show the continuity of $h$; the argument for $g$ is the same. Away from the origin, everything is continuous provided $\tau$ is. We set the case of the origin aside, to be treated separately, and apply the implicit function theorem to the continuously differentiable map $(\xx,t)\mapsto\|e^{At}\xx\|^2_A-1$. The partial derivative with respect to $t$ at a point $(\xx_0,t_0)$ is
$$\partial_t\big(\|e^{At}\xx_0\|^2_A-1\big)\Big|_{t=t_0}
=2\ps{e^{At_0}\xx_0}{Ae^{At_0}\xx_0}_A
<-2a\|e^{At_0}\xx_0\|^2_A < 0,$$
hence invertible. We therefore obtain a continuous map $\tilde{\tau}$ in a neighbourhood of $\xx_0$ such that $\|e^{A\tilde{\tau}(\xx)}\xx\|_A-1=0$. By uniqueness of $\tau$ (and of $\tilde{\tau}$, actually), we know that in this neighbourhood $\tau=\tilde{\tau}$ is continuous. Since $\xx_0$ was chosen arbitrarily, $\tau$ is locally continuous, hence continuous. For $\xx\neq0$, $h(\xx)$ is thus a composition of continuous functions and is itself continuous.
For continuity at $\xx=0$, we take a sequence $(\xx_k)_k\to0$; then necessarily $\tau(\xx_k)\to-\infty$. If not, since $\tau$ is negative inside the unit ball, the sequence $\tau(\xx_k)$ would have a bounded subsequence $\tau(\xx_{k_j})$, which we may assume converges to a time $\tau^*$; by continuity of the flow,
$$S_A \ni \phi(\xx_{k_j},\tau(\xx_{k_j}))
\xrightarrow[j\to\infty]{} \phi(0,\tau^*)=0,$$ contradicting the fact that the sphere is closed and does not contain the origin. We use $\tau(\xx_k)\to-\infty$ to get
$$\|h(\xx_k)\|_B
= \|e^{-B\tau(\xx_k)}h_0(e^{A\tau(\xx_k)}\xx_k)\|_B
\leq \|e^{-B\tau(\xx_k)}\|_B\to 0$$
because $B$ is stable. Finally, $h$ is continuous at the origin, hence everywhere, and it is a homeomorphism.
\end{proof}
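To make the construction concrete, here is a numerical sketch (Python). The matrices are chosen so that their symmetric parts are negative definite, which lets us use the Euclidean scalar product as a stand-in for the adapted products $\ps{.}{.}_A$, $\ps{.}{.}_B$ (a simplification of ours); the crossing time $\tau$ is found by bisection, and the conjugacy relation is checked numerically.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from scipy.optimize import brentq

# Symmetric parts negative definite: the Euclidean norm is then
# strictly decreasing along solutions, so both spheres coincide
# with the unit sphere and h_0 reduces to the identity.
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[-4.0, -2.0], [3.0, -11.0]])

def tau(M, x):
    """Time t with ||e^{Mt} x|| = 1, by bisection on a monotone map."""
    f = lambda t: np.log(np.linalg.norm(expm(t * M) @ x))
    lo, hi = -1.0, 1.0
    while f(lo) < 0:       # flow backwards until outside the sphere
        lo *= 2
    while f(hi) > 0:       # flow forwards until inside the sphere
        hi *= 2
    return brentq(f, lo, hi)

def h(x):
    """Reach S_A, apply h_0, then flow back for the same time under B."""
    if np.allclose(x, 0):
        return np.zeros_like(x)
    t = tau(A, x)
    y = expm(t * A) @ x            # point on the sphere
    return expm(-t * B) @ y        # h_0 is the identity here

x = np.array([0.3, -1.7])
for t in (0.0, 0.5, 2.0):          # conjugacy: h(e^{At} x) = e^{Bt} h(x)
    assert np.allclose(h(expm(t * A) @ x), expm(t * B) @ h(x), atol=1e-8)
\end{verbatim}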
This theorem gives us a complete correspondence between two flows, and explains how decisive the eigenvalues are for the structure of the stability. More than that, it shows that the stability itself, a local property, determines the whole flow. This exhibits the importance of studying stability from a topological point of view, which is, in a sense, the qualitative way of looking at the behaviour of solutions.
We can actually show a similar result for the unstable and saddle cases, built on the last theorem. We present a partial proof to see what is going on; see \cite{Rob}, page 113, for a detailed proof.
\begin{corollaire}
Two hyperbolic linear systems with stable eigenspaces of the same dimension are topologically conjugate.
\end{corollaire}
\begin{proof}
Let $A$ and $B$ be unstable matrices with induced linear flows $\phi$ and $\psi$. The linear systems of the matrices $-A$ and $-B$ are stable, so they are topologically conjugate via a homeomorphism $h$. Now simply $$h\circ\phi_t(\xx)
= h(e^{At}\xx)
= h(e^{-A(-t)}\xx)
= e^{-B(-t)}h(\xx)
= e^{Bt}h(\xx)
= \psi_t\circ h(\xx)
,$$
so the same $h$ works for the unstable case. We do not prove the saddle case here, but we sketch the proof:
when the two matrices are saddles with stable eigenspaces of the same dimension $k$, we can decompose the space as
$$\R^n
= \E^s(A)\oplus\E^u(A)
=\E^s(B)\oplus\E^u(B).$$
Finite-dimensional spaces of the same dimension are homeomorphic via a bijective linear map, so we can apply the theorem in each eigenspace independently, as if it were $\R^d$ with $d$ the dimension of the stable or unstable eigenspace. We would get two homeomorphisms $h_s:\E^s(A)\to\E^s(B)$ and $h_u:\E^u(A)\to\E^u(B)$. To define an $h$ on the whole space we can use the projections $\pi_s$ and $\pi_u$ of the decomposition $\R^n = \E^s(A)\oplus\E^u(A)$, and we would set $h = h_s\circ\pi_s + h_u\circ\pi_u$, which gives a homeomorphism.
\end{proof}
We have studied the stability of linear dynamical systems from multiple points of view. One may think that linear systems are a very specific and less intricate situation. This is indeed true in full generality, but we present here a first link to the nonlinear case, whose stability turns out to be determined, in some way, by an associated linear system. This motivates the study of linear systems and their possible applications to nonlinear dynamics.
\begin{theoreme}[Linearization, Hartman–Grobman] \label{th:linearisation}
Consider a system $\dotxx=\FF(\xx)$ with $\FF$ differentiable, and let $\xx^*$ be a fixed point. If the matrix $D\FF(\xx^*)$ of the linearised system $\dotxx=D\FF(\xx^*)\xx$ is stable, then $\xx^*$ is asymptotically stable for the nonlinear system.
\end{theoreme}
\begin{proof}
If the fixed point is not the origin, we set $\yy=\xx-\xx^*$, which satisfies $\dotyy=\dotxx=\FF(\xx)=\FF(\yy+\xx^*)$. Its flow is just a translation of the flow of the initial problem, is therefore isometric to it, and has the corresponding fixed point at the origin. So assume the fixed point is the origin. Since $\FF$ is differentiable, we can write its first-order Taylor expansion around the origin:
$$\FF(\xx)= \FF(0)+D\FF(0)\xx + R(\xx) =: A\xx + R(\xx),$$
where $\FF(0)=0$ and the remainder satisfies $\|R(\xx)\|_a\leq h(\xx)\|\xx\|_a$
for some scalar function $h(\xx)\to0$ as $\xx\to0$. We now use the special scalar product $\ps{.}{.}_a$ of \prettyref{lem:ps}, associated to the stable matrix $A$, and the norm induced by it, which is equivalent to the Euclidean norm. We get
\begin{IEEEeqnarray*}{rCl}
\ddt \ps{\xx}{\xx}_a
&=& 2\ps{\xx}{\dotxx}_a
= 2\ps{\xx}{\FF(\xx)}_a
= 2\ps{\xx}{A\xx + R(\xx)}_a
= 2\ps{\xx}{A\xx}_a + 2\ps{\xx}{R(\xx)}_a \\
&\leq& -2a\|\xx\|_a^2 + 2h(\xx)\|\xx\|^2_a\\
&=& 2(h(\xx)- a)\|\xx\|_a^2.
\end{IEEEeqnarray*}
We can take $\|\xx\|_a$ small enough that $h(\xx)-a$ is negative: for any $0<\e<2a$, there is a $\d(\e)>0$ such that $\|\xx\|_a<\d(\e)$ implies $2(h(\xx)-a)<-\e$. Then $\ddt\|\xx\|^2_a<-\e\|\xx\|^2_a$, and Grönwall gives
$$\|\xx(t)\|^2_a < \|\xx(0)\|_a^2e^{-\e t}\to0$$
exponentially when $t\to\infty$. The origin is attracting, since this happens in the whole neighbourhood $B_{\|.\|_a}(0,\d(\e))$. Note that by taking the neighbourhood small enough, we can achieve exponential convergence of $\|\xx\|^2_a$ at any rate $\e<2a$. L-stability comes from the fact that the convergence is monotone in the norm $\|.\|_a$: trajectories never leave the chosen neighbourhood, which is a ball for the metric induced by $\|.\|_a$. This gives asymptotic stability.
\end{proof}
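As a concrete illustration of the theorem, the sketch below (Python; the nonlinear vector field is a hypothetical example of ours) linearises at the origin, checks that the Jacobian is stable, and observes the predicted exponential decay of nearby trajectories.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# A hypothetical nonlinear field with a fixed point at the origin.
def F(x):
    return np.array([-x[0] + x[1] ** 2,
                     -2.0 * x[1] + x[0] * x[1]])

# Jacobian DF(0), read off from the linear terms.
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
assert np.all(np.linalg.eigvals(A).real < 0)   # linearisation is stable

# Trajectories starting near 0 decay roughly like e^{-t}
# (the slowest eigenvalue of the linearisation).
sol = solve_ivp(lambda t, x: F(x), (0.0, 10.0), [0.1, -0.1],
                dense_output=True)
for t in (2.0, 5.0, 10.0):
    print(t, np.linalg.norm(sol.sol(t)))       # exponential decay
\end{verbatim}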
The idea behind this proof is that we defined a quantity, $\|\xx\|_a$, and showed that it always decreases along trajectories, like a potential energy. This kind of function is called a Lyapunov function; we will define it in the next chapter and see that it has nice general applications.