- Define `const RealComplex{T<:Real} = Union{T,Complex{T}}` and use better
  names for `Reals`, `Floats` and `Complexes`.
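  A minimal sketch of the alias and the kind of signature it enables
  (`sumabs2` is only an illustrative function, not part of the package):

  ```julia
  # A real type T or its complex counterpart.
  const RealComplex{T<:Real} = Union{T,Complex{T}}

  # One signature covering both Float64 and ComplexF64 arrays.
  sumabs2(x::AbstractArray{<:RealComplex{Float64}}) = sum(abs2, x)
  ```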
- With `using LinearAlgebra`, `norm(z,1)` is defined as the sum of the
  absolute values of the elements of the complex-valued array `z` while
  `norm(z,Inf)` is defined as the maximum absolute value of its elements.
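  For reference, the stdlib behavior this refers to:

  ```julia
  using LinearAlgebra
  z = [3 + 4im, 1 + 0im]
  norm(z, 1)    # 6.0 = |3+4im| + |1+0im| : sum of moduli
  norm(z, Inf)  # 5.0 : maximum modulus
  ```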
- Fix documentation about the type argument for `vnorm2(x)`, etc.
- `promote_multiplier` should have 2 different behaviors: to allow for using
  BLAS routines, all multipliers must be converted to complex values when the
  array arguments are complex-valued. See the sketch below.
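  A hedged sketch of the two behaviors (`promote_multiplier_for_blas` is a
  hypothetical name for the BLAS-oriented variant):

  ```julia
  # Generic Julia code: a real multiplier may stay real even when the
  # arrays are complex-valued.
  promote_multiplier(α::Real, ::Type{Complex{T}}) where {T<:AbstractFloat} =
      convert(T, α)

  # BLAS calls: the multiplier must have the BLAS element type, hence be
  # complex when the arrays are complex-valued.
  promote_multiplier_for_blas(α::Real, ::Type{Complex{T}}) where {T<:AbstractFloat} =
      convert(Complex{T}, α)
  ```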
- Remove file `test/common.jl`.
- Rationalize exceptions and error messages.
- Separate simplifications (like `inv(A)*A -> Id`) and optimizations (like
  multiplying two diagonal operators, which yields a single diagonal
  operator), perhaps via methods `simplify(X)` and `optimize(X)` with `X` a
  construction of mappings. A sketch follows.
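  A possible shape for this split (default no-op methods; `Mapping` is the
  package's abstract mapping type):

  ```julia
  # Exact algebraic identities, e.g. inv(A)*A -> Id.
  simplify(X::Mapping) = X   # default: nothing to simplify

  # Rewrites that only improve efficiency,
  # e.g. Diag(u)*Diag(v) -> Diag(u .* v).
  optimize(X::Mapping) = X   # default: nothing to optimize
  ```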
- Deprecate `is_same_mapping` in favor of `are_same_mappings` and
  `is_same_mutable_object` in favor of `are_same_objects` which is more
  general.

  ```julia
  @deprecate is_same_mapping are_same_mappings

  are_same_objects(::Any, ::Any) = false # always false if types are different
  are_same_objects(a::T, b::T) where {T} =
      (T.mutable ? pointer_from_objref(a) === pointer_from_objref(b) : a === b)
  ```

  The above implementation does exactly what `===` does, so it is not needed.
  Discuss extending `are_same_mappings` for any derived mapping type. Method
  `are_same_mappings` should default to `===` but may be extended for special
  cases like FFT operators (see the sketch below). The documentation should
  explain when `are_same_mappings` has to be extended.
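  A sketch of the default and of a special-case extension (`MyFFTOperator`
  and the `input_size` accessor are illustrative, not the actual API):

  ```julia
  # Default: mappings are the same iff they are the same object.
  are_same_mappings(A::Mapping, B::Mapping) = A === B

  # Hypothetical extension: two FFT operators planned for same-size arrays
  # implement the same mapping even if they are distinct objects.
  are_same_mappings(A::MyFFTOperator, B::MyFFTOperator) =
      input_size(A) == input_size(B)
  ```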
- Optimize composition of cropping and zero-padding operators. The adjoint of
  a cropping or zero-padding operator is the pseudo-inverse of the operator,
  hence extend the `pinv` method (sketch below). If input and output
  dimensions are the same (and offsets are all zeros), a cropping/zero-padding
  operator is the identity.
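  Sketch, with `CroppingOperator`/`ZeroPaddingOperator` as stand-in type
  names: since `C*C' == Id` for a cropping operator `C`, the adjoint is the
  pseudo-inverse and vice versa.

  ```julia
  import LinearAlgebra: pinv

  # Adjoint of cropping is zero-padding and each is the other's pseudo-inverse.
  pinv(A::CroppingOperator) = A'
  pinv(A::ZeroPaddingOperator) = A'
  ```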
- `vscale!` can call `rmul!`?
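  One possible reading of this item for the dense-array case (`rmul!` is from
  `LinearAlgebra`; the `vscale!` signature shown is an assumption):

  ```julia
  using LinearAlgebra
  vscale!(x::AbstractArray, α::Number) = rmul!(x, α)  # x .*= α, in place
  ```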
- Implement preconditioned conjugate gradient (see the sketch below).
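  A minimal self-contained sketch of the algorithm (the names `pcg` and
  `precond` are illustrative, not the package's API):

  ```julia
  using LinearAlgebra

  # Solve A*x = b for symmetric positive definite A by the preconditioned
  # conjugate gradient method; `precond(r)` should approximate inv(M)*r
  # for some preconditioner M.
  function pcg(A, b; precond = identity, x0 = zero(b),
               maxiter = length(b), rtol = 1e-8)
      x = copy(x0)
      r = b - A*x                  # residual
      z = precond(r)               # preconditioned residual
      p = copy(z)                  # search direction
      rz = dot(r, z)
      tol = rtol*norm(r)
      for _ in 1:maxiter
          q = A*p
          α = rz/dot(p, q)         # optimal step along p
          x .+= α .* p
          r .-= α .* q
          norm(r) ≤ tol && break
          z = precond(r)
          rz_next = dot(r, z)
          β = rz_next/rz           # standard PCG direction update
          rz = rz_next
          p .= z .+ β .* p
      end
      return x
  end
  ```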
- Simplify left/right multiplication of a sparse/diagonal operator by a
  diagonal operator. Same thing for the sparse interpolator. Take care of
  scaling by a multiplier (otherwise this makes little sense).
- Automatically simplify composition of diagonal operators (sketch below).
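  E.g., with a stand-in `Diag` type (not the package's actual definition):

  ```julia
  struct Diag{T}          # stand-in diagonal operator wrapping its coefficients
      d::Vector{T}
  end
  (A::Diag)(x::AbstractVector) = A.d .* x       # application
  Base.:*(A::Diag, B::Diag) = Diag(A.d .* B.d)  # Diag(u)*Diag(v) -> Diag(u .* v)
  ```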
- Do not make `A*x` equivalent to `A(x)` for non-linear mappings.
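  A possible dispatch sketch (`LinearMapping`/`NonLinearMapping` stand for
  the relevant abstract types, whatever they are named):

  ```julia
  Base.:*(A::LinearMapping, x) = A(x)   # A*x only as an alias of A(x)
  Base.:*(A::NonLinearMapping, x) =
      error("A*x is reserved for linear mappings; call A(x) instead")
  ```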
- Provide means to convert a sparse operator to a regular array or to a
  sparse matrix and reciprocally (sketch below). Use BLAS/LAPACK routines for
  sparse operators?
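  E.g., assuming a COO-style sparse operator with fields `I`, `J`, `V` and
  row/column counts (field names are hypothetical):

  ```julia
  using SparseArrays

  to_sparse_matrix(A) = sparse(A.I, A.J, A.V, A.nrows, A.ncols)
  to_dense_array(A) = Array(to_sparse_matrix(A))
  ```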
- Write an implementation of the L-BFGS operator and of the SR1 operator and
  perhaps of other low-rank operators.
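  The core of such an operator could be the classical two-loop recursion;
  below is a plain-function sketch (wrapping it as a mapping is the actual
  task):

  ```julia
  using LinearAlgebra

  # Apply the L-BFGS inverse Hessian approximation, built from the m most
  # recent update pairs (s[i], y[i]), to a vector g (two-loop recursion).
  function lbfgs_apply(s::Vector{T}, y::Vector{T}, g::T;
                       γ::Real = 1) where {T<:AbstractVector}
      m = length(s)
      ρ = [1/dot(y[i], s[i]) for i in 1:m]
      α = zeros(m)
      q = copy(g)
      for i in m:-1:1                 # first loop: newest to oldest pair
          α[i] = ρ[i]*dot(s[i], q)
          q .-= α[i] .* y[i]
      end
      q .*= γ                         # scaling of the initial inverse Hessian
      for i in 1:m                    # second loop: oldest to newest pair
          β = ρ[i]*dot(y[i], q)
          q .+= (α[i] - β) .* s[i]
      end
      return q                        # ≈ H⁻¹*g
  end
  ```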
- Use BLAS subroutines more extensively. Fix usage of BLAS `dot` and `axpy`
  routines for dense arrays (use flat arrays).
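  E.g., BLAS level-1 routines operate on (flat) vectors, so dense
  multi-dimensional arrays can be passed via `vec`:

  ```julia
  using LinearAlgebra: BLAS
  x = randn(4, 4); y = randn(4, 4)
  s = BLAS.dot(vec(x), vec(y))      # Σᵢ xᵢ*yᵢ over the flattened arrays
  BLAS.axpy!(2.0, vec(x), vec(y))   # y .+= 2x, in place (vec shares memory)
  ```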
- `SelfAdjoint` should not be a trait? Perhaps better to extend
  `adjoint(A::T) = A` when `T` is self-adjoint.
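  E.g., with `MyHessianApprox` as a hypothetical self-adjoint operator type:

  ```julia
  Base.adjoint(A::MyHessianApprox) = A   # self-adjoint: A' is A itself
  ```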
- Provide simplification rules for sums and compositions of diagonal
  operators (which are also easy to invert).
- Add rules so that the composition of two scaled linear operators, say
  `(αA)⋅(βB)`, automatically yields `((αβ)A)⋅B` when `A` is a linear mapping.
  This cannot easily propagate to have, e.g., `(αA)⋅(βB)⋅(γC)` automatically
  yield `((αβγ)A)⋅B⋅C` when `A` and `B` are linear mappings. Perhaps this
  could be solved with some `simplify(...)` method to be applied to
  constructed mappings. In fact, the solution is to have `(αA)⋅(βB)`
  automatically represented (by the magic of the constructors chain) as a
  scaled composition, that is `(αβ)(A⋅B)` (pull the scale factor outside the
  expression). Indeed, `(αA)⋅(βB)⋅(γC)` will then automatically become
  `(αβ)(A⋅B)⋅(γC)` and then `(αβγ)(A⋅B⋅C)` with no additional effort. The
  desired rules are listed below; a constructor-level sketch follows the
  list.
  - `α*A` => `A` if `α = 1`.
  - `α*A` => `O` if `α = 0` with `O` the null mapping which is represented as
    `0` times a mapping, here `A`. This is needed to know the result of
    applying the null mapping. In other words, there is no universal neutral
    element for the addition of mappings; whereas the identity `Id` is the
    universal neutral element for the composition of mappings.
  - `A*(β*B)` => `β*(A*B)` if `A` is a linear mapping.
  - `(α*A)*(β*B)` => `(α*β)*(A*B)` if `A` is a linear mapping.
  - As a consequence of the above rules, `(α*A)*(β*B)*(γ*C)` =>
    `(α*β*γ)*(A*B*C)` if `A` and `B` are linear mappings, and so on.
  - `α\A` => `(1/α)*A`.
  - `A/(β*B)` => `β\(A/B)` if `A` is a linear mapping.
  - `(α*A)/(β*B)` => `(α/β)*(A/B)` if `A` is a linear mapping.
  - `A\(β*B)` => `β*(A\B)` if `A` is a linear mapping.
  - `(α*A)\(β*B)` => `(β/α)*(A\B)` if `A` is a linear mapping.
  - `(α*Id)*A` => `α*A` where `Id` is the identity.
  - `A/A`, `A\A`, or `inv(A)*A` => `Id` for `A` invertible (this trait means
    that `A` can be safely assumed invertible; possibilities: `Invertible`,
    `NonInvertible` to trigger an error on attempt to invert, and
    `PossiblyInvertible` for mappings that may be invertible but not always
    and for which it is too costly to check. For instance, checking for a
    uniform scaling `(α*Id)` is trivial as it suffices to check whether `α`
    is non-zero).
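  A self-contained, constructor-level sketch of how pulling scale factors
  outside could work (`Op`, `Scaled` and `Compo` are stand-ins for the
  package's types; for brevity all operators are assumed linear):

  ```julia
  abstract type Op end                    # linear mappings only, for brevity
  struct Scaled <: Op; α::Number; A::Op; end
  struct Compo  <: Op; A::Op; B::Op; end

  scale(α::Number, A::Op) = α == 1 ? A : Scaled(α, A) # α = 0 would yield the null mapping
  scale(α::Number, A::Scaled) = scale(α*A.α, A.A)     # fold nested scalings

  compose(A::Op, B::Op) = Compo(A, B)
  compose(A::Scaled, B::Op) = scale(A.α, compose(A.A, B))
  compose(A::Op, B::Scaled) = scale(B.α, compose(A, B.A))   # needs A linear
  compose(A::Scaled, B::Scaled) = scale(A.α*B.α, compose(A.A, B.A))

  # compose(compose(scale(α, A), scale(β, B)), scale(γ, C)) then
  # automatically yields scale(α*β*γ, compose(compose(A, B), C)).
  ```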
- Concrete implementation of mappings on arrays is not consistent for
  complex-valued arrays.
- Decide that, unless forbidden, `inv` is always possible (there may be a
  clash when trying to apply it). Or decide the opposite.
- Optimize `FiniteDifferences` for other multipliers.
- Make a demo like:

  ```julia
  using LazyAlgebra
  psf = read_image("psf.dat")
  dat = read_image("data.dat")
  wgt = read_image("weights.dat")
  µ = 1e-3  # choose regularization level
  ....      # deal with sizes, zero-padding, or cropping etc.
  F = FFTOperator(dat)  # make a FFT operator to work with arrays similar to dat
  # Build instrumental model H (convolution by the PSF):
  H = F\Diag(F*ifftshift(psf))*F
  W = Diag(wgt)         # W is the precision matrix for independent noise
  D = Diff()            # D will be used for the regularization
  A = H'*W*H + µ*D'*D   # left hand-side matrix of the normal equations
  b = H'*W*dat          # right hand-side vector of the normal equations
  img = conjgrad(A, b)  # solve the normal equations by linear conjugate gradients
  save_image(img, "result.dat")
  ```
  Notes: (1) `D'*D` is automatically simplified into a `HalfHessian`
  construction whose application to a vector, say `x`, is faster than
  `D'*(D*x)`. (2) The evaluation of `H'*W*H` automatically uses the least
  temporary workspace(s).
- Replace `Coder` by using available meta-programming tools.