Issues
Currently, algebras over 6 dimensions are very slow. This is because this module was written for pedagogical purposes. However, because the syntax for this module is so attractive, we plan to fix the performance problems in the future.
Due to Python's order of operations, the bitwise operators `^`, `<<`, and `|` are evaluated after the normal arithmetic operators `+`, `-`, `*`, and `/`, which do not follow the precedence expected in GA:

```
# written          meaning              possibly intended
1^e0 + 2^e1        == 1^(e0+2)^e1      != (1^e0) + (2^e1)
e2 + e1|e2         == (e2+e1)|e2       != e2 + (e1|e2)
```

This can also cause confusion within the bitwise operators:

```
# written          meaning              possibly intended
e1 << e2 ^ e1      == (e1 << e2) ^ e1  != e1 << (e2 ^ e1)
e1 ^ e2 | e1       == (e1 ^ e2) | e1   != e1 ^ (e2 | e1)
```

Since `|` is the inner product and the inner product with a scalar vanishes by definition, an expression like

```
(1|e0) + (2|e1)
```

is null. Use the outer product or the full geometric product to multiply scalars with `MultiVector`s. This can cause problems if one has code that mixes Python numbers and MultiVectors. If the code multiplies two values that can each be either type without checking, one can run into problems, as `1 | 2` has a very different result from the same multiplication with scalar MultiVectors.

Taking the inverse of a `MultiVector` will use a method proposed by Christian Perwass that involves the solution of a matrix equation. A description of that method follows. Representing multivectors as \(2^\text{dims}\)-vectors (in the matrix sense), we can carry out the geometric product with a multiplication table. In pseudo-tensorish language (using summation notation):

\[m_i g_{ijk} n_k = v_j\]

Suppose the \(m_i\) are known (\(M\) is the vector we are taking the inverse of), the \(g_{ijk}\) have been computed for this algebra, and \(v_j = 1\) if the \(j\)'th element is the scalar element and 0 otherwise. We can then compute the dot product \(m_i g_{ijk}\), which yields a rank-2 matrix, and use well-established computational linear algebra techniques to solve this matrix equation for \(n_k\). The `laInv` method does precisely that.

The usual analytic method for computing inverses (\(M^{-1} = \tilde M/(M \tilde M)\) iff \(M\tilde M = {|M|}^2\)) fails for those multivectors where `M*~M` is not a scalar. It is only used if the `inv` method is manually set to point to `normalInv`.

My testing suggests that `laInv` works. In the cases where `normalInv` works, `laInv` returns the same result (within `_eps`). In all cases, `M * M.laInv() == 1.0` (within `_eps`). Use whichever you feel comfortable with.

Of course, a new issue arises with this method: the inverses found are sometimes dependent on the order of multiplication. That is:
```
M.laInv() * M == 1.0
M * M.laInv() != 1.0
```

Thus, there are two other methods defined, `leftInv` and `rightInv`, which point to `leftLaInv` and `rightLaInv`. The method `inv` points to `rightInv`. Should the user choose, `leftInv` and `rightInv` will both point to `normalInv`, which yields a left- and right-inverse that are the same should either exist (the proof is fairly simple).

The basis vectors of any algebra will be orthonormal unless you supply your own multiplication tables (which you are free to do after the `Layout` constructor is called). A derived class could be made to calculate these tables for you (and include methods for generating reciprocal bases and the like).

No care is taken to preserve the dtype of the arrays. The purpose of this module is pedagogical. If your application requires so many multivectors that storage becomes important, the class structure here is unsuitable for you anyway. Instead, use the algorithms from this module and implement application-specific data structures.
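As a sketch of the matrix-equation inverse described earlier in this section (not the module's own implementation), the following builds the \(g_{ijk}\) tensor for a small hand-written multiplication table, Cl(2) with basis \([1, e_1, e_2, e_{12}]\). The names `table`, `gmt`, and `la_inv` are invented for this illustration:

```python
import numpy as np

# Hand-written geometric product table for Cl(2), basis [1, e1, e2, e12].
# table[(i, k)] = (j, sign) means e_i * e_k = sign * e_j.
table = {
    (0, 0): (0, 1),  (0, 1): (1, 1),  (0, 2): (2, 1),  (0, 3): (3, 1),
    (1, 0): (1, 1),  (1, 1): (0, 1),  (1, 2): (3, 1),  (1, 3): (2, 1),
    (2, 0): (2, 1),  (2, 1): (3, -1), (2, 2): (0, 1),  (2, 3): (1, -1),
    (3, 0): (3, 1),  (3, 1): (2, -1), (3, 2): (1, 1),  (3, 3): (0, -1),
}

# g[i, j, k] is the coefficient of basis element j in e_i * e_k.
g = np.zeros((4, 4, 4))
for (i, k), (j, sign) in table.items():
    g[i, j, k] = sign

def gmt(m, n):
    """Geometric product of coefficient vectors: (m n)_j = m_i g_ijk n_k."""
    return np.einsum('i,ijk,k->j', m, g, n)

def la_inv(m):
    """Invert m by solving m_i g_ijk n_k = v_j for n, where v is the
    coefficient vector of the scalar 1."""
    A = np.einsum('i,ijk->jk', m, g)    # A[j, k] = m_i g[i, j, k]
    v = np.array([1.0, 0.0, 0.0, 0.0])  # scalar element is index 0
    return np.linalg.solve(A, v)

m = np.array([1.0, 2.0, 0.0, 0.0])      # M = 1 + 2*e1
n = la_inv(m)                           # n = [-1/3, 2/3, 0, 0]
```

Here `gmt(m, la_inv(m))` recovers the scalar 1, matching the analytic inverse \((1 - 2e_1)/(1 - 4)\) for this example.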
Conversely, explicit typecasting is rare. `MultiVector`s will have integer coefficients if you instantiate them that way. Dividing them by Python integers will have the same consequences as normal integer division. Public outcry will convince me to add the explicit casts if this becomes a problem.
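The precedence pitfalls described at the top of this section can be checked with plain Python integers; the grammar applies the same grouping to the overloaded operators on MultiVectors:

```python
# Bitwise operators bind more loosely than arithmetic ones,
# so + is evaluated before ^:
a = 2 + 3 ^ 4
assert a == (2 + 3) ^ 4 == 1    # not 2 + (3 ^ 4), which is 9

# Shifts bind more tightly than ^ and |:
b = 1 << 2 ^ 1
assert b == (1 << 2) ^ 1 == 5   # not 1 << (2 ^ 1), which is 8

# On plain ints, | is bitwise-or; parenthesize before comparing,
# since == binds more loosely than | as well:
assert (1 | 2) == 3
```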
Acknowledgements
Konrad Hinsen fixed a few bugs in the conversion to numpy and added some unit tests.
ChangeLog

Changes in 1.1.0

- Restores `layout.gmt`, `Layout.omt`, `Layout.imt`, and `Layout.lcmt`. A few releases ago these existed but were dense; for memory reasons they were then removed entirely. They have now been reinstated as `sparse.COO` matrix objects, which behave much the same as the original dense arrays.
- `MultiVector`s preserve their data type in addition, subtraction, and products. This means that integers remain integers until combined with floats. Note that this means integer overflow is possible in principle, so working with floats is still recommended. This also adds support for floating point types of other precision, such as `np.float32`.
- `setup.py` is now configured such that `pip2 install clifford` will not attempt to download this version, since it does not work at all on python 2.
- Documentation now includes examples of `pyganja` visualizations.
Compatibility notes

Bugs fixed

- `mv[(i, j)]` would sometimes fail if the indices were not in canonical order.
- `mv == None` and `layout == None` would crash rather than return `False`.
- `blade.isVersor()` would return `False`.
- `layout.blades_of_grade(0)` would not return the list it claimed to return.
Internal changes

- Switch to `pytest` for testing.
- Enable code coverage.
- Split into smaller files.
- Remove python 2 compatibility code, which already no longer worked.
Changes 0.6-0.7

- Added a real license.
- Convert to NumPy instead of Numeric.
Changes 0.5-0.6

- `join()` and `meet()` actually work now, but have numerical accuracy problems
- added `clean()` to `MultiVector`
- added `leftInv()` and `rightInv()` to `MultiVector`
- moved `pseudoScalar()` and `invPS()` to `MultiVector` (so we can derive new classes from `MultiVector`)
- changed all of the instances of creating a new MultiVector to create an instance of `self.__class__` for proper inheritance
- fixed bug in `laInv()`
- fixed the massive confusion about how `dot()` works
- added left-contraction
- fixed embarrassing bug in gmt generation
- added `normal()` and `anticommutator()` methods
- fixed dumb bug in `elements()` that limited it to 4 dimensions
Happy hacking!
Robert Kern