
Downloads:

49,247

Downloads of v3.3.4.20210818:

3,962

Last Update:

18 Aug 2021

Package Maintainer(s):

Software Author(s):

  • Benoît Jacob (founder)
  • Gaël Guennebaud (guru)

Tags:

eigen

Eigen

This is not the latest version of Eigen available.


3.3.4.20210818 | Updated: 18 Aug 2021

Eigen 3.3.4.20210818


All Checks are Passing

3 Passing Tests


Validation Testing Passed


Verification Testing Passed


Scan Testing Successful:

No detections found in any package files


Deployment Method: Individual Install, Upgrade, & Uninstall

To install Eigen, run the following command from the command line or from PowerShell:

> choco install eigen --version=3.3.4.20210818

To upgrade Eigen, run the following command from the command line or from PowerShell:

> choco upgrade eigen --version=3.3.4.20210818

To uninstall Eigen, run the following command from the command line or from PowerShell:

> choco uninstall eigen

Deployment Method:

NOTE

This applies to both open source and commercial editions of Chocolatey.

1. Enter Your Internal Repository Url

(this should look similar to https://community.chocolatey.org/api/v2/)


2. Setup Your Environment

1. Ensure you are set for organizational deployment

Please see the organizational deployment guide

2. Get the package into your environment

  • Open Source or Commercial:
    • Proxy Repository - Create a proxy nuget repository on Nexus, Artifactory Pro, or a proxy Chocolatey repository on ProGet. Point your upstream to https://community.chocolatey.org/api/v2/. Packages cache on first access automatically. Make sure your choco clients are using your proxy repository as a source and NOT the default community repository. See source command for more information.
    • You can also just download the package and push it to a repository.

3. Copy Your Script

choco upgrade eigen -y --source="'INTERNAL REPO URL'" --version="'3.3.4.20210818'" [other options]

See options you can pass to upgrade.

See best practices for scripting.

Add this to a PowerShell script or use a Batch script with tools and in places where you are calling directly to Chocolatey. If you are integrating, keep in mind enhanced exit codes.

If you do use a PowerShell script, use the following to ensure bad exit codes are shown as failures:


choco upgrade eigen -y --source="'INTERNAL REPO URL'" --version="'3.3.4.20210818'" 
$exitCode = $LASTEXITCODE

Write-Verbose "Exit code was $exitCode"
$validExitCodes = @(0, 1605, 1614, 1641, 3010)
if ($validExitCodes -contains $exitCode) {
  Exit 0
}

Exit $exitCode

- name: Install eigen
  win_chocolatey:
    name: eigen
    version: '3.3.4.20210818'
    source: INTERNAL REPO URL
    state: present

See docs at https://docs.ansible.com/ansible/latest/modules/win_chocolatey_module.html.


chocolatey_package 'eigen' do
  action  :install
  source  'INTERNAL REPO URL'
  version '3.3.4.20210818'
end

See docs at https://docs.chef.io/resource_chocolatey_package.html.


cChocoPackageInstaller eigen
{
    Name     = "eigen"
    Version  = "3.3.4.20210818"
    Source   = "INTERNAL REPO URL"
}

Requires cChoco DSC Resource. See docs at https://github.com/chocolatey/cChoco.


package { 'eigen':
  ensure   => '3.3.4.20210818',
  provider => 'chocolatey',
  source   => 'INTERNAL REPO URL',
}

Requires Puppet Chocolatey Provider module. See docs at https://forge.puppet.com/puppetlabs/chocolatey.


4. If applicable - Chocolatey configuration/installation

See infrastructure management matrix for Chocolatey configuration elements and examples.

Package Approved

This package was approved by moderator Windos on 01 Sep 2021.

Description

Eigen provided for use in Windows projects.

This package is inspired by https://github.com/nuclearsandwich/eigen-choco and https://github.com/ros2/choco-packages, with minor modifications and the addition of the unsupported Eigen files. It does not use the upstream CMake-native installer, which appears to rely on the make utility.

Eigen is Free Software. Starting from the 3.1.1 version, it is licensed under the MPL2, which is a simple weak copyleft license. Common questions about the MPL2 are answered in the official MPL2 FAQ.
Earlier versions were licensed under the LGPL3+.
Note that currently, a few features rely on third-party code licensed under the LGPL: SimplicialCholesky, AMD ordering, and constrained_cg. Such features can be explicitly disabled by compiling with the EIGEN_MPL2_ONLY preprocessor symbol defined. Furthermore, Eigen provides interface classes for various third-party libraries (usually recognizable by the <Eigen/*Support> header name). When using these, you must of course respect the license of the library they wrap.
Virtually any software may use Eigen. For example, closed-source software may use Eigen without having to disclose its own source code. Many proprietary and closed-source software projects are using Eigen right now, as well as many BSD-licensed projects.
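To illustrate the EIGEN_MPL2_ONLY switch mentioned above (a minimal sketch; it assumes the package's include directory is on your compiler's include path):

```cpp
// Define EIGEN_MPL2_ONLY before including any Eigen header (or pass
// -DEIGEN_MPL2_ONLY on the compiler command line).  If the translation
// unit then pulls in LGPL-licensed code such as SimplicialCholesky,
// AMD ordering, or constrained_cg, compilation fails with a static
// assertion instead of silently building non-MPL2 code.
#define EIGEN_MPL2_ONLY
#include <Eigen/Dense>
```

MPL2-licensed parts such as the dense Core module remain fully usable with the symbol defined.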


include\.gitkeep
 
include\Eigen\Cholesky
 
include\Eigen\CholmodSupport
 
include\Eigen\Core
 
include\Eigen\Dense
 
include\Eigen\Eigen
 
include\Eigen\Eigenvalues
 
include\Eigen\Geometry
 
include\Eigen\Householder
 
include\Eigen\IterativeLinearSolvers
 
include\Eigen\Jacobi
 
include\Eigen\LU
 
include\Eigen\MetisSupport
 
include\Eigen\OrderingMethods
 
include\Eigen\PardisoSupport
 
include\Eigen\PaStiXSupport
 
include\Eigen\QR
 
include\Eigen\QtAlignedMalloc
 
include\Eigen\Sparse
 
include\Eigen\SparseCholesky
 
include\Eigen\SparseCore
 
include\Eigen\SparseLU
 
include\Eigen\SparseQR
 
include\Eigen\SPQRSupport
 
include\Eigen\src\Cholesky\LDLT.h
 
include\Eigen\src\Cholesky\LLT.h
 
include\Eigen\src\Cholesky\LLT_LAPACKE.h
 
include\Eigen\src\CholmodSupport\CholmodSupport.h
 
include\Eigen\src\Core\arch\AltiVec\Complex.h
 
include\Eigen\src\Core\arch\AltiVec\MathFunctions.h
 
include\Eigen\src\Core\arch\AltiVec\PacketMath.h
 
include\Eigen\src\Core\arch\AVX\Complex.h
 
include\Eigen\src\Core\arch\AVX\MathFunctions.h
 
include\Eigen\src\Core\arch\AVX\PacketMath.h
 
include\Eigen\src\Core\arch\AVX\TypeCasting.h
 
include\Eigen\src\Core\arch\AVX512\MathFunctions.h
 
include\Eigen\src\Core\arch\AVX512\PacketMath.h
 
include\Eigen\src\Core\arch\CUDA\Complex.h
 
include\Eigen\src\Core\arch\CUDA\Half.h
 
include\Eigen\src\Core\arch\CUDA\MathFunctions.h
 
include\Eigen\src\Core\arch\CUDA\PacketMath.h
 
include\Eigen\src\Core\arch\CUDA\PacketMathHalf.h
 
include\Eigen\src\Core\arch\CUDA\TypeCasting.h
 
include\Eigen\src\Core\arch\Default\Settings.h
 
include\Eigen\src\Core\arch\NEON\Complex.h
 
include\Eigen\src\Core\arch\NEON\MathFunctions.h
 
include\Eigen\src\Core\arch\NEON\PacketMath.h
 
include\Eigen\src\Core\arch\SSE\Complex.h
 
include\Eigen\src\Core\arch\SSE\MathFunctions.h
 
include\Eigen\src\Core\arch\SSE\PacketMath.h
 
include\Eigen\src\Core\arch\SSE\TypeCasting.h
 
include\Eigen\src\Core\arch\ZVector\Complex.h
 
include\Eigen\src\Core\arch\ZVector\MathFunctions.h
 
include\Eigen\src\Core\arch\ZVector\PacketMath.h
 
include\Eigen\src\Core\Array.h
 
include\Eigen\src\Core\ArrayBase.h
 
include\Eigen\src\Core\ArrayWrapper.h
 
include\Eigen\src\Core\Assign.h
 
include\Eigen\src\Core\AssignEvaluator.h
 
include\Eigen\src\Core\Assign_MKL.h
 
include\Eigen\src\Core\BandMatrix.h
 
include\Eigen\src\Core\Block.h
 
include\Eigen\src\Core\BooleanRedux.h
 
include\Eigen\src\Core\CommaInitializer.h
 
include\Eigen\src\Core\ConditionEstimator.h
 
include\Eigen\src\Core\CoreEvaluators.h
 
include\Eigen\src\Core\CoreIterators.h
 
include\Eigen\src\Core\CwiseBinaryOp.h
 
include\Eigen\src\Core\CwiseNullaryOp.h
 
include\Eigen\src\Core\CwiseTernaryOp.h
 
include\Eigen\src\Core\CwiseUnaryOp.h
 
include\Eigen\src\Core\CwiseUnaryView.h
 
include\Eigen\src\Core\DenseBase.h
 
include\Eigen\src\Core\DenseCoeffsBase.h
 
include\Eigen\src\Core\DenseStorage.h
 
include\Eigen\src\Core\Diagonal.h
 
include\Eigen\src\Core\DiagonalMatrix.h
 
include\Eigen\src\Core\DiagonalProduct.h
 
include\Eigen\src\Core\Dot.h
 
include\Eigen\src\Core\EigenBase.h
 
include\Eigen\src\Core\ForceAlignedAccess.h
 
include\Eigen\src\Core\functors\AssignmentFunctors.h
 
include\Eigen\src\Core\functors\BinaryFunctors.h
 
include\Eigen\src\Core\functors\NullaryFunctors.h
 
include\Eigen\src\Core\functors\StlFunctors.h
 
include\Eigen\src\Core\functors\TernaryFunctors.h
 
include\Eigen\src\Core\functors\UnaryFunctors.h
 
include\Eigen\src\Core\Fuzzy.h
 
include\Eigen\src\Core\GeneralProduct.h
 
include\Eigen\src\Core\GenericPacketMath.h
 
include\Eigen\src\Core\GlobalFunctions.h
 
include\Eigen\src\Core\Inverse.h
 
include\Eigen\src\Core\IO.h
 
include\Eigen\src\Core\Map.h
 
include\Eigen\src\Core\MapBase.h
 
include\Eigen\src\Core\MathFunctions.h
 
include\Eigen\src\Core\MathFunctionsImpl.h
 
include\Eigen\src\Core\Matrix.h
 
include\Eigen\src\Core\MatrixBase.h
 
include\Eigen\src\Core\NestByValue.h
 
include\Eigen\src\Core\NoAlias.h
 
include\Eigen\src\Core\NumTraits.h
 
include\Eigen\src\Core\PermutationMatrix.h
 
include\Eigen\src\Core\PlainObjectBase.h
 
include\Eigen\src\Core\Product.h
 
include\Eigen\src\Core\ProductEvaluators.h
 
include\Eigen\src\Core\products\GeneralBlockPanelKernel.h
 
include\Eigen\src\Core\products\GeneralMatrixMatrix.h
 
include\Eigen\src\Core\products\GeneralMatrixMatrixTriangular.h
 
include\Eigen\src\Core\products\GeneralMatrixMatrixTriangular_BLAS.h
 
include\Eigen\src\Core\products\GeneralMatrixMatrix_BLAS.h
 
include\Eigen\src\Core\products\GeneralMatrixVector.h
 
include\Eigen\src\Core\products\GeneralMatrixVector_BLAS.h
 
include\Eigen\src\Core\products\Parallelizer.h
 
include\Eigen\src\Core\products\SelfadjointMatrixMatrix.h
 
include\Eigen\src\Core\products\SelfadjointMatrixMatrix_BLAS.h
 
include\Eigen\src\Core\products\SelfadjointMatrixVector.h
 
include\Eigen\src\Core\products\SelfadjointMatrixVector_BLAS.h
 
include\Eigen\src\Core\products\SelfadjointProduct.h
 
include\Eigen\src\Core\products\SelfadjointRank2Update.h
 
include\Eigen\src\Core\products\TriangularMatrixMatrix.h
 
include\Eigen\src\Core\products\TriangularMatrixMatrix_BLAS.h
 
include\Eigen\src\Core\products\TriangularMatrixVector.h
 
include\Eigen\src\Core\products\TriangularMatrixVector_BLAS.h
 
include\Eigen\src\Core\products\TriangularSolverMatrix.h
 
include\Eigen\src\Core\products\TriangularSolverMatrix_BLAS.h
 
include\Eigen\src\Core\products\TriangularSolverVector.h
 
include\Eigen\src\Core\Random.h
 
include\Eigen\src\Core\Redux.h
 
include\Eigen\src\Core\Ref.h
 
include\Eigen\src\Core\Replicate.h
 
include\Eigen\src\Core\ReturnByValue.h
 
include\Eigen\src\Core\Reverse.h
 
include\Eigen\src\Core\Select.h
 
include\Eigen\src\Core\SelfAdjointView.h
 
include\Eigen\src\Core\SelfCwiseBinaryOp.h
 
include\Eigen\src\Core\Solve.h
 
include\Eigen\src\Core\SolverBase.h
 
include\Eigen\src\Core\SolveTriangular.h
 
include\Eigen\src\Core\StableNorm.h
 
include\Eigen\src\Core\Stride.h
 
include\Eigen\src\Core\Swap.h
 
include\Eigen\src\Core\Transpose.h
 
include\Eigen\src\Core\Transpositions.h
 
include\Eigen\src\Core\TriangularMatrix.h
 
include\Eigen\src\Core\util\BlasUtil.h
 
include\Eigen\src\Core\util\Constants.h
 
include\Eigen\src\Core\util\DisableStupidWarnings.h
 
include\Eigen\src\Core\util\ForwardDeclarations.h
 
include\Eigen\src\Core\util\Macros.h
 
include\Eigen\src\Core\util\Memory.h
 
include\Eigen\src\Core\util\Meta.h
 
include\Eigen\src\Core\util\MKL_support.h
 
include\Eigen\src\Core\util\NonMPL2.h
 
include\Eigen\src\Core\util\ReenableStupidWarnings.h
 
include\Eigen\src\Core\util\StaticAssert.h
 
include\Eigen\src\Core\util\XprHelper.h
 
include\Eigen\src\Core\VectorBlock.h
 
include\Eigen\src\Core\VectorwiseOp.h
 
include\Eigen\src\Core\Visitor.h
 
include\Eigen\src\Eigenvalues\ComplexEigenSolver.h
 
include\Eigen\src\Eigenvalues\ComplexSchur.h
 
include\Eigen\src\Eigenvalues\ComplexSchur_LAPACKE.h
 
include\Eigen\src\Eigenvalues\EigenSolver.h
 
include\Eigen\src\Eigenvalues\GeneralizedEigenSolver.h
 
include\Eigen\src\Eigenvalues\GeneralizedSelfAdjointEigenSolver.h
 
include\Eigen\src\Eigenvalues\HessenbergDecomposition.h
 
include\Eigen\src\Eigenvalues\MatrixBaseEigenvalues.h
 
include\Eigen\src\Eigenvalues\RealQZ.h
 
include\Eigen\src\Eigenvalues\RealSchur.h
 
include\Eigen\src\Eigenvalues\RealSchur_LAPACKE.h
 
include\Eigen\src\Eigenvalues\SelfAdjointEigenSolver.h
 
include\Eigen\src\Eigenvalues\SelfAdjointEigenSolver_LAPACKE.h
 
include\Eigen\src\Eigenvalues\Tridiagonalization.h
 
include\Eigen\src\Geometry\AlignedBox.h
 
include\Eigen\src\Geometry\AngleAxis.h
 
include\Eigen\src\Geometry\arch\Geometry_SSE.h
 
include\Eigen\src\Geometry\EulerAngles.h
 
include\Eigen\src\Geometry\Homogeneous.h
 
include\Eigen\src\Geometry\Hyperplane.h
 
include\Eigen\src\Geometry\OrthoMethods.h
 
include\Eigen\src\Geometry\ParametrizedLine.h
 
include\Eigen\src\Geometry\Quaternion.h
 
include\Eigen\src\Geometry\Rotation2D.h
 
include\Eigen\src\Geometry\RotationBase.h
 
include\Eigen\src\Geometry\Scaling.h
 
include\Eigen\src\Geometry\Transform.h
 
include\Eigen\src\Geometry\Translation.h
 
include\Eigen\src\Geometry\Umeyama.h
 
include\Eigen\src\Householder\BlockHouseholder.h
 
include\Eigen\src\Householder\Householder.h
 
include\Eigen\src\Householder\HouseholderSequence.h
 
include\Eigen\src\IterativeLinearSolvers\BasicPreconditioners.h
 
include\Eigen\src\IterativeLinearSolvers\BiCGSTAB.h
 
include\Eigen\src\IterativeLinearSolvers\ConjugateGradient.h
 
include\Eigen\src\IterativeLinearSolvers\IncompleteCholesky.h
 
include\Eigen\src\IterativeLinearSolvers\IncompleteLUT.h
 
include\Eigen\src\IterativeLinearSolvers\IterativeSolverBase.h
 
include\Eigen\src\IterativeLinearSolvers\LeastSquareConjugateGradient.h
 
include\Eigen\src\IterativeLinearSolvers\SolveWithGuess.h
 
include\Eigen\src\Jacobi\Jacobi.h
 
include\Eigen\src\LU\arch\Inverse_SSE.h
 
include\Eigen\src\LU\Determinant.h
 
include\Eigen\src\LU\FullPivLU.h
 
include\Eigen\src\LU\InverseImpl.h
 
include\Eigen\src\LU\PartialPivLU.h
 
include\Eigen\src\LU\PartialPivLU_LAPACKE.h
 
include\Eigen\src\MetisSupport\MetisSupport.h
 
include\Eigen\src\misc\blas.h
 
include\Eigen\src\misc\Image.h
 
include\Eigen\src\misc\Kernel.h
 
include\Eigen\src\misc\lapack.h
 
include\Eigen\src\misc\lapacke.h
 
include\Eigen\src\misc\lapacke_mangling.h
 
include\Eigen\src\misc\RealSvd2x2.h
 
include\Eigen\src\OrderingMethods\Amd.h
 
include\Eigen\src\OrderingMethods\Eigen_Colamd.h
 
include\Eigen\src\OrderingMethods\Ordering.h
 
include\Eigen\src\PardisoSupport\PardisoSupport.h
 
include\Eigen\src\PaStiXSupport\PaStiXSupport.h
 
include\Eigen\src\plugins\ArrayCwiseBinaryOps.h
 
include\Eigen\src\plugins\ArrayCwiseUnaryOps.h
 
include\Eigen\src\plugins\BlockMethods.h
 
include\Eigen\src\plugins\CommonCwiseBinaryOps.h
 
include\Eigen\src\plugins\CommonCwiseUnaryOps.h
 
include\Eigen\src\plugins\MatrixCwiseBinaryOps.h
 
include\Eigen\src\plugins\MatrixCwiseUnaryOps.h
 
include\Eigen\src\QR\ColPivHouseholderQR.h
 
include\Eigen\src\QR\ColPivHouseholderQR_LAPACKE.h
 
include\Eigen\src\QR\CompleteOrthogonalDecomposition.h
 
include\Eigen\src\QR\FullPivHouseholderQR.h
 
include\Eigen\src\QR\HouseholderQR.h
 
include\Eigen\src\QR\HouseholderQR_LAPACKE.h
 
include\Eigen\src\SparseCholesky\SimplicialCholesky.h
 
include\Eigen\src\SparseCholesky\SimplicialCholesky_impl.h
 
include\Eigen\src\SparseCore\AmbiVector.h
 
include\Eigen\src\SparseCore\CompressedStorage.h
 
include\Eigen\src\SparseCore\ConservativeSparseSparseProduct.h
 
include\Eigen\src\SparseCore\MappedSparseMatrix.h
 
include\Eigen\src\SparseCore\SparseAssign.h
 
include\Eigen\src\SparseCore\SparseBlock.h
 
include\Eigen\src\SparseCore\SparseColEtree.h
 
include\Eigen\src\SparseCore\SparseCompressedBase.h
 
include\Eigen\src\SparseCore\SparseCwiseBinaryOp.h
 
include\Eigen\src\SparseCore\SparseCwiseUnaryOp.h
 
include\Eigen\src\SparseCore\SparseDenseProduct.h
 
include\Eigen\src\SparseCore\SparseDiagonalProduct.h
 
include\Eigen\src\SparseCore\SparseDot.h
 
include\Eigen\src\SparseCore\SparseFuzzy.h
 
include\Eigen\src\SparseCore\SparseMap.h
 
include\Eigen\src\SparseCore\SparseMatrix.h
 
include\Eigen\src\SparseCore\SparseMatrixBase.h
 
include\Eigen\src\SparseCore\SparsePermutation.h
 
include\Eigen\src\SparseCore\SparseProduct.h
 
include\Eigen\src\SparseCore\SparseRedux.h
 
include\Eigen\src\SparseCore\SparseRef.h
 
include\Eigen\src\SparseCore\SparseSelfAdjointView.h
 
include\Eigen\src\SparseCore\SparseSolverBase.h
 
include\Eigen\src\SparseCore\SparseSparseProductWithPruning.h
 
include\Eigen\src\SparseCore\SparseTranspose.h
 
include\Eigen\src\SparseCore\SparseTriangularView.h
 
include\Eigen\src\SparseCore\SparseUtil.h
 
include\Eigen\src\SparseCore\SparseVector.h
 
include\Eigen\src\SparseCore\SparseView.h
 
include\Eigen\src\SparseCore\TriangularSolver.h
 
include\Eigen\src\SparseLU\SparseLU.h
 
include\Eigen\src\SparseLU\SparseLUImpl.h
 
include\Eigen\src\SparseLU\SparseLU_column_bmod.h
 
include\Eigen\src\SparseLU\SparseLU_column_dfs.h
 
include\Eigen\src\SparseLU\SparseLU_copy_to_ucol.h
 
include\Eigen\src\SparseLU\SparseLU_gemm_kernel.h
 
include\Eigen\src\SparseLU\SparseLU_heap_relax_snode.h
 
include\Eigen\src\SparseLU\SparseLU_kernel_bmod.h
 
include\Eigen\src\SparseLU\SparseLU_Memory.h
 
include\Eigen\src\SparseLU\SparseLU_panel_bmod.h
 
include\Eigen\src\SparseLU\SparseLU_panel_dfs.h
 
include\Eigen\src\SparseLU\SparseLU_pivotL.h
 
include\Eigen\src\SparseLU\SparseLU_pruneL.h
 
include\Eigen\src\SparseLU\SparseLU_relax_snode.h
 
include\Eigen\src\SparseLU\SparseLU_Structs.h
 
include\Eigen\src\SparseLU\SparseLU_SupernodalMatrix.h
 
include\Eigen\src\SparseLU\SparseLU_Utils.h
 
include\Eigen\src\SparseQR\SparseQR.h
 
include\Eigen\src\SPQRSupport\SuiteSparseQRSupport.h
 
include\Eigen\src\StlSupport\details.h
 
include\Eigen\src\StlSupport\StdDeque.h
 
include\Eigen\src\StlSupport\StdList.h
 
include\Eigen\src\StlSupport\StdVector.h
 
include\Eigen\src\SuperLUSupport\SuperLUSupport.h
 
include\Eigen\src\SVD\BDCSVD.h
 
include\Eigen\src\SVD\JacobiSVD.h
 
include\Eigen\src\SVD\JacobiSVD_LAPACKE.h
 
include\Eigen\src\SVD\SVDBase.h
 
include\Eigen\src\SVD\UpperBidiagonalization.h
 
include\Eigen\src\UmfPackSupport\UmfPackSupport.h
 
include\Eigen\StdDeque
 
include\Eigen\StdList
 
include\Eigen\StdVector
 
include\Eigen\SuperLUSupport
 
include\Eigen\SVD
 
include\Eigen\UmfPackSupport
 
include\unsupported\Eigen\AdolcForward
 
include\unsupported\Eigen\AlignedVector3
 
include\unsupported\Eigen\ArpackSupport
 
include\unsupported\Eigen\AutoDiff
 
include\unsupported\Eigen\BVH
 
include\unsupported\Eigen\CXX11\src\Tensor\README.md
# Eigen Tensors

Tensors are multidimensional arrays of elements. Elements are typically scalars,
but more complex types such as strings are also supported.

[TOC]

## Tensor Classes

You can manipulate a tensor with one of the following classes.  They are all in
the namespace ```::Eigen```.


### Class Tensor<data_type, rank>

This is the class to use to create a tensor and allocate memory for it.  The
class is templatized with the tensor datatype, such as float or int, and the
tensor rank.  The rank is the number of dimensions, for example rank 2 is a
matrix.

Tensors of this class are resizable.  For example, if you assign a tensor of a
different size to a Tensor, that tensor is resized to match its new value.

#### Constructor Tensor<data_type, rank>(size0, size1, ...)

Constructor for a Tensor.  The constructor must be passed ```rank``` integers
indicating the sizes of the instance along each of the ```rank```
dimensions.

    // Create a tensor of rank 3 of sizes 2, 3, 4.  This tensor owns
    // memory to hold 24 floating point values (24 = 2 x 3 x 4).
    Tensor<float, 3> t_3d(2, 3, 4);

    // Resize t_3d by assigning a tensor of different sizes, but same rank.
    t_3d = Tensor<float, 3>(3, 4, 3);

#### Constructor Tensor<data_type, rank>(size_array)

Constructor where the sizes for the constructor are specified as an array of
values instead of an explicit list of parameters.  The array type to use is
```Eigen::array<Eigen::Index>```.  The array can be constructed automatically
from an initializer list.

    // Create a tensor of strings of rank 2 with sizes 5, 7.
    Tensor<string, 2> t_2d({5, 7});


### Class TensorFixedSize<data_type, Sizes<size0, size1, ...>>

Class to use for tensors of fixed size, where the size is known at compile
time.  Fixed sized tensors can provide very fast computations because all their
dimensions are known by the compiler.  FixedSize tensors are not resizable.

If the total number of elements in a fixed size tensor is small enough the
tensor data is held onto the stack and does not cause heap allocation and free.

    // Create a 4 x 3 tensor of floats.
    TensorFixedSize<float, Sizes<4, 3>> t_4x3;

### Class TensorMap<Tensor<data_type, rank>>

This is the class to use to create a tensor on top of memory allocated and
owned by another part of your code.  It lets you view any piece of allocated
memory as a Tensor.  Instances of this class do not own the memory where the
data are stored.

A TensorMap is not resizable because it does not own the memory where its data
are stored.

#### Constructor TensorMap<Tensor<data_type, rank>>(data, size0, size1, ...)

Constructor for a TensorMap.  The constructor must be passed a pointer to the
storage for the data, and "rank" size attributes.  The storage has to be
large enough to hold all the data.

    // Map a tensor of ints on top of stack-allocated storage.
    int storage[128];  // 2 x 4 x 2 x 8 = 128
    TensorMap<Tensor<int, 4>> t_4d(storage, 2, 4, 2, 8);

    // The same storage can be viewed as a different tensor.
    // You can also pass the sizes as an array.
    TensorMap<Tensor<int, 2>> t_2d(storage, 16, 8);

    // You can also map fixed-size tensors.  Here we get a 1d view of
    // the 2d fixed-size tensor.
    TensorFixedSize<float, Sizes<4, 3>> t_4x3;
    TensorMap<Tensor<float, 1>> t_12(t_4x3.data(), 12);


#### Class TensorRef

See Assigning to a TensorRef below.

## Accessing Tensor Elements

#### <data_type> tensor(index0, index1...)

Return the element at position ```(index0, index1...)``` in tensor
```tensor```.  You must pass as many parameters as the rank of ```tensor```.
The expression can be used as an l-value to set the value of the element at the
specified position.  The value returned is of the datatype of the tensor.

    // Set the value of the element at position (0, 1, 0);
    Tensor<float, 3> t_3d(2, 3, 4);
    t_3d(0, 1, 0) = 12.0f;

    // Initialize all elements to random values.
    for (int i = 0; i < 2; ++i) {
      for (int j = 0; j < 3; ++j) {
        for (int k = 0; k < 4; ++k) {
          t_3d(i, j, k) = ...some random value...;
        }
      }
    }

    // Print elements of a tensor.
    for (int i = 0; i < 2; ++i) {
      LOG(INFO) << t_3d(i, 0, 0);
    }


## TensorLayout

The tensor library supports 2 layouts: ```ColMajor``` (the default) and
```RowMajor```.  Only the default column major layout is currently fully
supported, and it is therefore not recommended to attempt to use the row major
layout at the moment.

The layout of a tensor is optionally specified as part of its type. If not
specified explicitly column major is assumed.

    Tensor<float, 3, ColMajor> col_major;  // equivalent to Tensor<float, 3>
    TensorMap<Tensor<float, 3, RowMajor> > row_major(data, ...);

All the arguments to an expression must use the same layout. Attempting to mix
different layouts will result in a compilation error.

It is possible to change the layout of a tensor or an expression using the
```swap_layout()``` method.  Note that this will also reverse the order of the
dimensions.

    Tensor<float, 2, ColMajor> col_major(2, 4);
    Tensor<float, 2, RowMajor> row_major(2, 4);

    Tensor<float, 2> col_major_result = col_major;  // ok, layouts match
    Tensor<float, 2> col_major_result = row_major;  // will not compile

    // Simple layout swap
    col_major_result = row_major.swap_layout();
    eigen_assert(col_major_result.dimension(0) == 4);
    eigen_assert(col_major_result.dimension(1) == 2);

    // Swap the layout and preserve the order of the dimensions
    array<int, 2> shuffle{1, 0};
    col_major_result = row_major.swap_layout().shuffle(shuffle);
    eigen_assert(col_major_result.dimension(0) == 2);
    eigen_assert(col_major_result.dimension(1) == 4);


## Tensor Operations

The Eigen Tensor library provides a vast library of operations on Tensors:
numerical operations such as addition and multiplication, geometry operations
such as slicing and shuffling, etc.  These operations are available as methods
of the Tensor classes, and in some cases as operator overloads.  For example
the following code computes the elementwise addition of two tensors:

    Tensor<float, 3> t1(2, 3, 4);
    ...set some values in t1...
    Tensor<float, 3> t2(2, 3, 4);
    ...set some values in t2...
    // Set t3 to the element wise sum of t1 and t2
    Tensor<float, 3> t3 = t1 + t2;

While the code above looks easy enough, it is important to understand that the
expression ```t1 + t2``` is not actually adding the values of the tensors.  The
expression instead constructs a "tensor operator" object of the class
TensorCwiseBinaryOp<scalar_sum>, which has references to the tensors
```t1``` and ```t2```.  This is a small C++ object that knows how to add
```t1``` and ```t2```.  It is only when the value of the expression is assigned
to the tensor ```t3``` that the addition is actually performed.  Technically,
this happens through the overloading of ```operator=()``` in the Tensor class.

This mechanism for computing tensor expressions allows for lazy evaluation and
optimizations which are what make the tensor library very fast.

Of course, the tensor operators do nest, and the expression ```t1 + t2 *
0.3f``` is actually represented with the (approximate) tree of operators:

    TensorCwiseBinaryOp<scalar_sum>(t1, TensorCwiseUnaryOp<scalar_mul>(t2, 0.3f))


### Tensor Operations and C++ "auto"

Because Tensor operations create tensor operators, the C++ ```auto``` keyword
does not have its intuitive meaning.  Consider these 2 lines of code:

    Tensor<float, 3> t3 = t1 + t2;
    auto t4 = t1 + t2;

In the first line we allocate the tensor ```t3``` and it will contain the
result of the addition of ```t1``` and ```t2```.  In the second line, ```t4```
is actually the tree of tensor operators that will compute the addition of
```t1``` and ```t2```.  In fact, ```t4``` is *not* a tensor and you cannot get
the values of its elements:

    Tensor<float, 3> t3 = t1 + t2;
    cout << t3(0, 0, 0);  // OK prints the value of t1(0, 0, 0) + t2(0, 0, 0)

    auto t4 = t1 + t2;
    cout << t4(0, 0, 0);  // Compilation error!

When you use ```auto``` you do not get a Tensor as a result but instead a
non-evaluated expression.  So only use ```auto``` to delay evaluation.

Unfortunately, there is no single underlying concrete type for holding
non-evaluated expressions, so you have to use ```auto``` whenever you want to
hold one.

When you need the results of a set of tensor computations you have to assign
the result to a Tensor that is capable of holding them.  This can be
either a normal Tensor, a fixed size Tensor, or a TensorMap on an existing
piece of memory.  All the following will work:

    auto t4 = t1 + t2;

    Tensor<float, 3> result = t4;  // Could also be: result(t4);
    cout << result(0, 0, 0);

    TensorMap<Tensor<float, 4>> result(<a float* with enough space>, <size0>, ...) = t4;
    cout << result(0, 0, 0);

    TensorFixedSize<float, Sizes<size0, ...>> result = t4;
    cout << result(0, 0, 0);

Until you need the results, you can keep the operation around, and even reuse
it for additional operations.  As long as you keep the expression as an
operation, no computation is performed.

    // One way to compute exp((t1 + t2) * 0.2f);
    auto t3 = t1 + t2;
    auto t4 = t3 * 0.2f;
    auto t5 = t4.exp();
    Tensor<float, 3> result = t5;

    // Another way, exactly as efficient as the previous one:
    Tensor<float, 3> result = ((t1 + t2) * 0.2f).exp();

### Controlling When Expressions are Evaluated

There are several ways to control when expressions are evaluated:

*   Assignment to a Tensor, TensorFixedSize, or TensorMap.
*   Use of the eval() method.
*   Assignment to a TensorRef.

#### Assigning to a Tensor, TensorFixedSize, or TensorMap.

The most common way to evaluate an expression is to assign it to a Tensor.  In
the example below, the ```auto``` declarations make the intermediate values
"Operations", not Tensors, and do not cause the expressions to be evaluated.
The assignment to the Tensor ```result``` causes the evaluation of all the
operations.

    auto t3 = t1 + t2;             // t3 is an Operation.
    auto t4 = t3 * 0.2f;           // t4 is an Operation.
    auto t5 = t4.exp();            // t5 is an Operation.
    Tensor<float, 3> result = t5;  // The operations are evaluated.

If you know the rank and sizes of the Operation's result, you can assign the
Operation to a TensorFixedSize instead of a Tensor, which is a bit more
efficient.

    // We know that the result is a 4x4x2 tensor!
    TensorFixedSize<float, Sizes<4, 4, 2>> result = t5;

Similarly, assigning an expression to a TensorMap causes its evaluation.  Like
tensors of type TensorFixedSize, TensorMaps cannot be resized, so they must
have the rank and sizes of the expression that is assigned to them.

#### Calling eval().

When you compute large composite expressions, you sometimes want to tell Eigen
that an intermediate value in the expression tree is worth evaluating ahead of
time.  This is done by inserting a call to the ```eval()``` method of the
expression Operation.

    // The previous example could have been written:
    Tensor<float, 3> result = ((t1 + t2) * 0.2f).exp();

    // If you want to compute (t1 + t2) once ahead of time you can write:
    Tensor<float, 3> result = ((t1 + t2).eval() * 0.2f).exp();

Semantically, calling ```eval()``` is equivalent to materializing the value of
the expression in a temporary Tensor of the right size.  The code above in
effect does:

    // .eval() knows the size!
    TensorFixedSize<float, 4, 4, 2> tmp = t1 + t2;
    Tensor<float, 3> result = (tmp * 0.2f).exp();

Note that the return value of ```eval()``` is itself an Operation, so the
following code does not do what you may think:

    // Here t3 is an evaluation Operation.  t3 has not been evaluated yet.
    auto t3 = (t1 + t2).eval();

    // You can use t3 in another expression.  Still no evaluation.
    auto t4 = (t3 * 0.2f).exp();

    // The value is evaluated when you assign the Operation to a Tensor, using
    // an intermediate tensor to represent t3.
    Tensor<float, 3> result = t4;

While in the examples above calling ```eval()``` does not make a difference in
performance, in other cases it can make a huge difference.  In the expression
below the ```broadcast()``` expression causes the ```X.maximum()``` expression
to be evaluated many times:

    Tensor<...> X ...;
    Tensor<...> Y = ((X - X.maximum(depth_dim).reshape(dims2d).broadcast(bcast))
                     * beta).exp();

Inserting a call to ```eval()``` between the ```maximum()``` and
```reshape()``` calls guarantees that ```maximum()``` is only computed once and
greatly speeds up execution:

    Tensor<...> Y =
      ((X - X.maximum(depth_dim).eval().reshape(dims2d).broadcast(bcast))
        * beta).exp();

In the example below, the tensor ```Y``` is used both in the expression and as
the target of the assignment.  This is an aliasing problem: if the evaluation
is not done in the right order, ```Y``` will be updated incrementally during
the evaluation, resulting in bogus results:

     Tensor<...> Y ...;
     Y = Y / (Y.sum(depth_dim).reshape(dims2d).broadcast(bcast));

Inserting a call to ```eval()``` between the ```sum()``` and ```reshape()```
expressions ensures that the sum is computed before any updates to ```Y``` are
done.

     Y = Y / (Y.sum(depth_dim).eval().reshape(dims2d).broadcast(bcast));

Note that an ```eval()``` around the full right hand side expression is not
needed because the generated code has to compute the i-th value of the right
hand side before assigning it to the left hand side.

However, if you were assigning the expression value to a shuffle of ```Y```
then you would need to force an eval for correctness by adding an ```eval()```
call for the right hand side:

     Y.shuffle(...) =
        (Y / (Y.sum(depth_dim).eval().reshape(dims2d).broadcast(bcast))).eval();


#### Assigning to a TensorRef.

If you need to access only a few elements from the value of an expression you
can avoid materializing the value in a full tensor by using a TensorRef.

A TensorRef is a small wrapper class for any Eigen Operation.  It provides
overloads for the ```()``` operator that let you access individual values in
the expression.  TensorRef is convenient because Operations themselves do not
provide a way to access individual elements.

    // Create a TensorRef for the expression.  The expression is not
    // evaluated yet.
    TensorRef<Tensor<float, 3> > ref = ((t1 + t2) * 0.2f).exp();

    // Use "ref" to access individual elements.  The expression is evaluated
    // on the fly.
    float at_0 = ref(0, 0, 0);
    cout << ref(0, 1, 0);

Only use TensorRef when you need a subset of the values of the expression:
TensorRef only computes the values you access.  However, note that if you are
going to access all the values it is much faster to materialize the results in
a Tensor first.

In some cases, if the full Tensor result would be very large, you may save
memory by accessing it as a TensorRef.  But not always.  So don't count on it.


### Controlling How Expressions Are Evaluated

The tensor library provides several implementations of the various operations
such as contractions and convolutions.  The implementations are optimized for
different environments: single threaded on CPU, multi threaded on CPU, or on a
GPU using CUDA.  Additional implementations may be added later.

You can choose which implementation to use with the ```device()``` call.  If
you do not choose an implementation explicitly the default implementation that
uses a single thread on the CPU is used.

The default implementation has been optimized for recent Intel CPUs, taking
advantage of SSE, AVX, and FMA instructions.  Work is ongoing to tune the
library for ARM CPUs.  Note that you need to pass compiler-dependent flags
to enable the use of SSE, AVX, and other instructions.

For example, the following code adds two tensors using the default
single-threaded CPU implementation:

    Tensor<float, 2> a(30, 40);
    Tensor<float, 2> b(30, 40);
    Tensor<float, 2> c = a + b;

To choose a different implementation you have to insert a ```device()``` call
before the assignment of the result.  For technical C++ reasons this requires
that the Tensor for the result be declared on its own.  This means that you
have to know the size of the result.

    Eigen::Tensor<float, 2> c(30, 40);
    c.device(...) = a + b;

The call to ```device()``` must be the last call on the left of the operator=.

You must pass to the ```device()``` call an Eigen device object.  There are
presently three devices you can use: DefaultDevice, ThreadPoolDevice and
GpuDevice.


#### Evaluating With the DefaultDevice

This is exactly the same as not inserting a ```device()``` call.

    DefaultDevice my_device;
    c.device(my_device) = a + b;

#### Evaluating with a Thread Pool

    // Create the Eigen ThreadPool and ThreadPoolDevice.
    Eigen::ThreadPool pool(4 /* number of threads in pool */);
    Eigen::ThreadPoolDevice my_device(&pool, 4 /* number of threads to use */);

    // Now just use the device when evaluating expressions.
    Eigen::Tensor<float, 2> c(30, 50);
    c.device(my_device) = a.contract(b, dot_product_dims);


#### Evaluating On GPU

This is presently a bit more complicated than just using a thread pool device.
You need to create a GPU device but you also need to explicitly allocate the
memory for tensors with cuda.


## API Reference

### Datatypes

In the documentation of the tensor methods and Operations we mention datatypes
that are tensor-type specific:

#### <Tensor-Type>::Dimensions

Acts like an array of ints.  Has an ```int size``` attribute, and can be
indexed like an array to access individual values.  Used to represent the
dimensions of a tensor.  See ```dimensions()```.

#### <Tensor-Type>::Index

Acts like an ```int```.  Used for indexing tensors along their dimensions.  See
```operator()```, ```dimension()```, and ```size()```.

#### <Tensor-Type>::Scalar

Represents the datatype of individual tensor elements.  For example, for a
```Tensor<float>```, ```Scalar``` is the type ```float```.  See
```setConstant()```.

#### <Operation>

We use this pseudo type to indicate that a tensor Operation is returned by a
method.  We indicate in the text the type and dimensions of the tensor that the
Operation returns after evaluation.

The Operation will have to be evaluated, for example by assigning it to a
tensor, before you can access the values of the resulting tensor.  You can also
access the values through a TensorRef.


## Built-in Tensor Methods

These are regular C++ methods that act on tensors immediately.  They are not
Operations, which provide delayed evaluation of their results.  Unless specified
otherwise, all the methods listed below are available on all tensor classes:
Tensor, TensorFixedSize, and TensorMap.

## Metadata

### int NumDimensions

Constant value indicating the number of dimensions of a Tensor.  This is also
known as the tensor "rank".

      Eigen::Tensor<float, 2> a(3, 4);
      cout << "Dims " << a.NumDimensions;
      => Dims 2

### Dimensions dimensions()

Returns an array-like object representing the dimensions of the tensor.
The actual type of the ```dimensions()``` result is
```<Tensor-Type>::Dimensions```.

    Eigen::Tensor<float, 2> a(3, 4);
    const Eigen::Tensor<float, 2>::Dimensions& d = a.dimensions();
    cout << "Dim size: " << d.size << ", dim 0: " << d[0]
         << ", dim 1: " << d[1];
    => Dim size: 2, dim 0: 3, dim 1: 4

If you use a C++11 compiler, you can use ```auto``` to simplify the code:

    const auto& d = a.dimensions();
    cout << "Dim size: " << d.size << ", dim 0: " << d[0]
         << ", dim 1: " << d[1];
    => Dim size: 2, dim 0: 3, dim 1: 4

### Index dimension(Index n)

Returns the n-th dimension of the tensor.  The actual type of the
```dimension()``` result is ```<Tensor-Type>::Index```, but you can
always use it like an int.

      Eigen::Tensor<float, 2> a(3, 4);
      int dim1 = a.dimension(1);
      cout << "Dim 1: " << dim1;
      => Dim 1: 4

### Index size()

Returns the total number of elements in the tensor.  This is the product of all
the tensor dimensions.  The actual type of the ```size()``` result is
```<Tensor-Type>::Index```, but you can always use it like an int.

    Eigen::Tensor<float, 2> a(3, 4);
    cout << "Size: " << a.size();
    => Size: 12


### Getting Dimensions From An Operation

A few operations provide ```dimensions()``` directly,
e.g. ```TensorReslicingOp```.  Most operations defer calculating dimensions
until the operation is being evaluated.  If you need access to the dimensions
of a deferred operation, you can wrap it in a TensorRef (see Assigning to a
TensorRef above), which provides ```dimensions()``` and ```dimension()``` as
above.

TensorRef can also wrap the plain Tensor types, so this is a useful idiom in
templated contexts where the underlying object could be either a raw Tensor
or some deferred operation (e.g. a slice of a Tensor).  In this case, the
template code can wrap the object in a TensorRef and reason about its
dimensionality while remaining agnostic to the underlying type.


## Constructors

### Tensor

Creates a tensor of the specified size. The number of arguments must be equal
to the rank of the tensor. The content of the tensor is not initialized.

    Eigen::Tensor<float, 2> a(3, 4);
    cout << "NumRows: " << a.dimension(0) << " NumCols: " << a.dimension(1) << endl;
    => NumRows: 3 NumCols: 4

### TensorFixedSize

Creates a tensor of the specified size. The number of arguments in the Sizes<>
template parameter determines the rank of the tensor. The content of the tensor
is not initialized.

    Eigen::TensorFixedSize<float, Sizes<3, 4>> a;
    cout << "Rank: " << a.rank() << endl;
    => Rank: 2
    cout << "NumRows: " << a.dimension(0) << " NumCols: " << a.dimension(1) << endl;
    => NumRows: 3 NumCols: 4

### TensorMap

Creates a tensor mapping an existing array of data. The data must not be freed
until the TensorMap is discarded, and the size of the data must be large enough
to accommodate all the coefficients of the tensor.

    float data[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11};
    Eigen::TensorMap<Eigen::Tensor<float, 2>> a(data, 3, 4);
    cout << "NumRows: " << a.dimension(0) << " NumCols: " << a.dimension(1) << endl;
    => NumRows: 3 NumCols: 4
    cout << "a(1, 2): " << a(1, 2) << endl;
    => a(1, 2): 7


## Contents Initialization

When a new Tensor or a new TensorFixedSize is created, memory is allocated to
hold all the tensor elements, but the memory is not initialized.  Similarly,
when a new TensorMap is created on top of non-initialized memory, its contents
are not initialized.

You can use one of the methods below to initialize the tensor memory.  These
have an immediate effect on the tensor and return the tensor itself as a
result.  These are not tensor Operations which delay evaluation.

### <Tensor-Type> setConstant(const Scalar& val)

Sets all elements of the tensor to the constant value ```val```.  ```Scalar```
is the type of data stored in the tensor.  You can pass any value that is
convertible to that type.

Returns the tensor itself in case you want to chain another call.

    a.setConstant(12.3f);
    cout << "Constant: " << endl << a << endl << endl;
    =>
    Constant:
    12.3 12.3 12.3 12.3
    12.3 12.3 12.3 12.3
    12.3 12.3 12.3 12.3

Note that ```setConstant()``` can be used on any tensor where the element type
has a copy constructor and an ```operator=()```:

    Eigen::Tensor<string, 2> a(2, 3);
    a.setConstant("yolo");
    cout << "String tensor: " << endl << a << endl << endl;
    =>
    String tensor:
    yolo yolo yolo
    yolo yolo yolo


### <Tensor-Type> setZero()

Fills the tensor with zeros.  Equivalent to ```setConstant(Scalar(0))```.
Returns the tensor itself in case you want to chain another call.

    a.setZero();
    cout << "Zeros: " << endl << a << endl << endl;
    =>
    Zeros:
    0 0 0 0
    0 0 0 0
    0 0 0 0


### <Tensor-Type> setValues({..initializer_list})

Fills the tensor with explicit values specified in a std::initializer_list.
The type of the initializer list depends on the type and rank of the tensor.

If the tensor has rank N, the initializer list must be nested N times.  The
most deeply nested lists must contain P scalars of the Tensor type where P is
the size of the last dimension of the Tensor.

For example, for a ```TensorFixedSize<float, Sizes<2, 3>>``` the initializer
list must contain 2 lists of 3 floats each.

```setValues()``` returns the tensor itself in case you want to chain another
call.

    Eigen::Tensor<float, 2> a(2, 3);
    a.setValues({{0.0f, 1.0f, 2.0f}, {3.0f, 4.0f, 5.0f}});
    cout << "a" << endl << a << endl << endl;
    =>
    a
    0 1 2
    3 4 5

If a list is too short, the corresponding elements of the tensor will not be
changed.  This is valid at each level of nesting.  For example the following
code only sets the values of the first row of the tensor.

    Eigen::Tensor<int, 2> a(2, 3);
    a.setConstant(1000);
    a.setValues({{10, 20, 30}});
    cout << "a" << endl << a << endl << endl;
    =>
    a
    10   20   30
    1000 1000 1000

### <Tensor-Type> setRandom()

Fills the tensor with random values.  Returns the tensor itself in case you
want to chain another call.

    a.setRandom();
    cout << "Random: " << endl << a << endl << endl;
    =>
    Random:
      0.680375    0.59688  -0.329554    0.10794
     -0.211234   0.823295   0.536459 -0.0452059
      0.566198  -0.604897  -0.444451   0.257742

You can customize ```setRandom()``` by providing your own random number
generator as a template argument:

    a.setRandom<MyRandomGenerator>();

Here, ```MyRandomGenerator``` must be a struct with the following member
functions, where Scalar and Index are the same as ```<Tensor-Type>::Scalar```
and ```<Tensor-Type>::Index```.

See ```struct UniformRandomGenerator``` in TensorFunctors.h for an example.

    // Custom number generator for use with setRandom().
    struct MyRandomGenerator {
      // Default and copy constructors. Both are needed
      MyRandomGenerator() { }
      MyRandomGenerator(const MyRandomGenerator& ) { }

      // Return a random value to be used.  "element_location" is the
      // location of the entry to set in the tensor, it can typically
      // be ignored.
      Scalar operator()(Eigen::DenseIndex element_location,
                        Eigen::DenseIndex /*unused*/ = 0) const {
        return <randomly generated value of type Scalar>;
      }

      // Same as above but generates several numbers at a time.
      typename internal::packet_traits<Scalar>::type packetOp(
          Eigen::DenseIndex packet_location, Eigen::DenseIndex /*unused*/ = 0) const {
        return <a packet of randomly generated values>;
      }
    };

You can also use one of the 2 random number generators that are part of the
tensor library:
*   UniformRandomGenerator
*   NormalRandomGenerator


## Data Access

The Tensor, TensorFixedSize, and TensorRef classes provide the following
accessors to access the tensor coefficients:

    const Scalar& operator()(const array<Index, NumIndices>& indices)
    const Scalar& operator()(Index firstIndex, IndexTypes... otherIndices)
    Scalar& operator()(const array<Index, NumIndices>& indices)
    Scalar& operator()(Index firstIndex, IndexTypes... otherIndices)

The number of indices must be equal to the rank of the tensor. Moreover, these
accessors are not available on tensor expressions. In order to access the
values of a tensor expression, the expression must either be evaluated or
wrapped in a TensorRef.


### Scalar* data() and const Scalar* data() const

Returns a pointer to the storage for the tensor.  The pointer is const if the
tensor was const.  This allows direct access to the data.  The layout of the
data depends on the tensor layout: RowMajor or ColMajor.

This access is usually only needed for special cases, for example when mixing
Eigen Tensor code with other libraries.

Scalar is the type of data stored in the tensor.

    Eigen::Tensor<float, 2> a(3, 4);
    float* a_data = a.data();
    a_data[0] = 123.45f;
    cout << "a(0, 0): " << a(0, 0);
    => a(0, 0): 123.45


## Tensor Operations

All the methods documented below return non-evaluated tensor ```Operations```.
These can be chained: you can apply another Tensor Operation to the value
returned by the method.

The chain of Operations is evaluated lazily, typically when it is assigned to a
tensor.  See "Controlling When Expressions Are Evaluated" for more details
about their evaluation.

### <Operation> constant(const Scalar& val)

Returns a tensor of the same type and dimensions as the original tensor but
where all elements have the value ```val```.

This is useful, for example, when you want to add or subtract a constant from a
tensor, or multiply every element of a tensor by a scalar.

    Eigen::Tensor<float, 2> a(2, 3);
    a.setConstant(1.0f);
    Eigen::Tensor<float, 2> b = a + a.constant(2.0f);
    Eigen::Tensor<float, 2> c = b * b.constant(0.2f);
    cout << "a" << endl << a << endl << endl;
    cout << "b" << endl << b << endl << endl;
    cout << "c" << endl << c << endl << endl;
    =>
    a
    1 1 1
    1 1 1

    b
    3 3 3
    3 3 3

    c
    0.6 0.6 0.6
    0.6 0.6 0.6

### <Operation> random()

Returns a tensor of the same type and dimensions as the current tensor
but where all elements have random values.

This is for example useful to add random values to an existing tensor.
The generation of random values can be customized in the same manner
as for ```setRandom()```.

    Eigen::Tensor<float, 2> a(2, 3);
    a.setConstant(1.0f);
    Eigen::Tensor<float, 2> b = a + a.random();
    cout << "a" << endl << a << endl << endl;
    cout << "b" << endl << b << endl << endl;
    =>
    a
    1 1 1
    1 1 1

    b
    1.68038   1.5662  1.82329
    0.788766  1.59688 0.395103


## Unary Element Wise Operations

All these operations take a single input tensor as argument and return a tensor
of the same type and dimensions as the tensor to which they are applied.  The
requested operations are applied to each element independently.

### <Operation> operator-()

Returns a tensor of the same type and dimensions as the original tensor
containing the negated values of the original tensor.

    Eigen::Tensor<float, 2> a(2, 3);
    a.setConstant(1.0f);
    Eigen::Tensor<float, 2> b = -a;
    cout << "a" << endl << a << endl << endl;
    cout << "b" << endl << b << endl << endl;
    =>
    a
    1 1 1
    1 1 1

    b
    -1 -1 -1
    -1 -1 -1

### <Operation> sqrt()

Returns a tensor of the same type and dimensions as the original tensor
containing the square roots of the original tensor.

### <Operation> rsqrt()

Returns a tensor of the same type and dimensions as the original tensor
containing the inverse square roots of the original tensor.

### <Operation> square()

Returns a tensor of the same type and dimensions as the original tensor
containing the squares of the original tensor values.

### <Operation> inverse()

Returns a tensor of the same type and dimensions as the original tensor
containing the reciprocals of the original tensor values.

### <Operation> exp()

Returns a tensor of the same type and dimensions as the original tensor
containing the exponential of the original tensor.

### <Operation> log()

Returns a tensor of the same type and dimensions as the original tensor
containing the natural logarithms of the original tensor.

### <Operation> abs()

Returns a tensor of the same type and dimensions as the original tensor
containing the absolute values of the original tensor.

### <Operation> pow(Scalar exponent)

Returns a tensor of the same type and dimensions as the original tensor
containing the coefficients of the original tensor to the power of the
exponent.

The type of the exponent, Scalar, is always the same as the type of the
tensor coefficients.  For example, only integer exponents can be used in
conjunction with tensors of integer values.

You can use cast() to lift this restriction.  For example this computes
cube roots of an int Tensor:

    Eigen::Tensor<int, 2> a(2, 3);
    a.setValues({{0, 1, 8}, {27, 64, 125}});
    Eigen::Tensor<double, 2> b = a.cast<double>().pow(1.0 / 3.0);
    cout << "a" << endl << a << endl << endl;
    cout << "b" << endl << b << endl << endl;
    =>
    a
    0   1   8
    27  64 125

    b
    0 1 2
    3 4 5

### <Operation>  operator * (Scalar scale)

Multiplies all the coefficients of the input tensor by the provided scale.

### <Operation>  cwiseMax(Scalar threshold)
TODO

### <Operation>  cwiseMin(Scalar threshold)
TODO

### <Operation>  unaryExpr(const CustomUnaryOp& func)
TODO


## Binary Element Wise Operations

These operations take two input tensors as arguments. The two input tensors
must be of the same type and dimensions. The result is a tensor of the same
dimensions as the tensors to which they are applied, and unless otherwise
specified it is also of the same type. The requested operations are applied to
each pair of elements independently.

### <Operation> operator+(const OtherDerived& other)

Returns a tensor of the same type and dimensions as the input tensors
containing the coefficient wise sums of the inputs.

### <Operation> operator-(const OtherDerived& other)

Returns a tensor of the same type and dimensions as the input tensors
containing the coefficient wise differences of the inputs.

### <Operation> operator*(const OtherDerived& other)

Returns a tensor of the same type and dimensions as the input tensors
containing the coefficient wise products of the inputs.

### <Operation> operator/(const OtherDerived& other)

Returns a tensor of the same type and dimensions as the input tensors
containing the coefficient wise quotients of the inputs.

This operator is not supported for integer types.

### <Operation> cwiseMax(const OtherDerived& other)

Returns a tensor of the same type and dimensions as the input tensors
containing the coefficient wise maximums of the inputs.

### <Operation> cwiseMin(const OtherDerived& other)

Returns a tensor of the same type and dimensions as the input tensors
containing the coefficient wise minimums of the inputs.

### <Operation> Logical operators

The following logical operators are supported as well:

*   operator&&(const OtherDerived& other)
*   operator||(const OtherDerived& other)
*   operator<(const OtherDerived& other)
*   operator<=(const OtherDerived& other)
*   operator>(const OtherDerived& other)
*   operator>=(const OtherDerived& other)
*   operator==(const OtherDerived& other)
*   operator!=(const OtherDerived& other)

They all return a tensor of boolean values.


## Selection (select(const ThenDerived& thenTensor, const ElseDerived& elseTensor))

Selection is a coefficient-wise ternary operator that is the tensor equivalent
to the if-then-else operation.

    // ('if', 'then', and 'else' are C++ keywords, so valid names are used here.)
    Tensor<bool, 3> if_tensor = ...;
    Tensor<float, 3> then_tensor = ...;
    Tensor<float, 3> else_tensor = ...;
    Tensor<float, 3> result = if_tensor.select(then_tensor, else_tensor);

The three arguments must have the same dimensions, which will also be the
dimensions of the result.  The 'if' tensor must be of type boolean; the 'then'
and the 'else' tensors must be of the same type, which will also be the type
of the result.

Each coefficient in the result is equal to the corresponding coefficient in the
'then' tensor if the corresponding value in the 'if' tensor is true. If not, the
resulting coefficient will come from the 'else' tensor.


## Contraction

Tensor *contractions* are a generalization of the matrix product to the
multidimensional case.

    // Create 2 matrices using tensors of rank 2
    Eigen::Tensor<int, 2> a(2, 3);
    a.setValues({{1, 2, 3}, {6, 5, 4}});
    Eigen::Tensor<int, 2> b(3, 2);
    b.setValues({{1, 2}, {4, 5}, {5, 6}});

    // Compute the traditional matrix product
    array<IndexPair<int>, 1> product_dims = { IndexPair<int>(1, 0) };
    Eigen::Tensor<int, 2> AB = a.contract(b, product_dims);

    // Compute the product of the transpose of the matrices
    array<IndexPair<int>, 1> transposed_product_dims = { IndexPair<int>(0, 1) };
    Eigen::Tensor<int, 2> AtBt = a.contract(b, transposed_product_dims);


## Reduction Operations

A *Reduction* operation returns a tensor with fewer dimensions than the
original tensor.  The values in the returned tensor are computed by applying a
*reduction operator* to slices of values from the original tensor.  You specify
the dimensions along which the slices are made.

The Eigen Tensor library provides a set of predefined reduction operators such
as ```maximum()``` and ```sum()``` and lets you define additional operators by
implementing a few methods from a reductor template.

### Reduction Dimensions

All reduction operations take a single parameter of type
```<TensorType>::Dimensions``` which can always be specified as an array of
ints.  These are called the "reduction dimensions."  The values are the indices
of the dimensions of the input tensor over which the reduction is done.  The
parameter can have at most as many elements as the rank of the input tensor;
each element must be less than the tensor rank, as it indicates one of the
dimensions to reduce.

Each dimension of the input tensor should occur at most once in the reduction
dimensions as the implementation does not remove duplicates.

The order of the values in the reduction dimensions does not affect the
results, but the code may execute faster if you list the dimensions in
increasing order.

Example: Reduction along one dimension.

    // Create a tensor of 2 dimensions
    Eigen::Tensor<int, 2> a(2, 3);
    a.setValues({{1, 2, 3}, {6, 5, 4}});
    // Reduce it along the second dimension (1)...
    Eigen::array<int, 1> dims({1 /* dimension to reduce */});
    // ...using the "maximum" operator.
    // The result is a tensor with one dimension.  The size of
    // that dimension is the same as the first (non-reduced) dimension of a.
    Eigen::Tensor<int, 1> b = a.maximum(dims);
    cout << "a" << endl << a << endl << endl;
    cout << "b" << endl << b << endl << endl;
    =>
    a
    1 2 3
    6 5 4

    b
    3
    6

Example: Reduction along two dimensions.

    Eigen::Tensor<float, 3, Eigen::ColMajor> a(2, 3, 4);
    a.setValues({{{0.0f, 1.0f, 2.0f, 3.0f},
                  {7.0f, 6.0f, 5.0f, 4.0f},
                  {8.0f, 9.0f, 10.0f, 11.0f}},
                 {{12.0f, 13.0f, 14.0f, 15.0f},
                  {19.0f, 18.0f, 17.0f, 16.0f},
                  {20.0f, 21.0f, 22.0f, 23.0f}}});
    // The tensor a has 3 dimensions.  We reduce along the
    // first 2, resulting in a tensor with a single dimension
    // of size 4 (the last dimension of a.)
    // Note that we pass the array of reduction dimensions
    // directly to the maximum() call.
    Eigen::Tensor<float, 1, Eigen::ColMajor> b =
        a.maximum(Eigen::array<int, 2>({0, 1}));
    cout << "b" << endl << b << endl << endl;
    =>
    b
    20
    21
    22
    23

#### Reduction along all dimensions

As a special case, if you pass no parameter to a reduction operation the
original tensor is reduced along *all* its dimensions.  The result is a
scalar, represented as a zero-dimension tensor.

    Eigen::Tensor<float, 3> a(2, 3, 4);
    a.setValues({{{0.0f, 1.0f, 2.0f, 3.0f},
                  {7.0f, 6.0f, 5.0f, 4.0f},
                  {8.0f, 9.0f, 10.0f, 11.0f}},
                 {{12.0f, 13.0f, 14.0f, 15.0f},
                  {19.0f, 18.0f, 17.0f, 16.0f},
                  {20.0f, 21.0f, 22.0f, 23.0f}}});
    // Reduce along all dimensions using the sum() operator.
    Eigen::Tensor<float, 0> b = a.sum();
    cout << "b" << endl << b << endl << endl;
    =>
    b
    276


### <Operation> sum(const Dimensions& new_dims)
### <Operation> sum()

Reduce a tensor using the sum() operator.  The resulting values
are the sum of the reduced values.

### <Operation> mean(const Dimensions& new_dims)
### <Operation> mean()

Reduce a tensor using the mean() operator.  The resulting values
are the mean of the reduced values.

### <Operation> maximum(const Dimensions& new_dims)
### <Operation> maximum()

Reduce a tensor using the maximum() operator.  The resulting values are the
largest of the reduced values.

### <Operation> minimum(const Dimensions& new_dims)
### <Operation> minimum()

Reduce a tensor using the minimum() operator.  The resulting values
are the smallest of the reduced values.

### <Operation> prod(const Dimensions& new_dims)
### <Operation> prod()

Reduce a tensor using the prod() operator.  The resulting values
are the product of the reduced values.

### <Operation> all(const Dimensions& new_dims)
### <Operation> all()
Reduce a tensor using the all() operator.  Casts the tensor to bool and then
checks whether all elements are true.  Runs through all elements rather than
short-circuiting, so it may be significantly inefficient.

### <Operation> any(const Dimensions& new_dims)
### <Operation> any()
Reduce a tensor using the any() operator.  Casts the tensor to bool and then
checks whether any element is true.  Runs through all elements rather than
short-circuiting, so it may be significantly inefficient.


### <Operation> reduce(const Dimensions& new_dims, const Reducer& reducer)

Reduce a tensor using a user-defined reduction operator.  See ```SumReducer```
in TensorFunctors.h for information on how to implement a reduction operator.


## Scan Operations

A *Scan* operation returns a tensor with the same dimensions as the original
tensor. The operation performs an inclusive scan along the specified
axis, which means it computes a running total along the axis for a given
reduction operation.
If the reduction operation corresponds to summation, then this computes the
prefix sum of the tensor along the given axis.

Example:

    // Create a tensor of 2 dimensions
    Eigen::Tensor<int, 2> a(2, 3);
    a.setValues({{1, 2, 3}, {4, 5, 6}});
    // Scan it along the second dimension (1) using summation
    Eigen::Tensor<int, 2> b = a.cumsum(1);
    // The result is a tensor with the same size as the input
    cout << "a" << endl << a << endl << endl;
    cout << "b" << endl << b << endl << endl;
    =>
    a
    1 2 3
    4 5 6

    b
    1  3  6
    4  9 15

### <Operation> cumsum(const Index& axis)

Perform a scan by summing consecutive entries.

### <Operation> cumprod(const Index& axis)

Perform a scan by multiplying consecutive entries.


## Convolutions

### <Operation> convolve(const Kernel& kernel, const Dimensions& dims)

Returns a tensor that is the output of the convolution of the input tensor with the
kernel, along the specified dimensions of the input tensor. The size of each output
dimension that was part of the convolution is reduced according to the formula:
output_dim_size = input_dim_size - kernel_dim_size + 1 (this requires
input_dim_size >= kernel_dim_size). Dimensions that were not part of the
convolution keep their size. Performance of the convolution can depend on the
length of the stride(s) of the input tensor dimension(s) along which the
convolution is computed (the first dimension has the shortest stride for ColMajor,
whereas for RowMajor the last dimension has the shortest stride).

    // Compute convolution along the second and third dimension.
    Tensor<float, 4, DataLayout> input(3, 3, 7, 11);
    Tensor<float, 2, DataLayout> kernel(2, 2);
    Tensor<float, 4, DataLayout> output(3, 2, 6, 11);
    input.setRandom();
    kernel.setRandom();

    Eigen::array<ptrdiff_t, 2> dims({1, 2});  // Specify second and third dimension for convolution.
    output = input.convolve(kernel, dims);

    for (int i = 0; i < 3; ++i) {
      for (int j = 0; j < 2; ++j) {
        for (int k = 0; k < 6; ++k) {
          for (int l = 0; l < 11; ++l) {
            const float result = output(i,j,k,l);
            const float expected = input(i,j+0,k+0,l) * kernel(0,0) +
                                   input(i,j+1,k+0,l) * kernel(1,0) +
                                   input(i,j+0,k+1,l) * kernel(0,1) +
                                   input(i,j+1,k+1,l) * kernel(1,1);
            VERIFY_IS_APPROX(result, expected);
          }
        }
      }
    }


## Geometrical Operations

These operations return a Tensor with different dimensions than the original
Tensor.  They can be used to access slices of tensors, view them with different
dimensions, or pad tensors with additional data.

### <Operation> reshape(const Dimensions& new_dims)

Returns a view of the input tensor that has been reshaped to the specified
new dimensions.  The argument new_dims is an array of Index values.  The
rank of the resulting tensor is equal to the number of elements in new_dims.

The product of all the sizes in the new dimension array must be equal to
the number of elements in the input tensor.

    // Increase the rank of the input tensor by introducing a new dimension
    // of size 1.
    Tensor<float, 2> input(7, 11);
    array<int, 3> three_dims{{7, 11, 1}};
    Tensor<float, 3> result = input.reshape(three_dims);

    // Decrease the rank of the input tensor by merging 2 dimensions.
    array<int, 1> one_dim{{7 * 11}};
    Tensor<float, 1> merged = input.reshape(one_dim);

This operation does not move any data in the input tensor, so the resulting
contents of a reshaped Tensor depend on the data layout of the original Tensor.

For example this is what happens when you ```reshape()``` a 2D ColMajor tensor
to one dimension:

    Eigen::Tensor<float, 2, Eigen::ColMajor> a(2, 3);
    a.setValues({{0.0f, 100.0f, 200.0f}, {300.0f, 400.0f, 500.0f}});
    Eigen::array<Eigen::DenseIndex, 1> one_dim({3 * 2});
    Eigen::Tensor<float, 1, Eigen::ColMajor> b = a.reshape(one_dim);
    cout << "b" << endl << b << endl;
    =>
    b
      0
    300
    100
    400
    200
    500

This is what happens when the 2D Tensor is RowMajor:

    Eigen::Tensor<float, 2, Eigen::RowMajor> a(2, 3);
    a.setValues({{0.0f, 100.0f, 200.0f}, {300.0f, 400.0f, 500.0f}});
    Eigen::array<Eigen::DenseIndex, 1> one_dim({3 * 2});
    Eigen::Tensor<float, 1, Eigen::RowMajor> b = a.reshape(one_dim);
    cout << "b" << endl << b << endl;
    =>
    b
      0
    100
    200
    300
    400
    500

The reshape operation is an lvalue. In other words, it can be used on the left
side of the assignment operator.

The previous example can be rewritten as follows:

    Eigen::Tensor<float, 2, Eigen::ColMajor> a(2, 3);
    a.setValues({{0.0f, 100.0f, 200.0f}, {300.0f, 400.0f, 500.0f}});
    Eigen::array<Eigen::DenseIndex, 2> two_dim({2, 3});
    Eigen::Tensor<float, 1, Eigen::ColMajor> b(6);
    b.reshape(two_dim) = a;
    cout << "b" << endl << b << endl;
    =>
    b
      0
    300
    100
    400
    200
    500

Note that "b" itself was not reshaped; instead, the assignment is done to the
reshape view of b.


### <Operation> shuffle(const Shuffle& shuffle)

Returns a copy of the input tensor whose dimensions have been
reordered according to the specified permutation. The argument shuffle
is an array of Index values. Its size is the rank of the input
tensor. It must contain a permutation of 0, 1, ..., rank - 1. The i-th
dimension of the output tensor is equal to the size of the shuffle[i]-th
dimension of the input tensor. For example:

    // Shuffle all dimensions to the left by 1.
    Tensor<float, 3> input(20, 30, 50);
    // ... set some values in input.
    Tensor<float, 3> output = input.shuffle({1, 2, 0});

    eigen_assert(output.dimension(0) == 30);
    eigen_assert(output.dimension(1) == 50);
    eigen_assert(output.dimension(2) == 20);

Indices into the output tensor are shuffled accordingly to formulate
indices into the input tensor. For example, one can assert in the above
code snippet that:

    eigen_assert(output(3, 7, 11) == input(11, 3, 7));

In general, one can assert that

    eigen_assert(output(..., indices[shuffle[i]], ...) ==
                 input(..., indices[i], ...))

The shuffle operation results in an lvalue, which means that it can be used on
the left side of the assignment operator.

Let's rewrite the previous example to take advantage of this feature:

    // Shuffle all dimensions to the left by 1.
    Tensor<float, 3> input(20, 30, 50);
    // ... set some values in input.
    Tensor<float, 3> output(30, 50, 20);
    output.shuffle({2, 0, 1}) = input;


### <Operation> stride(const Strides& strides)

Returns a view of the input tensor that strides (skips stride-1
elements) along each of the dimensions.  The argument strides is an
array of Index values.  The dimensions of the resulting tensor are
ceil(input_dimensions[i] / strides[i]).

For example this is what happens when you ```stride()``` a 2D tensor:

    Eigen::Tensor<int, 2> a(4, 3);
    a.setValues({{0, 100, 200}, {300, 400, 500}, {600, 700, 800}, {900, 1000, 1100}});
    Eigen::array<Eigen::DenseIndex, 2> strides({3, 2});
    Eigen::Tensor<int, 2> b = a.stride(strides);
    cout << "b" << endl << b << endl;
    =>
    b
       0   200
     900  1100

It is possible to assign a tensor to a stride:

    Tensor<float, 3> input(20, 30, 50);
    // ... set some values in input.
    Tensor<float, 3> output(40, 90, 200);
    output.stride({2, 3, 4}) = input;


### <Operation> slice(const StartIndices& offsets, const Sizes& extents)

Returns a sub-tensor of the given tensor. For each dimension i, the slice is
made of the extents[i] coefficients starting at offset offsets[i] in the input
tensor.

    Eigen::Tensor<int, 2> a(4, 3);
    a.setValues({{0, 100, 200}, {300, 400, 500},
                 {600, 700, 800}, {900, 1000, 1100}});
    Eigen::array<int, 2> offsets = {1, 0};
    Eigen::array<int, 2> extents = {2, 2};
    Eigen::Tensor<int, 2> slice = a.slice(offsets, extents);
    cout << "a" << endl << a << endl;
    =>
    a
       0   100   200
     300   400   500
     600   700   800
     900  1000  1100
    cout << "slice" << endl << slice << endl;
    =>
    slice
     300   400
     600   700


### <Operation> chip(const Index offset, const Index dim)

A chip is a special kind of slice. It is the subtensor at the given offset in
the dimension dim. The returned tensor has one fewer dimension than the input
tensor: the dimension dim is removed.

For example, a matrix chip would be either a row or a column of the input
matrix.

    Eigen::Tensor<int, 2> a(4, 3);
    a.setValues({{0, 100, 200}, {300, 400, 500},
                 {600, 700, 800}, {900, 1000, 1100}});
    Eigen::Tensor<int, 1> row_3 = a.chip(2, 0);
    Eigen::Tensor<int, 1> col_2 = a.chip(1, 1);
    cout << "a" << endl << a << endl;
    =>
    a
       0   100   200
     300   400   500
     600   700   800
     900  1000  1100
    cout << "row_3" << endl << row_3 << endl;
    =>
    row_3
       600   700   800
    cout << "col_2" << endl << col_2 << endl;
    =>
    col_2
       100   400   700    1000

It is possible to assign values to a tensor chip since the chip operation is an
lvalue. For example:

    Eigen::Tensor<int, 1> a(3);
    a.setValues({100, 200, 300});
    Eigen::Tensor<int, 2> b(2, 3);
    b.setZero();
    b.chip(0, 0) = a;
    cout << "a" << endl << a << endl;
    =>
    a
     100
     200
     300
    cout << "b" << endl << b << endl;
    =>
    b
       100   200   300
         0     0     0


### <Operation> reverse(const ReverseDimensions& reverse)

Returns a view of the input tensor that reverses the order of the coefficients
along a subset of the dimensions.  The argument reverse is an array of boolean
values that indicates whether or not the order of the coefficients should be
reversed along each of the dimensions.  This operation preserves the dimensions
of the input tensor.

For example this is what happens when you ```reverse()``` the first dimension
of a 2D tensor:

    Eigen::Tensor<int, 2> a(4, 3);
    a.setValues({{0, 100, 200}, {300, 400, 500},
                {600, 700, 800}, {900, 1000, 1100}});
    Eigen::array<bool, 2> reverse({true, false});
    Eigen::Tensor<int, 2> b = a.reverse(reverse);
    cout << "a" << endl << a << endl << "b" << endl << b << endl;
    =>
    a
       0   100   200
     300   400   500
     600   700   800
     900  1000  1100
    b
     900  1000  1100
     600   700   800
     300   400   500
       0   100   200


### <Operation> broadcast(const Broadcast& broadcast)

Returns a view of the input tensor in which the input is replicated one or more
times.
The broadcast argument specifies how many copies of the input tensor need to be
made in each of the dimensions.

    Eigen::Tensor<int, 2> a(2, 3);
    a.setValues({{0, 100, 200}, {300, 400, 500}});
    Eigen::array<int, 2> bcast({3, 2});
    Eigen::Tensor<int, 2> b = a.broadcast(bcast);
    cout << "a" << endl << a << endl << "b" << endl << b << endl;
    =>
    a
       0   100   200
     300   400   500
    b
       0   100   200    0   100   200
     300   400   500  300   400   500
       0   100   200    0   100   200
     300   400   500  300   400   500
       0   100   200    0   100   200
     300   400   500  300   400   500

### <Operation> concatenate(const OtherDerived& other, Axis axis)

TODO

### <Operation>  pad(const PaddingDimensions& padding)

Returns a view of the input tensor in which the input is padded with zeros.

    Eigen::Tensor<int, 2> a(2, 3);
    a.setValues({{0, 100, 200}, {300, 400, 500}});
    Eigen::array<pair<int, int>, 2> paddings;
    paddings[0] = make_pair(0, 1);
    paddings[1] = make_pair(2, 3);
    Eigen::Tensor<int, 2> b = a.pad(paddings);
    cout << "a" << endl << a << endl << "b" << endl << b << endl;
    =>
    a
       0   100   200
     300   400   500
    b
       0     0     0    0
       0     0     0    0
       0   100   200    0
     300   400   500    0
       0     0     0    0
       0     0     0    0
       0     0     0    0


### <Operation>  extract_patches(const PatchDims& patch_dims)

Returns a tensor of coefficient patches extracted from the input tensor, where
each patch is of dimension specified by 'patch_dims'. The returned tensor has
one greater dimension than the input tensor, which is used to index each patch.
The patch index in the output tensor depends on the data layout of the input
tensor: the patch index is the last dimension in ColMajor layout, and the first
dimension in RowMajor layout.

For example, given the following input tensor:

    Eigen::Tensor<float, 2, DataLayout> tensor(3,4);
    tensor.setValues({{0.0f, 1.0f, 2.0f, 3.0f},
                      {4.0f, 5.0f, 6.0f, 7.0f},
                      {8.0f, 9.0f, 10.0f, 11.0f}});

    cout << "tensor: " << endl << tensor << endl;
    =>
    tensor:
     0   1   2   3
     4   5   6   7
     8   9  10  11

Six 2x2 patches can be extracted and indexed using the following code:

    Eigen::Tensor<float, 3, DataLayout> patch;
    Eigen::array<ptrdiff_t, 2> patch_dims;
    patch_dims[0] = 2;
    patch_dims[1] = 2;
    patch = tensor.extract_patches(patch_dims);
    for (int k = 0; k < 6; ++k) {
      cout << "patch index: " << k << endl;
      for (int i = 0; i < 2; ++i) {
        for (int j = 0; j < 2; ++j) {
          if (DataLayout == ColMajor) {
            cout << patch(i, j, k) << " ";
          } else {
            cout << patch(k, i, j) << " ";
          }
        }
        cout << endl;
      }
    }

This code results in the following output when the data layout is ColMajor:

    patch index: 0
    0 1
    4 5
    patch index: 1
    4 5
    8 9
    patch index: 2
    1 2
    5 6
    patch index: 3
    5 6
    9 10
    patch index: 4
    2 3
    6 7
    patch index: 5
    6 7
    10 11

This code results in the following output when the data layout is RowMajor:
(NOTE: the set of patches is the same as in ColMajor, but are indexed differently).

    patch index: 0
    0 1
    4 5
    patch index: 1
    1 2
    5 6
    patch index: 2
    2 3
    6 7
    patch index: 3
    4 5
    8 9
    patch index: 4
    5 6
    9 10
    patch index: 5
    6 7
    10 11

### <Operation>  extract_image_patches(const Index patch_rows, const Index patch_cols,
                          const Index row_stride, const Index col_stride,
                          const PaddingType padding_type)

Returns a tensor of coefficient image patches extracted from the input tensor,
which is expected to have dimensions ordered as follows (depending on the data
layout of the input tensor, and the number of additional dimensions 'N'):

*) ColMajor
1st dimension: channels (of size d)
2nd dimension: rows (of size r)
3rd dimension: columns (of size c)
4th-Nth dimension: time (for video) or batch (for bulk processing).

*) RowMajor (reverse order of ColMajor)
1st-Nth dimension: time (for video) or batch (for bulk processing).
N+1'th dimension: columns (of size c)
N+2'th dimension: rows (of size r)
N+3'th dimension: channels (of size d)

The returned tensor has one greater dimension than the input tensor, which is
used to index each patch. The patch index in the output tensor depends on the
data layout of the input tensor: the patch index is the 4'th dimension in
ColMajor layout, and the 4'th from the last dimension in RowMajor layout.

For example, given the following input tensor with the following dimension
sizes:
 *) depth:   2
 *) rows:    3
 *) columns: 5
 *) batch:   7

    Tensor<float, 4> tensor(2,3,5,7);
    Tensor<float, 4, RowMajor> tensor_row_major = tensor.swap_layout();

2x2 image patches can be extracted and indexed using the following code:

*) 2D patch: ColMajor (patch indexed by second-to-last dimension)

    Tensor<float, 5> twod_patch;
    twod_patch = tensor.extract_image_patches<2, 2>();
    // twod_patch.dimension(0) == 2
    // twod_patch.dimension(1) == 2
    // twod_patch.dimension(2) == 2
    // twod_patch.dimension(3) == 3*5
    // twod_patch.dimension(4) == 7

*) 2D patch: RowMajor (patch indexed by the second dimension)

    Tensor<float, 5, RowMajor> twod_patch_row_major;
    twod_patch_row_major = tensor_row_major.extract_image_patches<2, 2>();
    // twod_patch_row_major.dimension(0) == 7
    // twod_patch_row_major.dimension(1) == 3*5
    // twod_patch_row_major.dimension(2) == 2
    // twod_patch_row_major.dimension(3) == 2
    // twod_patch_row_major.dimension(4) == 2

## Special Operations

### <Operation> cast<T>()

Returns a tensor of type T with the same dimensions as the original tensor.
The returned tensor contains the values of the original tensor converted to
type T.

    Eigen::Tensor<float, 2> a(2, 3);
    Eigen::Tensor<int, 2> b = a.cast<int>();

This can be useful, for example, if you need to do element-wise division of
Tensors of integers.  This is not currently supported by the Tensor library
but you can easily cast the tensors to floats to do the division:

    Eigen::Tensor<int, 2> a(2, 3);
    a.setValues({{0, 1, 2}, {3, 4, 5}});
    Eigen::Tensor<int, 2> b =
        (a.cast<float>() / a.constant(2).cast<float>()).cast<int>();
    cout << "a" << endl << a << endl << endl;
    cout << "b" << endl << b << endl << endl;
    =>
    a
    0 1 2
    3 4 5

    b
    0 0 1
    1 2 2


### <Operation>     eval()

TODO


## Representation of scalar values

Scalar values are often represented by tensors of size 1 and rank 1. It would be
more logical and user friendly to use tensors of rank 0 instead. For example
Tensor<T, N>::maximum() currently returns a Tensor<T, 1>. Similarly, the inner
product of two 1d tensors (through contractions) returns a 1d tensor. In the
future these operations might be updated to return 0d tensors instead.

## Limitations

*   The number of tensor dimensions is currently limited to 250 when using a
    compiler that supports cxx11. It is limited to only 5 for older compilers.
*   The IndexList class requires a cxx11 compliant compiler. You can use an
    array of indices instead if you don't have access to a modern compiler.
*   On GPUs only floating point values are properly tested and optimized for.
*   Complex and integer values are known to be broken on GPUs. If you try to use
    them you'll most likely end up triggering a static assertion failure such as
    EIGEN_STATIC_ASSERT(packetSize > 1, YOU_MADE_A_PROGRAMMING_MISTAKE)


include\unsupported\Eigen\CXX11\src\Tensor\Tensor.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorArgMax.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorAssign.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorBase.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorBroadcasting.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorChipping.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorConcatenation.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorContraction.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorContractionBlocking.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorContractionCuda.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorContractionMapper.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorContractionThreadPool.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorConversion.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorConvolution.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorCostModel.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorCustomOp.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorDevice.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorDeviceCuda.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorDeviceDefault.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorDeviceSycl.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorDeviceThreadPool.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorDimensionList.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorDimensions.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorEvalTo.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorEvaluator.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorExecutor.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorExpr.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorFFT.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorFixedSize.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorForcedEval.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorForwardDeclarations.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorFunctors.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorGenerator.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorGlobalFunctions.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorImagePatch.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorIndexList.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorInflation.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorInitializer.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorIntDiv.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorIO.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorLayoutSwap.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorMacros.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorMap.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorMeta.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorMorphing.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorPadding.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorPatch.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorRandom.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorReduction.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorReductionCuda.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorReductionSycl.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorRef.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorReverse.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorScan.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorShuffling.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorStorage.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorStriding.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorSycl.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorSyclConvertToDeviceExpression.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorSyclExprConstructor.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorSyclExtractAccessor.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorSyclExtractFunctors.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorSyclLeafCount.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorSyclPlaceHolderExpr.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorSyclRun.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorSyclTuple.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorTraits.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorUInt128.h
 
include\unsupported\Eigen\CXX11\src\Tensor\TensorVolumePatch.h
 
include\unsupported\Eigen\CXX11\src\TensorSymmetry\DynamicSymmetry.h
 
include\unsupported\Eigen\CXX11\src\TensorSymmetry\StaticSymmetry.h
 
include\unsupported\Eigen\CXX11\src\TensorSymmetry\Symmetry.h
 
include\unsupported\Eigen\CXX11\src\TensorSymmetry\util\TemplateGroupTheory.h
 
include\unsupported\Eigen\CXX11\src\ThreadPool\EventCount.h
 
include\unsupported\Eigen\CXX11\src\ThreadPool\NonBlockingThreadPool.h
 
include\unsupported\Eigen\CXX11\src\ThreadPool\RunQueue.h
 
include\unsupported\Eigen\CXX11\src\ThreadPool\SimpleThreadPool.h
 
include\unsupported\Eigen\CXX11\src\ThreadPool\ThreadEnvironment.h
 
include\unsupported\Eigen\CXX11\src\ThreadPool\ThreadLocal.h
 
include\unsupported\Eigen\CXX11\src\ThreadPool\ThreadPoolInterface.h
 
include\unsupported\Eigen\CXX11\src\ThreadPool\ThreadYield.h
 
include\unsupported\Eigen\CXX11\src\util\CXX11Meta.h
 
include\unsupported\Eigen\CXX11\src\util\CXX11Workarounds.h
 
include\unsupported\Eigen\CXX11\src\util\EmulateArray.h
 
include\unsupported\Eigen\CXX11\src\util\EmulateCXX11Meta.h
 
include\unsupported\Eigen\CXX11\src\util\MaxSizeVector.h
 
include\unsupported\Eigen\CXX11\Tensor
 
include\unsupported\Eigen\CXX11\TensorSymmetry
 
include\unsupported\Eigen\CXX11\ThreadPool
 
include\unsupported\Eigen\EulerAngles
 
include\unsupported\Eigen\FFT
 
include\unsupported\Eigen\IterativeSolvers
 
include\unsupported\Eigen\KroneckerProduct
 
include\unsupported\Eigen\LevenbergMarquardt
 
include\unsupported\Eigen\MatrixFunctions
 
include\unsupported\Eigen\MoreVectorization
 
include\unsupported\Eigen\MPRealSupport
 
include\unsupported\Eigen\NonLinearOptimization
 
include\unsupported\Eigen\NumericalDiff
 
include\unsupported\Eigen\OpenGLSupport
 
include\unsupported\Eigen\Polynomials
 
include\unsupported\Eigen\Skyline
 
include\unsupported\Eigen\SparseExtra
 
include\unsupported\Eigen\SpecialFunctions
 
include\unsupported\Eigen\Splines
 
include\unsupported\Eigen\src\AutoDiff\AutoDiffJacobian.h
 
include\unsupported\Eigen\src\AutoDiff\AutoDiffScalar.h
 
include\unsupported\Eigen\src\AutoDiff\AutoDiffVector.h
 
include\unsupported\Eigen\src\BVH\BVAlgorithms.h
 
include\unsupported\Eigen\src\BVH\KdBVH.h
 
include\unsupported\Eigen\src\Eigenvalues\ArpackSelfAdjointEigenSolver.h
 
include\unsupported\Eigen\src\EulerAngles\EulerAngles.h
 
include\unsupported\Eigen\src\EulerAngles\EulerSystem.h
 
include\unsupported\Eigen\src\FFT\ei_fftw_impl.h
 
include\unsupported\Eigen\src\FFT\ei_kissfft_impl.h
 
include\unsupported\Eigen\src\IterativeSolvers\ConstrainedConjGrad.h
 
include\unsupported\Eigen\src\IterativeSolvers\DGMRES.h
 
include\unsupported\Eigen\src\IterativeSolvers\GMRES.h
 
include\unsupported\Eigen\src\IterativeSolvers\IncompleteLU.h
 
include\unsupported\Eigen\src\IterativeSolvers\IterationController.h
 
include\unsupported\Eigen\src\IterativeSolvers\MINRES.h
 
include\unsupported\Eigen\src\IterativeSolvers\Scaling.h
 
include\unsupported\Eigen\src\KroneckerProduct\KroneckerTensorProduct.h
 
include\unsupported\Eigen\src\LevenbergMarquardt\CopyrightMINPACK.txt
Minpack Copyright Notice (1999) University of Chicago.  All rights reserved

Redistribution and use in source and binary forms, with or
without modification, are permitted provided that the
following conditions are met:

1. Redistributions of source code must retain the above
copyright notice, this list of conditions and the following
disclaimer.

2. Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials
provided with the distribution.

3. The end-user documentation included with the
redistribution, if any, must include the following
acknowledgment:

   "This product includes software developed by the
   University of Chicago, as Operator of Argonne National
   Laboratory.

Alternately, this acknowledgment may appear in the software
itself, if and wherever such third-party acknowledgments
normally appear.

4. WARRANTY DISCLAIMER. THE SOFTWARE IS SUPPLIED "AS IS"
WITHOUT WARRANTY OF ANY KIND. THE COPYRIGHT HOLDER, THE
UNITED STATES, THE UNITED STATES DEPARTMENT OF ENERGY, AND
THEIR EMPLOYEES: (1) DISCLAIM ANY WARRANTIES, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO ANY IMPLIED WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE
OR NON-INFRINGEMENT, (2) DO NOT ASSUME ANY LEGAL LIABILITY
OR RESPONSIBILITY FOR THE ACCURACY, COMPLETENESS, OR
USEFULNESS OF THE SOFTWARE, (3) DO NOT REPRESENT THAT USE OF
THE SOFTWARE WOULD NOT INFRINGE PRIVATELY OWNED RIGHTS, (4)
DO NOT WARRANT THAT THE SOFTWARE WILL FUNCTION
UNINTERRUPTED, THAT IT IS ERROR-FREE OR THAT ANY ERRORS WILL
BE CORRECTED.

5. LIMITATION OF LIABILITY. IN NO EVENT WILL THE COPYRIGHT
HOLDER, THE UNITED STATES, THE UNITED STATES DEPARTMENT OF
ENERGY, OR THEIR EMPLOYEES: BE LIABLE FOR ANY INDIRECT,
INCIDENTAL, CONSEQUENTIAL, SPECIAL OR PUNITIVE DAMAGES OF
ANY KIND OR NATURE, INCLUDING BUT NOT LIMITED TO LOSS OF
PROFITS OR LOSS OF DATA, FOR ANY REASON WHATSOEVER, WHETHER
SUCH LIABILITY IS ASSERTED ON THE BASIS OF CONTRACT, TORT
(INCLUDING NEGLIGENCE OR STRICT LIABILITY), OR OTHERWISE,
EVEN IF ANY OF SAID PARTIES HAS BEEN WARNED OF THE
POSSIBILITY OF SUCH LOSS OR DAMAGES.

include\unsupported\Eigen\src\LevenbergMarquardt\LevenbergMarquardt.h
 
include\unsupported\Eigen\src\LevenbergMarquardt\LMcovar.h
 
include\unsupported\Eigen\src\LevenbergMarquardt\LMonestep.h
 
include\unsupported\Eigen\src\LevenbergMarquardt\LMpar.h
 
include\unsupported\Eigen\src\LevenbergMarquardt\LMqrsolv.h
 
include\unsupported\Eigen\src\MatrixFunctions\MatrixExponential.h
 
include\unsupported\Eigen\src\MatrixFunctions\MatrixFunction.h
 
include\unsupported\Eigen\src\MatrixFunctions\MatrixLogarithm.h
 
include\unsupported\Eigen\src\MatrixFunctions\MatrixPower.h
 
include\unsupported\Eigen\src\MatrixFunctions\MatrixSquareRoot.h
 
include\unsupported\Eigen\src\MatrixFunctions\StemFunction.h
 
include\unsupported\Eigen\src\MoreVectorization\MathFunctions.h
 
include\unsupported\Eigen\src\NonLinearOptimization\chkder.h
 
include\unsupported\Eigen\src\NonLinearOptimization\covar.h
 
include\unsupported\Eigen\src\NonLinearOptimization\dogleg.h
 
include\unsupported\Eigen\src\NonLinearOptimization\fdjac1.h
 
include\unsupported\Eigen\src\NonLinearOptimization\HybridNonLinearSolver.h
 
include\unsupported\Eigen\src\NonLinearOptimization\LevenbergMarquardt.h
 
include\unsupported\Eigen\src\NonLinearOptimization\lmpar.h
 
include\unsupported\Eigen\src\NonLinearOptimization\qrsolv.h
 
include\unsupported\Eigen\src\NonLinearOptimization\r1mpyq.h
 
include\unsupported\Eigen\src\NonLinearOptimization\r1updt.h
 
include\unsupported\Eigen\src\NonLinearOptimization\rwupdt.h
 
include\unsupported\Eigen\src\NumericalDiff\NumericalDiff.h
 
include\unsupported\Eigen\src\Polynomials\Companion.h
 
include\unsupported\Eigen\src\Polynomials\PolynomialSolver.h
 
include\unsupported\Eigen\src\Polynomials\PolynomialUtils.h
 
include\unsupported\Eigen\src\Skyline\SkylineInplaceLU.h
 
include\unsupported\Eigen\src\Skyline\SkylineMatrix.h
 
include\unsupported\Eigen\src\Skyline\SkylineMatrixBase.h
 
include\unsupported\Eigen\src\Skyline\SkylineProduct.h
 
include\unsupported\Eigen\src\Skyline\SkylineStorage.h
 
include\unsupported\Eigen\src\Skyline\SkylineUtil.h
 
include\unsupported\Eigen\src\SparseExtra\BlockOfDynamicSparseMatrix.h
 
include\unsupported\Eigen\src\SparseExtra\BlockSparseMatrix.h
 
include\unsupported\Eigen\src\SparseExtra\DynamicSparseMatrix.h
 
include\unsupported\Eigen\src\SparseExtra\MarketIO.h
 
include\unsupported\Eigen\src\SparseExtra\MatrixMarketIterator.h
 
include\unsupported\Eigen\src\SparseExtra\RandomSetter.h
 
include\unsupported\Eigen\src\SpecialFunctions\arch\CUDA\CudaSpecialFunctions.h
 
include\unsupported\Eigen\src\SpecialFunctions\SpecialFunctionsArrayAPI.h
 
include\unsupported\Eigen\src\SpecialFunctions\SpecialFunctionsFunctors.h
 
include\unsupported\Eigen\src\SpecialFunctions\SpecialFunctionsHalf.h
 
include\unsupported\Eigen\src\SpecialFunctions\SpecialFunctionsImpl.h
 
include\unsupported\Eigen\src\SpecialFunctions\SpecialFunctionsPacketMath.h
 
include\unsupported\Eigen\src\Splines\Spline.h
 
include\unsupported\Eigen\src\Splines\SplineFitting.h
 
include\unsupported\Eigen\src\Splines\SplineFwd.h
 
include\unsupported\README.txt
This directory contains contributions from various users.
They are provided "as is", without any support. Nevertheless,
most of them are subject to be included in Eigen in the future.

In order to use an unsupported module you have to do either:

 - add the path_to_eigen/unsupported directory to your include path and do:
   #include <Eigen/ModuleHeader>

 - or directly do:
   #include <unsupported/Eigen/ModuleHeader>


If you are interested in contributing to one of them, or have other stuff
you would like to share, feel free to contact us:
http://eigen.tuxfamily.org/index.php?title=Main_Page#Mailing_list

Any kind of contributions are much appreciated, even very preliminary ones.
However, it:
 - must rely on Eigen,
 - must be highly related to math,
 - should have some general purpose in the sense that it could
   potentially become an offical Eigen module (or be merged into another one).

In doubt feel free to contact us. For instance, if your addons is very too specific
but it shows an interesting way of using Eigen, then it could be a nice demo.


This directory is organized as follow:

unsupported/Eigen/ModuleHeader1
unsupported/Eigen/ModuleHeader2
unsupported/Eigen/...
unsupported/Eigen/src/Module1/SourceFile1.h
unsupported/Eigen/src/Module1/SourceFile2.h
unsupported/Eigen/src/Module1/...
unsupported/Eigen/src/Module2/SourceFile1.h
unsupported/Eigen/src/Module2/SourceFile2.h
unsupported/Eigen/src/Module2/...
unsupported/Eigen/src/...
unsupported/doc/snippets/*.cpp   <- code snippets for the doc
unsupported/doc/examples/*.cpp   <- examples for the doc
unsupported/doc/TutorialModule1.dox
unsupported/doc/TutorialModule2.dox
unsupported/doc/...
unsupported/test/*.cpp           <- unit test files

The documentation is generated at the same time as the main Eigen documentation.
The .html files are generated in: build_dir/doc/html/unsupported/

share\cmake\Eigen3Config.cmake
 
share\cmake\Eigen3ConfigVersion.cmake
 
tools\chocolateyinstall.ps1
$ErrorActionPreference = 'Stop'; # Stop on all errors.

# Source registry key values which are shared between install and uninstall.
. $PSScriptRoot\regKeys.ps1

# Register this package's CMake config directory in the system-wide CMake package
# registry, so that find_package(Eigen3) can locate Eigen3Config.cmake.
New-Item "$CMakeSystemRepositoryPath\$CMakePackageName" -ItemType Directory -Force
New-ItemProperty -Name "CMakePackageDir" -PropertyType String -Value "$env:ChocolateyPackageFolder\share\cmake" -Path "$CMakeSystemRepositoryPath\$CMakePackageName" -Force
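The registry entry written by this script populates CMake's package registry on Windows, so a consumer project can find Eigen without a manual hint path. A consumer's CMakeLists.txt might look like the following sketch (project and target names are illustrative; `Eigen3::Eigen` is the imported target exported by `Eigen3Config.cmake`):

```cmake
cmake_minimum_required(VERSION 3.10)
project(eigen_demo CXX)

# find_package() consults the CMake package registry (among other search
# locations) and loads Eigen3Config.cmake from this Chocolatey package.
find_package(Eigen3 3.3 REQUIRED)

add_executable(demo main.cpp)
target_link_libraries(demo Eigen3::Eigen)
```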
tools\chocolateyuninstall.ps1
$ErrorActionPreference = 'Stop'; # Stop on all errors.

# Source registry key values which are shared between install and uninstall.
. $PSScriptRoot\regKeys.ps1

if (Test-Path $CMakeRegistryPath) {
  if (Test-Path $CMakeSystemRepositoryPath) {
      Remove-Item "$CMakeSystemRepositoryPath\$CMakePackageName"
  }
}
tools\LICENSE.txt

From: https://www.mozilla.org/en-US/MPL/2.0/

LICENSE

Mozilla Public License
Version 2.0
1. Definitions

1.1. “Contributor”

    means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software.
1.2. “Contributor Version”

    means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor’s Contribution.
1.3. “Contribution”

    means Covered Software of a particular Contributor.
1.4. “Covered Software”

    means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof.
1.5. “Incompatible With Secondary Licenses”

    means

        that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or

        that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License.

1.6. “Executable Form”

    means any form of the work other than Source Code Form.
1.7. “Larger Work”

    means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software.
1.8. “License”

    means this document.
1.9. “Licensable”

    means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License.
1.10. “Modifications”

    means any of the following:

        any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or

        any new file in Source Code Form that contains any Covered Software.

1.11. “Patent Claims” of a Contributor

    means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version.
1.12. “Secondary License”

    means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses.
1.13. “Source Code Form”

    means the form of the work preferred for making modifications.
1.14. “You” (or “Your”)

    means an individual or a legal entity exercising rights under this License. For legal entities, “You” includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, “control” means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity.

2. License Grants and Conditions
2.1. Grants

Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license:

    under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and

    under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version.

2.2. Effective Date

The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution.
2.3. Limitations on Grant Scope

The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor:

    for any code that a Contributor has removed from Covered Software; or

    for infringements caused by: (i) Your and any other third party’s modifications of Covered Software, or (ii) the combination of its Contributions with other software (except as part of its Contributor Version); or

    under Patent Claims infringed by Covered Software in the absence of its Contributions.

This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4).
2.4. Subsequent Licenses

No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3).
2.5. Representation

Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use

This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents.
2.7. Conditions

Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form

All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients’ rights in the Source Code Form.
3.2. Distribution of Executable Form

If You distribute Covered Software in Executable Form then:

    such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and

    You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients’ rights in the Source Code Form under this License.

3.3. Distribution of a Larger Work

You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s).
3.4. Notices

You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms

You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction.
4. Inability to Comply Due to Statute or Regulation

If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply with the terms of this License to the maximum extent possible; and (b) describe the limitations and the code they affect. Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it.
5. Termination

5.1. The rights granted under this License will terminate automatically if You fail to comply with any of its terms. However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated (a) provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice.

5.2. If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate.

5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination.
6. Disclaimer of Warranty

Covered Software is provided under this License on an “as is” basis, without warranty of any kind, either expressed, implied, or statutory, including, without limitation, warranties that the Covered Software is free of defects, merchantable, fit for a particular purpose or non-infringing. The entire risk as to the quality and performance of the Covered Software is with You. Should any Covered Software prove defective in any respect, You (not any Contributor) assume the cost of any necessary servicing, repair, or correction. This disclaimer of warranty constitutes an essential part of this License. No use of any Covered Software is authorized under this License except under this disclaimer.
7. Limitation of Liability

Under no circumstances and under no legal theory, whether tort (including negligence), contract, or otherwise, shall any Contributor, or anyone who distributes Covered Software as permitted above, be liable to You for any direct, indirect, special, incidental, or consequential damages of any character including, without limitation, damages for lost profits, loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses, even if such party shall have been informed of the possibility of such damages. This limitation of liability shall not apply to liability for death or personal injury resulting from such party’s negligence to the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion or limitation of incidental or consequential damages, so this exclusion and limitation may not apply to You.
8. Litigation

Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this Section shall prevent a party’s ability to bring cross-claims or counter-claims.
9. Miscellaneous

This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor.
10. Versions of the License
10.1. New Versions

Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number.
10.2. Effect of New Versions

You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward.
10.3. Modified Versions

If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses

If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice

    This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at https://mozilla.org/MPL/2.0/.

If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice.

You may add additional accurate notices of copyright ownership.
Exhibit B - “Incompatible With Secondary Licenses” Notice

    This Source Code Form is “Incompatible With Secondary Licenses”, as defined by the Mozilla Public License, v. 2.0.

tools\regKeys.ps1
$CMakeRegistryPath = "HKCU:\SOFTWARE\Kitware\CMake"
$CMakeSystemRepositoryPath = "HKLM:\SOFTWARE\Kitware\CMake\Packages"
$CMakePackageName = "Eigen3"
tools\VERIFICATION.txt

VERIFICATION
Verification is intended to assist the Chocolatey moderators and community
in verifying that this package's contents are trustworthy.
 
This package is inspired by https://github.com/nuclearsandwich/eigen-choco and https://github.com/ros2/choco-packages, with minor modifications. The original source provided by the software authors is available at https://gitlab.com/libeigen/eigen/-/archive/3.3.4/eigen-3.3.4.zip. Using a tool such as WinMerge, you can check that eigen\include\Eigen and eigen\include\unsupported from this package are identical to the corresponding subfolders in eigen-3.3.4 from the software authors. The remaining files in eigen\share\cmake can easily be checked by opening them in a text editor (they set paths and version information for CMake).


In cases where actual malware is found, the packages are subject to removal. Software sometimes has false positives. Moderators do not necessarily validate the safety of the underlying software, only that a package retrieves software from the official distribution point and/or validate embedded software against official distribution point (where distribution rights allow redistribution).

Chocolatey Pro provides runtime protection from possible malware.

Version                Downloads   Last Updated                   Status
Eigen 3.4.0.20240211   72          Saturday, February 24, 2024    Approved
Eigen 3.4.0            16583       Saturday, March 12, 2022       Approved
Eigen 3.3.4.20210818   3962        Wednesday, August 18, 2021     Approved
Eigen 3.3.4            726         Tuesday, May 4, 2021           Approved

This package has no dependencies.

Discussion for the Eigen Package

Ground Rules:

  • This discussion is only about Eigen and the Eigen package. If you have feedback for Chocolatey, please contact the Google Group.
  • This discussion will carry over multiple versions. If you have a comment about a particular version, please note that in your comments.
  • The maintainers of this Chocolatey Package will be notified about new comments posted to this Disqus thread; however, a response is not guaranteed. If you do not hear back from the maintainers after posting a message below, please follow up by using the link on the left side of this page or follow this link to contact maintainers. If you still hear nothing back, please follow the package triage process.
  • Tell us what you love about the package or Eigen, or tell us what needs improvement.
  • Share your experiences with the package, or extra configuration or gotchas that you've found.
  • If you use a URL, the comment will be flagged for moderation until you've been whitelisted. Disqus-moderated comments are approved on a weekly schedule, if not sooner. It could take between one and five days for your comment to show up.