Python memory error: compute inverse of large sized matrix
I have a big matrix of size 200,000 x 200,000 and I need to compute its inverse, but numpy.linalg.inv gives an out-of-memory error. Is there any way to compute the inverse of such a large matrix?

Tags: python, matrix

asked Mar 23 at 9:17 by shaifali Gupta
– Media (Mar 23 at 9:28): You may want to see here. It may help you.

– Savinay_ (Mar 23 at 9:37): That is a large matrix for computing an inverse. If the data elements are floats, there is a fair amount of floating-point work in progress, and that needs memory. Try increasing your RAM for operations this large. The suggestion by @Media is also helpful.

– shaifali Gupta (Mar 23 at 9:40): @Savinay_ Yes, the data elements are floats. Some values are even of the order of 10^-6.

– Media (Mar 23 at 9:49): Another approach is to use a different programming language. I believe Java has some support for this even when RAM is full; I don't know the inner implementation, but I guess it uses paging.

– Savinay_ (Mar 23 at 9:52): @shaifaliGupta You need to increase your memory for sure. Matrix inversion takes up to O(n^3) operations (expensive in both time and memory), and you have a very big matrix to invert. That needs memory.
1 Answer
Well, it does not matter which language you use; I am surprised you can even store that matrix in memory. I had a similar problem while doing research on kernel methods.

First, to put in perspective how huge that matrix is, assume each element is a 32-bit word: $200{,}000 \times 200{,}000 \times 32 = 1.28 \times 10^{12}$ bits, or about 149 GB.

The bit length depends on the data type, but even for a boolean matrix that is about 4.65 GB, and you said your data is a bit more complex than that.
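As a quick sanity check, here is a minimal sketch (plain NumPy, assuming only the 200,000 x 200,000 size from the question) that estimates the dense storage cost at the two common float precisions; with NumPy's default float64 the figure roughly doubles to about 298 GiB, before counting the workspace numpy.linalg.inv itself needs.

```python
import numpy as np

n = 200_000  # matrix dimension from the question

# Dense storage cost for 32-bit and 64-bit floats, in GiB.
for dtype in (np.float32, np.float64):
    nbytes = n * n * np.dtype(dtype).itemsize
    print(f"{np.dtype(dtype).name}: {nbytes / 2**30:.1f} GiB")
```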
Solution 1
If you are trying to solve a linear system, there are many iterative solvers that can compute a 200,000 x 1 approximation of the solution vector without having to hold that absurdly large matrix (let alone its inverse) in RAM all at once. But you should be aware that this might take a while, because you may have to load parts of the matrix from disk. A sketch of the idea follows.
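Here is a minimal sketch of that idea using SciPy's conjugate-gradient solver with a matrix-free LinearOperator. It assumes, purely for illustration, that the matrix lives in a hypothetical memory-mapped file named matrix.f32 and that it is symmetric positive definite (the requirement for CG; for a general matrix, scipy.sparse.linalg.gmres or lgmres are drop-in alternatives):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 200_000
# Hypothetical on-disk copy of the matrix, read one row block at a time.
M_disk = np.memmap("matrix.f32", dtype=np.float32, mode="r", shape=(n, n))

def matvec(x, block=5_000):
    """Compute M @ x without ever loading all of M into RAM."""
    x = np.asarray(x).ravel()
    y = np.empty(n, dtype=np.float64)
    for start in range(0, n, block):
        y[start:start + block] = M_disk[start:start + block] @ x
    return y

M_op = LinearOperator((n, n), matvec=matvec, dtype=np.float64)
b = np.ones(n)                      # right-hand side of M x = b
x, info = cg(M_op, b, maxiter=500)  # info == 0 means the solver converged
```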
Solution 2
Another possible solution, if you really need the inverse, applies when the matrix can be expressed as a summation of outer products of vectors. If so, you can do this:
$$
M = \sum_i v_i u_i^T
$$
Take the first term of the sum and invert it with a pseudoinverse, which gives you a rough first approximation $M^{-1}_{\text{approx}}$; instead of forming the outer product and pseudoinverting it, you can pseudoinvert $v_1$ and $u_1$ separately and use the Moore-Penrose reverse-order property:
$$
M^{-1}_{\text{approx}} = (v_1 u_1^T)^{\dagger} = (u_1^T)^{\dagger}\, v_1^{\dagger}
$$
Combine this with the Sherman-Morrison formula to fold in the remaining $v_i$ and $u_i$ vectors one at a time.
The Sherman-Morrison formula gives:
$$
(A + uv^T)^{-1} = A^{-1} - \dfrac{A^{-1} u v^T A^{-1}}{1 + v^T A^{-1} u}
$$
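Here is a minimal sketch of a single Sherman-Morrison rank-1 update in NumPy, shown on a small matrix so the result can be checked against numpy.linalg.inv; the sizes and random data are only for illustration:

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Return (A + u v^T)^{-1} given A^{-1}, via the Sherman-Morrison formula."""
    Au = A_inv @ u              # A^{-1} u
    vA = v @ A_inv              # v^T A^{-1}
    denom = 1.0 + v @ Au        # 1 + v^T A^{-1} u (must be nonzero)
    return A_inv - np.outer(Au, vA) / denom

rng = np.random.default_rng(0)
n = 5                           # tiny size, just to verify the identity
A = rng.standard_normal((n, n)) + n * np.eye(n)
u, v = rng.standard_normal(n), rng.standard_normal(n)

updated = sherman_morrison_update(np.linalg.inv(A), u, v)
print(np.allclose(updated, np.linalg.inv(A + np.outer(u, v))))  # True
```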
Solution 3
You might try using the Schur complement, which allows you to do the following.
Given a matrix $M$, it can be expressed in block form as
$$
M =
\begin{bmatrix}
A & B \\
C & D
\end{bmatrix}
$$
where $A$ and $D$ must be square. You can then compute the inverse of $M$ blockwise as:
$$
M^{-1} =
\begin{bmatrix}
(M/D)^{-1} & -(M/D)^{-1} B D^{-1} \\
-D^{-1} C (M/D)^{-1} & (M/A)^{-1}
\end{bmatrix}
$$
where $(M/D)$ denotes the Schur complement of block $D$ of the matrix $M$ and is defined as
$$
(M/D) = A - B D^{-1} C
$$
also
$$
(M/A) = D - C A^{-1} B
$$
If $A$ or $D$ is singular, this blockwise formula cannot be applied directly; a pseudoinverse might be used instead.
Splitting the matrix this way reduces inverting one $200{,}000 \times 200{,}000$ matrix to inverting and multiplying $100{,}000 \times 100{,}000$ blocks; the work parallelizes well, and under certain conditions the split can be applied recursively, giving a reduction greater than a factor of four in size.
After one more level of recursion the blocks are $50{,}000 \times 50{,}000$, which at 32 bits take only about 9.3 GB each and are far easier to invert. A small numerical check of the blockwise formula follows.
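Here is a minimal sketch of the blockwise formula in NumPy, demonstrated on a small matrix so the result can be checked against numpy.linalg.inv directly; in a real out-of-core implementation each inv call on a block would itself be replaced by a recursive or disk-backed step:

```python
import numpy as np
from numpy.linalg import inv

def blockwise_inverse(M, k):
    """Invert M via the Schur complement of its lower-right k x k block D."""
    A, B = M[:-k, :-k], M[:-k, -k:]
    C, D = M[-k:, :-k], M[-k:, -k:]
    D_inv = inv(D)
    S_inv = inv(A - B @ D_inv @ C)  # (M/D)^{-1}, the inverted Schur complement
    return np.block([
        [S_inv,              -S_inv @ B @ D_inv],
        [-D_inv @ C @ S_inv,  D_inv + D_inv @ C @ S_inv @ B @ D_inv],  # = (M/A)^{-1}
    ])

rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n)) + n * np.eye(n)        # well-conditioned test matrix
print(np.allclose(blockwise_inverse(M, k=4), inv(M)))  # True
```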
Hope it helps. You might also want to search for answers on Math Stack Exchange or Stack Overflow, and even ask there.
Edit: Added another solution with Schur Complement
answered Mar 24 at 5:44 by Pedro Henrique Monforte (new contributor), edited Mar 25 at 0:21
– Pedro Henrique Monforte (Mar 26 at 11:04): The first solution also works for matrix multiplication, i.e. finding $A^{-1}B$ without knowledge of $A^{-1}$.