What is the fastest integer factorization to break RSA?



I read on Wikipedia that the fastest algorithm for breaking RSA is the GNFS (general number field sieve).

And in one IEEE paper (MVFactor: A method to decrease processing time for factorization algorithm), I read that the fastest algorithms are TDM, FFM and VFactor.

Which of these is actually right?










factoring

edited Apr 2 at 15:51 by kelalaka · asked Apr 2 at 14:34 by user56036







  • This conference looks like a paper mill… IEEE is a big organization; its name alone means very little, and it is well-known that many of its publications are essentially academic scams. Except for a single (unused!) citation about the NFS, the authors of this paper appear to be completely unaware of any developments in integer factorization in the past thirty years. Throw it away; ignore the conference; nothing is to be learned here except a lesson about perverse incentives in publish-or-perish academic culture and profiteering academic publishers. – Squeamish Ossifrage, Apr 4 at 1:47
















3 Answers


















The IEEE paper is silly.

The factorization method they give is quite slow, except in rare cases. For example, in their table 1 they proudly show that their improved algorithm takes 653.14 seconds to factor a 67-bit number; I just tried a number of that size with a more conventional algorithm, and it took 6 ms. Yes, that's 100,000 times as fast...






answered Apr 2 at 14:53 by poncho
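The answer doesn't say which "more conventional algorithm" was used; as a sketch of how little work a 67-bit semiprime takes, here is Pollard's rho in Python (the primes below are illustrative, not the paper's actual test number):

    import math
    import random

    def pollard_rho(n: int) -> int:
        """Return a non-trivial factor of an odd composite n (Pollard's rho, Floyd cycle-finding)."""
        if n % 2 == 0:
            return 2
        while True:
            x = random.randrange(2, n)
            y, c, d = x, random.randrange(1, n), 1
            while d == 1:
                x = (x * x + c) % n        # tortoise: one step
                y = (y * y + c) % n
                y = (y * y + c) % n        # hare: two steps
                d = math.gcd(abs(x - y), n)
            if d != n:                     # d == n means the walk collided; retry with a new c
                return d

    # Illustrative semiprime with two ~30-bit prime factors (chosen here, not from the paper)
    n = 1000000007 * 1000000009
    f = pollard_rho(n)
    print(f, n // f)                       # factors recovered in milliseconds

The expected cost is about $n^{1/4}$ modular multiplications for a balanced semiprime, so numbers of this size fall in well under a second on a desktop, consistent with the 6 ms figure.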








  • Well I think the point of the paper is to improve upon Fermat-factoring class algorithms, so it is expected that the given algorithm(s) get beaten by the more standard ones for small sizes, but excel on large inputs with (relatively small) prime differences? – SEJPM, Apr 2 at 15:04

  • @SEJPM: if that's the case, then they probably shouldn't go on so much about RSA (where the probability of having a sufficiently small difference is tiny) – poncho, Apr 2 at 15:11


















Which of these is actually right?

Both. From reading the abstract, it appears the paper doesn't claim that "VFactor" or Fermat Factorization ("FFM") or Trial Division ("TDM") are the best methods in general. However, if the difference between the primes $p, q$ with $n = pq$ is really small, like $\ll 2^{100}$${}^\dagger$, then FFM (and probably the VFactor variants as well) will be a lot faster.

Though in general the difference between two same-length random primes is about $\sqrt{n}/2$, which is about $2^{1024}$ for realistically sized moduli, so these attacks don't work there. Even with 400-bit moduli, which are somewhat easily crackable on a home desktop using the GNFS, this difference is still about $2^{200}$ and thus way too large.

Of course, the implementation of the key generation may be faulty and emit primes in a too-small interval, and it is in these cases that these specialized algorithms really shine.

${}^\dagger$: "$\ll$" meaning "a lot less" here






answered Apr 2 at 15:02 by SEJPM
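A minimal sketch of the Fermat factorization method (FFM) the answer refers to, to make the dependence on the prime gap concrete (the example primes are chosen here for illustration, with gap 2):

    import math

    def fermat_factor(n: int):
        """Factor odd n by finding a, b with n = a^2 - b^2 = (a - b)(a + b)."""
        a = math.isqrt(n)
        if a * a < n:                      # start at ceil(sqrt(n))
            a += 1
        while True:
            b2 = a * a - n
            b = math.isqrt(b2)
            if b * b == b2:                # a^2 - n is a perfect square: done
                return a - b, a + b
            a += 1

    # Close primes: found on the very first iteration
    print(fermat_factor(1000000007 * 1000000009))

The number of iterations grows roughly as $(p-q)^2 / (8\sqrt{n})$, which is why FFM is instant when $p - q$ is tiny and hopeless when the gap is near the typical $\sqrt{n}/2$.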












  • Actually, the claim in the paper is wrong. It states it is superior to Fermat in general, which is not true. Both VFactor and MVFactor are modified trial divisions, while Fermat covers a lot of potential trial divisions in one step. For a given maximum difference, there is a length such that Fermat finds the result in one step (although it gets worse for later steps and has the same complexity in O-notation). The tested results don't even state the number of trials and thus are probably 1 each. – tylo, Apr 11 at 15:03



















Quantum algorithms

There is of course Shor's algorithm, but as this algorithm only runs on quantum computers with a lot of qubits, it has so far not factored any number larger than $21$ (reference).

There are multiple apparent new records using adiabatic quantum computation, although some are apparently stunts: see fgrieu's answer on a related question.

Classical algorithms

The general number field sieve is the fastest known classical algorithm for factoring numbers over $10^{100}$.

The quadratic sieve algorithm is the fastest known classical algorithm for factoring numbers under $10^{100}$.






edited Apr 2 at 15:03 · answered Apr 2 at 14:53 by AleksanderRas
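For context on the $10^{100}$ crossover: the standard heuristic running times of the two sieves, stated in the usual $L$-notation, are

$$\text{QS:} \quad L_n\left[\tfrac{1}{2},\, 1\right] = \exp\left((1 + o(1))\sqrt{\ln n \,\ln \ln n}\right),$$

$$\text{GNFS:} \quad L_n\left[\tfrac{1}{3},\, \left(\tfrac{64}{9}\right)^{1/3}\right] = \exp\left(\left(\left(\tfrac{64}{9}\right)^{1/3} + o(1)\right)(\ln n)^{1/3}(\ln \ln n)^{2/3}\right).$$

The GNFS wins asymptotically because of the $1/3$ exponent, but its larger per-relation overhead makes the QS faster below an empirical crossover of roughly $10^{100}$ (fgrieu's comment below asks how much of that threshold is intrinsic versus a lack of tuning).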








  • Actually, the factorization of 56153 was a stunt; the factors were deliberately chosen to have a special relation (differed in only 2 bits) and it's easy to factor when the factors have a known relation. AFAIK, the largest number that has been factored to date using a generic quantum factorization algorithm is 21. – poncho, Apr 2 at 14:55

  • I've always wondered why QS is (at least, consensually said to be) faster than GNFS below a certain threshold (not so consensual), and how much of that is due to lack of work on optimizing GNFS for smaller values. – fgrieu, Apr 2 at 15:42

  • @poncho As far as I know, all quantum factorization claims to date are stunts, including the 15 and 21 claims. They do a trivial calculation on a tiny quantum computer and then find a tortured way to argue that it factored a prime since that sounds better in the press release. That was the point of the 56153-factorization paper (Quantum factorization of 56153 with only 4 qubits by Dattani and Bryans). – benrg, Apr 3 at 1:44

  • @poncho The paper with the 21-factoring claim is Experimental realisation of Shor's quantum factoring algorithm using qubit recycling by Martin-Lopez et al. I just skimmed it, and as far as I can tell, their actual experiment used a single qubit and a single qutrit. Can a machine with $1 + \log_2 3$ qubits run Shor's algorithm on the input 21? They say yes in the title, but I would say no. Dattani and Bryans agree that the factorizations of 15 and 21 "were not genuine implementations of Shor's algorithm". – benrg, Apr 3 at 1:45

  • Er, factored a composite. – benrg, Apr 3 at 1:53











